[ { "msg_contents": "Hi,\n\nWe have a timestamp column in one table and we are getting the above problem\nwhen the timestamp column has a value up to milliseconds.\n\nWe are using the stable PostgreSQL 7.2 JDBC driver (pgjdbc2.jar) from\nhttp://jdbc.postgresql.org/download.html. Does anyone know of a newer\nproduction-ready driver that fixes this problem?\nThanks\nYuva\n", "msg_date": "Tue, 18 Jun 2002 11:50:33 -0700", "msg_from": "Yuva Chandolu <ychandolu@ebates.com>", "msg_from_op": true, "msg_subject": "String index out of range: 23 problem with timestamp" } ]
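For readers hitting the error above: "String index out of range: 23" is consistent with a parser that assumes a fixed-width timestamp string of at least 23 characters (yyyy-MM-dd HH:mm:ss.SSS), while PostgreSQL may return fewer fractional digits, e.g. 2002-06-18 11:50:33.5. A minimal client-side sketch of the usual workaround, normalizing the fractional part before parsing (the helper names are invented for illustration, not part of the driver):

```python
from datetime import datetime

def normalize_pg_timestamp(ts: str) -> str:
    """Pad or trim the fractional-seconds part to exactly 3 digits.

    PostgreSQL may return '2002-06-18 11:50:33.5' (one fractional digit);
    a parser that blindly indexes position 23 then reads past the end.
    """
    if "." not in ts:
        return ts + ".000"
    base, frac = ts.split(".", 1)
    frac = (frac + "000")[:3]  # pad with zeros, then trim to 3 digits
    return f"{base}.{frac}"

def parse_pg_timestamp(ts: str) -> datetime:
    # strptime's %f already accepts 1-6 digits, but normalizing first
    # mirrors the fixed-width assumption the broken parser relied on.
    return datetime.strptime(normalize_pg_timestamp(ts), "%Y-%m-%d %H:%M:%S.%f")
```

With the input normalized, every value is safe to slice at a fixed width regardless of how many fractional digits the server sent.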
[ { "msg_contents": "PostgreSQL 7.2.1...\nWe have:\nC:\\CYGWIN\\USR\\SRC\\POSTGRESQL-7.2.1-1\\src\\include\\utils\\catcache.h(84):ex\ntern MemoryContext CacheMemoryContext;\nC:\\CYGWIN\\USR\\SRC\\POSTGRESQL-7.2.1-1\\src\\include\\utils\\memutils.h(70):ex\ntern DLLIMPORT MemoryContext CacheMemoryContext;\n\nThey cannot both be correct. Which is correct?\n", "msg_date": "Tue, 18 Jun 2002 12:36:20 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Why is CacheMemoryContext declared DLLIMPORT in one place and not in\n\tanother?" }, { "msg_contents": "\"Dann Corbit\" <DCorbit@connx.com> writes:\n> They cannot both be correct. Which is correct?\n\nDLLIMPORT is correct. Patch committed --- thanks for catching it!\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Jun 2002 09:47:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why is CacheMemoryContext declared DLLIMPORT in one place and not\n\tin another?" } ]
[ { "msg_contents": "Hi,\n\nWe have a timestamp column in one table and we are getting the above problem\nwhen the timestamp column has a value up to milliseconds.\n\nWe are using the stable PostgreSQL 7.2 JDBC driver (pgjdbc2.jar) from\nhttp://jdbc.postgresql.org/download.html. Does anyone know of a newer\nproduction-ready driver that fixes this problem?\nThanks\nYuva\n\n", "msg_date": "Tue, 18 Jun 2002 12:36:45 -0700", "msg_from": "Yuva Chandolu <ychandolu@ebates.com>", "msg_from_op": true, "msg_subject": "String index out of range: 23 problem with timestamp milliseconds" } ]
[ { "msg_contents": "Hi,\n\nWe observed a \"String index out of range: 23\" problem when we tried to\nretrieve a timestamp field value that has milliseconds. We are trying to find\na quick fix for the millisecond problem for Timestamp.\n\nWe notice there is a beta driver (devpgjdbc2.jar) that contains this fix\ncurrently, but were wondering if the fix is isolated to just one or a few\nclasses that we might get from the beta driver and insert into the\nproduction driver jar (pgjdbc2.jar).\n\nIs this a potential option or is the dependency risk too high?\n\nThanks\nYuva\nEbates Shopping.com (http://www.ebates.com)\n", "msg_date": "Tue, 18 Jun 2002 12:57:44 -0700", "msg_from": "Yuva Chandolu <ychandolu@ebates.com>", "msg_from_op": true, "msg_subject": "Milliseconds problem with PostgreSQL 7.2 jdbc driver (pgjdbc2.jar\n\t)" }, { "msg_contents": "Yuva,\n\nYour question would be more appropriate for the pgsql-jdbc mailing list.\n\nThe fix to your problem is in version 1.48 of \norg/postgresql/jdbc2/ResultSet.java. This happens to be the first \nchange after 7.2 which is version 1.47. Thus you should have no problem \napplying this fix to your 7.2 driver. (you can go to the webcvs \ninterface off of developer.postgresql.org to see the diffs for yourself).\n\nthanks,\n--Barry\n\n\nYuva Chandolu wrote:\n\n>Hi,\n>\n>We observed a \"String index out of range: 23\" problem when we tried to\n>retrieve a timestamp field value that has milliseconds. We are trying to find\n>a quick fix for the millisecond problem for Timestamp.\n>\n>We notice there is a beta driver (devpgjdbc2.jar) that contains this fix\n>currently, but were wondering if the fix is isolated to just one or a few\n>classes that we might get from the beta driver and insert into the\n>production driver jar (pgjdbc2.jar).\n>\n>Is this a potential option or is the dependency risk too high?\n>\n>Thanks\n>Yuva\n>Ebates Shopping.com (http://www.ebates.com)\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n> \n>\n\n\n", "msg_date": "Wed, 19 Jun 2002 12:03:36 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Milliseconds problem with PostgreSQL 7.2 jdbc driver" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Tuesday, June 18, 2002 11:13 AM\n> To: Michael Meskes\n> Cc: PostgreSQL Hacker\n> Subject: Re: [HACKERS] ECPG won't compile anymore\n> \n> \n> Michael Meskes wrote:\n> > On Tue, Jun 18, 2002 at 10:29:10AM -0400, Tom Lane wrote:\n> > > I'd be inclined to say that you don't commit until bison 1.49 is\n> > > officially released. Got any idea when that will be?\n> > \n> > No, that's the problem. ECPG and the backend parser are \n> running out of\n> > sync. After all bison's release may be later than our next one. \n> > \n> > I cannot commit even simple bugfixes anymore as my source tree\n> > already has the uncompilable bison file. So I would have to \n> work on two\n> > different source trees. I don't exactly like that.\n> \n> Are we the only ones up against this problem? Hard to imagine we are\n> the only ones up against this limit in bison. Are there \n> other options? \n> I don't see how we can distribute ecpg in 7.3 without some \n> kind of fix.\n\nThere are some other freely available parser/generators. I like the\nLemmon parser generator. Of course, I have no idea how traumatic it\nwould be to convert a Bison grammar into Lemmon. There is also PCCTS\nand some other free ones.\n\nhttp://www.hwaci.com/sw/lemon/\n", "msg_date": "Tue, 18 Jun 2002 13:50:50 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: ECPG won't compile anymore" } ]
[ { "msg_contents": "\nI received this via private email. Do we want CORRESPONDING added to\nthe TODO list?\n\n---------------------------------------------------------------------------\n\nDavid H. Johnson wrote:\n> Hi,\n> \n> I am writing you because you're listed as the TODO List maintainer. I noticed\n> that PostgreSQL does not support the CORRESPONDING BY clause for\n> UNION/INTERSECT/EXCEPT. The only reference I have found to this functionality\n> in the mailing list archives is this message by Tom Lane:\n> \n> http://fts.postgresql.org/db/mw/msg.html?mid=1273164\n> \n> In the message he discusses a rewrite of UNION/INTERSECT/EXCEPT for PostgreSQL\n> 7.1 to allow them to work with views and subselects. Tom also says that he\n> will not try to implement the CORRESPONDING option, but that it should be a\n> fairly straightforward extension when it is attempted.\n> \n> I was wondering if you (or the PostgreSQL development team) would consider\n> adding the CORRESPONDING option of UNION/INTERSECT/EXCEPT queries to the TODO\n> list.\n> \n> Thanks for reading.\n> \n> --David\n> \n> -- \n> David H. Johnson, Engineer I\n> University of Alabama at Birmingham\n> Center for Biophysical Sciences and Engineering\n> \n> E-mail:\tdhj@uab.edu\n> Phone:\t(205)934-6759\n> Fax:\t(205)934-0480\n> \n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 18 Jun 2002 16:53:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL SQL92: CORRESPONDING BY" }, { "msg_contents": "> I received this via private email. Do we want CORRESPONDING added to\n> the TODO list?\n\nSure. Though since we now have features.sgml which has the complete set\nof SQL99 itemized features we perhaps should shrink the ToDo entries\nregarding SQL99 features to only one:\n\n\"Support additional SQL99 features\"\n\nAnything more specific which does not itemize all the features which we\nmight want to see implemented probably does not help. And carrying the\nlist in two places is more trouble than it is worth imho.\n\nI've been working on separating the current single list of features into\n\"Supported\" and \"Unsupported\" lists; will commit a first cut this\nevening. Look in\n\n http://developer.postgresql.org/docs/postgres/features.html\n\nfor the most recent version available. I estimate that we have about two\nthirds of the feature set implemented. I'm looking at implementing some\nof them as (almost) trivial improvements or changes to our syntax; I've\ngot CREATE CAST... as one of the first ones to do.\n\n - Thomas\n", "msg_date": "Tue, 18 Jun 2002 22:50:21 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL SQL92: CORRESPONDING BY" } ]
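For readers unfamiliar with the feature requested in the thread above: CORRESPONDING makes a UNION match columns by name rather than by position, keeping only the columns common to both sides. A small sketch of that semantics over dict-shaped rows (illustrative only; the function name and row representation are invented, not PostgreSQL code):

```python
def union_corresponding(rows_a, rows_b):
    """Emulate SQL99's UNION CORRESPONDING over lists of dict-rows.

    Only columns present in both inputs survive, matched by name,
    and duplicate rows are removed as in a plain (non-ALL) UNION.
    """
    if not rows_a or not rows_b:
        return []
    # Shared column names, in the order they appear on the left side.
    common = [c for c in rows_a[0] if c in rows_b[0]]
    seen, out = set(), []
    for row in rows_a + rows_b:
        key = tuple(row[c] for c in common)
        if key not in seen:
            seen.add(key)
            out.append(dict(zip(common, key)))
    return out
```

CORRESPONDING BY (col, ...) would be the same idea with an explicit column list supplied instead of the computed intersection.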
[ { "msg_contents": "> -----Original Message-----\n> From: Dann Corbit \n> Sent: Tuesday, June 18, 2002 1:51 PM\n> To: Bruce Momjian; Michael Meskes\n> Cc: PostgreSQL Hacker\n> Subject: Re: [HACKERS] ECPG won't compile anymore\n> \n> \n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> > Sent: Tuesday, June 18, 2002 11:13 AM\n> > To: Michael Meskes\n> > Cc: PostgreSQL Hacker\n> > Subject: Re: [HACKERS] ECPG won't compile anymore\n> > \n> > \n> > Michael Meskes wrote:\n> > > On Tue, Jun 18, 2002 at 10:29:10AM -0400, Tom Lane wrote:\n> > > > I'd be inclined to say that you don't commit until bison 1.49 is\n> > > > officially released. Got any idea when that will be?\n> > > \n> > > No, that's the problem. ECPG and the backend parser are \n> > running out of\n> > > sync. After all bison's release may be later than our next one. \n> > > \n> > > I cannot commit even simple bugfixes anymore as my source tree\n> > > already has the uncompilable bison file. So I would have to \n> > work on two\n> > > different source trees. I don't exactly like that.\n> > \n> > Are we the only ones up against this problem? Hard to \n> imagine we are\n> > the only ones up against this limit in bison. Are there \n> > other options? \n> > I don't see how we can distribute ecpg in 7.3 without some \n> > kind of fix.\n> \n> There are some other freely available parser/generators. I like the\n> Lemmon parser generator. Of course, I have no idea how traumatic it\n> would be to convert a Bison grammar into Lemmon. There is also PCCTS\n> and some other free ones.\n> \n> http://www.hwaci.com/sw/lemon/\n> \n\nIt occurs to me that SQLite is a PostgreSQL clone {grammar wise, but a\nsubset}:\nhttp://www.hwaci.com/sw/sqlite/\nthat uses the Lemon parser generator. Therefore, the grammar for the\nSQL language itself should be extremely similar, and it might\n(therefore) be very easy to see what he has done to make the transition.\nOf course, the ECPG tool has its own grammar, so I am not sure how\nhelpful that would be.\n\nBy the way, there is a rather unflattering speed comparison with\nPostgreSQL on this page:\nhttp://www.hwaci.com/sw/sqlite/speed.html\n\nIt might be nice to use those tests with gprof to find out where the\nbottlenecks are. It also seems possible that he may have used an older\nversion of PostgreSQL.\n", "msg_date": "Tue, 18 Jun 2002 15:40:29 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: ECPG won't compile anymore" }, { "msg_contents": "Dann Corbit wrote:\n> [...]\n>\n> By the way, there is a rather unflattering speed comparison with\n> PostgreSQL on this page:\n> http://www.hwaci.com/sw/sqlite/speed.html\n> \n> It might be nice to use those tests with gprof to find out where the\n> bottlenecks are. It also seems possible that he may have used an older\n> version of PostgreSQL.\n\nIt also seems they forgot to vacuum analyze the database. \n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Wed, 19 Jun 2002 09:12:03 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: ECPG won't compile anymore" } ]
[ { "msg_contents": "Hi,\n\nWhen using the date_part function is it possible to get the month name \nand not the month number ?\nI'm using something like 'date_part('month', birth_date)' - birth_date \nbeing a date data type.\n\nCurrently I'm getting numbers from 1 to 12 whereas I'd like the full \nname like 'June' or 'July' etc.\n\nThanks\nRudi.\n\n", "msg_date": "Wed, 19 Jun 2002 09:19:45 +1000", "msg_from": "Rudi Starcevic <rudi@oasis.net.au>", "msg_from_op": true, "msg_subject": "date_part" }, { "msg_contents": "Rudi,\n\nselect to_char(date_column, 'Month');\n\nSee similar under \"Formatting Function\" in the docs.\n\n-- \n-Josh Berkus\n\n", "msg_date": "Tue, 18 Jun 2002 17:47:08 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: date_part" } ]
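The to_char(date_column, 'Month') answer in the thread above maps directly onto most client languages as well. One nuance worth knowing: PostgreSQL's 'Month' pattern blank-pads the name to nine characters, while 'FMMonth' suppresses the padding. A quick sketch of the unpadded equivalent outside the database (note %B is locale-dependent; the examples assume the default C/English locale):

```python
from datetime import date

def month_name(d: date) -> str:
    # Equivalent of PostgreSQL's to_char(d, 'FMMonth'):
    # the full month name, without the blank padding that
    # plain 'Month' adds.
    return d.strftime("%B")
```

So a birth_date of 2002-06-19 yields the string "June" rather than the number 6 that date_part('month', ...) returns.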
[ { "msg_contents": "I am working on the TODO item:\n\n\to Change syntax to WITH DELIMITER, (keep old syntax around?)\n\nand I have added syntax so COPY can now accept all parameters at the end\nusing WITH:\n\n\tCOPY table\n\t    FROM { 'filename' | stdin }\n\t    [ [ WITH ] \n\t          [ BINARY ] \n\t          [ OIDS ]\n\t          [ DELIMITER 'delimiter' ]\n\t          [ NULL AS 'null string' ] ]\n\n(COPY TO is similar.)\n\nFor portability, it still supports the old syntax of BINARY after COPY,\nWITH OIDS after 'table' and USING DELIMITERS after 'filename'. I have\nsent the patch to the patches list.\n\nI have not modified pg_dump so that 7.3 dumps can be loaded into <=7.2\ndatabases. I have modified psql \\copy to _only_ use the new syntax, and\nto only send the new syntax to the backends. (We don't usually support\nnew psql in to older databases anyway.) Not sure if I should document\nthe old syntax somewhere because pg_dump uses WITH OIDS with the old\nsyntax.\n\nI have not applied the patch. I am waiting for comments.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 19 Jun 2002 02:13:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "COPY syntax improvement" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I have not modified pg_dump so that 7.3 dumps can be loaded into <=7.2\n> databases. I have modified psql \\copy to _only_ use the new syntax, and\n> to only send the new syntax to the backends.\n\nWhy not leave psql alone? Seems to me you are gratuitously breaking\nbackwards compatibility of psql, and gaining absolutely zero in return.\n\nI know that a lot of 7.3 psql's other backslash commands will not work\nagainst pre-7.3 servers, but there's no help for that (short of making\npsql version-aware like pg_dump is). I don't think that's a reason to\nbreak \\copy too, when there is no need for it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Jun 2002 09:54:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: COPY syntax improvement " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I have not modified pg_dump so that 7.3 dumps can be loaded into <=7.2\n> > databases. I have modified psql \\copy to _only_ use the new syntax, and\n> > to only send the new syntax to the backends.\n> \n> Why not leave psql alone? Seems to me you are gratuitously breaking\n> backwards compatibility of psql, and gaining absolutely zero in return.\n\nYes, it was too late last night to think. I have made \\copy\nbackward-compatible, and it internally uses the old syntax.\n\n> I know that a lot of 7.3 psql's other backslash commands will not work\n> against pre-7.3 servers, but there's no help for that (short of making\n> psql version-aware like pg_dump is). I don't think that's a reason to\n> break \\copy too, when there is no need for it.\n\nYep.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 19 Jun 2002 12:32:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: COPY syntax improvement" }, { "msg_contents": "Bruce Momjian wrote:\n> I am working on the TODO item:\n> \n> \to Change syntax to WITH DELIMITER, (keep old syntax around?)\n> \n> and I have added syntax so COPY can now accept all parameters at the end\n> using WITH:\n> \n> \tCOPY table\n> \t    FROM { 'filename' | stdin }\n> \t    [ [ WITH ] \n> \t          [ BINARY ] \n> \t          [ OIDS ]\n> \t          [ DELIMITER 'delimiter' ]\n> \t          [ NULL AS 'null string' ] ]\n> \n> (COPY TO is similar.)\n\nActually, in looking at the grammar, I see no reason NULL should have AS\nwhile DELIMITER does not. New syntax has AS optional for both:\n\n\tCOPY table\n\t    FROM { 'filename' | stdin }\n\t    [ [ WITH ] \n\t          [ BINARY ] \n\t          [ OIDS ]\n\t          [ DELIMITER [ AS ] 'delimiter' ]\n\t          [ NULL [ AS ] 'null string' ] ]\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 19 Jun 2002 13:39:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: COPY syntax improvement" }, { "msg_contents": "Bruce Momjian writes:\n\n> \tCOPY table\n> \t    FROM { 'filename' | stdin }\n> \t    [ [ WITH ]\n> \t          [ BINARY ]\n> \t          [ OIDS ]\n> \t          [ DELIMITER 'delimiter' ]\n> \t          [ NULL AS 'null string' ] ]\n\nI'm not sure what was wrong with the old syntax except for fixing the\nDELIMITER plural. For example, the current\n\n copy mytable with oids from stdin using delimiter '|';\n\nreads very pleasantly, but\n\n copy mytable from stdin with oids delimiter '|';\n\nisn't nearly as good. (E.g., it's not the oids' delimiter, and it's not\n*with* delimiter because you don't actually copy the delimiter, you just\nuse it.)\n\n-- \nPeter Eisentraut   peter_e@gmx.net\n\n", "msg_date": "Wed, 19 Jun 2002 23:12:31 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: COPY syntax improvement" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > \tCOPY table\n> > \t    FROM { 'filename' | stdin }\n> > \t    [ [ WITH ]\n> > \t          [ BINARY ]\n> > \t          [ OIDS ]\n> > \t          [ DELIMITER 'delimiter' ]\n> > \t          [ NULL AS 'null string' ] ]\n> \n> I'm not sure what was wrong with the old syntax except for fixing the\n> DELIMITER plural. For example, the current\n> \n> copy mytable with oids from stdin using delimiter '|';\n> \n> reads very pleasantly, but\n> \n> copy mytable from stdin with oids delimiter '|';\n> \n> isn't nearly as good. (E.g., it's not the oids' delimiter, and it's not\n> *with* delimiter because you don't actually copy the delimiter, you just\n> use it.)\n\nI thought there were complaints that the old COPY syntax just had too\nmany features stuffed in too many unusual places, e.g. delimiter after\nfilename, oids after tablename, binary after COPY, NULL after\ndelimiter. It was just too weird. Now, all the options can be\nspecified after WITH, like the other SQL commands.\n\nHowever, the old syntax still works.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 19 Jun 2002 22:43:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: COPY syntax improvement" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > \tCOPY table\n> > \t    FROM { 'filename' | stdin }\n> > \t    [ [ WITH ]\n> > \t          [ BINARY ]\n> > \t          [ OIDS ]\n> > \t          [ DELIMITER 'delimiter' ]\n> > \t          [ NULL AS 'null string' ] ]\n> \n> I'm not sure what was wrong with the old syntax except for fixing the\n> DELIMITER plural. For example, the current\n> \n> copy mytable with oids from stdin using delimiter '|';\n> \n> reads very pleasantly, but\n> \n> copy mytable from stdin with oids delimiter '|';\n> \n> isn't nearly as good. (E.g., it's not the oids' delimiter, and it's not\n> *with* delimiter because you don't actually copy the delimiter, you just\n> use it.)\n\nNew supported syntax I posted is now DELIMITER [AS] ' '. I noticed that\nproblem myself, that NULL had AS but DELIMITER didn't.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 19 Jun 2002 22:46:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: COPY syntax improvement" }, { "msg_contents": "Bruce Momjian writes:\n\n> I thought there were complaints that the old COPY syntax just had too\n> many features stuffed in too many unusual places,\n\nHaven't ever seen one. This command has no precedent in other products,\nonly years of going virtually unchanged in PostgreSQL. Changing it now\nand allowing countless permutations of the key words is going to be\nconfusing, IMHO.\n\n> e.g. delimiter after\n> filename,\n\nCOPY is the only command to use a delimiter, so this can hardly be\nqualified as an \"unusual\" place.\n\n> oids after tablename,\n\nThat's because the OIDs are in said table.\n\n> binary after COPY,\n\nWhich is consistent with DECLARE BINARY CURSOR.\n\n> NULL after delimiter.\n\nOK, that order should perhaps be more flexible.\n\n-- \nPeter Eisentraut   peter_e@gmx.net\n\n\n\n", "msg_date": "Sun, 23 Jun 2002 23:49:34 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: COPY syntax improvement" }, { "msg_contents": "\nWell, good points. I know there were some people who wanted a clearer\nsyntax, so I supplied it. Seems you don't. I would like to hear from\nsomeone else who doesn't like the improved syntax before I consider\nchanging things back.\n\n---------------------------------------------------------------------------\n\nPeter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I thought there were complaints that the old COPY syntax just had too\n> > many features stuffed in too many unusual places,\n> \n> Haven't ever seen one. This command has no precedent in other products,\n> only years of going virtually unchanged in PostgreSQL. Changing it now\n> and allowing countless permutations of the key words is going to be\n> confusing, IMHO.\n> \n> > e.g. delimiter after\n> > filename,\n> \n> COPY is the only command to use a delimiter, so this can hardly be\n> qualified as an \"unusual\" place.\n> \n> > oids after tablename,\n> \n> That's because the OIDs are in said table.\n> \n> > binary after COPY,\n> \n> Which is consistent with DECLARE BINARY CURSOR.\n> \n> > NULL after delimiter.\n> \n> OK, that order should perhaps be more flexible.\n> \n> -- \n> Peter Eisentraut   peter_e@gmx.net\n> \n> \n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 23 Jun 2002 17:51:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: COPY syntax improvement" } ]
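To make the two COPY syntaxes debated in this thread concrete, here is a small generator for the proposed 7.3-style options tail, written directly against the grammar quoted in the messages above (a sketch for illustration; the function names are invented, and this is of course not how the server's bison grammar works):

```python
def copy_with_clause(binary=False, oids=False, delimiter=None, null=None):
    """Render the options tail of the proposed COPY syntax:
    [ WITH ] [ BINARY ] [ OIDS ] [ DELIMITER [AS] 'd' ] [ NULL [AS] 's' ]
    """
    parts = []
    if binary:
        parts.append("BINARY")
    if oids:
        parts.append("OIDS")
    if delimiter is not None:
        parts.append(f"DELIMITER AS '{delimiter}'")
    if null is not None:
        parts.append(f"NULL AS '{null}'")
    return ("WITH " + " ".join(parts)) if parts else ""

def copy_from(table, source, **opts):
    """Build a full COPY ... FROM statement in the new WITH style."""
    tail = copy_with_clause(**opts)
    stmt = f"COPY {table} FROM {source}"
    return f"{stmt} {tail}" if tail else stmt
```

The same options rendered in the pre-7.3 style would be scattered through the statement instead (BINARY after COPY, WITH OIDS after the table, USING DELIMITERS after the filename), which is exactly the "too many unusual places" complaint the thread is arguing over.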
[ { "msg_contents": "I've just updated features.sgml to have a list of supported features\nfollowed by a list of unsupported ones. There are some items in the\n\"unsupported list\" which look easy to do. I've got patches for a \"MATCH\nSIMPLE\" clause on referential integrity declarations, and am developing\npatches for CREATE CAST, DROP CAST, and probably a few more items which\ncould be simple parser additions or changes. Look at\n\n http://developer.postgresql.org/docs/postgres/features.html\n\nfor the most recent version (it is not there yet but I'd expect it to be\nthere soon).\n\nI'd like to look at supporting the various shorthand literal notations\nlike B'0101', X'ABCD', and N'national' in a better way in the lexer to\nmake sure that the information gets to the parser more robustly. Right\nnow, N'national' is not supported at all and the other two forms are\nsupported by transforming the input locally in the lexer which seems to\nwork but does not extend to the NATIONAL CHARACTER case. Comments and\nsuggestions are welcome.\n\nY'all may want to look at features.sgml to find projects if you are\nlooking for something to do; there are several items which look to be\nrelatively easy to accomplish and others at various levels of\ndifficulty...\n\n - Thomas\n", "msg_date": "Tue, 18 Jun 2002 23:23:17 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "SQL99 feature list" }, { "msg_contents": "On Wed, 2002-06-19 at 08:23, Thomas Lockhart wrote:\n> I've just updated features.sgml to have a list of supported features\n> followed by a list of unsupported ones.\n\nIt seems you have to move (T171, LIKE clause in table definition) to\nunsupported :\n\nhannu=# \\dt \n        List of relations\n  Name   | Type  | Owner \n---------+-------+-------\n t1      | table | hannu\n(1 row)\n\nhannu=# create table t11 (a int,like t1);\nERROR:  parser: parse error at or near \"like\"\n\nhannu=# create table t11 (a int) inherits (t1);\nCREATE\n\n-------------------\nHannu\n", "msg_date": "19 Jun 2002 16:01:41 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: SQL99 feature list" }, { "msg_contents": "> It seems you have to move (T171, LIKE clause in table definition) to\n> unsupported :\n> hannu=# create table t11 (a int,like t1);\n> ERROR:  parser: parse error at or near \"like\"\n\nAh, that's what that means! I'll move it to \"unsupported\", but it seems\nlike it might be fairly easy to implement, eh? Anyone interested in\nlooking at it?\n\n - Thomas\n", "msg_date": "Wed, 19 Jun 2002 07:05:47 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: SQL99 feature list" }, { "msg_contents": "> It seems like a little more powerful version of PGs INHERITS\n\nWhat makes it \"more powerful\"? I'd guess that it is an attribute copy\nrather than a declaration of inheritance and could be based on our\nexisting \"create table as\" feature.\n\n...\n> I can see some features that are not listed in neither of your feature\n> lists, like\n> * ON COMMIT <table commit action> ROWS\n> * <subtable clause> ::= UNDER <supertable clause>\n\nHmm. I worked from the SQL99 draft document we have found on the web.\nHopefully the list is not completely orthogonal to the released\nstandard; it took several hours to transform the list and to look\nthrough it for a first cut :(\n\n - Thomas\n", "msg_date": "Wed, 19 Jun 2002 07:42:43 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: SQL99 feature list" }, { "msg_contents": "On Wed, 2002-06-19 at 16:05, Thomas Lockhart wrote:\n> > It seems you have to move (T171, LIKE clause in table definition) to\n> > unsupported :\n> > hannu=# create table t11 (a int,like t1);\n> > ERROR:  parser: parse error at or near \"like\"\n> \n> Ah, that's what that means! I'll move it to \"unsupported\", but it seems\n> like it might be fairly easy to implement, eh?\n\nIt seems like a little more powerful version of PGs INHERITS \n\n> Anyone interested in looking at it?\n\nThis is the full <table definition> BNF from ISO 9075\n\nI can see some features that are not listed in neither of your feature\nlists, like\n\n* ON COMMIT <table commit action> ROWS\n* <subtable clause> ::= UNDER <supertable clause> \n\n-----------------------------------\n\n11.3 <table definition> \n\n<table definition> ::=\n    CREATE [ <table scope> ] TABLE <table name>\n      <table contents source> \n      [ ON COMMIT <table commit action> ROWS ]\n\n<table contents source> ::=\n    <table element list>\n  | OF <user-defined type>\n      [ <subtable clause> ]\n      [ <table element list> ]\n\n<table scope> ::= <global or local> TEMPORARY\n\n<global or local> ::= \n    GLOBAL\n  | LOCAL\n\n<table commit action> ::=\n    PRESERVE\n  | DELETE\n\n<table element list> ::= \n    <left paren>\n      <table element> [ { <comma> <table element> }... ] \n    <right paren>\n\n<table element> ::=\n    <column definition>\n  | <table constraint definition>\n  | <like clause>\n  | <self-referencing column specification>\n  | <column options> \n\n<self-referencing column specification> ::= \n    REF IS <self-referencing column name> <reference generation> \n\n<reference generation> ::=\n    SYSTEM GENERATED\n  | USER GENERATED \n  | DERIVED \n\n<self-referencing column name> ::= <column name>\n\n<column options> ::= <column name> WITH OPTIONS <column option list> \n\n<column option list> ::=\n    [ <scope clause> ] \n    [ <default clause> ] \n    [ <column constraint definition>... ] \n    [ <collate clause> ]\n\n<subtable clause> ::= UNDER <supertable clause> \n\n<supertable clause> ::= <supertable name> \n\n\n<supertable name> ::= <table name> \n\n<like clause> ::= LIKE <table name> \n\n------------------\nHannu\n\n", "msg_date": "19 Jun 2002 17:31:08 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: SQL99 feature list" }, { "msg_contents": "On Wed, 2002-06-19 at 16:42, Thomas Lockhart wrote:\n> > It seems like a little more powerful version of PGs INHERITS\n> \n> What makes it \"more powerful\"?\n\n\n> I'd guess that it is an attribute copy\n> rather than a declaration of inheritance and could be based on our\n> existing \"create table as\" feature.\n\nI said \"a little\" more powerful :)\n\nYou can specify where the \"inherited\" table fields are placed which you\ncant when using INHERITS.\n\nI think that 95% of current use of INHERITS is in cases where you don't\nactually want inheritance :)\n\nRegarding which you can probably move (S111: ONLY in query expressions)\nto supported, with a notice that we dont support (single-inheritance)\nUNDER but have our own notion of multiple inheritance.\n\nI have ideas how to implement UNDER (put everything in one table so that\nprimary keys et. al. are more easyly inherited) but it will be viable\nonly after the same obstacles that fast DROP COLUMN are overcome.\n\n> > I can see some features that are not listed in neither of your feature\n> > lists, like\n> > * ON COMMIT <table commit action> ROWS\n> > * <subtable clause> ::= UNDER <supertable clause>\n> \n> Hmm. I worked from the SQL99 draft document we have found on the web.\n\nI used the file ansi-iso-9075-2-1999.pdf from\n\nhttp://www.cse.iitb.ac.in:8000/proxy/db/~dbms/Data/Papers-Other/SQL1999/\n\nwhich has a fat red FINAL stamp on the front page, but I'm not sure how\nlegal it is to download it :)\n\n> Hopefully the list is not completely orthogonal to the released\n> standard; it took several hours to transform the list and to look\n> through it for a first cut :(\n\n--------------\nHannu\n\n", "msg_date": "19 Jun 2002 17:58:58 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: SQL99 feature list" } ]
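The <like clause> discussed in the thread above copies column definitions from an existing table into the new column list, at the position where LIKE appears, without creating an inheritance link the way INHERITS does. A rough sketch of that expansion step (a hypothetical tuple-based representation chosen for illustration; the real implementation would operate on parse trees and system catalogs):

```python
def expand_like(table_elements, catalogs):
    """Expand SQL99's <like clause> in a CREATE TABLE element list.

    table_elements: list of ('column', name, type) or ('like', table_name).
    catalogs: mapping of table name -> list of (name, type) columns.
    Unlike INHERITS, the copied columns keep the position of the LIKE
    entry in the list, which is the 'little more powerful' point made
    in the thread.
    """
    out = []
    for elem in table_elements:
        if elem[0] == "like":
            # Splice in the source table's columns at this position.
            out.extend(("column", n, t) for n, t in catalogs[elem[1]])
        else:
            out.append(elem)
    return out
```

So `create table t11 (a int, like t1)` would expand to a plain two-plus-column definition with no parent/child relationship recorded afterwards.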
[ { "msg_contents": "Hi,\nI've mailed this previously to the novice list, but got no response, so \nhopefully this is more the appropriate place.\n\nI'm trying to invoke the DirectFunctionCall1function as follows:\n\n \n                    BOX* __tmp;\n                    Datum d = DirectFunctionCall1(box_in, \nBoxPGetDatum(ret_bbox));\n                    __tmp = DatumGetBoxP(d);  \n\n(where ret_bbox is a string representation of a box)\n\nfrom some client side code, but get linker errors stating that:\n\n-> undefined reference to `DirectFunctionCall1(unsigned long \n(*)(FunctionCallInfoData*), unsigned long)'\n\n--> undefined reference to `box_in(FunctionCallInfoData*)\n\nIs this because I'm calling the function incorrectly, or because I'm \nmissing the library in which these functions reside?\n\nMany thanks,\n\n\nTony\n\n", "msg_date": "Wed, 19 Jun 2002 10:37:20 +0100", "msg_from": "\"Tony Griffiths(RA)\" <griffitt@cs.man.ac.uk>", "msg_from_op": true, "msg_subject": "Missing library files??" }, { "msg_contents": "\"Tony Griffiths(RA)\" <griffitt@cs.man.ac.uk> writes:\n> I'm trying to invoke the DirectFunctionCall1function as follows:\n> ...\n> from some client side code, but get linker errors stating that:\n\n> -> undefined reference to `DirectFunctionCall1(unsigned long \n> (*)(FunctionCallInfoData*), unsigned long)'\n\n> Is this because I'm calling the function incorrectly, or because I'm \n> missing the library in which these functions reside?\n\nThere *is* no \"library in which those functions reside\". They are\ninternal to the server and are not designed to be included in\nclient-side code.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Jun 2002 09:13:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing library files?? " }, { "msg_contents": "Tom Lane wrote:\n\n>\"Tony Griffiths(RA)\" <griffitt@cs.man.ac.uk> writes:\n>\n>>I'm trying to invoke the DirectFunctionCall1function as follows:\n>>...\n>>from some client side code, but get linker errors stating that:\n>>\n>\n>>-> undefined reference to `DirectFunctionCall1(unsigned long \n>>(*)(FunctionCallInfoData*), unsigned long)'\n>>\n>\n>>Is this because I'm calling the function incorrectly, or because I'm \n>>missing the library in which these functions reside?\n>>\n>\n>There *is* no \"library in which those functions reside\". They are\n>internal to the server and are not designed to be included in\n>client-side code.\n>\nFair enough - that's what I half expected. I'm wanting to write some \nembedded sql that selects a box attribute from a relation - is there any \ndirect way of retrieving a BOX typed value from the select without \nhaving to go through decoding the retrieved string. I know I can do this \nusing a binary cursor, but this means moving from embedded sql.\n\nThanks,\n\nTony\n\n\n\n>\n>\n>\t\t\tregards, tom lane\n>\n\n\n\n\n\n\n\n\nTom Lane wrote:\n\n\n\"Tony Griffiths(RA)\" <griffitt@cs.man.ac.uk> writes:\n\nI'm trying to invoke the DirectFunctionCall1function as follows:...from some client side code, but get linker errors stating that:\n\n\n\n-> undefined reference to `DirectFunctionCall1(unsigned long (*)(FunctionCallInfoData*), unsigned long)'\n\n\n\nIs this because I'm calling the function incorrectly, or because I'm missing the library in which these functions reside?\n\nThere *is* no \"library in which those functions reside\". They areinternal to the server and are not designed to be included inclient-side code.\n\nFair enough - that's what I half expected. I'm wanting to write some embedded\nsql that selects a box attribute from a relation - is there any direct way\nof retrieving a BOX typed value from the select without having to go through\ndecoding the retrieved string. I know I can do this using a binary cursor,\nbut this means moving from embedded sql.\n\nThanks,\n\nTony\n\n\n\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 19 Jun 2002 14:31:36 +0100", "msg_from": "\"Tony Griffiths(RA)\" <griffitt@cs.man.ac.uk>", "msg_from_op": true, "msg_subject": "Re: Missing library files??" }, { "msg_contents": "Gavin Sherry wrote:\n\n>On Wed, 19 Jun 2002, Tom Lane wrote:\n>\n>>\"Tony Griffiths(RA)\" <griffitt@cs.man.ac.uk> writes:\n>>\n>>>I'm trying to invoke the DirectFunctionCall1function as follows:\n>>>...\n>>>from some client side code, but get linker errors stating that:\n>>>\n>>>-> undefined reference to `DirectFunctionCall1(unsigned long \n>>>(*)(FunctionCallInfoData*), unsigned long)'\n>>>\n>>>Is this because I'm calling the function incorrectly, or because I'm \n>>>missing the library in which these functions reside?\n>>>\n>>There *is* no \"library in which those functions reside\". They are\n>>internal to the server and are not designed to be included in\n>>client-side code.\n>>\n>\n>You can access this function using the Fast Path Interface. See the libpq\n>documentation. 
However, I am unsure why you would want to get access to\n>this function on the client side.\n>\n\nI'm retrieving a box value via embedded sql, and wanted to use the \nbox_in method to decode the retrieved string - unless there's a direct \nway of retrieving into a variable of the C type BOX??\n\n>\n>\n>Gavin\n>\n>\n\n\n\n\n\n\n\n\nGavin Sherry wrote:\n\nOn Wed, 19 Jun 2002, Tom Lane wrote:\n\n\"Tony Griffiths(RA)\" <griffitt@cs.man.ac.uk> writes:\n\nI'm trying to invoke the DirectFunctionCall1function as follows:...from some client side code, but get linker errors stating that:\n\n\n-> undefined reference to `DirectFunctionCall1(unsigned long (*)(FunctionCallInfoData*), unsigned long)'\n\n\nIs this because I'm calling the function incorrectly, or because I'm missing the library in which these functions reside?\n\nThere *is* no \"library in which those functions reside\". They areinternal to the server and are not designed to be included inclient-side code.\n\nYou can access this function using the Fast Path Interface. See the libpqdocumentation. However, I am unsure why you would want to get access tothis function on the client side.\n\n\nI'm retrieving a box value via embedded sql, and wanted to use the box_in\nmethod to decode the retrieved string - unless there's a direct way of retrieving\ninto a variable of the C type BOX??\n\n\nGavin", "msg_date": "Wed, 19 Jun 2002 14:43:34 +0100", "msg_from": "\"Tony Griffiths(RA)\" <griffitt@cs.man.ac.uk>", "msg_from_op": true, "msg_subject": "Re: Missing library files??" 
}, { "msg_contents": "On Wed, 19 Jun 2002, Tom Lane wrote:\n\n> \"Tony Griffiths(RA)\" <griffitt@cs.man.ac.uk> writes:\n> > I'm trying to invoke the DirectFunctionCall1function as follows:\n> > ...\n> > from some client side code, but get linker errors stating that:\n> \n> > -> undefined reference to `DirectFunctionCall1(unsigned long \n> > (*)(FunctionCallInfoData*), unsigned long)'\n> \n> > Is this because I'm calling the function incorrectly, or because I'm \n> > missing the library in which these functions reside?\n> \n> There *is* no \"library in which those functions reside\". They are\n> internal to the server and are not designed to be included in\n> client-side code.\n\nYou can access this function using the Fast Path Interface. See the libpq\ndocumentation. However, I am unsure why you would want to get access to\nthis function on the client side.\n\nGavin\n\n\n", "msg_date": "Wed, 19 Jun 2002 23:44:21 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Missing library files?? " }, { "msg_contents": "\"Tony Griffiths(RA)\" <griffitt@cs.man.ac.uk> writes:\n> Fair enough - that's what I half expected. I'm wanting to write some \n> embedded sql that selects a box attribute from a relation - is there any \n> direct way of retrieving a BOX typed value from the select without \n> having to go through decoding the retrieved string. I know I can do this \n> using a binary cursor, but this means moving from embedded sql.\n\nThe binary representation of BOX wouldn't necessarily be the same on\nclient and server machines anyway (surely you don't want to wire in\nan assumption that client and server are on the same kind of hardware).\n\nI'd say bite the bullet and parse the ASCII string; it's hardly\ndifficult.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Jun 2002 10:06:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing library files?? 
" }, { "msg_contents": "Tom Lane wrote:\n\n>\"Tony Griffiths(RA)\" <griffitt@cs.man.ac.uk> writes:\n>\n>>Fair enough - that's what I half expected. I'm wanting to write some \n>>embedded sql that selects a box attribute from a relation - is there any \n>>direct way of retrieving a BOX typed value from the select without \n>>having to go through decoding the retrieved string. I know I can do this \n>>using a binary cursor, but this means moving from embedded sql.\n>>\n>\n>The binary representation of BOX wouldn't necessarily be the same on\n>client and server machines anyway (surely you don't want to wire in\n>an assumption that client and server are on the same kind of hardware).\n>\n>I'd say bite the bullet and parse the ASCII string; it's hardly\n>difficult.\n>\n\nTrue, but if you guys decide to change the structure of a box string \ndown the line then not too good for backwards compatibility. sscanf here \nI come....\n\n>\n>\n>\t\t\tregards, tom lane\n>\n\n\n\n\n\n\n\n\nTom Lane wrote:\n\n\"Tony Griffiths(RA)\" <griffitt@cs.man.ac.uk> writes:\n\nFair enough - that's what I half expected. I'm wanting to write some embedded sql that selects a box attribute from a relation - is there any direct way of retrieving a BOX typed value from the select without having to go through decoding the retrieved string. I know I can do this using a binary cursor, but this means moving from embedded sql.\n\nThe binary representation of BOX wouldn't necessarily be the same onclient and server machines anyway (surely you don't want to wire inan assumption that client and server are on the same kind of hardware).I'd say bite the bullet and parse the ASCII string; it's hardlydifficult.\n\n\nTrue, but if you guys decide to change the structure of a box string down\nthe line then not too good for backwards compatibility. 
sscanf here I come....\n\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 19 Jun 2002 15:18:57 +0100", "msg_from": "\"Tony Griffiths(RA)\" <griffitt@cs.man.ac.uk>", "msg_from_op": true, "msg_subject": "Re: Missing library files??" } ]
[ { "msg_contents": "A few months ago, there was a brief discussion about the convenience of porting a threaded version of postgres. I was wondering which is its status.\nPointing for its convenience I must say that multiprocess code does not work fine on ccNUMA systems. Shared memory is placed on a node and backends from another node must access remotely. OS is prepared for load balancing on multithreaded code, but not for multiprocess.\nThanks and regards\n\n\n\n\n\n\n\nA few months ago, there was a brief discussion \nabout the convenience of porting a threaded version of postgres. I was wondering \nwhich is its status.\nPointing for its convenience I must say that \nmultiprocess code does not work fine on ccNUMA systems. Shared memory is placed \non a node and backends from another node must access remotely. OS is prepared \nfor load balancing on multithreaded code, but not for multiprocess.\nThanks and regards", "msg_date": "Wed, 19 Jun 2002 17:13:21 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "multithreaded postgres status?" } ]
[ { "msg_contents": "I have a query which contains both a group by and a count, e.g:\n\nSELECT\n to_char(pa_shopping_cart.delivery_date, 'FMMM/FMDD/YY') AS delivery_date,\n pa_products.product_name AS product_name,\n pa_orders.order_state AS state,\n count(*) AS count\nFROM\n pa_shopping_cart,\n pa_products,\n pa_orders\nWHERE\n pa_shopping_cart.order_id = pa_orders.order_id AND\n pa_shopping_cart.product_id = pa_products.product_id\nGROUP BY\n pa_shopping_cart.delivery_date,\n pa_products.product_name,\n pa_orders.order_state\nORDER BY\n pa_shopping_cart.delivery_date, pa_products.product_name;\n\n\nThis query is really handy because it gives me the count of each\nproduct grouping by delivery within each possible order state.\n\nHere's the question - I would like to get the count of how many tuples are\nreturned total. With most queries, count(*) works great for this purpose,\nhowever I need something that will give me the total count of tuples\nreturned even when there is a grouping.\n\nAny ideas?\n\n\nRyan Mahoney\n\n", "msg_date": "Wed, 19 Jun 2002 15:19:25 -0400 (EDT)", "msg_from": "<ryan@paymentalliance.net>", "msg_from_op": true, "msg_subject": "count and group by question" } ]
[ { "msg_contents": "On Thu, 2002-06-20 at 02:02, Dann Corbit wrote:\n> > -----Original Message-----\n> > From: ryan@paymentalliance.net [mailto:ryan@paymentalliance.net]\n> > Sent: Wednesday, June 19, 2002 12:19 PM\n> > To: pgsql-hackers@postgresql.org\n> > Subject: [HACKERS] count and group by question\n> > \n> > \n> > I have a query which contains both a group by and a count, e.g:\n> > \n> > SELECT\n> > to_char(pa_shopping_cart.delivery_date, 'FMMM/FMDD/YY') AS \n> > delivery_date,\n> > pa_products.product_name AS product_name,\n> > pa_orders.order_state AS state,\n> > count(*) AS count\n> > FROM\n> > pa_shopping_cart,\n> > pa_products,\n> > pa_orders\n> > WHERE\n> > pa_shopping_cart.order_id = pa_orders.order_id AND\n> > pa_shopping_cart.product_id = pa_products.product_id\n> > GROUP BY\n> > pa_shopping_cart.delivery_date,\n> > pa_products.product_name,\n> > pa_orders.order_state\n> > ORDER BY\n> > pa_shopping_cart.delivery_date, pa_products.product_name;\n> > \n> > \n> > This query is really handy because it gives me the count of each\n> > product grouping by delivery within each possible order state.\n> > \n> > Here's the question - I would like to get the count of how \n> > many tuples are\n> > returned total. 
With most queries, count(*) works great for \n> > this purpose,\n> > however I need something that will give me the total count of tuples\n> > returned even when there is a grouping.\n> > \n> > Any ideas?\n> \n> Run two queries, the second with no group by.\n\n\nSomething like this should also work:\n\nSELECT\n to_char(pa_shopping_cart.delivery_date, 'FMMM/FMDD/YY') AS\ndelivery_date,\n pa_products.product_name AS product_name,\n pa_orders.order_state AS state,\n count(*) AS count\nFROM\n pa_shopping_cart,\n pa_products,\n pa_orders\nWHERE\n pa_shopping_cart.order_id = pa_orders.order_id AND\n pa_shopping_cart.product_id = pa_products.product_id\nGROUP BY\n pa_shopping_cart.delivery_date,\n pa_products.product_name,\n pa_orders.order_state\n \nUNION\nSELECT\n NULL,NULL,NULL, count\n from (\nselect count(*) AS count\nFROM\n pa_shopping_cart,\n pa_products,\n pa_orders\nWHERE\n pa_shopping_cart.order_id = pa_orders.order_id AND\n pa_shopping_cart.product_id = pa_products.product_id\n) total \n\nORDER BY\n pa_shopping_cart.delivery_date, pa_products.product_name;\n\nmake the NULL,NULL,NULL part something else to get it sorted where you\nwant.\n\n> \n> To make a really nice looking report with this kind of stuff, you can\n> use Crystal reports with the ODBC driver. Then you can set as many\n> break columns as you like.\n> \n> Which reminds me, it would be nice to have the cube/rollup sort of OLAP\n> stuff from SQL99 ISO/IEC 9075-2:1999 (E) in PostgreSQL:\n\nIt seems like simple ROLLUP and () (i.e. grandTotal) would be doable by\ncurrent executor and plans, i.e. 
sort and then aggregate, just add more\naggregate fields and have different start/finalize conditions\n\nCUBE and GROUPING SETS will probably need another kind of execution\nplan, perhaps some kind of hashed tuple list.\n\n> 7.9 <group by clause>\n> Function\n> Specify a grouped table derived by the application of the <group by\n> clause> to the result of the\n> previously specified clause.\n> Format\n> <group by clause> ::=\n> GROUP BY <grouping specification>\n> <grouping specification> ::=\n> <grouping column reference>\n> | <rollup list>\n> | <cube list>\n> | <grouping sets list>\n> | <grand total>\n> | <concatenated grouping>\n> <rollup list> ::=\n> ROLLUP <left paren> <grouping column reference list> <right paren>\n> <cube list> ::=\n> CUBE <left paren> <grouping column reference list> <right paren>\n> <grouping sets list> ::=\n> GROUPING SETS <left paren> <grouping set list> <right paren>\n> <grouping set list> ::=\n> <grouping set> [ { <comma> <grouping set> }... ]\n> <concatenated grouping> ::=\n> <grouping set> <comma> <grouping set list>\n> <grouping set> ::=\n> <ordinary grouping set>\n> | <rollup list>\n> | <cube list>\n> | <grand total>\n> <ordinary grouping set> ::=\n> <grouping column reference>\n> | <left paren> <grouping column reference list> <right paren>\n> <grand total> ::= <left paren> <right paren>\n> <grouping column reference list> ::=\n> <grouping column reference> [ { <comma> <grouping column reference> }...\n> ]\n> <grouping column reference> ::=\n> <column reference> [ <collate clause> ]\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n\n", "msg_date": "20 Jun 2002 01:07:11 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: count and group by question" }, { 
"msg_contents": "> -----Original Message-----\n> From: ryan@paymentalliance.net [mailto:ryan@paymentalliance.net]\n> Sent: Wednesday, June 19, 2002 12:19 PM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] count and group by question\n> \n> \n> I have a query which contains both a group by and a count, e.g:\n> \n> SELECT\n> to_char(pa_shopping_cart.delivery_date, 'FMMM/FMDD/YY') AS \n> delivery_date,\n> pa_products.product_name AS product_name,\n> pa_orders.order_state AS state,\n> count(*) AS count\n> FROM\n> pa_shopping_cart,\n> pa_products,\n> pa_orders\n> WHERE\n> pa_shopping_cart.order_id = pa_orders.order_id AND\n> pa_shopping_cart.product_id = pa_products.product_id\n> GROUP BY\n> pa_shopping_cart.delivery_date,\n> pa_products.product_name,\n> pa_orders.order_state\n> ORDER BY\n> pa_shopping_cart.delivery_date, pa_products.product_name;\n> \n> \n> This query is really handy because it gives me the count of each\n> product grouping by delivery within each possible order state.\n> \n> Here's the question - I would like to get the count of how \n> many tuples are\n> returned total. With most queries, count(*) works great for \n> this purpose,\n> however I need something that will give me the total count of tuples\n> returned even when there is a grouping.\n> \n> Any ideas?\n\nRun two queries, the second with no group by.\n\nTo make a really nice looking report with this kind of stuff, you can\nuse Crystal reports with the ODBC driver. 
Then you can set as many\nbreak columns as you like.\n\nWhich reminds me, it would be nice to have the cube/rollup sort of OLAP\nstuff from SQL99 ISO/IEC 9075-2:1999 (E) in PostgreSQL:\n\n7.9 <group by clause>\nFunction\nSpecify a grouped table derived by the application of the <group by\nclause> to the result of the\npreviously specified clause.\nFormat\n<group by clause> ::=\nGROUP BY <grouping specification>\n<grouping specification> ::=\n<grouping column reference>\n| <rollup list>\n| <cube list>\n| <grouping sets list>\n| <grand total>\n| <concatenated grouping>\n<rollup list> ::=\nROLLUP <left paren> <grouping column reference list> <right paren>\n<cube list> ::=\nCUBE <left paren> <grouping column reference list> <right paren>\n<grouping sets list> ::=\nGROUPING SETS <left paren> <grouping set list> <right paren>\n<grouping set list> ::=\n<grouping set> [ { <comma> <grouping set> }... ]\n<concatenated grouping> ::=\n<grouping set> <comma> <grouping set list>\n<grouping set> ::=\n<ordinary grouping set>\n| <rollup list>\n| <cube list>\n| <grand total>\n<ordinary grouping set> ::=\n<grouping column reference>\n| <left paren> <grouping column reference list> <right paren>\n<grand total> ::= <left paren> <right paren>\n<grouping column reference list> ::=\n<grouping column reference> [ { <comma> <grouping column reference> }...\n]\n<grouping column reference> ::=\n<column reference> [ <collate clause> ]\n", "msg_date": "Wed, 19 Jun 2002 14:02:17 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": false, "msg_subject": "Re: count and group by question" } ]
[ { "msg_contents": "On Thu, 2002-06-20 at 03:15, Dann Corbit wrote:\n> > > Which reminds me, it would be nice to have the cube/rollup \n> > sort of OLAP\n> > > stuff from SQL99 ISO/IEC 9075-2:1999 (E) in PostgreSQL:\n> > \n> > It seems like simple ROLLUP and () (i.e. grandTotal) would be \n> > doable by\n> > current executor and plans, i.e. sort and then aggregate, \n> > just add more\n> > aggregate fields and have different start/finalize conditions\n> > \n> > CUBE and GROUPING SETS will probably need another kind of execution\n> > plan, perhaps some kind of hashed tuple list.\n> \n> Rollup can be simulated by a bunch of union all... Here is an example:\n> http://www.quest-pipelines.com/newsletter-v2/rollup.htm\n\nI guess that all groupings are, just it would be much more efficient\n(not to mention simpler for user ;) if they could be done in one pass.\n\nBut rewriting them to UNIONS seems a good stopgap solution.\n\nIIRC the OLAP supplement had also an option to tell wheather NULLS sort\nat the beginning or end.\n\n-------------------\nHannu\n\n\n", "msg_date": "20 Jun 2002 01:24:46 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: count and group by question" }, { "msg_contents": "On Thu, 2002-06-20 at 04:00, Ryan Mahoney wrote:\n> OK, so I tried both queries but they don't meet my requirement, I think\n> I wasn't clear. The methods suggested both return the aggregate count\n> as if the rows had not been grouped. What I am looking for is a count\n> of how many rows were returned *with* the grouping.\n> \n> So, suppose there are 1000 orders total, but when grouped by product 200\n> rows are returned. I am trying to find a way to get that 200 not the\n> original 1000 count.\n> \n> Does this make sense? 
The Union was really interesting, I haven't used\n> union very much - but I will now!\n\nyou could try:\n\nselect count(*) from (\n SELECT\n to_char(pa_shopping_cart.delivery_date, 'FMMM/FMDD/YY') AS\n delivery_date,\n pa_products.product_name AS product_name,\n pa_orders.order_state AS state,\n count(*) AS count\n FROM\n pa_shopping_cart,\n pa_products,\n pa_orders\n WHERE\n pa_shopping_cart.order_id = pa_orders.order_id AND\n pa_shopping_cart.product_id = pa_products.product_id\n GROUP BY\n pa_shopping_cart.delivery_date,\n pa_products.product_name,\n pa_orders.order_state\n) original_query\n\n\n\n----------------\nHannu\n\n\n", "msg_date": "20 Jun 2002 02:20:17 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: count and group by question" }, { "msg_contents": "> -----Original Message-----\n> From: Hannu Krosing [mailto:hannu@tm.ee]\n> Sent: Wednesday, June 19, 2002 1:07 PM\n> To: Dann Corbit\n> Cc: ryan@paymentalliance.net; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] count and group by question\n> \n> \n> On Thu, 2002-06-20 at 02:02, Dann Corbit wrote:\n> > > -----Original Message-----\n> > > From: ryan@paymentalliance.net [mailto:ryan@paymentalliance.net]\n> > > Sent: Wednesday, June 19, 2002 12:19 PM\n> > > To: pgsql-hackers@postgresql.org\n> > > Subject: [HACKERS] count and group by question\n> > > \n> > > \n> > > I have a query which contains both a group by and a count, e.g:\n> > > \n> > > SELECT\n> > > to_char(pa_shopping_cart.delivery_date, 'FMMM/FMDD/YY') AS \n> > > delivery_date,\n> > > pa_products.product_name AS product_name,\n> > > pa_orders.order_state AS state,\n> > > count(*) AS count\n> > > FROM\n> > > pa_shopping_cart,\n> > > pa_products,\n> > > pa_orders\n> > > WHERE\n> > > pa_shopping_cart.order_id = pa_orders.order_id AND\n> > > pa_shopping_cart.product_id = pa_products.product_id\n> > > GROUP BY\n> > > pa_shopping_cart.delivery_date,\n> > > pa_products.product_name,\n> > > 
pa_orders.order_state\n> > > ORDER BY\n> > > pa_shopping_cart.delivery_date, pa_products.product_name;\n> > > \n> > > \n> > > This query is really handy because it gives me the count of each\n> > > product grouping by delivery within each possible order state.\n> > > \n> > > Here's the question - I would like to get the count of how \n> > > many tuples are\n> > > returned total. With most queries, count(*) works great for \n> > > this purpose,\n> > > however I need something that will give me the total \n> count of tuples\n> > > returned even when there is a grouping.\n> > > \n> > > Any ideas?\n> > \n> > Run two queries, the second with no group by.\n> \n> \n> Something like this should also work:\n> \n> SELECT\n> to_char(pa_shopping_cart.delivery_date, 'FMMM/FMDD/YY') AS\n> delivery_date,\n> pa_products.product_name AS product_name,\n> pa_orders.order_state AS state,\n> count(*) AS count\n> FROM\n> pa_shopping_cart,\n> pa_products,\n> pa_orders\n> WHERE\n> pa_shopping_cart.order_id = pa_orders.order_id AND\n> pa_shopping_cart.product_id = pa_products.product_id\n> GROUP BY\n> pa_shopping_cart.delivery_date,\n> pa_products.product_name,\n> pa_orders.order_state\n> \n> UNION\n> SELECT\n> NULL,NULL,NULL, count\n> from (\n> select count(*) AS count\n> FROM\n> pa_shopping_cart,\n> pa_products,\n> pa_orders\n> WHERE\n> pa_shopping_cart.order_id = pa_orders.order_id AND\n> pa_shopping_cart.product_id = pa_products.product_id\n> ) total \n> \n> ORDER BY\n> pa_shopping_cart.delivery_date, pa_products.product_name;\n> \n> make the NULL,NULL,NULL part something else to get it sorted where you\n> want.\n\nVery clever. I like it! I'll have to remember that.\n\n> > To make a really nice looking report with this kind of \n> stuff, you can\n> > use Crystal reports with the ODBC driver. 
Then you can set as many\n> > break columns as you like.\n> > \n> > Which reminds me, it would be nice to have the cube/rollup \n> sort of OLAP\n> > stuff from SQL99 ISO/IEC 9075-2:1999 (E) in PostgreSQL:\n> \n> It seems like simple ROLLUP and () (i.e. grandTotal) would be \n> doable by\n> current executor and plans, i.e. sort and then aggregate, \n> just add more\n> aggregate fields and have different start/finalize conditions\n> \n> CUBE and GROUPING SETS will probably need another kind of execution\n> plan, perhaps some kind of hashed tuple list.\n\nRollup can be simulated by a bunch of union all... Here is an example:\nhttp://www.quest-pipelines.com/newsletter-v2/rollup.htm\n\n", "msg_date": "Wed, 19 Jun 2002 15:15:49 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": false, "msg_subject": "Re: count and group by question" }, { "msg_contents": "OK, so I tried both queries but they don't meet my requirement, I think\nI wasn't clear. The methods suggested both return the aggregate count\nas if the rows had not been grouped. What I am looking for is a count\nof how many rows were returned *with* the grouping.\n\nSo, suppose there are 1000 orders total, but when grouped by product 200\nrows are returned. I am trying to find a way to get that 200 not the\noriginal 1000 count.\n\nDoes this make sense? 
The Union was really interesting, I haven't used\nunion very much - but I will now!\n\nThanks for your suggestions!\n\n-r\n \n> > SELECT\n> > to_char(pa_shopping_cart.delivery_date, 'FMMM/FMDD/YY') AS\n> > delivery_date,\n> > pa_products.product_name AS product_name,\n> > pa_orders.order_state AS state,\n> > count(*) AS count\n> > FROM\n> > pa_shopping_cart,\n> > pa_products,\n> > pa_orders\n> > WHERE\n> > pa_shopping_cart.order_id = pa_orders.order_id AND\n> > pa_shopping_cart.product_id = pa_products.product_id\n> > GROUP BY\n> > pa_shopping_cart.delivery_date,\n> > pa_products.product_name,\n> > pa_orders.order_state\n> > \n> > UNION\n> > SELECT\n> > NULL,NULL,NULL, count\n> > from (\n> > select count(*) AS count\n> > FROM\n> > pa_shopping_cart,\n> > pa_products,\n> > pa_orders\n> > WHERE\n> > pa_shopping_cart.order_id = pa_orders.order_id AND\n> > pa_shopping_cart.product_id = pa_products.product_id\n> > ) total \n> > \n> > ORDER BY\n> > pa_shopping_cart.delivery_date, pa_products.product_name;\n> > \n> > make the NULL,NULL,NULL part something else to get it sorted where you\n> > want.\n\n", "msg_date": "19 Jun 2002 19:00:25 -0400", "msg_from": "Ryan Mahoney <ryan@paymentalliance.net>", "msg_from_op": false, "msg_subject": "Re: count and group by question" }, { "msg_contents": "Make the whole thing a subselect in the from, and count that.\n\nselect count(*)\nfrom (<other query>) as tab\n--\nRod\n----- Original Message -----\nFrom: \"Ryan Mahoney\" <ryan@paymentalliance.net>\nTo: \"Dann Corbit\" <DCorbit@connx.com>\nCc: \"Hannu Krosing\" <hannu@tm.ee>; <pgsql-hackers@postgresql.org>\nSent: Wednesday, June 19, 2002 7:00 PM\nSubject: Re: [HACKERS] count and group by question\n\n\n> OK, so I tried both queries but they don't meet my requirement, I\nthink\n> I wasn't clear. The methods suggested both return the aggregate\ncount\n> as if the rows had not been grouped. 
What I am looking for is a\ncount\n> of how many rows were returned *with* the grouping.\n>\n> So, suppose there are 1000 orders total, but when grouped by product\n200\n> rows are returned. I am trying to find a way to get that 200 not\nthe\n> original 1000 count.\n>\n> Does this make sense? The Union was really interesting, I haven't\nused\n> union very much - but I will now!\n>\n> Thanks for your suggestions!\n>\n> -r\n>\n> > > SELECT\n> > > to_char(pa_shopping_cart.delivery_date, 'FMMM/FMDD/YY') AS\n> > > delivery_date,\n> > > pa_products.product_name AS product_name,\n> > > pa_orders.order_state AS state,\n> > > count(*) AS count\n> > > FROM\n> > > pa_shopping_cart,\n> > > pa_products,\n> > > pa_orders\n> > > WHERE\n> > > pa_shopping_cart.order_id = pa_orders.order_id AND\n> > > pa_shopping_cart.product_id = pa_products.product_id\n> > > GROUP BY\n> > > pa_shopping_cart.delivery_date,\n> > > pa_products.product_name,\n> > > pa_orders.order_state\n> > >\n> > > UNION\n> > > SELECT\n> > > NULL,NULL,NULL, count\n> > > from (\n> > > select count(*) AS count\n> > > FROM\n> > > pa_shopping_cart,\n> > > pa_products,\n> > > pa_orders\n> > > WHERE\n> > > pa_shopping_cart.order_id = pa_orders.order_id AND\n> > > pa_shopping_cart.product_id = pa_products.product_id\n> > > ) total\n> > >\n> > > ORDER BY\n> > > pa_shopping_cart.delivery_date, pa_products.product_name;\n> > >\n> > > make the NULL,NULL,NULL part something else to get it sorted\nwhere you\n> > > want.\n>\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to\nmajordomo@postgresql.org)\n>\n\n", "msg_date": "Wed, 19 Jun 2002 19:20:37 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: count and group by question" }, { "msg_contents": "Perfect! 
That's just what I needed!\n\nThanks so much\n\n-r\n\n> select count(*) from (\n> SELECT\n> to_char(pa_shopping_cart.delivery_date, 'FMMM/FMDD/YY') AS\n> delivery_date,\n> pa_products.product_name AS product_name,\n> pa_orders.order_state AS state,\n> count(*) AS count\n> FROM\n> pa_shopping_cart,\n> pa_products,\n> pa_orders\n> WHERE\n> pa_shopping_cart.order_id = pa_orders.order_id AND\n> pa_shopping_cart.product_id = pa_products.product_id\n> GROUP BY\n> pa_shopping_cart.delivery_date,\n> pa_products.product_name,\n> pa_orders.order_state\n> ) original_query\n> \n> \n> \n> ----------------\n> Hannu\n\n", "msg_date": "19 Jun 2002 22:07:50 -0400", "msg_from": "Ryan Mahoney <ryan@paymentalliance.net>", "msg_from_op": false, "msg_subject": "Re: count and group by question" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Hannu Krosing [mailto:hannu@tm.ee]\n> Sent: Wednesday, June 19, 2002 1:25 PM\n> To: Dann Corbit\n> Cc: ryan@paymentalliance.net; pgsql-hackers@postgresql.org\n> Subject: RE: [HACKERS] count and group by question\n> \n> \n> On Thu, 2002-06-20 at 03:15, Dann Corbit wrote:\n> > > > Which reminds me, it would be nice to have the cube/rollup \n> > > sort of OLAP\n> > > > stuff from SQL99 ISO/IEC 9075-2:1999 (E) in PostgreSQL:\n> > > \n> > > It seems like simple ROLLUP and () (i.e. grandTotal) would be \n> > > doable by\n> > > current executor and plans, i.e. sort and then aggregate, \n> > > just add more\n> > > aggregate fields and have different start/finalize conditions\n> > > \n> > > CUBE and GROUPING SETS will probably need another kind of \n> execution\n> > > plan, perhaps some kind of hashed tuple list.\n> > \n> > Rollup can be simulated by a bunch of union all... Here is \n> an example:\n> > http://www.quest-pipelines.com/newsletter-v2/rollup.htm\n> \n> I guess that all groupings are, just it would be much more efficient\n> (not to mention simpler for user ;) if they could be done in one pass.\n> \n> But rewriting them to UNIONS seems a good stopgap solution.\n\nYes. It was to show the concept (not to you). Obviously, it would be\nmuch better to do it correctly internally (which is why I asked for the\nfeature).\n \n> IIRC the OLAP supplement had also an option to tell wheather \n> NULLS sort\n> at the beginning or end.\n", "msg_date": "Wed, 19 Jun 2002 15:31:43 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: count and group by question" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Ryan Mahoney [mailto:ryan@paymentalliance.net]\n> Sent: Wednesday, June 19, 2002 4:00 PM\n> To: Dann Corbit\n> Cc: Hannu Krosing; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] count and group by question\n> \n> \n> OK, so I tried both queries but they don't meet my \n> requirement, I think\n> I wasn't clear. The methods suggested both return the aggregate count\n> as if the rows had not been grouped. What I am looking for is a count\n> of how many rows were returned *with* the grouping.\n> \n> So, suppose there are 1000 orders total, but when grouped by \n> product 200\n> rows are returned. I am trying to find a way to get that 200 not the\n> original 1000 count.\n> \n> Does this make sense? The Union was really interesting, I \n> haven't used\n> union very much - but I will now!\n\nWarning -- totally untested and glommed from memory -- probably not\nquite right...\n\nSELECT count (distinct\n cast(to_char(pa_shopping_cart.delivery_date, 'FMMM/FMDD/YY') as\nvarchar) || pa_products.product_name || pa_orders.order_state)\nFROM\n pa_shopping_cart,\n pa_products,\n pa_orders\nWHERE\n pa_shopping_cart.order_id = pa_orders.order_id AND\n pa_shopping_cart.product_id = pa_products.product_id\n", "msg_date": "Wed, 19 Jun 2002 16:12:14 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: count and group by question" } ]
[ { "msg_contents": "Hi,\nI've a question about returning ADT values through embedded SQL. It's \nfine to retrieve values of PostgreSQL's build-in types in to C variables \nof roughly the same type, with only (possibly) a small impedance \nmismatch. However when we want to retrieve into values of the ADTs, \ni.e., the geometric types, it looks like we have to retrieve them into \nstrings, and then manually convert them into the correct type - if this \nis not the case then a posting of how to do this into variables of the \ncorrect type would be great. This means that:\n\na) The client-side programmer has to be responsible for parsing the \nreturned string, which could cause problems if the output format of the \nADT is changed, and\n\nb) The impedance mismatch is much greater than that of the built-in types.\n\nAre there any plans to extend the ADT registration process to allow \noutput of the actual type (not just the string) (I seem to remember that \nINGRES did this a while ago, with admittedly a much more complex \nregistration process), and also for ecpg to be able to correctly parse \nvariables other than those built into the database kernel.\n\nMany thanks,\n\nTony\n\n", "msg_date": "Thu, 20 Jun 2002 13:49:04 +0100", "msg_from": "\"Tony Griffiths(RA)\" <griffitt@cs.man.ac.uk>", "msg_from_op": true, "msg_subject": "ADTs and embedded sql" }, { "msg_contents": "> a) The client-side programmer has to be responsible for parsing the\n> returned string, which could cause problems if the output format of the\n> ADT is changed, and\n\nSo we need a convention for building client-side libraries.\n\n> b) The impedance mismatch is much greater than that of the built-in types.\n\nA first step would be to make the structure definitions and i/o code for\nUDTs available to the client. Perhaps the next step could involve using\nthose definitions internally in ecpg. 
But without the first step we\ndon't have much to build on.\n\n - Thomas\n", "msg_date": "Thu, 20 Jun 2002 07:41:51 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: ADTs and embedded sql" }, { "msg_contents": "\"Tony Griffiths(RA)\" <griffitt@cs.man.ac.uk> writes:\n> a) The client-side programmer has to be responsible for parsing the \n> returned string, which could cause problems if the output format of the \n> ADT is changed, and\n\nYou seem to be proposing that we instead expose the internal storage\nformat of the ADT, which seems to me to be much more likely to change\nthan the string representation. (Not to mention that it will open up a\nhost of platform compatibility issues --- endianness, struct packing,\nfloat format rules for example.)\n\nTo give just one example, as of 7.3 there will be two entirely different\ninternal formats for the datetime-related types. A client would have\nto be prepared to cope with that, on top of possible endian and float\nformat differences between server and client machines.\n\nParsing the string representation may be an annoyance, but I suspect\nit is the lesser evil.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Jun 2002 12:37:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ADTs and embedded sql " }, { "msg_contents": "At 01:49 PM 6/20/02 +0100, Tony Griffiths(RA) wrote:\n\n>a) The client-side programmer has to be responsible for parsing the \n>returned string, which could cause problems if the output format of the \n>ADT is changed, and\n>\n>b) The impedance mismatch is much greater than that of the built-in types.\n\nOne man's impedance mismatch is another man's layer of abstraction / \ninterface :).\n\nSorry - couldn't resist ;).\n\nCheerio,\nLink.\n\n\n\n\n", "msg_date": "Fri, 21 Jun 2002 02:28:47 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: ADTs and 
embedded sql" }, { "msg_contents": "> > a) The client-side programmer has to be responsible for parsing the\n> > returned string, which could cause problems if the output format of the\n> > ADT is changed, and\n> You seem to be proposing that we instead expose the internal storage\n> format of the ADT, which seems to me to be much more likely to change\n> than the string representation. (Not to mention that it will open up a\n> host of platform compatibility issues --- endianness, struct packing,\n> float format rules for example.)\n\nThat is one possibility, but I think the proposal is to expose the\n*support* for the data types to client-side apps. So we would have\nlibrar(ies) which allow parsing the stringified representation of a\nvalue into an acceptable internal format on the client, along with some\nsupport code for working with the values.\n\nThat is a Good Idea in principle. In practice, someone would need to\ntake ownership of the project and develop an style and technique for\npackaging support for data types in this way...\n\n - Thomas\n", "msg_date": "Fri, 21 Jun 2002 06:47:41 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: ADTs and embedded sql" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> That is one possibility, but I think the proposal is to expose the\n> *support* for the data types to client-side apps.\n\nAh, I see --- more or less make all of utils/adt/ available to be\nlinked into clients.\n\n> That is a Good Idea in principle. In practice, ...\n\nYeah, it'd be a huge amount of work. 
For starters, all that code\nrelies on the backend environment for error handling and memory\nmanagement...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Jun 2002 10:50:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ADTs and embedded sql " }, { "msg_contents": "> Ah, I see --- more or less make all of utils/adt/ available to be\n> linked into clients.\n> > That is a Good Idea in principle. In practice, ...\n> Yeah, it'd be a huge amount of work. For starters, all that code\n> relies on the backend environment for error handling and memory\n> management...\n\nIt would be a large amount of work to make *all* of utils/adt available.\nHowever, the initial work would be to support I/O to get values\nconverted to internal storage. Michael M. already has to do some of this\nfor ecpg, and presumably we could do this for more types (or maybe *all*\nbuiltin types are already supported in this way by ecpg, in which case\nMM has already done all of the hard work, and we might just repackage\nit).\n\nA first cut would seem to be appropriate, if someone would like to pick\nup the work. Tony?? ;)\n\n - Thomas\n", "msg_date": "Fri, 21 Jun 2002 07:56:51 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: ADTs and embedded sql" }, { "msg_contents": "\n\nThomas Lockhart wrote:\n> \n> > Ah, I see --- more or less make all of utils/adt/ available to be\n> > linked into clients.\n> > > That is a Good Idea in principle. In practice, ...\n> > Yeah, it'd be a huge amount of work. For starters, all that code\n> > relies on the backend environment for error handling and memory\n> > management...\n> \n> It would be a large amount of work to make *all* of utils/adt available.\n> However, the initial work would be to support I/O to get values\n> converted to internal storage. Michael M. 
already has to do some of this\n> for ecpg, and presumably we could do this for more types (or maybe *all*\n> builtin types are already supported in this way by ecpg, in which case\n> MM has already done all of the hard work, and we might just repackage\n> it).\n> \n> A first cut would seem to be appropriate, if someone would like to pick\n> up the work. Tony?? ;)\n\nI'd love to get involved in this, BUT... no time at the moment, although\nif I get a really good Masters student next semester - I could always\ndo this as their project. If this is still a requirement in about 3\nmonths then I can set someone on to it.\n\nTony\n\n> \n> - Thomas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \nTony\n\n---------------------------------\nDr. Tony Griffiths\nResearch Fellow\nInformation Management Group,\nDepartment of Computer Science,\nThe University of Manchester,\nOxford Road,\nManchester M13 9PL, \nUnited Kingdom\n\nTel. +44 (0) 161 275 6139\nFax +44 (0) 161 275 6236\nemail tony.griffiths@cs.man.ac.uk\n---------------------------------\n\n\n", "msg_date": "Mon, 24 Jun 2002 08:40:23 +0100", "msg_from": "Tony Griffiths <tony.griffiths@cs.man.ac.uk>", "msg_from_op": false, "msg_subject": "Re: ADTs and embedded sql" } ]
[ { "msg_contents": "Hi,\n\nthere's (at least) one point I do not understand in bufpage.h.\nCould one of the \"older\" hackers explain it to me, please.\n\nIs od_pagesize in any way more or less opaque than pd_lower, pd_upper,\npd_special, etc? If it is, why?\n\nIf it's not, should I post a patch that puts pagesize directly into\nPageHeaderData?\n\nServus\n Manfred\n", "msg_date": "Thu, 20 Jun 2002 17:47:59 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Page OpaqueData" }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> Is od_pagesize in any way more or less opaque than pd_lower, pd_upper,\n> pd_special, etc? If it is, why?\n\nI surmise that there was once some idea of supporting multiple page\nsizes simultaneously, but it's not real clear why the macros\nPageGetPageSize/PageSetPageSize wouldn't be a sufficient abstraction\nlayer; the extra level of struct naming for pd_opaque has no obvious\nusefulness. In any case I doubt that dealing with multiple page sizes\nwould be worth the trouble it would be to support.\n\n> If it's not, should I post a patch that puts pagesize directly into\n> PageHeaderData?\n\nIf you're so inclined. Given that pd_opaque is hidden in those macros,\nthere wouldn't be much of any gain in readability either, so I haven't\nworried about changing the declaration.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jun 2002 12:53:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Page OpaqueData " }, { "msg_contents": "On Mon, 24 Jun 2002 12:53:42 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>the extra level of struct naming for pd_opaque has no obvious\n>usefulness.\n>\n>> [...] should I post a patch that puts pagesize directly into\n>> PageHeaderData?\n>\n>If you're so inclined. 
Given that pd_opaque is hidden in those macros,\n>there wouldn't be much of any gain in readability either, so I haven't\n>worried about changing the declaration.\n\nThanks for the clarification. Here is the patch. Not much gain, but at\nleast it saves the next junior hacker from scratching his head ...\n\nCordialmente\n Manfredinho :-)\n\nPS: Please do not apply before \"Page access\" patch from 2002-06-20.\n\ndiff -ru ../base/src/backend/access/hash/hashutil.c src/backend/access/hash/hashutil.c\n--- ../base/src/backend/access/hash/hashutil.c\t2002-05-21 11:54:11.000000000 +0200\n+++ src/backend/access/hash/hashutil.c\t2002-06-21 16:43:24.000000000 +0200\n@@ -131,13 +131,13 @@\n \tHashPageOpaque opaque;\n \n \tAssert(page);\n-\tAssert(((PageHeader) (page))->pd_lower >= (sizeof(PageHeaderData) - sizeof(ItemIdData)));\n+\tAssert(((PageHeader) (page))->pd_lower >= SizeOfPageHeaderData);\n #if 1\n \tAssert(((PageHeader) (page))->pd_upper <=\n \t\t (BLCKSZ - MAXALIGN(sizeof(HashPageOpaqueData))));\n \tAssert(((PageHeader) (page))->pd_special ==\n \t\t (BLCKSZ - MAXALIGN(sizeof(HashPageOpaqueData))));\n-\tAssert(((PageHeader) (page))->pd_opaque.od_pagesize == BLCKSZ);\n+\tAssert(PageGetPageSize(page) == BLCKSZ);\n #endif\n \tif (flags)\n \t{\ndiff -ru ../base/src/include/storage/bufpage.h src/include/storage/bufpage.h\n--- ../base/src/include/storage/bufpage.h\t2002-06-20 12:22:21.000000000 +0200\n+++ src/include/storage/bufpage.h\t2002-06-21 16:38:17.000000000 +0200\n@@ -65,8 +65,7 @@\n * byte-offset position, tuples can be physically shuffled on a page\n * whenever the need arises.\n *\n- * AM-generic per-page information is kept in the pd_opaque field of\n- * the PageHeaderData.\t(Currently, only the page size is kept here.)\n+ * AM-generic per-page information is kept in PageHeaderData.\n *\n * AM-specific per-page data (if any) is kept in the area marked \"special\n * space\"; each AM has an \"opaque\" structure defined somewhere that is\n@@ -92,25 +91,18 @@\n \n 
\n /*\n+ * disk page organization\n * space management information generic to any page\n *\n- *\t\tod_pagesize\t\t- size in bytes.\n- *\t\t\t\t\t\t Minimum possible page size is perhaps 64B to fit\n- *\t\t\t\t\t\t page header, opaque space and a minimal tuple;\n- *\t\t\t\t\t\t of course, in reality you want it much bigger.\n- *\t\t\t\t\t\t On the high end, we can only support pages up\n- *\t\t\t\t\t\t to 32KB because lp_off/lp_len are 15 bits.\n- */\n-typedef struct OpaqueData\n-{\n-\tuint16\t\tod_pagesize;\n-} OpaqueData;\n-\n-typedef OpaqueData *Opaque;\n-\n-\n-/*\n- * disk page organization\n+ *\t\tpd_lower \t- offset to start of free space.\n+ *\t\tpd_upper \t- offset to end of free space.\n+ *\t\tpd_special \t- offset to start of special space.\n+ *\t\tpd_pagesize\t- size in bytes.\n+ *\t\t\t\t\t Minimum possible page size is perhaps 64B to fit\n+ *\t\t\t\t\t page header, opaque space and a minimal tuple;\n+ *\t\t\t\t\t of course, in reality you want it much bigger.\n+ *\t\t\t\t\t On the high end, we can only support pages up\n+ *\t\t\t\t\t to 32KB because lp_off/lp_len are 15 bits.\n */\n typedef struct PageHeaderData\n {\n@@ -124,7 +116,7 @@\n \tLocationIndex pd_lower;\t\t/* offset to start of free space */\n \tLocationIndex pd_upper;\t\t/* offset to end of free space */\n \tLocationIndex pd_special;\t/* offset to start of special space */\n-\tOpaqueData\tpd_opaque;\t\t/* AM-generic information */\n+\tuint16\t\tpd_pagesize;\n \tItemIdData\tpd_linp[1];\t\t/* beginning of line pointer array */\n } PageHeaderData;\n \n@@ -216,14 +208,14 @@\n * however, it can be called on a page for which there is no buffer.\n */\n #define PageGetPageSize(page) \\\n-\t((Size) ((PageHeader) (page))->pd_opaque.od_pagesize)\n+\t((Size) ((PageHeader) (page))->pd_pagesize)\n \n /*\n * PageSetPageSize\n *\t\tSets the page size of a page.\n */\n #define PageSetPageSize(page, size) \\\n-\t(((PageHeader) (page))->pd_opaque.od_pagesize = (size))\n+\t(((PageHeader) (page))->pd_pagesize = 
(size))\n \n /* ----------------\n *\t\tpage special data macros\n\nServus\n Manfred\n\n\n", "msg_date": "Wed, 26 Jun 2002 16:38:47 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Page OpaqueData " }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n Manfred Koizar wrote:\n> On Mon, 24 Jun 2002 12:53:42 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >the extra level of struct naming for pd_opaque has no obvious\n> >usefulness.\n> >\n> >> [...] should I post a patch that puts pagesize directly into\n> >> PageHeaderData?\n> >\n> >If you're so inclined. Given that pd_opaque is hidden in those macros,\n> >there wouldn't be much of any gain in readability either, so I haven't\n> >worried about changing the declaration.\n> \n> Thanks for the clarification. Here is the patch. Not much gain, but at\n> least it saves the next junior hacker from scratching his head ...\n> \n> Cordialmente\n> Manfredinho :-)\n> \n> PS: Please do not apply before \"Page access\" patch from 2002-06-20.\n> \n> diff -ru ../base/src/backend/access/hash/hashutil.c src/backend/access/hash/hashutil.c\n> --- ../base/src/backend/access/hash/hashutil.c\t2002-05-21 11:54:11.000000000 +0200\n> +++ src/backend/access/hash/hashutil.c\t2002-06-21 16:43:24.000000000 +0200\n> @@ -131,13 +131,13 @@\n> \tHashPageOpaque opaque;\n> \n> \tAssert(page);\n> -\tAssert(((PageHeader) (page))->pd_lower >= (sizeof(PageHeaderData) - sizeof(ItemIdData)));\n> +\tAssert(((PageHeader) (page))->pd_lower >= SizeOfPageHeaderData);\n> #if 1\n> \tAssert(((PageHeader) (page))->pd_upper <=\n> \t\t (BLCKSZ - MAXALIGN(sizeof(HashPageOpaqueData))));\n> \tAssert(((PageHeader) (page))->pd_special ==\n> \t\t (BLCKSZ - MAXALIGN(sizeof(HashPageOpaqueData))));\n> -\tAssert(((PageHeader) (page))->pd_opaque.od_pagesize == BLCKSZ);\n> +\tAssert(PageGetPageSize(page) == BLCKSZ);\n> #endif\n> \tif (flags)\n> 
\t{\n> diff -ru ../base/src/include/storage/bufpage.h src/include/storage/bufpage.h\n> --- ../base/src/include/storage/bufpage.h\t2002-06-20 12:22:21.000000000 +0200\n> +++ src/include/storage/bufpage.h\t2002-06-21 16:38:17.000000000 +0200\n> @@ -65,8 +65,7 @@\n> * byte-offset position, tuples can be physically shuffled on a page\n> * whenever the need arises.\n> *\n> - * AM-generic per-page information is kept in the pd_opaque field of\n> - * the PageHeaderData.\t(Currently, only the page size is kept here.)\n> + * AM-generic per-page information is kept in PageHeaderData.\n> *\n> * AM-specific per-page data (if any) is kept in the area marked \"special\n> * space\"; each AM has an \"opaque\" structure defined somewhere that is\n> @@ -92,25 +91,18 @@\n> \n> \n> /*\n> + * disk page organization\n> * space management information generic to any page\n> *\n> - *\t\tod_pagesize\t\t- size in bytes.\n> - *\t\t\t\t\t\t Minimum possible page size is perhaps 64B to fit\n> - *\t\t\t\t\t\t page header, opaque space and a minimal tuple;\n> - *\t\t\t\t\t\t of course, in reality you want it much bigger.\n> - *\t\t\t\t\t\t On the high end, we can only support pages up\n> - *\t\t\t\t\t\t to 32KB because lp_off/lp_len are 15 bits.\n> - */\n> -typedef struct OpaqueData\n> -{\n> -\tuint16\t\tod_pagesize;\n> -} OpaqueData;\n> -\n> -typedef OpaqueData *Opaque;\n> -\n> -\n> -/*\n> - * disk page organization\n> + *\t\tpd_lower \t- offset to start of free space.\n> + *\t\tpd_upper \t- offset to end of free space.\n> + *\t\tpd_special \t- offset to start of special space.\n> + *\t\tpd_pagesize\t- size in bytes.\n> + *\t\t\t\t\t Minimum possible page size is perhaps 64B to fit\n> + *\t\t\t\t\t page header, opaque space and a minimal tuple;\n> + *\t\t\t\t\t of course, in reality you want it much bigger.\n> + *\t\t\t\t\t On the high end, we can only support pages up\n> + *\t\t\t\t\t to 32KB because lp_off/lp_len are 15 bits.\n> */\n> typedef struct PageHeaderData\n> {\n> @@ -124,7 +116,7 
@@\n> \tLocationIndex pd_lower;\t\t/* offset to start of free space */\n> \tLocationIndex pd_upper;\t\t/* offset to end of free space */\n> \tLocationIndex pd_special;\t/* offset to start of special space */\n> -\tOpaqueData\tpd_opaque;\t\t/* AM-generic information */\n> +\tuint16\t\tpd_pagesize;\n> \tItemIdData\tpd_linp[1];\t\t/* beginning of line pointer array */\n> } PageHeaderData;\n> \n> @@ -216,14 +208,14 @@\n> * however, it can be called on a page for which there is no buffer.\n> */\n> #define PageGetPageSize(page) \\\n> -\t((Size) ((PageHeader) (page))->pd_opaque.od_pagesize)\n> +\t((Size) ((PageHeader) (page))->pd_pagesize)\n> \n> /*\n> * PageSetPageSize\n> *\t\tSets the page size of a page.\n> */\n> #define PageSetPageSize(page, size) \\\n> -\t(((PageHeader) (page))->pd_opaque.od_pagesize = (size))\n> +\t(((PageHeader) (page))->pd_pagesize = (size))\n> \n> /* ----------------\n> *\t\tpage special data macros\n> \n> Servus\n> Manfred\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 2 Jul 2002 02:18:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Page OpaqueData" } ]
[ { "msg_contents": "I've got patches for the CREATE CAST/DROP CAST feature (just a\nrearrangement of our existing function declaration syntax). The SQL99\nform assumes that an existing function will be used for the cast\ndefinition, so I've extended the syntax to allow that and to have an\nalternate form which has more of our CREATE FUNCTION functionality.\n\nI'm also looking at the SQL99 INFORMATION_SCHEMA views. Is anyone\nalready defining these? Is someone interested in picking this up? I've\ngot some definitions in a contrib-style directory but have not yet\nmapped them to PostgreSQL.\n\nThe initdb folks may want to start thinking about the best way to\nsupport a larger number of views; currently they are embedded directly\ninto the initdb script but that would get unwieldy with more of them\n(and some of them are *really* fat definitions!).\n\n - Thomas\n", "msg_date": "Thu, 20 Jun 2002 09:14:55 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "SQL99, CREATE CAST, and initdb" }, { "msg_contents": "Thomas Lockhart writes:\n\n> I've got patches for the CREATE CAST/DROP CAST feature (just a\n> rearrangement of our existing function declaration syntax). The SQL99\n> form assumes that an existing function will be used for the cast\n> definition, so I've extended the syntax to allow that and to have an\n> alternate form which has more of our CREATE FUNCTION functionality.\n\nCould you provide more precise details? I've thought of this before, when\nthe new \"may be a cast function\" feature was added, and this feature\ndidn't match very well.\n\n> I'm also looking at the SQL99 INFORMATION_SCHEMA views. Is anyone\n> already defining these?\n\nYes. 
I'm done through section 20.18 (COLUMNS view).\n\n> The initdb folks may want to start thinking about the best way to\n> support a larger number of views; currently they are embedded directly\n> into the initdb script but that would get unwieldy with more of them\n> (and some of them are *really* fat definitions!).\n\nI think they can be loaded from an external file.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 20 Jun 2002 21:09:14 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: SQL99, CREATE CAST, and initdb" }, { "msg_contents": "> I'm also looking at the SQL99 INFORMATION_SCHEMA views. Is anyone\n> already defining these? Is someone interested in picking this up?\nI've\n> got some definitions in a contrib-style directory but have not yet\n> mapped them to PostgreSQL.\n\nI have a few of the basics done, but nothing really significant.\n\nOnce I get domains fairly fixed (don't think I'll get check\nconstraints done), I was going to attempt to finish off the small\ngroup (triggers, views, schemata, domains, etc.) for early July.\n\nIf you're interested, send me a note and I'll forward the view\ndefinitions and the patch for functions -- against fairly old source\n(early 7.3).\n\n", "msg_date": "Thu, 20 Jun 2002 17:31:42 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: SQL99, CREATE CAST, and initdb" }, { "msg_contents": "> > I've got patches for the CREATE CAST/DROP CAST feature (just a\n> > rearrangement of our existing function declaration syntax). The SQL99\n> > form assumes that an existing function will be used for the cast\n> > definition, so I've extended the syntax to allow that and to have an\n> > alternate form which has more of our CREATE FUNCTION functionality.\n> Could you provide more precise details? 
I've thought of this before, when\n> the new \"may be a cast function\" feature was added, and this feature\n> didn't match very well.\n\nIt doesn't match perfectly in that one field is ignored as being\n(afaict) redundant for us. The basic definition from SQL99 is\n\nCREATE CAST(from AS to) WITH FUNCTION func(args) [AS ASSIGNMENT]\n\nI can map this to something equivalent to\n\nCREATE FUNCTION to(from) RETURNS to AS 'select func($1)' LANGUAGE 'sql';\n\nwith another clause or two to get the implicit coercion enabled, and\nignoring the \"args\" field(s).\n\nThis supposes that a coercion function of some other name already\nexists, and if I define one it seems to work nicely. I defined two\nalternate forms, one resembling the SQL99 clauses and one resembling the\nexisting PostgreSQL CREATE FUNCTION clauses, as follows:\n\nCREATE CAST(from AS to) WITH FUNCTION func(args) AS 'path' WITH ...\n\nand\n\nCREATE CAST(from AS to) AS 'path' WITH ...\n\nand both of these latter forms allow one to eliminate a corresponding\nCREATE FUNCTION.\n\n> > I'm also looking at the SQL99 INFORMATION_SCHEMA views. Is anyone\n> > already defining these?\n> Yes. I'm done through section 20.18 (COLUMNS view).\n\nGreat. I'll stop looking at it then.\n\n> > The initdb folks may want to start thinking about the best way to\n> > support a larger number of views; currently they are embedded directly\n> > into the initdb script but that would get unwieldy with more of them\n> > (and some of them are *really* fat definitions!).\n> I think they can be loaded from an external file.\n\nSounds good.\n\n - Thomas\n", "msg_date": "Fri, 21 Jun 2002 18:30:27 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: SQL99, CREATE CAST, and initdb" }, { "msg_contents": "I've gone ahead and committed patches for CREATE CAST/DROP CAST, as well\nas for a few other SQL99 clauses in other statements. 
Details below...\n\n - Thomas\n\nImplement SQL99 CREATE CAST and DROP CAST statements.\n Also implement alternative forms to expose the PostgreSQL CREATE\nFUNCTION\n features.\nImplement syntax for READ ONLY and READ WRITE clauses in SET\nTRANSACTION.\n READ WRITE is already implemented (of course).\nImplement syntax for \"LIKE table\" clause in CREATE TABLE. Should be\nfairly\n easy to complete since it resembles SELECT INTO.\nImplement MATCH SIMPLE clause for foreign key definitions. This is\nexplicit\n SQL99 syntax for the default behavior, so we now support it :)\nStart implementation of shorthand for national character literals in\n scanner. For now, just swallow the leading \"N\", but sometime soon let's\n figure out how to pass leading type info from the scanner to the\nparser.\n We should use the same technique for binary and hex bit string\nliterals,\n though it might be unusual to have two apparently independent literal\n types fold into the same storage type.\n", "msg_date": "Fri, 21 Jun 2002 19:11:32 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: SQL99, CREATE CAST, and initdb" }, { "msg_contents": "Thomas Lockhart writes:\n\n> It doesn't match perfectly in that one field is ignored as being\n> (afaict) redundant for us. The basic definition from SQL99 is\n>\n> CREATE CAST(from AS to) WITH FUNCTION func(args) [AS ASSIGNMENT]\n>\n> I can map this to something equivalent to\n>\n> CREATE FUNCTION to(from) RETURNS to AS 'select func($1)' LANGUAGE 'sql';\n>\n> with another clause or two to get the implicit coercion enabled, and\n> ignoring the \"args\" field(s).\n\nI think this is wrong. When you call CREATE CAST ... WITH FUNCTION\nfunc(args) then func(args) must already exist. So the closest you could\nmap it to would be\n\nALTER FUNCTION to(from) IMPLICIT CAST\n\niff the name of the function and the target data type agree. (Of course\nthis command doesn't exist, but you get the idea.) 
The SQL99 feature is\nmore general than ours, but in order to use it effectively we would need\nto maintain another index on pg_proc. Tom Lane once opined that that\nwould be too costly.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n\n\n", "msg_date": "Sun, 23 Jun 2002 23:50:49 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: SQL99, CREATE CAST, and initdb" }, { "msg_contents": "Rod Taylor writes:\n\n> > I'm also looking at the SQL99 INFORMATION_SCHEMA views. Is anyone\n> > already defining these? Is someone interested in picking this up?\n> I've\n> > got some definitions in a contrib-style directory but have not yet\n> > mapped them to PostgreSQL.\n>\n> I have a few of the basics done, but nothing really significant.\n\nI guess I'll polish what I have and will commit it so that the group can\nfill in the rest at convenience.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n\n\n", "msg_date": "Sun, 23 Jun 2002 23:51:10 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: SQL99, CREATE CAST, and initdb" }, { "msg_contents": "> > It doesn't match perfectly in that one field is ignored as being\n> > (afaict) redundant for us. The basic definition from SQL99 is\n> > CREATE CAST(from AS to) WITH FUNCTION func(args) [AS ASSIGNMENT]\n> > I can map this to something equivalent to\n> > CREATE FUNCTION to(from) RETURNS to AS 'select func($1)' LANGUAGE 'sql';\n> > with another clause or two to get the implicit coercion enabled, and\n> > ignoring the \"args\" field(s).\n> I think this is wrong. When you call CREATE CAST ... WITH FUNCTION\n> func(args) then func(args) must already exist.\n\nRight. And that is what is required for SQL99 also afaict. 
There are not\nenough clauses in the SQL99 syntax to allow anything else!\n\n> So the closest you could\n> map it to would be\n> ALTER FUNCTION to(from) IMPLICIT CAST\n\nThat would require that the function to be used as the cast have the\nsame name as the underlying PostgreSQL conventions for casting\nfunctions. The implementation I've done does not require this; it\nbasically defines a new SQL function with a body of\n\nselect func($1)\n\nwhere \"func\" is the name specified in the \"WITH FUNCTION func(args)\"\nclause. It does hang together in the way SQL99 intends and in a way\nwhich is consistent with PostgreSQL's view of the world.\n\nBut, I've also implemented alternate forms which would allow one not to\ndefine a separate function beforehand. So the nice PostgreSQL feature of\nallowing function names to be different than the entry points can be\nused.\n\n> iff the name of the function and the target data type agree. (Of course\n> this command doesn't exist, but you get the idea.) The SQL99 feature is\n> more general than ours, but in order to use it effectively we would need\n> to maintain another index on pg_proc. Tom Lane once opined that that\n> would be too costly.\n\nI don't follow you here, but the implementation I have is consistent\nwith SQL99 (or at least with the way I'm interpreting it :)\n\n - Thomas\n\n\n", "msg_date": "Sun, 23 Jun 2002 17:30:20 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: SQL99, CREATE CAST, and initdb" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> So the closest you could\n>> map it to would be\n>> ALTER FUNCTION to(from) IMPLICIT CAST\n\n> That would require that the function to be used as the cast have the\n> same name as the underlying PostgreSQL conventions for casting\n> functions. 
The implementation I've done does not require this; it\n> basically defines a new SQL function with a body of\n> select func($1)\n> where \"func\" is the name specified in the \"WITH FUNCTION func(args)\"\n> clause. It does hang together in the way SQL99 intends and in a way\n> which is consistant with PostgreSQL's view of the world.\n\nUrk. Do you realize how expensive SQL functions are for such uses?\n(I have had a to-do item for awhile to teach the planner to inline\ntrivial SQL functions, but it seems unlikely to happen for another\nrelease or three.)\n\nI see no real reason why we should not require casting functions to\nfollow the Postgres naming convention --- after all, what else would\nyou name a casting function?\n\nSo I'm with Peter on this one: make the SQL99 syntax a mere wrapper\nfor setting the IMPLICIT CAST bit on an existing function. Otherwise,\npeople will avoid it as soon as they discover what it's costing them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jun 2002 16:43:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL99, CREATE CAST, and initdb " }, { "msg_contents": "> I see no real reason why we should not require casting functions to\n> follow the Postgres naming convention --- after all, what else would\n> you name a casting function?\n\nWe do require casting functions to follow the Postgres naming\nconvention. istm to be a waste of time to have the CREATE CAST() feature\n*only* set a bit on an existing function, especially given the SQL99\nsyntax which implies that it can define a cast operation for an\narbitrarily named function. It also supposes that the only allowed casts\nare *implicit casts* (see below for a new issue) which is not quite\nright. 
I've defined alternate forms which draw on the general PostgreSQL\nfeature set and capabilities, but if we can fit the SQL99 model then we\nshould go ahead and do that too.\n\nI've got another issue with casting which I've run into while testing\nthis feature; afaict invoking an explicit CAST() in SQL does not\nguarantee that the function of the expected name would be called, if\nthat function does not have the implicit flag set. Seems that it should\nbe willing to do the conversion even if the function is not marked as\nallowing implicit casts; after all, this is an *explicit* cast!\n\nI'm pretty sure that this is the behavior I've been seeing, but will\npublish a test case to confirm it when I have a chance.\n\n - Thomas\n\n\n", "msg_date": "Mon, 24 Jun 2002 17:16:46 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: SQL99, CREATE CAST, and initdb" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> I've got another issue with casting which I've run into while testing\n> this feature; afaict invoking an explicit CAST() in SQL does not\n> guarantee that the function of the expected name would be called, if\n> that function does not have the implicit flag set.\n\n[ scratches head ] Whether the flag is set or not shouldn't matter;\nif the cast function is needed it will be called. Were you perhaps\ntesting binary-compatible cases? 
Note the order of cases specified in\nhttp://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/typeconv-func.html\n\nI recall we changed what is now case 2 to be higher priority than it\nused to be; I do not recall the examples that motivated that change,\nbut I'm pretty sure moving it down in the priority list would be bad.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jun 2002 20:45:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL99, CREATE CAST, and initdb " }, { "msg_contents": "I said:\n> Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> I've got another issue with casting which I've run into while testing\n>> this feature; afaict invoking an explicit CAST() in SQL does not\n>> guarantee that the function of the expected name would be called, if\n>> that function does not have the implicit flag set.\n\n> [ scratches head ] Whether the flag is set or not shouldn't matter;\n> if the cast function is needed it will be called. Were you perhaps\n> testing binary-compatible cases?\n\nAnother possibility is that you got burnt by some schema-related issue;\ncf the updated conversion docs at\nhttp://developer.postgresql.org/docs/postgres/typeconv-func.html\n\nIIRC, a function is only considered to be a cast function if it matches\nby name *and schema* with the target type. So if you, for example,\nmake a function public.int4(something), it'll never be considered a\ncast function for pg_catalog.int4. 
I had some doubts about that rule\nwhen I put it in, but so far have not thought of an alternative I like\nbetter.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jun 2002 20:55:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL99, CREATE CAST, and initdb " }, { "msg_contents": "> Another possibility is that you got burnt by some schema-related issue;\n> cf the updated conversion docs at\n> http://developer.postgresql.org/docs/postgres/typeconv-func.html\n\nI'll bet that is it, though possible differences in CAST() behaviors are\nnot explained. I'll see if I can reproduce them...\n\n> IIRC, a function is only considered to be a cast function if it matches\n> by name *and schema* with the target type. So if you, for example,\n> make a function public.int4(something), it'll never be considered a\n> cast function for pg_catalog.int4. I had some doubts about that rule\n> when I put it in, but so far have not thought of an alternative I like\n> better.\n\nWell, istm that we should choose something different. The example I was\nusing might be a good use case for a situation we should handle: I\nimplemented a function to convert Unix system time to PG timestamp, and\nwanted it to respond to an explicit cast but *not* an implicit cast.\n\nI got it to work at some point (not sure how, given your description of\nthe schema, uh, scheme) but istm that we definitely do not want to\n*require* modifications to pg_catalog for any and every change in\nfeature or behavior for built-in types. The schema settings are\nimportant, and should have some influence over behavior; that is, if\nsomeone extends PG in one schema then if that schema is in the chain it\nshould be able to influence the session, and if it is not then it should\nonly be able to influence the session if there are side-effects from\nprevious definitions.\n\nbtw, how *do* I control the default schema? 
Is it always the schema at\nthe front of the search list, or are there other more direct knobs to\nhelp determine this other than explicitly qualifying names in queries?\n\n - Thomas\n\n\n", "msg_date": "Tue, 25 Jun 2002 07:58:49 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: SQL99, CREATE CAST, and initdb" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> IIRC, a function is only considered to be a cast function if it matches\n>> by name *and schema* with the target type. So if you, for example,\n>> make a function public.int4(something), it'll never be considered a\n>> cast function for pg_catalog.int4. I had some doubts about that rule\n>> when I put it in, but so far have not thought of an alternative I like\n>> better.\n\n> Well, istm that we should choose something different.\n\nWell, let's see an alternate proposal.\n\n> I got it to work at some point (not sure how, given your description of\n> the schema, uh, scheme) but istm that we definitely do not want to\n> *require* modifications to pg_catalog for any and every change in\n> feature or behavior for built-in types.\n\nIf we just look for \"anything named int4() in the current search path\"\nthen I think we will have some unpleasantnesses of a different sort,\nnamely unexpected conflicts between similarly-named types in different\nschemas.\n\n> btw, how *do* I control the default schema? Is it always the schema at\n> the front of the search list,\n\nIf you mean the default schema for creating things, yes, it's whatever\nis at the front of the search list. Should it be different?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2002 11:14:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL99, CREATE CAST, and initdb " }, { "msg_contents": "> Well, let's see an alternate proposal.\n\nOK. 
Proposal:\n\nSchemas should be able to contain extensions to any and all features\navailable in the database. Features are resolved by searching the path,\nchoosing the first exact-match in the path. Under-specified cases should\nbe resolved from within the entire set of schemas, using the match\nclosest to the front of the schema list as a tie-breaker.\n\n> If we just look for \"anything named int4() in the current search path\"\n> then I think we will have some unpleasantnesses of a different sort,\n> namely unexpected conflicts between similarly-named types in different\n> schemas.\n\nYup. Schemas are a larger gun and can be loaded with larger bullets, and\nfeet should beware. We should get the expected behavior if they are not\nbeing explicitly used, and we should get the most powerful behavior if\nthey are imho.\n\n> > btw, how *do* I control the default schema? Is it always the schema at\n> > the front of the search list,\n> If you mean the default schema for creating things, yes, it's whatever\n> is at the front of the search list. Should it be different?\n\nWe should at least have some ability to incrementally add and subtract\nfrom the search list. Something like SET SCHEMA = 'foo' would be\nhelpful. Since we don't currently have the search list as a queryable\n(sp?) entity, it is difficult to manipulate on the fly afaict.\n\n - Thomas\n\n\n", "msg_date": "Tue, 25 Jun 2002 08:30:29 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: SQL99, CREATE CAST, and initdb" }, { "msg_contents": "Tom Lane writes:\n\n> IIRC, a function is only considered to be a cast function if it matches\n> by name *and schema* with the target type. So if you, for example,\n> make a function public.int4(something), it'll never be considered a\n> cast function for pg_catalog.int4. 
I had some doubts about that rule\n> when I put it in, but so far have not thought of an alternative I like\n> better.\n\nPerhaps it wouldn't be such a terrible idea after all to store the casting\npaths separately, such as in a system table pg_cast (from, to, func,\nimplicit). This would implement the SQL99 spec fairly exactly.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 22:43:34 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: SQL99, CREATE CAST, and initdb " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Perhaps it wouldn't be such a terrible idea after all to store the casting\n> paths separately, such as in a system table pg_cast (from, to, func,\n> implicit). This would implement the SQL99 spec fairly exactly.\n\nWell, maybe. One question is how that would fit in with schemas.\nThomas appears to want your schema search path to have some effect on\nwhich casts you can see --- which I'm not at all sure I agree with,\nbut if that's the requirement then the above doesn't do it either.\n\nIf we just want to get out from under the coupling of function name to\ncast status, the above would do it ... and also break existing\napplications that aren't expecting to have to do something special to\nmake a function of the right name become a cast function. Perhaps there\ncould be a GUC variable to allow created functions matching the old\nnaming convention to be automatically made into casts? We could default\nit to 'true' for a release or two and then default to 'false'.\n\nBTW, the above would also provide a place to encode binary compatibility\nassociations in the DB, rather than hard-wired, which would be a Good\nThing. 
You could say that func == 0 means that no actual function call\nis needed to transform type 'from' to 'to'.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2002 20:10:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL99, CREATE CAST, and initdb " }, { "msg_contents": "On Thu, 2002-06-27 at 02:10, Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Perhaps it wouldn't be such a terrible idea after all to store the casting\n> > paths separately, such as in a system table pg_cast (from, to, func,\n> > implicit). This would implement the SQL99 spec fairly exactly.\n> \n> Well, maybe. One question is how that would fit in with schemas.\n> Thomas appears to want your schema search path to have some effect on\n> which casts you can see --- which I'm not at all sure I agree with,\n\nI hope that schema search path has some effect on other user-defined\nstuff like simple functions and operators.\n\n> but if that's the requirement then the above doesn't do it either.\n\nWhat is and what is not affected by schemas ?\n\nAre the docs on our schema usage already available someplace ?\n\n\n-------------\nHannu\n\n\n", "msg_date": "27 Jun 2002 14:31:34 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: SQL99, CREATE CAST, and initdb" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Are the docs on our schema usage already available someplace ?\n\nYes, although there's not a pulled-together introduction (someone needs\nto write a section for the tutorial, I think). 
Try\nhttp://developer.postgresql.org/docs/postgres/sql-naming.html\nand see the SEARCH_PATH variable at\nhttp://developer.postgresql.org/docs/postgres/runtime-config.html#RUNTIME-CONFIG-GENERAL\nas well as the schema-aware rules for resolution of overloaded functions\nand operators:\nhttp://developer.postgresql.org/docs/postgres/typeconv-func.html\nhttp://developer.postgresql.org/docs/postgres/typeconv-oper.html\nalso various new functions at\nhttp://developer.postgresql.org/docs/postgres/functions-misc.html\nhttp://developer.postgresql.org/docs/postgres/datatype-oid.html\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Jun 2002 10:55:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL99, CREATE CAST, and initdb " }, { "msg_contents": "Tom Lane writes:\n\n> Thomas appears to want your schema search path to have some effect on\n> which casts you can see --- which I'm not at all sure I agree with,\n> but if that's the requirement then the above doesn't do it either.\n\nIf I understand this right, this would be nearly analogous to determining\nan operator's underlying function by schema path. That smells an awful\nlot like dynamic scoping, a.k.a. a bad idea, and completely inconsistent\nwith the rest of the system.\n\n> If we just want to get out from under the coupling of function name to\n> cast status, the above would do it ... and also break existing\n> applications that aren't expecting to have to do something special to\n> make a function of the right name become a cast function. Perhaps there\n> could be a GUC variable to allow created functions matching the old\n> naming convention to be automatically made into casts? We could default\n> it to 'true' for a release or two and then default to 'false'.\n\nSure. 
However, AFAIK, the current development progress has already broken\nthe previous expectations slightly by requiring that implicit casting\npaths be explicitly declared.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n\n\n\n", "msg_date": "Fri, 28 Jun 2002 00:48:20 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: SQL99, CREATE CAST, and initdb " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> Thomas appears to want your schema search path to have some effect on\n>> which casts you can see --- which I'm not at all sure I agree with,\n>> but if that's the requirement then the above doesn't do it either.\n\n> If I understand this right, this would be nearly analogous to determining\n> an operator's underlying function by schema path. That smells an awful\n> lot like dynamic scoping, a.k.a. a bad idea, and completely inconsistent\n> with the rest of the system.\n\nI don't like it either. ISTM that the casting relationship between two\ntypes is a property of those types and should *not* be affected by your\nsearch path. Maybe you referenced one or both types by qualified\nschema names, rather than by finding them in your path, but should that\nkeep you from seeing the cast? Especially since there's no obvious\nplace in the CAST syntax to attach a schema qualification, if we try\nto insist that one might be needed to get at the desired cast function.\n\nAn extreme case is binary-equivalence, which as I mentioned maps nicely\ninto the sort of pg_cast table you suggested. It doesn't map nicely\ninto anything that involves schema visibility --- there is no cast\nfunction to hide or make visible. Even more to the point, if types A\nand B are binary-equivalent, should changing my search path make them\nstop being so? Doesn't make sense to me.\n\n> Sure. 
However, AFAIK, the current development progress has already broken\n> the previous expectations slightly by requiring that implicit casting\n> paths be explicitly declared.\n\nTrue. If we wanted to maintain the old behavior exactly then we could\nallow this hypothetical GUC variable to also cause old-convention cast\nfunctions to be automatically marked IMPLICIT CAST. (I suppose the\nIMPLICIT CAST bit would actually stop being a property of functions at\nall, and would become a column of pg_cast.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 28 Jun 2002 00:12:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL99, CREATE CAST, and initdb " } ]
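[Editor's note: to make the catalogued-casts idea in this thread concrete, here is a hedged SQL sketch. The function and type names are illustrative only, and the WITHOUT FUNCTION spelling for the binary-compatible ("func == 0") case is an assumed extension of the SQL99 syntax under discussion, not something the thread itself fixes.]

```sql
-- Illustrative sketch only: a cast function whose name does not match
-- the target type, registered through the SQL99-style syntax discussed
-- above.  The function must already exist before CREATE CAST names it.
CREATE FUNCTION unix_to_timestamp(integer) RETURNS timestamp
    AS 'SELECT timestamp ''epoch'' + $1 * interval ''1 second'''
    LANGUAGE SQL;

-- Registers an explicit cast; appending AS ASSIGNMENT (the SQL99 clause
-- quoted in the thread) would additionally mark it for assignment
-- contexts, i.e. the "implicit" column of the proposed pg_cast table.
CREATE CAST (integer AS timestamp)
    WITH FUNCTION unix_to_timestamp(integer);

-- Binary-compatible pair: the "func == 0" case, no function call needed.
CREATE CAST (text AS varchar) WITHOUT FUNCTION;
```

This mirrors Thomas's Unix-system-time example: the cast is usable via CAST(x AS timestamp) without the function having to be named "timestamp".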
[ { "msg_contents": "I see in ri_triggers.c:\n\n * Portions Copyright (c) 1996-2002, PostgreSQL Global Development Group\n * Copyright 1999 Jan Wieck\n\nJan, are you holding copyright on this or is it dual, and what does dual\nmean in this case?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 20 Jun 2002 16:51:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Copyright" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> I see in ri_triggers.c:\n> \n> * Portions Copyright (c) 1996-2002, PostgreSQL Global Development Group\n> * Copyright 1999 Jan Wieck\n> \n> Jan, are you holding copyright on this or is it dual, and what does dual\n> mean in this case?\n\nSure do I, do I? Hmmm, I can't even tell what percentage of the\nsourcecode is mine. I guess the second line predates the\n\"Portions Copyright\" and the one who plastered that simply failed\nto remove my name. If memory serves I agreed to removal of all\nmy personal Copyright lines when we discussed the new License\ntext. Didn't I?\n\nSo reading Copyright lines is what you do at teatime? Man, don't\nyou have to tickle Catherine, apply some patches or do something\nelse that has value?\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being\nright. #\n# Let's break this rule - forgive\nme. 
#\n#==================================================\nJanWieck@Yahoo.com #\n", "msg_date": "Thu, 20 Jun 2002 22:23:36 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Copyright" }, { "msg_contents": "Jan Wieck wrote:\n> Bruce Momjian wrote:\n> > \n> > I see in ri_triggers.c:\n> > \n> > * Portions Copyright (c) 1996-2002, PostgreSQL Global Development Group\n> > * Copyright 1999 Jan Wieck\n> > \n> > Jan, are you holding copyright on this or is it dual, and what does dual\n> > mean in this case?\n> \n> Sure do I, do I? Hmmm, I can't even tell what percentage of the\n> sourcecode is mine. I guess the second line predates the\n> \"Portions Copyright\" and the one who plastered that simply failed\n> to removed my name. If memory serves I agreed to removal of all\n> my personal Copyright lines when we discussed the new License\n> text. Didn't I?\n\nRemoved.\n\n> So reading Copyright lines is what you do at teatime? Man, don't\n> you have to tickle Catherine, apply some patches or do something\n> else that has value?\n\n:-)\n\nGoing through my email, Vince mentioned it. It isn't glamorous, but it\nneeds to be done.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 20 Jun 2002 23:00:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Copyright" } ]
[ { "msg_contents": "$ cvs -z3 up -d -P\ncvs [server aborted]: read lock failed - giving up\n\nHmm, been this way for a while now, something need budging? :)\n\n-d\n\n\n", "msg_date": "Thu, 20 Jun 2002 17:30:49 -0400", "msg_from": "David Ford <david+cert@blue-labs.org>", "msg_from_op": true, "msg_subject": "cvs read lock" }, { "msg_contents": "David Ford wrote:\n> $ cvs -z3 up -d -P\n> cvs [server aborted]: read lock failed - giving up\n> \n> Hmm, been this way for a while now, something need budging? :)\n\nI don't see the problem here. Is that CVS or anonCVS? Can you show us\nthe 'waiting' lines above this that show the lock location?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 20 Jun 2002 20:12:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: cvs read lock" }, { "msg_contents": "anoncvs, the lock cleared in the last hour. It didn't actually do any \nwaiting, it aborted almost immediately.\n\nDavid\n\n\nBruce Momjian wrote:\n\n>David Ford wrote:\n> \n>\n>>$ cvs -z3 up -d -P\n>>cvs [server aborted]: read lock failed - giving up\n>>\n>>Hmm, been this way for a while now, something need budging? :)\n>> \n>>\n>\n>I don't see the problem here. Is that CVS or anonCVS? Can you show us\n>the 'waiting' lines above this that show the lock location?\n>\n> \n>\n\n", "msg_date": "Thu, 20 Jun 2002 22:23:29 -0400", "msg_from": "David Ford <david+cert@blue-labs.org>", "msg_from_op": true, "msg_subject": "Re: cvs read lock" } ]
[ { "msg_contents": "Hi,\n\nI am about to add code to postgresql that would allow IDENT\nauthentication with DES encryption (as seen in the pidentd package\nincluded with Redhat - not sure if same encryption is used by other\nident daemons). The code would allow for two types of IDENT\nauthentication:\n\nident - this is the same as before, except now it will try to decrypt\nusername if IDENT response is surrounded in braces.\nident-des - this will only allow encrypted IDENT responses.\n\nKeys will be kept in a file: $PG_DATA/pg_ident.key.\n\nThe code will probably also require that UID's on the client machine and\nin postgresql all correspond. If not, a map could be used.\n\nDoes anyone have any suggestions about this? Has anyone done this?\n\nDavid\n\n\n\n", "msg_date": "Thu, 20 Jun 2002 15:48:09 -0700", "msg_from": "\"David M. Kaplan\" <dmkaplan@ucdavis.edu>", "msg_from_op": true, "msg_subject": "Adding encrypted identd authetification" }, { "msg_contents": "\"David M. Kaplan\" <dmkaplan@ucdavis.edu> writes:\n> I am about to add code to postgresql that would allow IDENT\n> authentication with DES encryption (as seen in the pidentd package\n> included with Redhat - not sure if same encryption is used by other\n> ident daemons).\n\nWhat's the point, exactly?\n\nFor local connections, you do not need encryption, and for remote\nconnections it's sheer folly to use IDENT anyway. So I'm having a\nhard time identifying the space of applicability...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Jun 2002 13:19:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Adding encrypted identd authetification " } ]
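[Editor's note: for orientation, the proposal in this thread would surface to administrators as a second authentication keyword in pg_hba.conf. The sketch below is hypothetical; the record layout only approximates the 7.2-era address/mask format and the `ident-des` keyword and map column are the proposal's assumptions, not shipped syntax.]

```
# Hypothetical pg_hba.conf entries for the proposed scheme:
#   ident      - accepts plain responses, decrypts brace-wrapped ones
#   ident-des  - accepts DES-encrypted IDENT responses only
host  all  192.168.0.0  255.255.255.0  ident      sameuser
host  all  192.168.0.0  255.255.255.0  ident-des  sameuser
```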
[ { "msg_contents": "Since it's currently all for collecting statistics on tables, why can't it\ncollect another type of statistic, like:\n\n- How often the estimator gets it wrong?\n\nAt the end of an index scan, the executor could compare the number of rows\nreturned against what was estimated, and if it falls outside a certain\nrange, flag it.\n\nAlso, the average ratio of rows coming out of a distinct node vs the number\ngoing in.\n\nFor a join clause, the amount of correlation between two columns (hard).\n\netc\n\nIdeally, the planner could then use this info to make better plans.\nEventually, the whole system could become somewhat self-tuning.\n\nDoes anyone see any problems with this?\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n", "msg_date": "Fri, 21 Jun 2002 10:23:04 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": true, "msg_subject": "Idea for the statistics collector" }, { "msg_contents": "Martijn van Oosterhout wrote:\n> Since it's currently all for collecting statistics on tables, why can't it\n> collect another type of statistic, like:\n> \n> - How often the estimator gets it wrong?\n> \n> At the end of an index scan, the executor could compare the number of rows\n> returned against what was estimated, and if it falls outside a certain\n> range, flag it.\n> \n> Also, the average ratio of rows coming out of a distinct node vs the number\n> going in.\n> \n> For a join clause, the amount of correlation between two columns (hard).\n> \n> etc\n> \n> Ideally, the planner could then use this info to make better plans.\n> Eventually, the whole system could become somewhat self-tuning.\n> \n> Does anyone see any problems with this?\n\n[ Discussion moved to hackers.]\n\nI have thought that some type of feedback from the executor back into\nthe optimizer would be a good feature. 
Not sure how to do it, but your\nidea makes sense. It certainly could update the table statistics after\na sequential scan.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 20 Jun 2002 22:50:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Idea for the statistics collector" }, { "msg_contents": "On Thu, 20 Jun 2002 22:50:04 -0400 (EDT)\n\"Bruce Momjian\" <pgman@candle.pha.pa.us> wrote:\n> I have thought that some type of feedback from the executor back into\n> the optimizer would be a good feature. Not sure how to do it, but your\n> idea makes sense. It certainly could update the table statistics after\n> a sequential scan.\n\nSearch the archives for a thread I started on -hackers called \"self-tuning\nhistograms\", which talks about a pretty similar idea. The technique there\napplies only to histograms, and builds the histogram based *only* upon\nthe data provided by the executor.\n\nTom commented that it's probably a better idea to concentrate on more\nelementary techniques, like multi-dimensional histograms, before starting\non ST histograms. I agree, and plan to look at multi-dimensional histograms\nwhen I get some spare time.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Thu, 20 Jun 2002 23:55:38 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Idea for the statistics collector" }, { "msg_contents": "Neil Conway wrote:\n> On Thu, 20 Jun 2002 22:50:04 -0400 (EDT)\n> \"Bruce Momjian\" <pgman@candle.pha.pa.us> wrote:\n> > I have thought that some type of feedback from the executor back into\n> > the optimizer would be a good feature. Not sure how to do it, but your\n> > idea makes sense. 
It certainly could update the table statistics after\n> > a sequential scan.\n> \n> Search the archives for a thread I started on -hackers called \"self-tuning\n> histograms\", which talks about a pretty similar idea. The technique there\n> applies only to histograms, and builds the histogram based *only* upon\n> the data provided by the executor.\n> \n> Tom commented that it's probably a better idea to concentrate on more\n> elementary techniques, like multi-dimensional histograms, before starting\n> on ST histograms. I agree, and plan to look at multi-dimensional histograms\n> when I get some spare time.\n\nI was thinking of something much more elementary, like a table that\nreports to have 50 blocks but an executor sequential scan shows 500\nblocks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 20 Jun 2002 23:59:11 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Idea for the statistics collector" }, { "msg_contents": "Martijn van Oosterhout <kleptog@svana.org> writes:\n> Since it's currently all for collecting statistics on tables, why can't it\n> collect another type of statistic, like:\n> - How often the estimator gets it wrong?\n> [snip]\n> Does anyone see any problems with this?\n\n(1) forced overhead on *every* query.\n(2) contention to update the same rows of pg_statistic (or wherever you\n plan to store this info).\n(3) okay, so the estimate was wrong; exactly which of the many\n parameters that went into the estimate do you plan to twiddle?\n What if it's not the parameter values that are at fault, but the\n cost-model equations themselves?\n\nClosed-loop feedback is a great thing when you understand the dynamics\nof the system you intend to apply feedback control to. 
When you don't,\nit's a great way to shoot yourself in the foot. Unfortunately I don't\nthink the PG optimizer falls in the first category at present.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Jun 2002 00:47:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Idea for the statistics collector " }, { "msg_contents": "On Fri, Jun 21, 2002 at 12:47:18AM -0400, Tom Lane wrote:\n> Martijn van Oosterhout <kleptog@svana.org> writes:\n> > Since it's currently all for collecting statistics on tables, why can't it\n> > collect another type of statistic, like:\n> > - How often the estimator gets it wrong?\n> > [snip]\n> > Does anyone see any problems with this?\n> \n> (1) forced overhead on *every* query.\nIf you don't want it, don't use it. The current statistics have the same\nissue and you can not do those as well.\n\n> (2) contention to update the same rows of pg_statistic (or wherever you\n> plan to store this info).\n\nTrue, can't avoid that. Depends on how many queries you run. Maybe only enable\nit for specific sessions?\n\n> (3) okay, so the estimate was wrong; exactly which of the many\n> parameters that went into the estimate do you plan to twiddle?\n> What if it's not the parameter values that are at fault, but the\n> cost-model equations themselves?\n\nFirstly, I was only thinking of going for the basic nodes (Index Scan, Seq\nScan, Distinct). Other types have far more variables. Secondly, even if you\nonly count, it's useful. For example, if it tells you that the planner is\noff by a factor of 10 more than 75% of the time, that's useful information\nindependent of what the actual variables are.\n\n> Closed-loop feedback is a great thing when you understand the dynamics\n> of the system you intend to apply feedback control to. When you don't,\n> it's a great way to shoot yourself in the foot. 
Unfortunately I don't\n> think the PG optimizer falls in the first category at present.\n\nUsing the results for planning is obviously a tricky area and should proceed\nwith caution. But just collecting statistics shouldn't be too bad?\n\nSee also -hackers.\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n", "msg_date": "Fri, 21 Jun 2002 17:22:14 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": true, "msg_subject": "Re: Idea for the statistics collector" }, { "msg_contents": "Martijn van Oosterhout wrote:\n> Firstly, I was only thinking of going for the basic nodes (Index Scan, Seq\n> Scan, Distinct). Other types have far more variables. Secondly, even if you\n> only count, it's useful. For example, if it tells you that the planner is\n> off by a factor of 10 more than 75% of the time, that's useful information\n> independent of what the actual variables are.\n\nYes, only updating the stats if the estimate was off by a factor of 10\nor so should cut down on the overhead.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 21 Jun 2002 08:23:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Idea for the statistics collector" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Martijn van Oosterhout wrote:\n>> Firstly, I was only thinking of going for the basic nodes (Index Scan, Seq\n>> Scan, Distinct). Other types have far more variables. Secondly, even if you\n>> only count, it's useful. 
For example, if it tells you that the planner is\n>> off by a factor of 10 more than 75% of the time, that's useful information\n>> independent of what the actual variables are.\n\n> Yes, only updating the stats if the estimate was off by a factor of 10\n> or so should cut down on the overhead.\n\nAnd reduce the usefulness even more ;-). As a pure stats-gathering\nexercise it might be worth doing, but not if you only log the failure\ncases. How will you know how well you are doing if you take a\nbiased-by-design sample?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Jun 2002 09:39:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Idea for the statistics collector " }, { "msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Martijn van Oosterhout wrote:\n> >> Firstly, I was only thinking of going for the basic nodes (Index Scan, Seq\n> >> Scan, Distinct). Other types have far more variables. Secondly, even if you\n> >> only count, it's useful. For example, if it tells you that the planner is\n> >> off by a factor of 10 more than 75% of the time, that's useful information\n> >> independent of what the actual variables are.\n> \n> > Yes, only updating the stats if the estimate was off by a factor of 10\n> > or so should cut down on the overhead.\n> \n> And reduce the usefulness even more ;-). As a pure stats-gathering\n> exercise it might be worth doing, but not if you only log the failure\n> cases. How will you know how well you are doing if you take a\n> biased-by-design sample?\n\nSure, it is required to count all cases, success and failure. But I don't\nsee why it is required to feed that information constantly back into the\nstatistics tables. As long as we don't restart, it's perfectly good in\nthe collector. And it must not be fed back to the backend on every\nquery. \n\nMaybe ANALYZE would like to have some of that information? 
If memory\nserves, ANALYZE does a poor job when the data isn't well distributet,\nhas few distinct values and the like. That causes wrong estimates then\n(among other things, of course). The idea could be, to have ANALYZE take\na much closer look at tables with horrible estimates, to generate better\ninformation for those.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Fri, 21 Jun 2002 10:02:27 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Idea for the statistics collector" }, { "msg_contents": "\n>Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Martijn van Oosterhout wrote:\n> > >> Firstly, I was only thinking of going for the basic nodes (Index \n> Scan, Seq\n> > >> Scan, Distinct). Other types have far more variables. Secondly, even \n> if you\n> > >> only count, it's useful. For example, if it tells you that the \n> planner is\n> > >> off by a factor of 10 more than 75% of the time, that's useful \n> information\n> > >> independant of what the actual variables are.\n> >\n> > And reduce the usefulness even more ;-). As a pure stats-gathering\n> > exercise it might be worth doing, but not if you only log the failure\n> > cases. How will you know how well you are doing if you take a\n> > biased-by-design sample?\n\nPersonally, given that it seems like at least once or twice a day someone \nasks about performance or \"why isn't my index being used\" and other stuff - \nI think doing this would be a great idea.\n\nPerhaps not necessarily in the full-fledged way, but creating a sort of \n\"ANALYZE log,\" wherein it logs the optimizer's estimate of a query and the \nactual results of a query, for every query. 
This, of course, could be \nenableable/disableable on a per-connection basis, per-table basis (like \nOIDs), or whatever other basis makes life easiest to the developers.\n\nThen, when the next ANALYZE is run, it could do it's usual analysis, and \napply some additional heuristics based upon what it learns from the \n\"ANALYZE log,\" possibly to do several things:\n\n1) Automatically increase/decrease the SET STATISTICS information included \nin the analyze, for example, increasing it as a table grows larger and the \n\"randomness\" grows less than linearly with size (e.g., if you have 50 or 60 \ngroups in a 1,000,000 row table, that certainly needs a higher SET \nSTATISTICS and I do it on my tables).\n2) Have an additional value on the statistics table called the \n\"index_heuristic\" or \"random_page_adjustment_heuristic\" which when 1 does \nnothing, but otherwise modifies the cost of using an index/seq scan by that \nfactor - and don't ever change this more than a few percent each ANALYZE\n3) Flags in a second log (maybe the regular log) really bad query estimates \n- let it do an analysis of the queries and flag anything two or three std \ndeviations outside.\n\nNow, I suggest all this stuff in the name of usability and \nself-maintainability. Unfortunately, I don't have the wherewithal to \nactually assist in development.\n\nAnother possibility is to put \"use_seq_scan\" default to OFF, or whatever \nthe parameter is (I did my optimizing a while ago so it's fading), so that \nif there's an index, it will use it, regardless - as this seems to be what \nthe great majority of people expect to happen. 
And/or add this to a FAQ, \nand let us all reply \"see http://.../indexfaq.html.\" :)\n\nCheers,\n\nDoug\n\n", "msg_date": "Fri, 21 Jun 2002 11:07:09 -0400", "msg_from": "Doug Fields <dfields@pexicom.com>", "msg_from_op": false, "msg_subject": "Re: Idea for the statistics collector" }, { "msg_contents": "I was thinking of writing a command line tool like 'pgtune' that looks at\nthe stats views and will generate SQL code for, or do automatically the\nfollowing:\n\n* Dropping indices that are never used\n* Creating appropriate indices to avoid large, expensive sequential scans.\n\nThis would put us in the 'mysql makes my indices for me by magic' league -\nbut would be far more powerful and flexible. How to do multikey indices is\nbeyond me tho.\n\n*sigh* I'm recovering from a septoplasty on my nose atm, so I might have\nsome time to do some coding!\n\nChris\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Martijn van Oosterhout\" <kleptog@svana.org>\nCc: \"PostgreSQL-development\" <pgsql-hackers@postgresql.org>\nSent: Friday, June 21, 2002 10:50 AM\nSubject: Re: [HACKERS] [GENERAL] Idea for the statistics collector\n\n\n> Martijn van Oosterhout wrote:\n> > Since it's currently all for collecting statistics on tables, why can't\nit\n> > collect another type of statistic, like:\n> >\n> > - How often the estimator gets it wrong?\n> >\n> > At the end of an index scan, the executor could compare the number of\nrows\n> > returned against what was estimated, and if it falls outside a certain\n> > range, flag it.\n> >\n> > Also, the average ratio of rows coming out of a distinct node vs the\nnumber\n> > going in.\n> >\n> > For a join clause, the amount of correlation between two columns (hard).\n> >\n> > etc\n> >\n> > Ideally, the planner could then use this info to make better plans.\n> > Eventually, the whole system could become somewhat self-tuning.\n> >\n> > Does anyone see any problems with this?\n>\n> [ Discussion moved to 
hackers.]\n>\n> I have thought that some type of feedback from the executor back into\n> the optimizer would be a good feature. Not sure how to do it, but your\n> idea makes sense. It certainly could update the table statistics after\n> a sequential scan.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\n\n", "msg_date": "Sat, 22 Jun 2002 15:48:23 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Idea for the statistics collector" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> I was thinking of writing a command line tool like 'pgtune' that looks at\n> the stats views and will generate SQL code for, or do automatically the\n> following:\n> \n> * Dropping indices that are never used\n> * Creating appropriate indices to avoid large, expensive sequential scans.\n> \n> This would put us in the 'mysql makes my indices for me by magic' league -\n> but would be far more powerful and flexible. How to do multikey indices is\n> beyond me tho.\n\nThis is a great idea. I have been wanting to do something like this\nmyself but probably won't get the time.\n\nDoes MySQL really make indexes by magic?\n\nAlso, I had to look up the contraction for \"will not\" because I always\nget that confused (won't). I just found a web page on it:\n\n\thttp://www.straightdope.com/mailbag/mwont.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Mon, 24 Jun 2002 10:39:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Idea for the statistics collector" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> I was thinking of writing a command line tool like 'pgtune' that looks at\n> the stats views and will generate SQL code for, or do automatically the\n> following:\n\n> * Dropping indices that are never used\n> * Creating appropriate indices to avoid large, expensive sequential scans.\n\nDropping unused indices sounds good --- but beware of dropping unique\nindexes; they may be there to enforce a constraint, and not because of\nany desire to use them in queries.\n\nI'm not sure how you're going to automatically intuit appropriate\nindexes to add, though. You'd need to look at a suitable workload\n(ie, a representative set of queries) which is not data that's readily\navailable from the stats views. Perhaps we could expect the DBA to\nprovide a segment of log output that includes debug_print_query\nand show_query_stats results.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jun 2002 11:34:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Idea for the statistics collector " }, { "msg_contents": "\nAdded to TODO list:\n\n\t* Log queries where the optimizer row estimates were dramatically\n\t different from the number of rows actually found (?)\n\n---------------------------------------------------------------------------\n\nDoug Fields wrote:\n> \n> >Tom Lane wrote:\n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > Martijn van Oosterhout wrote:\n> > > >> Firstly, I was only thinking of going for the basic nodes (Index \n> > Scan, Seq\n> > > >> Scan, Distinct). Other types have far more variables. Secondly, even \n> > if you\n> > > >> only count, it's useful. 
For example, if it tells you that the \n> > planner is\n> > > >> off by a factor of 10 more than 75% of the time, that's useful \n> > information\n> > > >> independant of what the actual variables are.\n> > >\n> > > And reduce the usefulness even more ;-). As a pure stats-gathering\n> > > exercise it might be worth doing, but not if you only log the failure\n> > > cases. How will you know how well you are doing if you take a\n> > > biased-by-design sample?\n> \n> Personally, given that it seems like at least once or twice a day someone \n> asks about performance or \"why isn't my index being used\" and other stuff - \n> I think doing this would be a great idea.\n> \n> Perhaps not necessarily in the full-fledged way, but creating a sort of \n> \"ANALYZE log,\" wherein it logs the optimizer's estimate of a query and the \n> actual results of a query, for every query. This, of course, could be \n> enableable/disableable on a per-connection basis, per-table basis (like \n> OIDs), or whatever other basis makes life easiest to the developers.\n> \n> Then, when the next ANALYZE is run, it could do it's usual analysis, and \n> apply some additional heuristics based upon what it learns from the \n> \"ANALYZE log,\" possibly to do several things:\n> \n> 1) Automatically increase/decrease the SET STATISTICS information included \n> in the analyze, for example, increasing it as a table grows larger and the \n> \"randomness\" grows less than linearly with size (e.g., if you have 50 or 60 \n> groups in a 1,000,000 row table, that certainly needs a higher SET \n> STATISTICS and I do it on my tables).\n> 2) Have an additional value on the statistics table called the \n> \"index_heuristic\" or \"random_page_adjustment_heuristic\" which when 1 does \n> nothing, but otherwise modifies the cost of using an index/seq scan by that \n> factor - and don't ever change this more than a few percent each ANALYZE\n> 3) Flags in a second log (maybe the regular log) really bad query estimates 
\n> - let it do an analysis of the queries and flag anything two or three std \n> deviations outside.\n> \n> Now, I suggest all this stuff in the name of usability and \n> self-maintainability. Unfortunately, I don't have the wherewithal to \n> actually assist in development.\n> \n> Another possibility is to put \"use_seq_scan\" default to OFF, or whatever \n> the parameter is (I did my optimizing a while ago so it's fading), so that \n> if there's an index, it will use it, regardless - as this seems to be what \n> the great majority of people expect to happen. And/or add this to a FAQ, \n> and let us all reply \"see http://.../indexfaq.html.\" :)\n> \n> Cheers,\n> \n> Doug\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 19 Apr 2005 22:44:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Idea for the statistics collector" }, { "msg_contents": "\nAdded to TODO:\n\n\t* Add tool to query pg_stat_* tables and report indexes that aren't needed\n\t or tables that might need indexes\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> I was thinking of writing a command line tool like 'pgtune' that looks at\n> the stats views and will generate SQL code for, or do automatically the\n> following:\n> \n> * Dropping indices that are never used\n> * Creating appropriate indices to avoid large, expensive sequential scans.\n> \n> This would put us in the 'mysql makes my indices for me by magic' league -\n> but would be far more powerful and flexible. 
How to do multikey indices is\n> beyond me tho.\n> \n> *sigh* I'm recovering from a septoplasty on my nose atm, so I might have\n> some time to do some coding!\n> \n> Chris\n> \n> ----- Original Message -----\n> From: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n> To: \"Martijn van Oosterhout\" <kleptog@svana.org>\n> Cc: \"PostgreSQL-development\" <pgsql-hackers@postgresql.org>\n> Sent: Friday, June 21, 2002 10:50 AM\n> Subject: Re: [HACKERS] [GENERAL] Idea for the statistics collector\n> \n> \n> > Martijn van Oosterhout wrote:\n> > > Since it's currently all for collecting statistics on tables, why can't\n> it\n> > > collect another type of statistic, like:\n> > >\n> > > - How often the estimator gets it wrong?\n> > >\n> > > At the end of an index scan, the executor could compare the number of\n> rows\n> > > returned against what was estimated, and if it falls outside a certain\n> > > range, flag it.\n> > >\n> > > Also, the average ratio of rows coming out of a distinct node vs the\n> number\n> > > going in.\n> > >\n> > > For a join clause, the amount of correlation between two columns (hard).\n> > >\n> > > etc\n> > >\n> > > Ideally, the planner could then use this info to make better plans.\n> > > Eventually, the whole system could become somewhat self-tuning.\n> > >\n> > > Does anyone see any problems with this?\n> >\n> > [ Discussion moved to hackers.]\n> >\n> > I have thought that some type of feedback from the executor back into\n> > the optimizer would be a good feature. Not sure how to do it, but your\n> > idea makes sense. It certainly could update the table statistics after\n> > a sequential scan.\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n> >\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 19 Apr 2005 22:48:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Idea for the statistics collector" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> Good god - how old was that email? 2002???\n\nYep, and been in my mailbox since then, waiting for me to process it\ninto a TODO entry.\n\n---------------------------------------------------------------------------\n\n\n> \n> Chris\n> \n> Bruce Momjian wrote:\n> > Added to TODO:\n> > \n> > \t* Add tool to query pg_stat_* tables and report indexes that aren't needed\n> > \t or tables that might need indexes\n> > \n> > ---------------------------------------------------------------------------\n> > \n> > Christopher Kings-Lynne wrote:\n> > \n> >>I was thinking of writing a command line tool like 'pgtune' that looks at\n> >>the stats views and will generate SQL code for, or do automatically the\n> >>following:\n> >>\n> >>* Dropping indices that are never used\n> >>* Creating appropriate indices to avoid large, expensive sequential scans.\n> >>\n> >>This would put us in the 'mysql makes my indices for me by magic' league -\n> >>but would be far more powerful and flexible. 
How to do multikey indices is\n> >>beyond me tho.\n> >>\n> >>*sigh* I'm recovering from a septoplasty on my nose atm, so I might have\n> >>some time to do some coding!\n> >>\n> >>Chris\n> >>\n> >>----- Original Message -----\n> >>From: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n> >>To: \"Martijn van Oosterhout\" <kleptog@svana.org>\n> >>Cc: \"PostgreSQL-development\" <pgsql-hackers@postgresql.org>\n> >>Sent: Friday, June 21, 2002 10:50 AM\n> >>Subject: Re: [HACKERS] [GENERAL] Idea for the statistics collector\n> >>\n> >>\n> >>\n> >>>Martijn van Oosterhout wrote:\n> >>>\n> >>>>Since it's currently all for collecting statistics on tables, why can't\n> >>\n> >>it\n> >>\n> >>>>collect another type of statistic, like:\n> >>>>\n> >>>>- How often the estimator gets it wrong?\n> >>>>\n> >>>>At the end of an index scan, the executor could compare the number of\n> >>\n> >>rows\n> >>\n> >>>>returned against what was estimated, and if it falls outside a certain\n> >>>>range, flag it.\n> >>>>\n> >>>>Also, the average ratio of rows coming out of a distinct node vs the\n> >>\n> >>number\n> >>\n> >>>>going in.\n> >>>>\n> >>>>For a join clause, the amount of correlation between two columns (hard).\n> >>>>\n> >>>>etc\n> >>>>\n> >>>>Ideally, the planner could then use this info to make better plans.\n> >>>>Eventually, the whole system could become somewhat self-tuning.\n> >>>>\n> >>>>Does anyone see any problems with this?\n> >>>\n> >>>[ Discussion moved to hackers.]\n> >>>\n> >>>I have thought that some type of feedback from the executor back into\n> >>>the optimizer would be a good feature. Not sure how to do it, but your\n> >>>idea makes sense. It certainly could update the table statistics after\n> >>>a sequential scan.\n> >>>\n> >>>--\n> >>> Bruce Momjian | http://candle.pha.pa.us\n> >>> pgman@candle.pha.pa.us | (610) 853-3000\n> >>> + If your life is a hard drive, | 830 Blythe Avenue\n> >>> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> >>>\n> >>>---------------------------(end of broadcast)---------------------------\n> >>>TIP 6: Have you searched our list archives?\n> >>>\n> >>>http://archives.postgresql.org\n> >>>\n> >>\n> >>\n> >>\n> >>\n> >>---------------------------(end of broadcast)---------------------------\n> >>TIP 4: Don't 'kill -9' the postmaster\n> >>\n> >>\n> >>\n> > \n> > \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 19 Apr 2005 22:56:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Idea for the statistics collector" }, { "msg_contents": "Good god - how old was that email? 2002???\n\nChris\n\nBruce Momjian wrote:\n> Added to TODO:\n> \n> \t* Add tool to query pg_stat_* tables and report indexes that aren't needed\n> \t or tables that might need indexes\n> \n> ---------------------------------------------------------------------------\n> \n> Christopher Kings-Lynne wrote:\n> \n>>I was thinking of writing a command line tool like 'pgtune' that looks at\n>>the stats views and will generate SQL code for, or do automatically the\n>>following:\n>>\n>>* Dropping indices that are never used\n>>* Creating appropriate indices to avoid large, expensive sequential scans.\n>>\n>>This would put us in the 'mysql makes my indices for me by magic' league -\n>>but would be far more powerful and flexible. 
How to do multikey indices is\n>>beyond me tho.\n>>\n>>*sigh* I'm recovering from a septoplasty on my nose atm, so I might have\n>>some time to do some coding!\n>>\n>>Chris\n>>\n>>----- Original Message -----\n>>From: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n>>To: \"Martijn van Oosterhout\" <kleptog@svana.org>\n>>Cc: \"PostgreSQL-development\" <pgsql-hackers@postgresql.org>\n>>Sent: Friday, June 21, 2002 10:50 AM\n>>Subject: Re: [HACKERS] [GENERAL] Idea for the statistics collector\n>>\n>>\n>>\n>>>Martijn van Oosterhout wrote:\n>>>\n>>>>Since it's currently all for collecting statistics on tables, why can't\n>>\n>>it\n>>\n>>>>collect another type of statistic, like:\n>>>>\n>>>>- How often the estimator gets it wrong?\n>>>>\n>>>>At the end of an index scan, the executor could compare the number of\n>>\n>>rows\n>>\n>>>>returned against what was estimated, and if it falls outside a certain\n>>>>range, flag it.\n>>>>\n>>>>Also, the average ratio of rows coming out of a distinct node vs the\n>>\n>>number\n>>\n>>>>going in.\n>>>>\n>>>>For a join clause, the amount of correlation between two columns (hard).\n>>>>\n>>>>etc\n>>>>\n>>>>Ideally, the planner could then use this info to make better plans.\n>>>>Eventually, the whole system could become somewhat self-tuning.\n>>>>\n>>>>Does anyone see any problems with this?\n>>>\n>>>[ Discussion moved to hackers.]\n>>>\n>>>I have thought that some type of feedback from the executor back into\n>>>the optimizer would be a good feature. Not sure how to do it, but your\n>>>idea makes sense. It certainly could update the table statistics after\n>>>a sequential scan.\n>>>\n>>>--\n>>> Bruce Momjian | http://candle.pha.pa.us\n>>> pgman@candle.pha.pa.us | (610) 853-3000\n>>> + If your life is a hard drive, | 830 Blythe Avenue\n>>> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n>>>\n>>>---------------------------(end of broadcast)---------------------------\n>>>TIP 6: Have you searched our list archives?\n>>>\n>>>http://archives.postgresql.org\n>>>\n>>\n>>\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 4: Don't 'kill -9' the postmaster\n>>\n>>\n>>\n> \n> \n", "msg_date": "Wed, 20 Apr 2005 10:57:17 +0800", "msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Idea for the statistics collector" }, { "msg_contents": "Bruce Momjian wrote:\n> Christopher Kings-Lynne wrote:\n> \n>>Good god - how old was that email? 2002???\n> \n> \n> Yep, and been in my mailbox since then, waiting for me to process it\n> into a TODO entry.\n\nExciting what one can find wiping the floor of the mailbox :-)\n\nRegards,\nAndreas\n", "msg_date": "Wed, 20 Apr 2005 11:05:13 +0000", "msg_from": "Andreas Pflug <pgadmin@pse-consulting.de>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Idea for the statistics collector" } ]
[ { "msg_contents": "OK, I have finally decided that our archive searching stinks. I have\nemails in my mailbox that don't appear in the archives.\n\nOur main site, http://archives.postgresql.org/ doesn't archive the\n'patches' list. (It isn't listed on the main site, and I can't find\npostings via searching.) Also, why does it open a separate window for\neach email. That doesn't make any sense to me.\n\nMy backup is Google,\nhttp://groups.google.com/groups?hl=en&group=comp.databases.postgresql,\nbut that seems to be missing emails too. Our email/news link regularly\ndrops messages and therefore Google can't see them.\n\nIt isn't one thing, but a general lack of quality in this area. Heck,\nwe had no usable archives for _months_. Is this really only important\nto me?\n\nOh, I see FTS is back working at http://fts.postgresql.org/db/mw/. I\nlike the output format, but all three give me different results. \nHowever, fts is invisible because I can't find a link to it from\nanywhere on our web pages.\n\nI guess I am asking:\n\n\tCan our main archive start doing the patches list?\n\tCan it stop opening a new window for every email?\n\tCan we find out why the email/news gateway drops messages?\n\tCan we link to the fts site?\n\nSearch for 'dbmirror' on all three and you will see the differences.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 20 Jun 2002 20:40:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Our archive searching stinks" }, { "msg_contents": "On Thu, 20 Jun 2002, Bruce Momjian wrote:\n\n> OK, I have finally decided that our archive searching stinks. I have\n> emails in my mailbox that don't appear in the archives.\n>\n> Our main site, http://archives.postgresql.org/ doesn't archive the\n> 'patches' list. 
(It isn't listed on the main site, and I can't find\n> postings via searching.) Also, why does it open a separate window for\n> each email. That doesn't make any sense to me.\n>\n> My backup is Google,\n> http://groups.google.com/groups?hl=en&group=comp.databases.postgresql,\n> but that seems to be missing emails too. Our email/news link regularly\n> drops messages and therefore Google can't see them.\n>\n> It isn't one thing, but a general lack of quality in this area. Heck,\n> we had no usable archives for _months_. Is this really only important\n> to me?\n>\n> Oh, I see FTS is back working at http://fts.postgresql.org/db/mw/. I\n> like the output format, but all three give me different results.\n> However, fts is invisible because I can't find a link to it from\n> anywhere on our web pages.\n>\n> I guess I am asking:\n>\n> \tCan our main archive start doing the patches list?\n\nYes ...\n\n> \tCan it stop opening a new window for every email?\n\nYes ...\n\n> \tCan we find out why the email/news gateway drops messages?\n\nEmail->News gateways aren't the best to start with, since they rely on far\ntoo many variables ... main one coming to mind is if the news server is\ndown for any length of time ... this should be improved now, as we moved\nit over to the new server pretty much as soon as it came online, and so\nfar has been *a lot* more stable there than on the old one ...\n\n> \tCan we link to the fts site?\n\nShould be, but it was down for a long time there, and nobody informed that\nit was back up and running ... I have to talk to Theo about moving it tho,\nsince when we got the postgresql.org server upgraded to 4gig (from a\nmeasly 512MB), I moved everything except for fts to v7.2 with a 1.5gig\nshared memory buffer :) disk I/O still sucks, mind you, but nothing I can\ndo about *that* at this time ...\n\n", "msg_date": "Fri, 21 Jun 2002 11:07:08 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Our archive searching stinks" }, { "msg_contents": "\nThis is all great news. Thanks.\n\n---------------------------------------------------------------------------\n\nMarc G. Fournier wrote:\n> On Thu, 20 Jun 2002, Bruce Momjian wrote:\n> \n> > OK, I have finally decided that our archive searching stinks. I have\n> > emails in my mailbox that don't appear in the archives.\n> >\n> > Our main site, http://archives.postgresql.org/ doesn't archive the\n> > 'patches' list. (It isn't listed on the main site, and I can't find\n> > postings via searching.) Also, why does it open a separate window for\n> > each email. That doesn't make any sense to me.\n> >\n> > My backup is Google,\n> > http://groups.google.com/groups?hl=en&group=comp.databases.postgresql,\n> > but that seems to be missing emails too. Our email/news link regulary\n> > drops messages and therefore Google can't see them.\n> >\n> > It isn't one thing, but a general lack of quality in this area. Heck,\n> > we had no usable archives for _months_. Is this really only important\n> > to me?\n> >\n> > Oh, I see FTS is back working at http://fts.postgresql.org/db/mw/. I\n> > like the output format, but all three are give me different results.\n> > However, fts is invisible because I can't find a link to it from\n> > anywhere on our web pages.\n> >\n> > I guess I am asking:\n> >\n> > \tCan our main archive start doing the patches list?\n> \n> Yes ...\n> \n> > \tCan it stop opening a new window for every email?\n> \n> Yes ...\n> \n> > \tCan we find out why the email/news gateway drops messages?\n> \n> Email->News gateways aren't the best to start with, since they rely on far\n> too many variables ... main one coming to mind is if the news server is\n> down for any length of time ... 
this should be improved now, as we moved\n> it over to the new server pretty much as soon as it came online, and so\n> far has been *alot* more stable there then on the old one ...\n> \n> > \tCan we link to the fts site?\n> \n> Should be, but it was down for a long time there, and nobody informed that\n> it was back up and running ... I have to talk to Theo about moving it tho,\n> since when we got the postgresql.org server upgraded to 4gig (from a\n> measely 512MB), I moved everything except for fts to v7.2 with a 1.5gig\n> shared memory buffer :) disk I/O still sucks, mind you, but nothing I can\n> do about *that* at this time ...\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 21 Jun 2002 10:15:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Our archive searching stinks" }, { "msg_contents": "On Fri, 2002-06-21 at 17:07, Marc G. Fournier wrote:\n\n> > \tCan we find out why the email/news gateway drops messages?\n> \n> Email->News gateways aren't the best to start with, since they rely on far\n> too many variables ... main one coming to mind is if the news server is\n> down for any length of time ... this should be improved now, as we moved\n> it over to the new server pretty much as soon as it came online, and so\n> far has been *alot* more stable there then on the old one ...\n\nUnfortunately, I've received only an handful of messages in the\nnewsgroups during the last couple of week, and it doesn't seem to\nimprove.\n\nIn fact, I've suggested to my users to go back to using lists, because\nthe service is not reliable.\n\n-- \nAlessio F. 
Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-22-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n\n", "msg_date": "21 Jun 2002 18:34:03 +0300", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": false, "msg_subject": "Re: Our archive searching stinks" }, { "msg_contents": "\ndamn, I wish ppl would bring stuff like this up earlier :( I've just gone\nthrough the configs, and think the problem(s) are fixed with this ... :(\n\n\n\nOn 21 Jun 2002, Alessio Bragadini wrote:\n\n> On Fri, 2002-06-21 at 17:07, Marc G. Fournier wrote:\n>\n> > > \tCan we find out why the email/news gateway drops messages?\n> >\n> > Email->News gateways aren't the best to start with, since they rely on far\n> > too many variables ... main one coming to mind is if the news server is\n> > down for any length of time ... this should be improved now, as we moved\n> > it over to the new server pretty much as soon as it came online, and so\n> > far has been *alot* more stable there then on the old one ...\n>\n> Unfortunately, I've received only an handful of messages in the\n> newsgroups during the last couple of week, and it doesn't seem to\n> improve.\n>\n> In fact, I've suggested to my users to go back to using lists, because\n> the service is not reliable.\n>\n> --\n> Alessio F. 
Bragadini\t\talessio@albourne.com\n> APL Financial Services\t\thttp://village.albourne.com\n> Nicosia, Cyprus\t\t \tphone: +357-22-755750\n>\n> \"It is more complicated than you think\"\n> \t\t-- The Eighth Networking Truth from RFC 1925\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Fri, 21 Jun 2002 13:19:51 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Our archive searching stinks" }, { "msg_contents": "On Thu, 20 Jun 2002, Bruce Momjian wrote:\n\n> OK, I have finally decided that our archive searching stinks. I have\n> emails in my mailbox that don't appear in the archives.\n>\n> Our main site, http://archives.postgresql.org/ doesn't archive the\n> 'patches' list. (It isn't listed on the main site, and I can't find\n> postings via searching.) Also, why does it open a separate window for\n> each email. That doesn't make any sense to me.\n>\n> My backup is Google,\n> http://groups.google.com/groups?hl=en&group=comp.databases.postgresql,\n> but that seems to be missing emails too. Our email/news link regulary\n> drops messages and therefore Google can't see them.\n>\n> It isn't one thing, but a general lack of quality in this area. Heck,\n> we had no usable archives for _months_. Is this really only important\n> to me?\n>\n> Oh, I see FTS is back working at http://fts.postgresql.org/db/mw/. 
I\n> like the output format, but all three are give me different results.\n> However, fts is invisible because I can't find a link to it from\n> anywhere on our web pages.\n>\n> I guess I am asking:\n>\n> \tCan our main archive start doing the patches list?\n> \tCan it stop opening a new window for every email?\n> \tCan we find out why the email/news gateway drops messages?\n> \tCan we link to the fts site?\n\nThe only thing I can help with is the fts link, but I'm hesitant to\nlink to something that disappears. If it's going to be here and not\ngo away again I'll be happy to add it.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n\n\n", "msg_date": "Sun, 23 Jun 2002 14:43:44 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Our archive searching stinks" }, { "msg_contents": "On Sun, 23 Jun 2002, Vince Vielhaber wrote:\n\n> > \tCan we link to the fts site?\n>\n> The only thing I can help with is the fts link, but I'm hesitant to\n> link to something that disappears. If it's going to be here and not\n> go away again I'll be happy to add it.\n\nThe only reason it \"disappeared\" was more my fault then anything ... I\nspec'd out that server for what I *thought* we were using on the old one,\nand didn't realize how much memory was required ... the upgraded to 4gig\nappears to have helped ...\n\n\n\n\n", "msg_date": "Mon, 24 Jun 2002 13:18:16 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Our archive searching stinks" } ]
[ { "msg_contents": "I have a few questions about what would be expected from coercing to a\ntype with constraints (Domains mostly -- but complex object types may\nrun into similar issues if implemented).\n\nWhat I intend to do:\nIn gram.y, remove the application of typename directly to the A_Const\nin makeTypeCast. Use TypeCast nodes only. This ensures all casts\npass through parse_coerce.c -> coerce_type() which will wrap the\nresult Node in Constraint Nodes which the executor (ExecQual.c) will\nlearn how to call ExecEvalConstraint.\n\nExecEvalConstraint applies NOTNULL and CHECK constraints as is\nrequested by coerce_type, throwing an error if they don't apply,\nsimilarly to how ExecEvalNullTest returns false if an item doesn't\napply to the null clause.\n\nThis should cover forms of:\ncast(VALUE as domain) <- makeTypeCast currently avoids this without\nthe above change\ncast(cast(VALUE as othertype) as domain)\nselect cast(column as domain) from table;\n\n\nThe question I have is about the potential of casting. Should\ncan_coerce_type evaluate the data involved or not? Currently it\ndoesn't, and really you have to test every single value before trying\nit.\n\nIf it's an explicit cast, we do the user's bidding and try anyway.\nThat's a given. I'm also assuming that implicit coercion tests should\nbe done against the real type id and not the base type id. So\ncan_coerce_type and friends won't need to change at all.\n\nType lengths will be applied the same as cast('fdads' as varchar(3))\nwould apply the length.\n--\nRod\n\n", "msg_date": "Thu, 20 Jun 2002 22:08:40 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Domain coercion" } ]
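The cast forms Rod lists are easier to see against a concrete domain. This is a sketch of the intended behavior only -- the domain, table, and values below are invented for illustration, and a CHECK-constrained domain assumes the patch described above is in place:

```sql
-- A hypothetical constrained domain:
CREATE DOMAIN postal_code AS varchar(10)
    NOT NULL
    CHECK (VALUE ~ '^[0-9]+$');

CREATE TABLE addr (code text);
INSERT INTO addr VALUES ('90210');
INSERT INTO addr VALUES ('unknown');

-- The three forms that should all funnel through coerce_type():
SELECT CAST('90210' AS postal_code);                -- direct constant cast
SELECT CAST(CAST('90210' AS text) AS postal_code);  -- nested cast
SELECT CAST(code AS postal_code) FROM addr;         -- column cast; the row
                                                    -- holding 'unknown' should
                                                    -- draw a CHECK violation
```

In each case ExecEvalConstraint would apply the NOT NULL and CHECK clauses at evaluation time, raising an error rather than returning false.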
[ { "msg_contents": "Here are some class notes that contain some very good ideas with\nterrific explanations:\nhttp://www.cs.duke.edu/education/courses/fall01/cps216/\n", "msg_date": "Thu, 20 Jun 2002 21:02:58 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Idea for the statistics collector" } ]
[ { "msg_contents": "Any idea if alter table drop column and background vacuum will be\nimplemented by 7.3?\nIt's really critical for large applications that must run 24/7.\n\nStephen\n\n\n", "msg_date": "Fri, 21 Jun 2002 02:36:06 -0400", "msg_from": "\"Stephen\" <jleelim@hotmail.com>", "msg_from_op": true, "msg_subject": "Alter table drop column and background vacuum?" }, { "msg_contents": "During the discussion of bools and hash index and partial indexes and \nindex growth and everything else, I tried to make a partial index on a \nbool field and got the error that \"data type bool has no default operator \nfor class hash...\"\n\nSo, can I cast something to make this work, or is it possible to make \nhash indexes work with bools? There are a few instances where a small \npercentage of a table is marked false while the rest is true, or vice \nversa, where a partial hash index would be nice to try, and may not have \nthe ever expanding index problem that btrees have.\n\n-- \n\"Force has no place where there is need of skill.\", \"Haste in every \nbusiness brings failures.\", \"This is the bitterest pain among men, to have \nmuch knowledge but no power.\" -- Herodotus\n\n\n", "msg_date": "Fri, 21 Jun 2002 16:07:38 -0600 (MDT)", "msg_from": "Scott Marlowe <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Hash and bools" }, { "msg_contents": "Scott Marlowe <scott.marlowe@ihs.com> writes:\n> During the discussion of bools and hash index and partial indexes and \n> index growth and everything else, I tried to make a partial index on a \n> bool field and got the error that \"data type bool has no default operator \n> for class hash...\"\n\nWell, no. I can't see much point in hashing for a datatype with only\ntwo values. 
(Of course, btree probably sucks too in this case.)\n\nYou could probably gin one up pretty quickly using the support routines\nfor char_ops, if you want one for testing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 22 Jun 2002 17:45:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Hash and bools " } ]
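For the case Scott describes -- a small minority of rows carrying one of the two values -- a partial index is usually the practical answer, since the predicate keeps the common value out of the index entirely. A sketch, with the table and column names invented for illustration:

```sql
-- Hypothetical table: most rows have done = true, few are false.
CREATE TABLE jobs (id integer, done boolean);

-- Index only the rare rows; the common rows never enter the index,
-- so it stays small instead of expanding with the table.
CREATE INDEX jobs_pending_idx ON jobs (id) WHERE NOT done;

-- Queries must repeat the predicate for the planner to use the index:
SELECT id FROM jobs WHERE NOT done AND id = 42;
```

This also sidesteps the missing hash operator class for bool, because the boolean column appears only in the index predicate, not as an indexed key.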
[ { "msg_contents": "Hi all,\n\nI'm hacking on PalmOS handhelds to implement libpq,\ncalled \"libpq for PalmOS\".\n\nThis library provides many compatible libpq functions\nto manipulate a PostgreSQL server from PalmOS devices\nthrough a TCP/IP connection.\n\nImplementation is almost done, but some work is left\nbefore release.\n\nFor example, more error messages, debugging, brushing up\nmy code, docs (README, tutorials...) and packaging.\n\nI've also planned to support encrypted auth and SSL\n(SSL will be supported under PalmOS 5).\n\nI need someone's help to do all of this.\nCould anyone help me?\n\nFor more information about libpq for PalmOS,\nplease visit my page.\n\nhttp://snaga.org/palm/libpq/\n\nThanks.\n\n-- \nNAGAYASU Satoshi <snaga@snaga.org>\n\n", "msg_date": "Fri, 21 Jun 2002 21:30:53 +0900", "msg_from": "Satoshi Nagayasu <snaga@snaga.org>", "msg_from_op": true, "msg_subject": "libpq for PalmOS (I need help)" } ]
[ { "msg_contents": "Hi all!\n\nRecently I tried to use the new 7.02.0001 Win32 ODBC driver in the new\n(beta) Unicode mode in conjunction with MS Access 2000 and a \"UNICODE\"\nencoded database stored in a PostgreSQL 7.2.1 database running on a\nLinux system.\n\nI noticed that when the \"LF<->CRLF Conversion\" option is *enabled* in\nthe driver's settings dialog, only a CRLF->LF conversion (while writing\nto the database) is performed, but no LF->CRLF conversion (while reading\nfrom the database)!\n\nI thought that this might be a bug, and thus downloaded a local copy of\nthe ODBC driver sources from CVS. I searched a bit and AFAICT found the\nreason for the ill behavior, so I made a patch.\n\nJulian.\n\n==================8<==================snip==================\n--- convert.c Fri Jun 21 15:48:43 2002\n+++ convert.c.new Fri Jun 21 16:50:01 2002\n@@ -699,12 +699,20 @@\n if (pbic->data_left < 0)\n {\n BOOL lf_conv =\nSC_get_conn(stmt)->connInfo.lf_conversion;\n+ int lf_converted_len;\n #ifdef UNICODE_SUPPORT\n if (fCType == SQL_C_WCHAR)\n {\n- len =\nutf8_to_ucs2(neut_str, -1, NULL, 0);\n+ char* temp_str;\n+\n+ /* convert linefeeds to\ncarriage-return/linefeed */\n+ lf_converted_len =\nconvert_linefeeds(neut_str, NULL, 0, lf_conv, &changed);\n+ temp_str =\nmalloc(lf_converted_len + 1);\n+ convert_linefeeds(neut_str,\ntemp_str, lf_converted_len + 1, lf_conv, &changed);\n+ len =\nutf8_to_ucs2(temp_str, -1, NULL, 0);\n len *= 2;\n wchanged = changed = TRUE;\n+ free(temp_str);\n }\n else\n #endif /* UNICODE_SUPPORT */\n@@ -728,7 +736,12 @@\n #ifdef UNICODE_SUPPORT\n if (fCType == SQL_C_WCHAR)\n {\n-\nutf8_to_ucs2(neut_str, -1, (SQLWCHAR *) pbic->ttlbuf, len / 2);\n+ char* temp_str;\n+\n+ temp_str =\nmalloc(lf_converted_len + 1);\n+\nconvert_linefeeds(neut_str, temp_str, lf_converted_len + 1, lf_conv,\n&changed);\n+\nutf8_to_ucs2(temp_str, -1, (SQLWCHAR *) pbic->ttlbuf, len / 2);\n+ free(temp_str);\n }\n else\n #endif /* UNICODE_SUPPORT 
*/\n\n==================snip==================>8==================\n\n--\nLinksystem Muenchen GmbH info@link-m.de\nSchloerstrasse 10 http://www.link-m.de\n80634 Muenchen Tel. 089 / 890 518-0\nWe make the Net work. Fax 089 / 890 518-77\n\n\n", "msg_date": "Fri, 21 Jun 2002 16:56:11 +0200", "msg_from": "\"Julian Mehnle, Linksystem Muenchen\" <j.mehnle@buero.link-m.de>", "msg_from_op": true, "msg_subject": "ODBC Driver 7.02.0001 (Win32) (Unicode mode): CRLF->LF works,\n\tLF->CRLF doesn't" }, { "msg_contents": "Julian Mehnle wrote:\n> [...] I made a patch.\n> \n> ==================8<==================snip==================\n> [word-wrapped patch]\n> ==================snip==================>8==================\n\nDoh!\n\nI swear I told my news reader not to word-wrap exactly this message,\nbut it wrapped it anyway... Alright, here's the patch as a file:\n\n http://files.mehnle.net/software/postgresql/pgsql-odbc.unicode-crlf-lf-conversion.diff\n\nJulian.\n-- \nLinksystem Muenchen GmbH info@link-m.de\nSchloerstrasse 10 http://www.link-m.de\n80634 Muenchen Tel. 089 / 890 518-0\nWe make the Net work. 
Fax 089 / 890 518-77\n\n\n\n", "msg_date": "Fri, 21 Jun 2002 17:24:56 +0200", "msg_from": "\"Julian Mehnle, Linksystem Muenchen\" <j.mehnle@buero.link-m.de>", "msg_from_op": true, "msg_subject": "Re: ODBC Driver 7.02.0001 (Win32) (Unicode mode): CRLF->LF works,\n\tLF->CRLF doesn't" }, { "msg_contents": "\"Julian Mehnle, Linksystem Muenchen\" wrote:\n> \n> Hi all!\n> \n> Recently I tried to use the new 7.02.0001 Win32 ODBC driver in the new\n> (beta) Unicode mode in conjunction with MS Access 2000 and a \"UNICODE\"\n> encoded database stored in a PostgreSQL 7.2.1 database running on a\n> Linux system.\n> \n> I noticed that when the \"LF<->CRLF Conversion\" option is *enabled* in\n> the driver's settings dialog, only a CRLF->LF conversion (while writing\n> to the database) is performed, but no LF->CRLF conversion (while reading\n> from the database)!\n\nCould you try the snapshot dll at http://w2422.nsk.ne.jp/~inoue/ ?\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n\n\n", "msg_date": "Mon, 24 Jun 2002 18:59:10 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ODBC Driver 7.02.0001 (Win32) (Unicode mode): CRLF->LF " }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> wrote:\n> Julian Mehnle <j.mehnle@buero.link-m.de> wrote:\n> > Recently I tried to use the new 7.02.0001 Win32 ODBC driver in the new\n> > (beta) Unicode mode in conjunction with MS Access 2000 and a \"UNICODE\"\n> > encoded database stored in a PostgreSQL 7.2.1 database running on a\n> > Linux system.\n> >\n> > I noticed that when the \"LF<->CRLF Conversion\" option is *enabled* in\n> > the driver's settings dialog, only a CRLF->LF conversion (while writing\n> > to the database) is performed, but no LF->CRLF conversion (while reading\n> > from the database)!\n>\n> Could you try the snapshot dll at http://w2422.nsk.ne.jp/~inoue/ ?\n\nThere are several versions and I don't know which one you mean:\n- psqlodbc.dll (version 
7.02.0001),\n- psqlodbc.dll (the multibyte version),\n- psqlodbc30.dll (7.02.0001 ODBC3.0 trial driver),\n- psqlodbc30.dll (the multibyte version),\n- another trial driver(7.02.0001 + ODBC3.0 + Unicode)?\n\nPlease give me a hint! :-)\n\nJulian Mehnle.\n\n\n\n", "msg_date": "Mon, 24 Jun 2002 13:15:15 +0200", "msg_from": "\"Julian Mehnle\" <julian@mehnle.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ODBC Driver 7.02.0001 (Win32) (Unicode mode): CRLF->LF\n\tworks, LF->CRLF doesn't" }, { "msg_contents": "Julian Mehnle wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> wrote:\n> > Julian Mehnle <j.mehnle@buero.link-m.de> wrote:\n> > > Recently I tried to use the new 7.02.0001 Win32 ODBC driver\n> > > in the new (beta) Unicode mode in conjunction with MS Access\n> > > 2000 and a \"UNICODE\" encoded database stored in a PostgreSQL\n> > > 7.2.1 database running on a Linux system.\n> > >\n> > > I noticed that when the \"LF<->CRLF Conversion\" option is\n> > > *enabled* in the driver's settings dialog, only a CRLF->LF\n> > > conversion (while writing to the database) is performed,\n> > > but no LF->CRLF conversion (while reading from the database)!\n> >\n> > Could you try the snapshot dll at http://w2422.nsk.ne.jp/~inoue/ ?\n> \n> There are several versions and I don't know which one you mean:\n> - psqlodbc.dll (version 7.02.0001),\n> - psqlodbc.dll (the multibyte version),\n> - psqlodbc30.dll (7.02.0001 ODBC3.0 trial driver),\n> - psqlodbc30.dll (the multibyte version),\n> - another trial driver(7.02.0001 + ODBC3.0 + Unicode)?\n\nThe last one because you seem to be using Unicode driver.\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n\n\n", "msg_date": "Tue, 25 Jun 2002 08:51:41 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ODBC Driver 7.02.0001 (Win32) (Unicode mode): CRLF->LF " }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> wrote:\n> Julian Mehnle <j.mehnle@buero.link-m.de> wrote:\n> 
> Recently I tried to use the new 7.02.0001 Win32 ODBC driver in the new\n> > (beta) Unicode mode in conjunction with MS Access 2000 and a \"UNICODE\"\n> > encoded database stored in a PostgreSQL 7.2.1 database running on a\n> > Linux system.\n> >\n> > I noticed that when the \"LF<->CRLF Conversion\" option is *enabled* in\n> > the driver's settings dialog, only a CRLF->LF conversion (while writing\n> > to the database) is performed, but no LF->CRLF conversion (while reading\n> > from the database)!\n>\n> Could you try the snapshot dll at http://w2422.nsk.ne.jp/~inoue/ ?\n\nYeah, it does *not* exhibit the faulty CRLF<->LF conversion behavior. Is\nthis a custom build of the ODBC driver done by you? Will the official driver\nbe fixed soon?\n\nRegards,\nJulian Mehnle.\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 13:06:49 +0200", "msg_from": "\"Julian Mehnle\" <julian@mehnle.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ODBC Driver 7.02.0001 (Win32) (Unicode mode): CRLF->LF\n\tworks, LF->CRLF doesn't" }, { "msg_contents": "Julian Mehnle wrote:\n> \n> Yeah, it does *not* exhibit the faulty CRLF<->LF conversion behavior.\n> Is this a custom build of the ODBC driver done by you? Will the\n> official driver be fixed soon?\n\nI would commit the fix to cvs this week.\n\nregards, \nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n\n\n", "msg_date": "Wed, 26 Jun 2002 19:14:19 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ODBC Driver 7.02.0001 (Win32) (Unicode mode): CRLF->LF " } ]
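The patch above relies on convert_linefeeds() being callable twice -- once with a NULL destination to learn the converted length, then again to fill a freshly malloc'd buffer. As a minimal, self-contained illustration of that size-then-fill pattern (the function below is a stand-in written for this note, not the driver's actual convert_linefeeds()):

```c
#include <stddef.h>

/*
 * Convert bare LF to CRLF.  When dst is NULL, write nothing and just
 * return the length the converted string would occupy (excluding the
 * terminating NUL), so the caller can size a buffer first.
 */
size_t
lf_to_crlf(const char *src, char *dst)
{
	size_t		n = 0;
	const char *p;

	for (p = src; *p; p++)
	{
		if (*p == '\n' && (p == src || p[-1] != '\r'))
		{
			if (dst)
				dst[n] = '\r';
			n++;			/* insert a CR before each bare LF */
		}
		if (dst)
			dst[n] = *p;
		n++;
	}
	if (dst)
		dst[n] = '\0';
	return n;
}
```

A caller then mirrors the patch's shape: len = lf_to_crlf(s, NULL); buf = malloc(len + 1); lf_to_crlf(s, buf); -- the same dance as lf_converted_len / temp_str in convert.c.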
[ { "msg_contents": "For some reason a view with a select distinct, an order by and an except\nwill cause pg_dump to output a double order by -- one for each select\nwhich of course is bad SQL.\n\n\nPSQL\n====\nrbt_t=# create view test as select distinct relname, reltuples, relnatts\nfrom pg_class where relkind = 't' except select relname, reltuples,\nrelnatts from pg_class where relkind = 't' and relnatts > 4 order by\nrelname;\nCREATE\n\nrbt_t=# select * from test;\n relname | reltuples | relnatts \n----------------+-----------+----------\n pg_toast_1255 | 0 | 3\n pg_toast_16384 | 0 | 3\n pg_toast_16386 | 0 | 3\n pg_toast_16408 | 0 | 3\n pg_toast_16410 | 5 | 3\n pg_toast_16416 | 0 | 3\n\n\n\nPG_DUMP\n=======\n--\n-- TOC Entry ID 2 (OID 337283)\n--\n-- Name: test Type: VIEW Owner: rbt\n--\n\nCREATE VIEW \"test\" as SELECT DISTINCT pg_class.relname,\npg_class.reltuples, pg_class.relnatts FROM pg_class WHERE\n(pg_class.relkind = 't'::\"char\") ORDER BY pg_class.relname,\npg_class.reltuples, pg_class.relnatts EXCEPT SELECT pg_class.relname,\npg_class.reltuples, pg_class.relnatts FROM pg_class WHERE\n((pg_class.relkind = 't'::\"char\") AND (pg_class.relnatts > 4)) ORDER BY\n1;\n\n\nPSQL\n====\nrbt_t=# drop view test;\nDROP\nrbt_t=# CREATE VIEW \"test\" as SELECT DISTINCT pg_class.relname,\npg_class.reltuples, pg_class.relnatts FROM pg_class WHERE\n(pg_class.relkind = 't'::\"char\") ORDER BY pg_class.relname,\npg_class.reltuples, pg_class.relnatts EXCEPT SELECT pg_class.relname,\npg_class.reltuples, pg_class.relnatts FROM pg_class WHERE\n((pg_class.relkind = 't'::\"char\") AND (pg_class.relnatts > 4)) ORDER BY\n1;\nERROR: parser: parse error at or near \"EXCEPT\"\n\n", "msg_date": "21 Jun 2002 12:06:52 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Problems with dump /restore of views" }, { "msg_contents": "Rod Taylor wrote:\n> \n> For some reason a view with a select distinct, an order by and an except\n> will cause pg_dump 
to output a double order by -- one for each select\n> which of course is bad SQL.\n\nI think views should not have ORDER BY clauses at all in the first\nplace. \n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Fri, 21 Jun 2002 12:45:20 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Problems with dump /restore of views" }, { "msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> For some reason a view with a select distinct, an order and an exception\n> by will cause pg_dump to output a double order by -- one for each select\n> which of course is bad SQL.\n\nThis is fixed in current sources and REL7_2 branch.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Jun 2002 13:13:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problems with dump /restore of views " } ]
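The underlying rule is that an ORDER BY may only follow the final arm of a set operation, where it orders the combined result; the broken dump instead re-emitted the view's ORDER BY after the first arm, which the parser rejects at EXCEPT. A hand-repaired form of the dumped definition, sketched here for illustration (not the exact output of the fixed pg_dump), moves the clause to the end:

```sql
CREATE VIEW "test" AS
SELECT DISTINCT pg_class.relname, pg_class.reltuples, pg_class.relnatts
  FROM pg_class WHERE (pg_class.relkind = 't'::"char")
EXCEPT
SELECT pg_class.relname, pg_class.reltuples, pg_class.relnatts
  FROM pg_class WHERE ((pg_class.relkind = 't'::"char")
                       AND (pg_class.relnatts > 4))
ORDER BY 1;
```

With a single trailing ORDER BY the statement parses, and the view restores with the same ordering the original definition asked for.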
[ { "msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Friday, June 21, 2002 6:32 AM\n> To: Tom Lane\n> Cc: Neil Conway; mloftis@wgops.com; Dann Corbit;\n> pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] What is wrong with hashed index usage?\n> \n> \n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > <para>\n> > > ! Because of the limited utility of hash indexes, a \n> B-tree index\n> > > ! should generally be preferred over a hash index. \n> We do not have\n> > > ! sufficient evidence that hash indexes are actually \n> faster than\n> > > ! B-trees even for <literal>=</literal> comparisons. \n> Moreover,\n> > > ! hash indexes require coarser locks; see <xref\n> > > ! linkend=\"locking-indexes\">.\n> > > </para>\n> > > </note> \n> > > </para>\n> > > --- 181,189 ----\n> > > </synopsis>\n> > > <note>\n> > > <para>\n> > > ! Testing has shown that hash indexes are slower \n> than btree indexes,\n> > > ! and the size and build time for hash indexes is \n> much worse. For\n> > > ! these reasons, hash index use is discouraged.\n> > \n> > This change strikes me as a step backwards. The existing \n> wording tells\n> > the truth; the proposed revision removes the facts in favor \n> of a blanket\n> > assertion that is demonstrably false.\n> \n> OK, which part of is \"demonstrably false\"? I think the old \"should\n> generally be preferred\" is too vague. No one has come up with a case\n> where hash has shown to be faster, and a lot of cases where \n> it is slower.\n\nI agree with Tom. Maybe it is not true for PostgreSQL that hashed\nindexes are better, but for every other database if you are doing single\nlookups and do not need to order the items sequentially, hashed indexes\nare better. 
What this indicates to me is that hashed indexes could\n{potentially} be much better implemented for PostgreSQL.\n\nSee section 2.4:\nhttp://citeseer.nj.nec.com/cache/papers/cs/21214/http:zSzzSzwww.cs.cmu.e\nduzSz~christoszSzcourseszSz826-resourceszSzFOILS-LATEXzSzslides.pdf/inde\nxing-multimedia-databases.pdf\n\nSee\nhttp://ycmi.med.yale.edu/nadkarni/db_course/NonStd_Contents.htm\n\nSee also:\nhttp://www-courses.cs.uiuc.edu/~cs411/RR2_goodpoints.html\n\nFrom the Oracle Rdb documentation:\n1.3.5 Retrieval Methods\nOracle Rdb provides several methods for retrieving or accessing data. In\nthe physical design of your database, consider that Oracle Rdb can use\none or more of the following methods to retrieve the rows in a table: \n\nSequential: locating a row or rows in sequence by retrieving data within\na logical area \nSorted index lookup with value retrieval: using the database key (dbkey)\nfor the value from the index to retrieve the row \nSorted index only: using data values in the index key pertinent to your\nquery \nHashed index retrieval: for retrieving exact data value matches \nDbkey only: retrieving a row through its dbkey \nYou determine the retrieval method Oracle Rdb chooses by creating one or\nmore sorted or hashed indexes. \n\nSorted index retrieval provides indexed sequential access to rows in a\ntable. (A sorted index is also called a B-tree index.) By contrast,\nhashed index retrieval, also known as hash-addressing, provides direct\nretrieval of a specific row. Retrieval of a row is based on a given\nvalue of some set of columns in the row (called the search key). \n\nUse a hashed index primarily for random, direct retrieval when you can\nsupply the entire hashed key on which the hashed index is defined, such\nas an employee identification number (ID). For this kind of retrieval,\ninput/output operations can be significantly reduced, particularly for\ntables with many rows and large indexes. 
\n\nFor example, to retrieve a row using a sorted index that is four levels\ndeep, Oracle Rdb may need to do a total of five input/output operations,\none for each level of the sorted index and one to retrieve the actual\nrow. By using a hashed index, the number of input/output operations may\nbe reduced to one or two because hashed index retrieval retrieves the\nrow directly. \n", "msg_date": "Fri, 21 Jun 2002 11:06:50 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: What is wrong with hashed index usage?" }, { "msg_contents": "Dann Corbit wrote:\n> > > This change strikes me as a step backwards. The existing \n> > wording tells\n> > > the truth; the proposed revision removes the facts in favor \n> > of a blanket\n> > > assertion that is demonstrably false.\n> > \n> > OK, which part of is \"demonstrably false\"? I think the old \"should\n> > generally be preferred\" is too vague. No one has come up with a case\n> > where hash has shown to be faster, and a lot of cases where \n> > it is slower.\n> \n> I agree with Tom. Maybe it is not true for PostgreSQL that hashed\n> indexes are better, but for every other database if you are doing single\n> lookups and do not need to order the items sequentially, hashed indexes\n> are better. What this indicates to me is that hashed indexes could\n> {potentially} be much better implemented for PostgreSQL.\n\nYes, our implementation needs help. People who know other db's are\nprobably choosing hash thinking it is as good as btree in our code, and\nit isn't. That's why I wanted the documentation update, and why I am\nsuggesting the elog(NOTICE).\n\nI have updated the documentation to specifically mention that\nPostgreSQL's hashes are slower/similar to btree.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: doc/src/sgml/diskusage.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/diskusage.sgml,v\nretrieving revision 1.1\ndiff -c -r1.1 diskusage.sgml\n*** doc/src/sgml/diskusage.sgml\t13 Jun 2002 05:15:22 -0000\t1.1\n--- doc/src/sgml/diskusage.sgml\t21 Jun 2002 19:06:03 -0000\n***************\n*** 22,31 ****\n </para>\n \n <para>\n! You can monitor disk space from two places; from inside\n! <application>psql</> and from the command line using\n! <application>contrib/oid2name</>. Using <application>psql</> you can\n! issue queries to see the disk usage for any table:\n <programlisting>\n play=# SELECT relfilenode, relpages\n play-# FROM pg_class\n--- 22,33 ----\n </para>\n \n <para>\n! You can monitor disk space from three places: from\n! <application>psql</> using <command>VACUUM</> information, from\n! <application>psql</> using <application>contrib/dbsize</>, and from\n! the command line using <application>contrib/oid2name</>. Using\n! <application>psql</> on a recently vacuumed (or analyzed) database,\n! you can issue queries to see the disk usage of any table:\n <programlisting>\n play=# SELECT relfilenode, relpages\n play-# FROM pg_class\n***************\n*** 38,47 ****\n </para>\n \n <para> \n! Each page is typically 8 kilobytes. <literal>relpages</> is only\n! updated by <command>VACUUM</> and <command>ANALYZE</>. To show the\n! space used by <acronym>TOAST</> tables, use a query based on the heap\n! relfilenode:\n <programlisting>\n play=# SELECT relname, relpages\n play-# FROM pg_class\n--- 40,49 ----\n </para>\n \n <para> \n! Each page is typically 8 kilobytes. (Remember, <literal>relpages</>\n! is only updated by <command>VACUUM</> and <command>ANALYZE</>.) To\n! show the space used by <acronym>TOAST</> tables, use a query based on\n! 
the heap relfilenode shown above:\n <programlisting>\n play=# SELECT relname, relpages\n play-# FROM pg_class\nIndex: doc/src/sgml/indices.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/indices.sgml,v\nretrieving revision 1.33\ndiff -c -r1.33 indices.sgml\n*** doc/src/sgml/indices.sgml\t21 Jun 2002 16:52:00 -0000\t1.33\n--- doc/src/sgml/indices.sgml\t21 Jun 2002 19:06:04 -0000\n***************\n*** 181,190 ****\n </synopsis>\n <note>\n <para>\n! Testing has shown hash indexes to be similar or slower than btree\n! indexes, and the index size and build time for hash indexes is much\n! worse. Hash indexes also suffer poor performance under high\n! concurrency. For these reasons, hash index use is discouraged.\n </para>\n </note> \n </para>\n--- 181,191 ----\n </synopsis>\n <note>\n <para>\n! Testing has shown PostgreSQL's hash indexes to be similar or slower\n! than btree indexes, and the index size and build time for hash\n! indexes is much worse. Hash indexes also suffer poor performance\n! under high concurrency. For these reasons, hash index use is\n! discouraged.\n </para>\n </note> \n </para>\nIndex: doc/src/sgml/ref/create_index.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/ref/create_index.sgml,v\nretrieving revision 1.33\ndiff -c -r1.33 create_index.sgml\n*** doc/src/sgml/ref/create_index.sgml\t21 Jun 2002 16:52:00 -0000\t1.33\n--- doc/src/sgml/ref/create_index.sgml\t21 Jun 2002 19:06:05 -0000\n***************\n*** 330,339 ****\n the <literal>=</literal> operator.\n </para>\n <para>\n! Testing has shown hash indexes to be similar or slower than btree\n! indexes, and the index size and build time for hash indexes is much\n! worse. Hash indexes also suffer poor performance under high\n! concurrency. 
For these reasons, hash index use is discouraged.\n </para>\n \n <para>\n--- 330,340 ----\n the <literal>=</literal> operator.\n </para>\n <para>\n! Testing has shown PostgreSQL's hash indexes to be similar or slower\n! than btree indexes, and the index size and build time for hash\n! indexes is much worse. Hash indexes also suffer poor performance\n! under high concurrency. For these reasons, hash index use is\n! discouraged.\n </para>\n \n <para>", "msg_date": "Fri, 21 Jun 2002 15:06:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" } ]
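The input/output arithmetic quoted earlier from the Oracle Rdb documentation (five page reads through a four-level sorted index versus one or two for a hashed lookup) generalizes to any b-tree versus hash comparison. A rough back-of-the-envelope model, purely my own illustration and not PostgreSQL code; the per-page fanout of 100 keys is an assumed figure:

```python
def btree_page_reads(nrows, fanout=100):
    """Pages read for one b-tree lookup: one page per index level,
    plus one heap page to fetch the row itself."""
    levels, capacity = 1, fanout
    while capacity < nrows:
        capacity *= fanout
        levels += 1
    return levels + 1


def hash_page_reads(overflow_pages=0):
    """An ideal hash probe reads the bucket page (plus any overflow
    pages chained onto it) and then the heap page."""
    return 1 + overflow_pages + 1
```

With roughly 100 keys per index page, a million-row table needs a three-level b-tree, so four page reads per lookup against two for an uncontended hash probe — the same shape of saving the Rdb manual describes. The thread's point is that the savings only materialize if the hash implementation itself is good.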
[ { "msg_contents": "bash-2.05a$ make install > /dev/null\nIn file included from tupdesc.c:22:\n../../../../src/include/funcapi.h:69: syntax error before `uint'\n\n\nDropping the u works fine.\n\n\nFreeBSD fury.inquent.com 4.5-RELEASE FreeBSD 4.5-RELEASE #1: Mon Feb 4\n13:30:57 EST 2002 root@fury.inquent.com:/usr/obj/usr/src/sys/FURY \ni386\n\nbash-2.05a$ gcc --version\n2.95.3\n\n\n\n\n\n\n", "msg_date": "21 Jun 2002 15:26:36 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "funcapi.h:69: syntax error before `uint'" }, { "msg_contents": "Rod Taylor wrote:\n> bash-2.05a$ make install > /dev/null\n> In file included from tupdesc.c:22:\n> ../../../../src/include/funcapi.h:69: syntax error before `uint'\n> \n> \n> Dropping the u works fine.\n> \n\nSorry, that should have been uint32. Here is a patch.\n\nJoe", "msg_date": "Fri, 21 Jun 2002 16:17:39 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] funcapi.h:69: syntax error before `uint'" }, { "msg_contents": "\nOK, fixed. uint changed to uint32.\n\n---------------------------------------------------------------------------\n\nRod Taylor wrote:\n> bash-2.05a$ make install > /dev/null\n> In file included from tupdesc.c:22:\n> ../../../../src/include/funcapi.h:69: syntax error before `uint'\n> \n> \n> Dropping the u works fine.\n> \n> \n> FreeBSD fury.inquent.com 4.5-RELEASE FreeBSD 4.5-RELEASE #1: Mon Feb 4\n> 13:30:57 EST 2002 root@fury.inquent.com:/usr/obj/usr/src/sys/FURY \n> i386\n> \n> bash-2.05a$ gcc --version\n> 2.95.3\n> \n> \n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 22 Jun 2002 00:08:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: funcapi.h:69: syntax error before `uint'" }, { "msg_contents": "\nAlready fixed. Thanks.\n\n---------------------------------------------------------------------------\n\nJoe Conway wrote:\n> Rod Taylor wrote:\n> > bash-2.05a$ make install > /dev/null\n> > In file included from tupdesc.c:22:\n> > ../../../../src/include/funcapi.h:69: syntax error before `uint'\n> > \n> > \n> > Dropping the u works fine.\n> > \n> \n> Sorry, that should have been uint32. Here is a patch.\n> \n> Joe\n> \n> \n\n> Index: src/include/funcapi.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/include/funcapi.h,v\n> retrieving revision 1.1\n> diff -c -r1.1 funcapi.h\n> *** src/include/funcapi.h\t20 Jun 2002 20:39:04 -0000\t1.1\n> --- src/include/funcapi.h\t21 Jun 2002 23:04:00 -0000\n> ***************\n> *** 66,75 ****\n> typedef struct\n> {\n> \t/* Number of times we've been called before */\n> ! \tuint\t\t\tcall_cntr;\n> \n> \t/* Maximum number of calls */\n> ! \tuint\t\t\tmax_calls;\n> \n> \t/* pointer to result slot */\n> \tTupleTableSlot *slot;\n> --- 66,75 ----\n> typedef struct\n> {\n> \t/* Number of times we've been called before */\n> ! \tuint32\t\t\tcall_cntr;\n> \n> \t/* Maximum number of calls */\n> ! \tuint32\t\t\tmax_calls;\n> \n> \t/* pointer to result slot */\n> \tTupleTableSlot *slot;\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 22 Jun 2002 00:12:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] funcapi.h:69: syntax error before `uint'" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Friday, June 21, 2002 9:52 AM\n> To: Tom Lane\n> Cc: Neil Conway; mloftis@wgops.com; Dann Corbit;\n> pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] What is wrong with hashed index usage?\n> \n> \n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > I remember three problems: build time, index size, and \n> concurrency\n> > > problems. I was wondering about the equal key case \n> myself, and assumed\n> > > hash may be a win there, but with the concurrency \n> problems, is that even\n> > > possible?\n> > \n> > Sure. Many-equal-keys are a problem for btree whether you have any\n> > concurrency or not.\n> > \n> > > OK, I have reworded it. Is that better?\n> > \n> > It's better, but you've still discarded the original's \n> explicit mention\n> > of concurrency problems. Why do you want to remove information?\n> \n> OK, concurrency added. How is that?\n> \n> > \n> > > How about an elog(NOTICE) for hash use?\n> > \n> > I don't think that's appropriate.\n> \n> I was thinking of this during CREATE INDEX ... hash:\n> \n> \tNOTICE: Hash index use is discouraged. See the CREATE INDEX\n> \treference page for more information.\n> \n> Does anyone else like/dislike that?\n\nI think it might be OK temporarily, to show that there is some work that\nneeds done. When hashed indexes are fixed, the notice should be\nremoved.\n\nI have not looked at the hash code. 
Here is a strategy (off the top of\nmy head) that seems like it should work:\n\nUse Bob Jenkins' 64 bit generic hash from here (totally free for use and\nfast as blazes):\nhttp://burtleburtle.net/bob/hash/index.html\n\nSpecifically:\nhttp://burtleburtle.net/bob/c/lookup8.c and routine: hash( k, length,\nlevel)\n\nNow, with a 64 bit hash, there is a very tiny probability of a collision\n(but you could have duplicate data).\nThe hash index would consist of nothing more than this:\n[long long hash=64 bit hash code][unsigned nmatches=count of matching\nhashes][array of {nmatches} pointers directly to the records with that\nhash]\n\nThis is probably grotesquely oversimplified. But maybe it will spur an\nidea in the person who writes the indexing code.\n", "msg_date": "Fri, 21 Jun 2002 13:20:18 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: What is wrong with hashed index usage?" }, { "msg_contents": "Dann Corbit wrote:\n> > I was thinking of this during CREATE INDEX ... hash:\n> > \n> > \tNOTICE: Hash index use is discouraged. See the CREATE INDEX\n> > \treference page for more information.\n> > \n> > Does anyone else like/dislike that?\n> \n> I think it might be OK temporarily, to show that there is some work that\n> needs done. When hashed indexes are fixed, the notice should be\n> removed.\n\nOh, yes, clearly, we would remove it once we had a hash implementation\nthat had _any_ advantages over btree.\n\nSo, is your vote for or against the elog(NOTICE)?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 21 Jun 2002 16:31:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" 
}, { "msg_contents": "> So, is you vote for or against the elog(NOTICE)?\n\nOK, if we are still voting, then I'll mention that I generally dislike\nthe idea of notices of this kind. And would not like this notice in\nparticular. So would vote no with both hands ;)\n\nI'm pretty sure that we have a consensus policy (hmm, at least if a\nconsensus consists of repeated votes on one question or the other) that\nnotices to protect people from doing what they ask the system to do are\nnot generally desirable.\n\nPutting messages in as a spur to development is not particularly\neffective; witness a few chapters in the docs which consist of \"This\nneeds to be written. Any volunteers?\" and which have stayed untouched\nfor three years now ;)\n\n - Thomas\n", "msg_date": "Fri, 21 Jun 2002 17:57:14 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" }, { "msg_contents": "On the other hand, I like hints on how to do things better ;)\n\nDavid\n\nThomas Lockhart wrote:\n\n>>So, is you vote for or against the elog(NOTICE)?\n>> \n>>\n>\n>OK, if we are still voting, then I'll mention that I generally dislike\n>the idea of notices of this kind. And would not like this notice in\n>particular. \n>\n\n", "msg_date": "Fri, 21 Jun 2002 22:10:37 -0400", "msg_from": "David Ford <david+cert@blue-labs.org>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" }, { "msg_contents": "Which is why you RTFM ;)\n\n--On Friday, June 21, 2002 10:10 PM -0400 David Ford \n<david+cert@blue-labs.org> wrote:\n\n> On the other hand, I like hints on how to do things better ;)\n>\n> David\n>\n> Thomas Lockhart wrote:\n>\n>>> So, is you vote for or against the elog(NOTICE)?\n>>>\n>>>\n>>\n>> OK, if we are still voting, then I'll mention that I generally dislike\n>> the idea of notices of this kind. And would not like this notice in\n>> particular. 
So would vote no with both hands ;)\n>>\n>>\n>\n\n\n", "msg_date": "Fri, 21 Jun 2002 19:17:22 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" }, { "msg_contents": "Thomas Lockhart wrote:\n> > So, is you vote for or against the elog(NOTICE)?\n> \n> OK, if we are still voting, then I'll mention that I generally dislike\n> the idea of notices of this kind. And would not like this notice in\n> particular. So would vote no with both hands ;)\n> \n> I'm pretty sure that we have a consensus policy (hmm, at least if a\n> consensus consists of repeated votes on one question or the other) that\n> notices to protect people from doing what they ask the system to do are\n> not generally desirable.\n> \n> Putting messages in as a spur to development is not particularly\n> effective; witness a few chapters in the docs which consist of \"This\n> needs to be written. Any volunteers?\" and which have stayed untouched\n> for three years now ;)\n\nOK, elog(NOTICE) is voted down. SGML docs are updated. We don't need\nan FAQ item for this, do we?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 22 Jun 2002 00:01:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" } ]
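Dann's proposed on-disk entry layout upthread — a 64-bit hash code, a count of matches, and an array of pointers directly to the matching records — can be sketched in miniature. This is an illustrative Python model of the idea, not an on-disk format or backend code, and it substitutes Python's built-in `hash` for the Bob Jenkins `lookup8` routine he cites:

```python
class ToyHashIndex:
    """Maps a 64-bit key hash to the list of row pointers (TIDs)
    sharing that hash; hash collisions and duplicate keys simply
    extend the pointer array, mirroring the proposed
    [hash][nmatches][pointers] entry layout."""

    def __init__(self):
        self.buckets = {}

    @staticmethod
    def _hash64(key):
        # Stand-in for Jenkins' hash(k, length, level); any good
        # 64-bit mixing function would do here.
        return hash(key) & 0xFFFFFFFFFFFFFFFF

    def insert(self, key, tid):
        self.buckets.setdefault(self._hash64(key), []).append(tid)

    def lookup(self, key):
        # One probe; the caller must still recheck the heap tuples,
        # since distinct keys can share a hash value.
        return self.buckets.get(self._hash64(key), [])


# Tiny demonstration instance.
demo = ToyHashIndex()
demo.insert("abc", (1, 1))
demo.insert("abc", (1, 2))
demo.insert("xyz", (2, 7))
```

Equality lookups cost a single probe regardless of table size, which is the property the thread says the current implementation fails to deliver in practice.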
[ { "msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Friday, June 21, 2002 1:31 PM\n> To: Dann Corbit\n> Cc: Tom Lane; Neil Conway; mloftis@wgops.com;\n> pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] What is wrong with hashed index usage?\n> \n> \n> Dann Corbit wrote:\n> > > I was thinking of this during CREATE INDEX ... hash:\n> > > \n> > > \tNOTICE: Hash index use is discouraged. See the CREATE INDEX\n> > > \treference page for more information.\n> > > \n> > > Does anyone else like/dislike that?\n> > \n> > I think it might be OK temporarily, to show that there is \n> some work that\n> > needs done. When hashed indexes are fixed, the notice should be\n> > removed.\n> \n> Oh, yes, clearly, we would remove it once we had a hash implementation\n> that had _any_ advantages over btree.\n> \n> So, is you vote for or against the elog(NOTICE)?\n\nI will defer to the preference of the others. I lean ever so slightly\ntowards the notice, because it is very unusual for hashed index not to\nbe faster for single item lookup.\n", "msg_date": "Fri, 21 Jun 2002 13:34:57 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: What is wrong with hashed index usage?" } ]
[ { "msg_contents": "\nignore this one ...\n\n", "msg_date": "Fri, 21 Jun 2002 23:58:07 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "test 2, first failed ..." }, { "msg_contents": "\nOkay, just did a series of upgrades to the server to hopefully speed up\ndelivery a bit ... 6 minutes more reasonable? let's see if it keeps up,\nmind you ...\n\nOn Fri, 21 Jun 2002, Marc G. Fournier wrote:\n\n>\n> ignore this one ...\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n\n", "msg_date": "Sat, 22 Jun 2002 00:06:32 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: test 2, first failed ..." }, { "msg_contents": "On Fri, 2002-06-21 at 22:06, Marc G. Fournier wrote:\n> \n> Okay, just did a series of upgrades to the server to hopefully speed up\n> delivery a bit ... 6minutes more reasonable? let's see if it keeps up,\n> mind you ...\nThanks, Marc. I assume this is in response to my note about the\nmulti-hour delay?\n\nLER\n> \n> On Fri, 21 Jun 2002, Marc G. 
Fournier wrote:\n> \n> >\n> > ignore this one ...\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> >\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "21 Jun 2002 22:12:22 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: test 2, first failed ..." }, { "msg_contents": "\nyup, as well as Francisco's ...\n\nOn 21 Jun 2002, Larry Rosenman wrote:\n\n> On Fri, 2002-06-21 at 22:06, Marc G. Fournier wrote:\n> >\n> > Okay, just did a series of upgrades to the server to hopefully speed up\n> > delivery a bit ... 6minutes more reasonable? let's see if it keeps up,\n> > mind you ...\n> Thanks, Marc. I assume this is in response to my note about the\n> multi-hour delay?\n>\n> LER\n> >\n> > On Fri, 21 Jun 2002, Marc G. 
Fournier wrote:\n> >\n> > >\n> > > ignore this one ...\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > subscribe-nomail command to majordomo@postgresql.org so that your\n> > > message can get through to the mailing list cleanly\n> > >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n> --\n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n>\n>\n\n", "msg_date": "Sat, 22 Jun 2002 01:24:28 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: test 2, first failed ..." } ]
[ { "msg_contents": "\nto ignore ...\n\n\n", "msg_date": "Sat, 22 Jun 2002 00:02:52 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "one more ... " } ]
[ { "msg_contents": "With the pg_depend / pg_constraint implementation foreign keys are\napplied to dumps via alter table / add foreign key (retains inter\ntable dependencies).\n\nSome have expressed that this could be quite slow for large databases,\nand want a type of:\n\nSET CONSTRAINTS UNCHECKED;\n\nHowever, others don't believe constraints other than foreign keys\nshould go unchecked.\n\nThat said, is this functionality wanted outside of pg_dump /\npg_restore?\n\nOr would the below be more appropriate?:\nALTER TABLE tab ADD FOREIGN KEY .... TRUST EXISTING DATA;\n\nThat is, it will not check pre-existing data to ensure it's proper.\nThe assumption being that pg_dump came from an already consistent\ndatabase. Needs better wording.\n\n--\nRod\n\n", "msg_date": "Sat, 22 Jun 2002 12:11:56 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "pg_dump and ALTER TABLE / ADD FOREIGN KEY" }, { "msg_contents": "> However, others don't believe constraints other than foreign keys\n> should go unchecked.\n>\n> That said, is this functionality wanted outside of pg_dump /\n> pg_restore?\n\npg_dump should reload a database as it was stored in the previous database. \nIf your old data is not clean, pg_dump / restore is not a very good tool for \ncleaning it up. I think ignoring constraints is a good thing if it will load \nthe data faster (at least when you are doing a database backup / restore). \nWhy can't we do all alter table commands (that add constraints) after we load \nthe data? That way we don't need to alter syntax at all.\n", "msg_date": "Sat, 22 Jun 2002 18:32:39 -0400", "msg_from": "\"Matthew T. 
O'Connor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump and ALTER TABLE / ADD FOREIGN KEY" }, { "msg_contents": "> Some have expressed that this could be quite slow for large databases,\n> and want a type of:\n>\n> SET CONSTRAINTS UNCHECKED;\n>\n> However, others don't believe constraints other than foreign keys\n> should go unchecked.\n\nWell, at the moment remember that all the other SET CONSTRAINTS commands\nonly affect foreign keys. However, this is a TODO to allow deferrable\nunique constraints.\n\n> Or would the below be more appropriate?:\n> ALTER TABLE tab ADD FOREIGN KEY .... TRUST EXISTING DATA;\n\nMaybe instead of TRUST EXISTING DATA, it could just be WITHOUT CHECK or\nsomething that uses existing keywords?\n\nEither way, it must be a superuser-only command. I'm kinda beginning to\nfavour the latter now actually...\n\nExcept if we could make all constraints uncheckable, then restoring a dump\nwould be really fast (but risky!)\n\nChris\n\n\n\n\n\n", "msg_date": "Sun, 23 Jun 2002 13:23:59 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_dump and ALTER TABLE / ADD FOREIGN KEY" }, { "msg_contents": "\nOn Sat, 22 Jun 2002, Matthew T. O'Connor wrote:\n\n> > However, others don't believe constraints other than foreign keys\n> > should go unchecked.\n> >\n> > That said, is this functionality wanted outside of pg_dump /\n> > pg_restore?\n>\n> pg_dump should reload a database as it was stored in the previous database.\n> If your old data is not clean, pg_dump / restore is not a very good tool for\n> cleaning it up. I think ignoring contrains is a good thing if it will load\n> the data faster (at least when you are doing a database backup / restore).\n> Why can't we do all alter table commands (that add constraints) after we load\n> the data, that way we don't need to alter syntax at all.\n\nThat doesn't help. 
ALTER TABLE checks the constraint at the time the\nalter table is issued since the constraint must be satisfied by the\ncurrent data. Right now that check basically runs the trigger for each\nrow to check it, which is probably sub-optimal since it could be one\nstatement, but changing that won't prevent it from being slow on big\ntables.\n\n\n\n", "msg_date": "Sun, 23 Jun 2002 16:55:44 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump and ALTER TABLE / ADD FOREIGN KEY" }, { "msg_contents": "On 2002.06.23 01:23 Christopher Kings-Lynne wrote:\n> > Some have expressed that this could be quite slow for large\n> databases,\n> > and want a type of:\n> >\n> > SET CONSTRAINTS UNCHECKED;\n> >\n> > However, others don't believe constraints other than foreign keys\n> > should go unchecked.\n> \n> Well, at the moment remember taht all that other SET CONSTRAINTS\n> commands\n> only affect foreign keys. However, this is a TODO to allow deferrable\n> unique constraints.\n> \n> > Or would the below be more appropriate?:\n> > ALTER TABLE tab ADD FOREIGN KEY .... TRUST EXISTING DATA;\n> \n> Maybe instead of TRUST EXISTING DATA, it could be just be WITHOUT\n> CHECK or\n> something that uses existing keywords?\n\nWITHOUT CHECK doesn't sound right. 'Make a foreign key but don't \nenforce it'.\n\nWITHOUT BACKCHECKING, WITHOUT ENFORCING CURRENT, ...\n\nAny way you look at it, it's going to further break loading pgsql backups \ninto another database. At least the set constraints line will be \nerrored out on most other DBs -- but the foreign key will still be \ncreated.\n\nSET FKEY_CONSTRAINTS TO UNCHECKED;\n\n> Except if we could make all constraints uncheckable, then restoring a\n> dump\n> would be really fast (but risky!)\n\nNo more risky than simply avoiding foreign key constraints. 
A unique \nkey is a simple matter to fix usually, foreign keys are not so easy \nwhen you get into the double / triple keys\n\n\n\n", "msg_date": "Mon, 24 Jun 2002 08:58:13 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: pg_dump and ALTER TABLE / ADD FOREIGN KEY" } ]
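Stephan's point upthread — that the ADD FOREIGN KEY check currently fires the constraint trigger once per existing row when a single set-oriented pass could do the same validation — can be illustrated outside SQL. A hypothetical sketch of the set-oriented form (my own illustration, not pg_dump or backend code):

```python
def fk_violations(fk_rows, pk_rows):
    """Set-oriented validation of pre-existing data: one pass over
    the referencing values against a hash set of referenced keys,
    instead of one trigger invocation (and one index lookup) per row.
    Returns the referencing values with no matching referenced key."""
    pk = set(pk_rows)
    # A NULL (None) foreign key references nothing, so it passes.
    return [v for v in fk_rows if v is not None and v not in pk]
```

An empty result corresponds to the already-consistent dump case the thread debates trusting without any check at all.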
[ { "msg_contents": "hi all. i want to begin contributing work to stuff on the todo list. i have \nworked on suns for a long time. i plan to buy a used pc to begin helping out \nwith postgresql. my question is, what would be a good buy? what is the \noldest model of PC that i could buy and be able to use to work on stuff on \nthe todo list for a year or two years or three? i am pretty ignorant as far \nas PC's go. just have not kept up or had motivation to. but am going to now. \nthanx! there is a large used computer store near me in northern virginia. i \nhave around 300 dollars to invest. thanx! looking forward to hearing some \ninteresting responses!\n\nbob kernell\n\n\n_________________________________________________________________\nSend and receive Hotmail on your mobile device: http://mobile.msn.com\n\n", "msg_date": "Sat, 22 Jun 2002 16:48:12 +0000", "msg_from": "\"Robert Kernell\" <kernell0000@hotmail.com>", "msg_from_op": true, "msg_subject": "computer for todo list" }, { "msg_contents": "> hi all. i want to begin contributing work to stuff on the todo list. i have\n> worked on suns for a long time.\n\nI have a dual processor 180MHz Pentium Pro which is adequate for\nbuilding and testing PostgreSQL, though I find I use my newer laptop\nmore often nowadays. I'd expect that any machine substantially better\nthan that would be sufficient. I have a preference for dual processor\nmachines, and you might find one of those even at the lower price range.\n\nYou'll be pleasantly surprised at how good a Linux or FreeBSD (or some\nother similar) machine will be for you, given your background with\nSolaris. There are lots of functional similarities and good performance.\n\n - Thomas\n", "msg_date": "Sat, 22 Jun 2002 18:42:44 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: computer for todo list" } ]
[ { "msg_contents": "Hi,\n\nI added the code to make IDENT authentication work even if the \nresponses are DES encrypted. The changes are contained in the attached \ntar.gz file.\n\nThere is a readme included in the tar.gz which explains things. The tar \nfile contains the following files:\n\nident-des.patch\nsrc/backend/libpq/ident-des.c\nsrc/include/libpq/ident-des.h\nREADME.ident-des\n\nThanks,\nDavid Kaplan", "msg_date": "Sat, 22 Jun 2002 16:57:56 -0700", "msg_from": "\"David M. Kaplan\" <dmkaplan@ucdavis.edu>", "msg_from_op": true, "msg_subject": "ident-des patches" }, { "msg_contents": "\nI haven't seen any demand for ident DES so I have not applied this\npatch. If it becomes a feature request, we can revisit this. Thanks.\n\n---------------------------------------------------------------------------\n\nDavid M. Kaplan wrote:\n> Hi,\n> \n> I added the code to make IDENT authentification work even if the \n> responses are DES encrypted. The changes are contained in the attached \n> tar.gz file.\n> \n> There is a readme included in the tar.gz which explains things. The tar \n> file contains the following files:\n> \n> ident-des.patch\n> src/backend/libpq/ident-des.c\n> src/include/libpq/ident-des.h\n> README.ident-des\n> \n> Thanks,\n> David Kaplan\n> \n\n[ application/x-gzip is not supported, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 26 Aug 2002 23:26:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ident-des patches" } ]
[ { "msg_contents": "I'm looking at implementing IS DISTINCT FROM, among other things. It has\nthe unusual behavior that it compares elements for a tuple by\nconsidering two NULLs to be equal (hence non-distinct) rather than\n\"unknown\". So the rules for comparison seem to be:\n\na) if the rows compared have different lengths, they are distinct\nb) if the rows are both zero-length, they are not distinct\nc) otherwise, each element in the row (or a single value on each side of\nthe comparison) are compared pairwise, with\n 1) if both elements are NULL, they are not distinct\n 2) if one element of the pair is NULL, they are distinct\n 3) if both elements are NOT NULL and are equal, they are not distinct\n 4) if no pair of elements is distinct, the rows are not distinct\n 5) otherwise, the rows are distinct\n\nI was thinking to implement this by simply expanding these rules within\ngram.y to be a tree of comparison tests. But I think that this does not\ngeneralize properly into allowing tuples or rows to be supplied by\nsubqueries or other non-literal tuples. So, I'm looking for suggestions\non how to go about implementing this. Should I define a new comparison\nnode like the AND expression which can directly handle the NULL\nbehaviors correctly? That would require at least minor changes in the\noptimizer and executor. Does another approach come to mind (esp. one\nwhich works ;)?\n\nTIA\n\n - Thomas\n", "msg_date": "Sat, 22 Jun 2002 18:59:43 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Suggestions for implementing IS DISTINCT FROM?" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> I'm looking at implementing IS DISTINCT FROM, among other things.\n> ...\n> I was thinking to implement this by simply expanding these rules within\n> gram.y to be a tree of comparison tests.\n\nPlease, please, do not do that. Make a new expression node tree type,\ninstead. 
We've made this mistake before (eg for BETWEEN) and I don't\nwant to do it again.\n\nAside from the points you make, a direct expansion approach cannot\nreverse-list properly in rules/views, and it will force multiple\nevaluations of arguments that should not be multiply evaluated.\n\nAdding a new expression node tree type is not too difficult these days;\nsee for example Joe Conway's recent NullTest and BooleanTest additions.\n\nI believe the existing expansions of row comparison operators\n(makeRowExpr) should be replaced by specialized nodes, too. That would\ngive us a shot at implementing row '<', '>' comparisons in a\nspec-compliant fashion...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 23 Jun 2002 12:01:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Suggestions for implementing IS DISTINCT FROM? " }, { "msg_contents": "> > I'm looking at implementing IS DISTINCT FROM, among other things.\n> > ...\n> > I was thinking to implement this by simply expanding these rules within\n> > gram.y to be a tree of comparison tests.\n> Please, please, do not do that. Make a new expression node tree type,\n> instead. We've made this mistake before (eg for BETWEEN) and I don't\n> want to do it again.\n\nUh, sure. If you don't quote out of context I think it is pretty clear\nthat I was looking for a helpful suggestion to do just that. Thanks,\nI'll proceed with the assurance that you won't object to *that* too ;)\n\n - Thomas\n\n\n", "msg_date": "Sun, 23 Jun 2002 18:53:54 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Suggestions for implementing IS DISTINCT FROM?" }, { "msg_contents": "> Please, please, do not do that. Make a new expression node tree type,\n> instead. 
We've made this mistake before (eg for BETWEEN) and I don't\n> want to do it again.\n\nI've actually already done almost all the work for converting BETWEEN to a\nnode but I have a couple of questions:\n\nShould I use a boolean in the node to indicate whether it is SYMMETRIC or\nASYMMETRIC, or should I use some sort of integer to indicate whether it is\nSYMMETRIC, ASYMMETRIC or DEFAULT (ASYMMETRIC). That way the reverse in\nrules and views could leave out the ASYMMETRIC if it wasn't specified\noriginally, rather than always adding it in. Which is better?\n\nChris\n\n\n\n\n", "msg_date": "Mon, 24 Jun 2002 11:44:56 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Suggestions for implementing IS DISTINCT FROM? " }, { "msg_contents": "> I've actually already done almost all the work for converting BETWEEN to a\n> node but I have a couple of questions:\n> Should I use a boolean in the node to indicate whether it is SYMMETRIC or\n> ASYMMETRIC, or should I use some sort of integer to indicate whether it is\n> SYMMETRIC, ASYMMETRIC or DEFAULT (ASYMMETRIC). That way the reverse in\n> rules and views could leave out the ASYMMETRIC if it wasn't specified\n> originally, rather than always adding it in. Which is better?\n\nGreat! \n\nI would use a boolean (or integer) to indicate two possibilities, not\nthree.\n\nThe language specifies what the default should be, and dump programs\ncould choose to omit the ASYMMETRIC if they choose. imho it is best to\nresolve defaults earlier, rather than pushing the resolution deep into\nthe parser or even farther.\n\n - Thomas\n\n\n", "msg_date": "Sun, 23 Jun 2002 21:12:04 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Suggestions for implementing IS DISTINCT FROM?" 
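For reference, the NULL-handling rules Thomas lists at the top of this thread translate directly into code, which can be handy for checking a node implementation against the intended semantics. A sketch using None for SQL NULL (an illustration only, not parser or executor code):

```python
def is_distinct_from(row_a, row_b):
    """SQL IS DISTINCT FROM over rows: two NULLs compare as not
    distinct, one NULL against a value is distinct, and rows of
    different lengths are always distinct."""
    if len(row_a) != len(row_b):           # rule (a)
        return True
    for a, b in zip(row_a, row_b):         # rules (c1)-(c3)
        if a is None and b is None:
            continue                       # both NULL: not distinct
        if a is None or b is None:
            return True                    # one NULL: distinct
        if a != b:
            return True                    # unequal values: distinct
    return False                           # covers zero-length rows, rule (b)
```

The `(None,) vs (None,)` case is exactly where this differs from ordinary `=` comparison, which would yield unknown rather than false.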
}, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Should I use a boolean in the node to indicate whether it is SYMMETRIC or\n> ASYMMETRIC, or should I use some sort of integer to indicate whether it is\n> SYMMETRIC, ASYMMETRIC or DEFAULT (ASYMMETRIC). That way the reverse in\n> rules and views could leave out the ASYMMETRIC if it wasn't specified\n> originally, rather than always adding it in. Which is better?\n\nMy intention is to reverse-list as either \"BETWEEN\" or \"BETWEEN SYMMETRIC\".\nWhile I believe in reproducing the source text during reverse listing,\nI don't take it to extremes ;-)\n\nSo a boolean is sufficient.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jun 2002 09:42:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Suggestions for implementing IS DISTINCT FROM? " }, { "msg_contents": "...\n> Adding a new expression node tree type is not too difficult these days;\n> see for example Joe Conway's recent NullTest and BooleanTest additions.\n> I believe the existing expansions of row comparison operators\n> (makeRowExpr) should be replaced by specialized nodes, too. That would\n> give us a shot at implementing row '<', '>' comparisons in a\n> spec-compliant fashion...\n\nOK, now that we are pushing new nodes into the executor with abandon :)\n...\n\nLet's talk about the preferred technique for doing so, especially with\nrow-style argument lists. I want to implement IS NULL for rows also\n(actually, already did so using a transformation in gram.y -- no need to\njump on that one Tom ;), and found a comment from Joe on the NullTest\ncode saying that he wanted to do just that \"someday\". Should we honk\naround the existing NullTest node to handle both lists of arguments and\nsingle arguments, or should we have a completely different set of nodes\nfor handling row arguments? 
I'll guess the latter, but we should talk\nwhether that scales properly, about the preferred style for this new\nkind of node, and about how to minimize performance hits which we might\nsee if we have a large number of new node types being handled by the\nexecutor.\n\nIf we extend existing nodes to handle lists, then I might be able to\npackage the DISTINCT test into an A_Expr node, though I haven't pushed\nthat all the way through to see for sure.\n\nUsing the SubLink node does not seem quite right because IS DISTINCT\nFROM does not seem to make sense with an embedded select as one of the\narguments, but maybe it does??\n\nComments? I'm itchy to code but don't want to waste more time than\nnecessary heading the wrong direction...\n\n - Thomas\n\n\n", "msg_date": "Tue, 25 Jun 2002 08:19:04 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Suggestions for implementing IS DISTINCT FROM?" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Let's talk about the preferred technique for doing so, especially with\n> row-style argument lists. I want to implement IS NULL for rows also\n> (actually, already did so using a transformation in gram.y -- no need to\n> jump on that one Tom ;), and found a comment from Joe on the NullTest\n> code saying that he wanted to do just that \"someday\". Should we honk\n> around the existing NullTest node to handle both lists of arguments and\n> single arguments, or should we have a completely different set of nodes\n> for handling row arguments? I'll guess the latter, but we should talk\n> whether that scales properly, about the preferred style for this new\n> kind of node, and about how to minimize performance hits which we might\n> see if we have a large number of new node types being handled by the\n> executor.\n\nI have a mild preference for minimizing the number of expression node\ntypes. 
If we can represent two similar kinds of expressions with one\nnode type instead of two, we roughly halve the amount of overhead code\nneeded.\n\nMy inclination would be to divvy things up on the basis of the kinds\nof things to be compared, so we'd have these classes of nodes:\n\n1. Test a single <row value expression>; this would cover\n\n <null predicate> ::=\n <row value expression> IS [ NOT ] NULL\n\n2. Compare two <row value expressions>; this would cover\n\n <comparison predicate> ::=\n <row value expression> <comp op> <row value expression>\n\n <distinct predicate> ::=\n <row value expression> IS DISTINCT FROM <row value expression>\n\n (OVERLAPS is a special case that should remain separate, IMHO)\n\n3. Compare three <row value expressions>:\n\n <between predicate> ::=\n <row value expression> [ NOT ] BETWEEN\n [ ASYMMETRIC | SYMMETRIC ]\n <row value expression> AND <row value expression>\n\n4. Compare a <row value expression> to a list of <row value expressions>:\n\n <in predicate> ::=\n <row value expression>\n [ NOT ] IN <left paren> <in value list> <right paren>\n\n <in value list> ::=\n <row value expression> { <comma> <row value expression> }...\n\n5. Compare a <row value expression> to the outputs of a <subquery>:\n\n <in predicate> ::=\n <row value expression>\n [ NOT ] IN <table subquery>\n\n <quantified comparison predicate> ::=\n <row value expression> <comp op> <quantifier>\n <table subquery>\n\n <match predicate> ::=\n <row value expression> MATCH [ UNIQUE ]\n [ SIMPLE | PARTIAL | FULL ]\n <table subquery>\n\nCase 5 corresponds exactly to the existing SubLink node type (although\nit's missing the MATCH options at the moment, and doesn't implement all\nthe <comp op> cases it should).\n\nThe spec intends all of these constructs to include the case where the\n<row value expression> is a single scalar expression. I'm feeling\nambivalent about whether we should have the same node type or different\nones for the single-value and row-value cases. 
For SubLink we currently\nhandle single-value and row-value left hand sides with the same code,\nand that seems fine. For <comparison predicate> I definitely *don't*\nwant to burden scalar comparisons with row-value overhead, so we need\ntwo separate representations for that. The other cases seem to be on the\nborderline. We don't currently have scalar-case node types for these,\nexcept for NullTest, so new node-type coding is needed anyway --- it\ncould be either a separate scalar node type or merged with the row-value\ncase. No strong preference here, but slight leaning towards merging.\n\n> Using the SubLink node does not seem quite right because IS DISTINCT\n> FROM does not seem to make sense with an embedded select as one of the\n> arguments, but maybe it does??\n\nI agree it doesn't make sense. However, we should look at SubLink and\nsee if we can't clean it up and maybe share some code. For all of the\ncases that compare rows, you need to have a list of the appropriate\nscalar comparison operators to use for each column. The way SubLink\ndoes that is pretty ugly (especially its use of a modifiable Const node).\nLet's see if we can't improve that before we copy it ;-). I'm thinking\nthat the expression tree itself ought to contain just a list of Oper\nnodes dangling from the SubLink or row comparison node. We'd need an\nalternate entry point similar to ExecEvalOper/ExecMakeFunctionResult\nthat would accept two Datum values rather than a list of subexpressions\nto evaluate, but that seems very doable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2002 12:36:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Suggestions for implementing IS DISTINCT FROM? " } ]
[ { "msg_contents": "Version 7.2.1, RH 7.3, installed from RPM.\n\nFollowing error occurs:\n\namber_ws=> delete from samples;\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!>\n\nThe table looks like this:\n\nCREATE TABLE \"samples\" (\n \"id\" integer DEFAULT nextval('\"samples_id_seq\"'::varchar(32)),\n \"fieldtrip_in\" integer\n REFERENCES \"fieldtrips\" ON DELETE RESTRICT ON UPDATE CASCADE,\n \"fieldtrip_out\" integer\n REFERENCES \"fieldtrips\" ON DELETE RESTRICT ON UPDATE CASCADE,\n \"site_name\" varchar(32)\n REFERENCES \"sites\" ON DELETE RESTRICT ON UPDATE CASCADE,\n \"collector_type\" varchar(32)\n REFERENCES \"collector_types\" ON DELETE RESTRICT ON UPDATE CASCADE,\n \"depth\" varchar(32)\n REFERENCES \"depth_types\" ON DELETE RESTRICT ON UPDATE CASCADE,\n \"replicate\" varchar(32)\n REFERENCES \"replicate_set\" ON DELETE RESTRICT ON UPDATE CASCADE,\n \"lost_bool\" boolean default false,\n \"buoy\" varchar(32),\n \"comments\" varchar(32),\n Constraint \"samples_pkey\" Primary Key (\"id\")\n);\n\nThe single record in it looks like this:\n\namber_ws=> select * from samples;\n-[ RECORD 1 ]--+--\nid | 1\nfieldtrip_in | 1\nfieldtrip_out | 1\nsite_name |\ncollector_type |\ndepth |\nreplicate |\nlost_bool | f\nbuoy |\ncomments |\n\nNo tables are using it as a REFERENCES target.\n\nLet me know if I can help more. I am not root on the box, so I am not going\nto try attaching gdb to anything tonight. However, the root user and I\nwould be quite happy to do so later.\n\nThanks\nWebb\n\n", "msg_date": "Sat, 22 Jun 2002 19:45:29 -0700", "msg_from": "\"Foo\" <wwsprague@ucdavis.edu>", "msg_from_op": true, "msg_subject": "" } ]
[ { "msg_contents": "I upgraded from PG 7.1.3 to 7.2, and I am trying to restore my dbs but I\nkeep getting:\n\n[nsadmin@roam backup-20020622]$ pg_restore <all-good.dmp \npg_restore: [archiver] input file does not appear to be a valid archive\n", "msg_date": "Sun, 23 Jun 2002 00:12:11 -0500", "msg_from": "James Thornton <thornton@cs.baylor.edu>", "msg_from_op": true, "msg_subject": "pg_restore: [archiver] input file does not appear to be a valid\n\tarchive" }, { "msg_contents": "On Sun, 23 Jun 2002, James Thornton wrote:\n\n> I upgraded from PG 7.1.3 to 7.2, and I am trying to restore my dbs but I\n> keep getting:\n> \n> [nsadmin@roam backup-20020622]$ pg_restore <all-good.dmp \n> pg_restore: [archiver] input file does not appear to be a valid archive\n\n\nWell we could try and work out the problem and a solution on the assumption\nthat the dump is a valid archive or you could confirm that it is first.\n\nJust look at the file. I believe I'm right, but accept I may be wrong, in\nsaying that the 7.1.3 pg_dump can only generate a script file. So if it's valid\nit should look like a script to recreate and load your database. You could\ntry running it straight into psql as well.\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n\n\n", "msg_date": "Mon, 24 Jun 2002 20:55:06 +0100 (BST)", "msg_from": "\"Nigel J. 
Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Re: pg_restore: [archiver] input file does not appear to" }, { "msg_contents": "James Thornton <thornton@cs.ecs.baylor.edu> writes:\n> I upgraded from PG 7.1.3 to 7.2, and I am trying to restore my dbs but I\n> keep getting:\n> [nsadmin@roam backup-20020622]$ pg_restore <all-good.dmp \n> pg_restore: [archiver] input file does not appear to be a valid archive\n\nHow did you make the dump file exactly?\n\nI'm betting that what you have is not a dump, but just a SQL script that\nyou are supposed to feed to psql not pg_restore ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jun 2002 17:02:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_restore: [archiver] input file does not appear to be\n\ta valid archive" } ]
[ { "msg_contents": "This is the problem:\n\n*** ./expected/rules.out Fri May 3 08:32:19 2002\n--- ./results/rules.out Sun Jun 23 14:08:37 2002\n***************\n*** 1005,1012 ****\n SELECT * FROM shoe_ready WHERE total_avail >= 2;\n shoename | sh_avail | sl_name | sl_avail | total_avail\n ------------+----------+------------+----------+-------------\n- sh1 | 2 | sl1 | 5 | 2\n sh3 | 4 | sl7 | 7 | 4\n (2 rows)\n\n CREATE TABLE shoelace_log (\n--- 1005,1012 ----\n SELECT * FROM shoe_ready WHERE total_avail >= 2;\n shoename | sh_avail | sl_name | sl_avail | total_avail\n ------------+----------+------------+----------+-------------\n sh3 | 4 | sl7 | 7 | 4\n+ sh1 | 2 | sl1 | 5 | 2\n (2 rows)\n\n CREATE TABLE shoelace_log (\n\n======================================================================", "msg_date": "Sun, 23 Jun 2002 14:06:48 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "RULE 
regression failure on freebsd/alpha" } ]
[ { "msg_contents": "Hi All,\n\nWhereabouts in the code is the '*' expanded into the list of valid columns and also where are the columns specified in the select arguments (or wherever) checked for validity?\n\nChris", "msg_date": "Sun, 23 Jun 2002 16:27:04 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Code questions" }, { "msg_contents": "On Sun, 23 Jun 2002, Christopher Kings-Lynne wrote:\n\n> Hi All,\n> \n> Whereabouts in the code is the '*' expanded into the list of valid columns \n\nSee the rule 'target_el' in gram.y. The SelectStmt node is processed\nfurther down the parser in analyze.c: see transformStmt(),\ntransformSelectStmt() and transformTargetList().\n\n> and also where are the columns specified in the select arguments (or\n> wherever) checked for validity?\n\nThis is pretty easy to discover by working backward from the\nelog(ERROR) produced when you select a non-existent attribute from a\nrelation:\n\nERROR: Attribute 'nonexistent' not found\n\nThis is generated by transformIdent(), called from transformExpr, called\nfrom transformTargetEntry. The latter is called by transformTargetList()\nwhen the attribute is not of the form '*' or 'relation.*' or when we\ndon't know if the attribute is actually an attribute.\n\n> Chris\n\nGavin\n\n", "msg_date": "Sun, 23 Jun 2002 18:55:24 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Code questions" } ]
[ { "msg_contents": "Sorry to nag about this so late, but I fear that the new command SET LOCAL\nwill cause some confusion later on.\n\nSQL uses LOCAL to mean the local node in a distributed system (SET LOCAL\nTRANSACTION ...) and the current session as opposed to all sessions (local\ntemporary table). The new SET LOCAL command adds the meaning \"this\ntransaction only\". Instead we could simply use SET TRANSACTION, which\nwould be consistent in behaviour with the SET TRANSACTION ISOLATION LEVEL\ncommand.\n\nComments?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n\n\n", "msg_date": "Sun, 23 Jun 2002 23:52:08 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Use of LOCAL in SET command" }, { "msg_contents": "> SQL uses LOCAL to mean the local node in a distributed system (SET LOCAL\n> TRANSACTION ...) and the current session as opposed to all sessions (local\n> temporary table). The new SET LOCAL command adds the meaning \"this\n> transaction only\". Instead we could simply use SET TRANSACTION, which\n> would be consistent in behaviour with the SET TRANSACTION ISOLATION LEVEL\n> command.\n\nYes. If there is a possibility of confusion (now or later) over SQL99\nsyntax, we should do it The Right Way per spec.\n\n - Thomas\n\n\n", "msg_date": "Mon, 24 Jun 2002 09:49:21 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Use of LOCAL in SET command" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Sorry to nag about this so late, but I fear that the new command SET LOCAL\n> will cause some confusion later on.\n\nOkay...\n\n> SQL uses LOCAL to mean the local node in a distributed system (SET LOCAL\n> TRANSACTION ...) and the current session as opposed to all sessions (local\n> temporary table). The new SET LOCAL command adds the meaning \"this\n> transaction only\". 
Instead we could simply use SET TRANSACTION, which\n> would be consistent in behaviour with the SET TRANSACTION ISOLATION LEVEL\n> command.\n\nHmm ... this would mean that the implicit parsing of SET TRANSACTION\nISOLATION LEVEL would change (instead of SET / TRANSACTION ISOLATION\nLEVEL you'd now tend to read it as SET TRANSACTION / ISOLATION LEVEL)\nbut I guess that would still not create any parse conflicts. I'm okay\nwith this as long as we can fix psql's command completion stuff to\nhandle it intelligently. I hadn't gotten round to looking at that point\nyet for the LOCAL case; do you have any thoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jun 2002 16:37:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use of LOCAL in SET command " }, { "msg_contents": "\nHas this been resolved?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Sorry to nag about this so late, but I fear that the new command SET LOCAL\n> > will cause some confusion later on.\n> \n> Okay...\n> \n> > SQL uses LOCAL to mean the local node in a distributed system (SET LOCAL\n> > TRANSACTION ...) and the current session as opposed to all sessions (local\n> > temporary table). The new SET LOCAL command adds the meaning \"this\n> > transaction only\". Instead we could simply use SET TRANSACTION, which\n> > would be consistent in behaviour with the SET TRANSACTION ISOLATION LEVEL\n> > command.\n> \n> Hmm ... this would mean that the implicit parsing of SET TRANSACTION\n> ISOLATION LEVEL would change (instead of SET / TRANSACTION ISOLATION\n> LEVEL you'd now tend to read it as SET TRANSACTION / ISOLATION LEVEL)\n> but I guess that would still not create any parse conflicts. I'm okay\n> with this as long as we can fix psql's command completion stuff to\n> handle it intelligently. 
I hadn't gotten round to looking at that point\n> yet for the LOCAL case; do you have any thoughts?\n> \n> \t\t\tregards, tom lane\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 27 Aug 2002 00:02:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use of LOCAL in SET command" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Has this been resolved?\n\nI think the resolution was to do nothing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Aug 2002 00:09:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use of LOCAL in SET command " } ]
[ { "msg_contents": "\nJust a quick heads up ... I've asked Rackspace to investigate *why* the\nserver crashes every 24-48hrs, and given them carte-blanche to get it\nfixed ... they are planning on swapping out/in hardware, as right now that\nappears to be where the error messages are indicating ...\n\n\n\n\n", "msg_date": "Mon, 24 Jun 2002 09:29:46 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Sporadic Server Downtime ..." } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Dave Cramer [mailto:dave@fastcrypt.com] \n> Sent: 24 June 2002 01:25\n> To: PostgreSQL Hacker\n> Subject: [HACKERS] pgadmin.postgresql.org displaying errors\n> \n> \n> I am getting lots of errors on pgadmin.postgresql.org\n> \n> Dave\n\nLooks OK now...\n\nThanks anyway, Dave.\n\n\n", "msg_date": "Mon, 24 Jun 2002 15:15:41 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: pgadmin.postgresql.org displaying errors" } ]
[ { "msg_contents": "hi\n\ni want to alter the ALTER TABLE xxx ADD statement to allow\nthe following syntax:\n\nALTER TABLE [ ONLY ] table \n ADD [ COLUMN ] ( column type [ column_constraint ] [, column type [ column_constraint ]] )\n\njust to add one or more columns to a table with one alter table statement.\n\ni know .. its easier to use multiple alter table statements \nbut the database client application uses this syntax and\ni cant change the client application.\n\nhere is my first \"small\" patch to v7.2.1. i have some\nproblems whith the concept of the Nodes. \n\nmaybe someone could give me an hint how i could\nimplement this.\n\nyours, oliver teuber\n\n\ndiff -cr postgresql-7.2.1/src/backend/parser/analyze.c postgresql-7.2.1-oli/src/backend/parser/analyze.c\n*** postgresql-7.2.1/src/backend/parser/analyze.c\tWed Feb 27 00:48:43 2002\n--- postgresql-7.2.1-oli/src/backend/parser/analyze.c\tMon Jun 24 21:03:41 2002\n***************\n*** 2519,2524 ****\n--- 2519,2532 ----\n \t */\n \tswitch (stmt->subtype)\n \t{\n+ \t\tcase 'M':\n+ \n+ \n+ \t\t\t/* ... 
some hints please ;) */\n+ \n+ \n+ \t\t\tbreak;\n+ \n \t\tcase 'A':\n \t\t\tcxt.stmtType = \"ALTER TABLE\";\n \t\t\tcxt.relname = stmt->relname;\ndiff -cr postgresql-7.2.1/src/backend/parser/gram.y postgresql-7.2.1-oli/src/backend/parser/gram.y\n*** postgresql-7.2.1/src/backend/parser/gram.y\tSat Mar 9 18:41:04 2002\n--- postgresql-7.2.1-oli/src/backend/parser/gram.y\tMon Jun 24 20:42:01 2002\n***************\n*** 1070,1075 ****\n--- 1070,1085 ----\n \t\t\t\t\tn->def = $6;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\n+ /* ALTER TABLE <relation> ADD [COLUMN] <coldef> */\n+ \t\t| ALTER TABLE relation_expr ADD opt_column '(' OptTableElementList ')'\n+ \t\t\t\t{\n+ \t\t\t\t\tAlterTableStmt *n = makeNode(AlterTableStmt);\n+ \t\t\t\t\tn->subtype = 'M';\n+ \t\t\t\t\tn->relname = $3->relname;\n+ \t\t\t\t\tn->inhOpt = $3->inhOpt;\n+ \t\t\t\t\tn->ldef = $7;\n+ \t\t\t\t\t$$ = (Node *)n;\n+ \t\t\t\t}\n /* ALTER TABLE <relation> ALTER [COLUMN] <colname> {SET DEFAULT <expr>|DROP DEFAULT} */\n \t\t| ALTER TABLE relation_expr ALTER opt_column ColId alter_column_default\n \t\t\t\t{\ndiff -cr postgresql-7.2.1/src/include/nodes/parsenodes.h postgresql-7.2.1-oli/src/include/nodes/parsenodes.h\n*** postgresql-7.2.1/src/include/nodes/parsenodes.h\tWed Feb 27 00:48:46 2002\n--- postgresql-7.2.1-oli/src/include/nodes/parsenodes.h\tMon Jun 24 20:41:02 2002\n***************\n*** 121,126 ****\n--- 121,127 ----\n \tNodeTag\t\ttype;\n \tchar\t\tsubtype;\t\t/*------------\n \t\t\t\t\t\t\t\t *\tA = add column\n+ \t\t\t\t\t\t\t\t * M = add columns\n \t\t\t\t\t\t\t\t *\tT = alter column default\n \t\t\t\t\t\t\t\t *\tS = alter column statistics\n \t\t\t\t\t\t\t\t *\tD = drop column\n***************\n*** 135,140 ****\n--- 136,142 ----\n \tchar\t *name;\t\t\t/* column or constraint name to act on, or\n \t\t\t\t\t\t\t\t * new owner */\n \tNode\t *def;\t\t\t/* definition of new column or constraint */\n+ \tList\t *ldef;\n \tint\t\t\tbehavior;\t\t/* CASCADE or RESTRICT drop behavior */\n } AlterTableStmt;\n 
\n\n\n", "msg_date": "Mon, 24 Jun 2002 21:48:05 +0200", "msg_from": "Oliver Teuber <teuber@core.devicen.de>", "msg_from_op": true, "msg_subject": "Alter ALTER TABLE statement ..." } ]
[ { "msg_contents": "Fernando Nasser of Red Hat reminded me that it really makes no sense\nfor ALTER TABLE ADD COLUMN and ALTER TABLE RENAME COLUMN to behave\nnon-recursively --- that is, they should *always* affect inheritance\nchildren of the named table, never just the named table itself.\n\nAfter a non-recursive ADD/RENAME, you'd have a situation wherein\n\"SELECT * FROM foo\" would fail, because there'd be no corresponding\ncolumns in the child table(s). This seems clearly bogus to me.\n(On the other hand, non-recursive DROP COLUMN, if we had one, would\nbe okay ... the orphaned child columns would effectively become\nnon-inherited added columns. Similarly, non-recursive alterations of\ndefaults, constraints, etc seem reasonable.)\n\nAs of 7.2 we do accept \"ALTER TABLE ONLY foo\" forms of these commands,\nbut I think that's a mistake arising from thoughtless cut-and-paste\nfrom the other forms of ALTER. I believe it is better to give an error\nif such a command is given. Any objections?\n\nAlso, in the case where neither \"ONLY foo\" nor \"foo*\" is written, the\nbehavior currently depends on the SQL_INHERITANCE variable. There's\nno problem when SQL_INHERITANCE has its default value of TRUE, but what\nif it is set to FALSE? Seems to me we have two plausible choices:\n\n\t* Give an error, same as if \"ONLY foo\" had been written.\n\n\t* Assume the user really wants recursion, and do it anyway.\n\nThe second seems more user-friendly but also seems to violate the\nprinciple of least surprise. 
Anyone have an opinion about what to do?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jun 2002 16:22:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Nonrecursive ALTER TABLE ADD/RENAME COLUMN is wrong" }, { "msg_contents": "> \t* Give an error, same as if \"ONLY foo\" had been written.\n> \n> \t* Assume the user really wants recursion, and do it anyway.\n> \n> The second seems more user-friendly but also seems to violate the\n> principle of least surprise. Anyone have an opinion about what to do?\n\nI really prefer the former. If for some reason it were to become \navailable that they could alter only foo for some strange reason we \nhaven't come up with yet (statistics related perhaps?), we would \ncertainly need to throw an error on the other 'alter table' statements \nat that point in time.\n\n\n", "msg_date": "Mon, 24 Jun 2002 22:16:21 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Nonrecursive ALTER TABLE ADD/RENAME COLUMN is wrong" }, { "msg_contents": "> Fernando Nasser of Red Hat reminded me that it really makes no sense\n> for ALTER TABLE ADD COLUMN and ALTER TABLE RENAME COLUMN to behave\n> non-recursively --- that is, they should *always* affect inheritance\n> children of the named table, never just the named table itself.\n\nHmm. Good point. Anything else would lead to structural breakage.\n\n> The second seems more user-friendly but also seems to violate the\n> principle of least surprise. Anyone have an opinion about what to do?\n\nSame point as for the main issue: the solution should not introduce\nstructural breakage, especially only on the otherwise benign setting of\na GUC variable.\n\nThe case you are worried about already *has* structural inheritance, so\nthe GUC setting could reasonably have no effect. 
But if one is mixing a\ndatabase with inheritance structures with command settings that hide it,\nthey shouldn't be too surprised at whatever they get. The Right Thing\nimho is to respect the underlying structures and definitions, not the\ncommand facade. But would not dig in my heels on either choice after\nmore discussion.\n\n - Thomas\n\n\n", "msg_date": "Mon, 24 Jun 2002 19:19:06 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Nonrecursive ALTER TABLE ADD/RENAME COLUMN is wrong" }, { "msg_contents": "> The second seems more user-friendly but also seems to violate the\n> principle of least surprise. Anyone have an opinion about what to do?\n\nSounds like a logical argument, given normal OO behaviour.\n\nHope it inspires someone to implement DROP COLUMN :)\n\nChris\n\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 14:39:48 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Nonrecursive ALTER TABLE ADD/RENAME COLUMN is wrong" } ]
[ { "msg_contents": "I have a problem with a 7.1.3 database that has probably overflowed\nthe oid counter. The startup halts with these messages\n\nDEBUG: database system was interrupted at 2002-06-24 21:19:43 EEST\nDEBUG: CheckPoint record at (156, 1692817164)\nDEBUG: Redo record at (156, 1692775580); Undo record at (0, 0);\nShutdown FALSE\nDEBUG: NextTransactionId: 859255800; NextOid: 7098\nFATAL 2: Invalid NextTransactionId/NextOid\npostmaster: Startup proc 4752 exited with status 512 - abort\n\n\nCan something be done to recover the database?\n\nRegards,\nDaniel\n\n\n", "msg_date": "Mon, 24 Jun 2002 23:47:10 +0300", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": true, "msg_subject": "oids rollover?" }, { "msg_contents": "Daniel Kalchev <daniel@digsys.bg> writes:\n> I have a problem with a 7.1.3 database that has probably overflowed\n> the oid counter. The startup halts with these messages\n\n> DEBUG: database system was interrupted at 2002-06-24 21:19:43 EEST\n> DEBUG: CheckPoint record at (156, 1692817164)\n> DEBUG: Redo record at (156, 1692775580); Undo record at (0, 0);\n> Shutdown FALSE\n> DEBUG: NextTransactionId: 859255800; NextOid: 7098\n> FATAL 2: Invalid NextTransactionId/NextOid\n> postmaster: Startup proc 4752 exited with status 512 - abort\n\nLooks that way. This is fixed in 7.2, so you might want to think about\nan update sometime soon.\n\n> Can something be done to recover the database?\n\nYou could modify contrib/pg_resetxlog to force a value at least 16384 into\nthe OID counter. Since the DB was evidently not shut down cleanly, I'd\ncounsel cutting out the xlog-reset function entirely; just make it read\nthe pg_control file, set a valid nextOid, update the CRC, and rewrite\npg_control.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jun 2002 20:40:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: oids rollover? " } ]
[ { "msg_contents": "Hi,\n\nI corrected a few minor problems with the patch I sent Friday allowing \nIDENT authentication to recognize encrypted responses.\n\nThanks,\nDavid", "msg_date": "Mon, 24 Jun 2002 13:51:27 -0700", "msg_from": "\"David M. Kaplan\" <dmkaplan@ucdavis.edu>", "msg_from_op": true, "msg_subject": "new ident-des patch" } ]
[ { "msg_contents": "Jean-Michel,\n\n> It seems clear that several teams are working without central point management \n> and contact:\n<snip>\n> - Marketing: MySQL sucks and has a team of marketing sending junk technical \n> emails and writing false benchmarks. Who is in charge of marketing at \n> PostgreSQL? Where can I find a list of PostgreSQL features?\n<snip>\n> Some projects, like Debian, have a democratic organisation. The team leader is \n> elected for a year. Why not settle a similar organization? This would help \n> take decisions ... and not lose time on important issues.\n> \n> PostgreSQL is a software but it is also a community. If we believe in \n> democracy, I suggest we should organize in a democratic way and elect a \n> leader for a year.\n\nLet me introduce myself. In addition to being a contributor of supplementary \ndocumentation and the occasional spec to the PostgreSQL project, I am \nvolunteer marketing lead and the primary motivator for governance overhaul in \nthe OpenOffice.org project. \n\nAnd frankly, I think you're way off base here. We have leaders: Tom, Bruce, \nJan, Stephan, Thomas, Marc and Oliver (did I miss anybody?). Frankly, if \nOpenOffice.org had the kind of widely trusted, committed, involved in the \ncommunity core developers that PostgreSQL already has, I wouldn't be on my \nfourth draft of an OpenOffice.org Community Council charter. OpenOffice.org \nwill have an election process because we are too big and too dispersed for a \nsimple trust network, not because we want one for its own sake.\n\nPostgreSQL is, quite possibly, the smoothest-running Open Source project with \nworldwide adoption. I find myself saying, at least once a week, \"if only \nproject X were as well-organized as PostgreSQL!\" It is perhaps not \ncoincidental that Postgres is one of the 15 or 20 oldest Open Source projects \n(older than Linux, I believe).\n\nHow would a \"democratic election\" improve this? 
And why would we want an \nelected person or body who was not a core developer? And if we elected a \ncore developer, why bother? They aready run things.\n\nRegarding your marketing angle: Feel free to nominate yourself \"PostgreSQL \nMarketing Czar.\" Write articles. Contact journalists. Generate press \nreleases for each new Postgres version. Apply for a dot-org booth at \nLinuxWorld. Nobody voted for me (actually, I got stuck with the job by not \nprotesting hard enough <grin>).\n\nFrankly, my feeling is, as a \"geek-to-geek\" product, PostgreSQL is already \nadequately marketed through our huge network of DBA users and code \ncontributors. As often as not, the database engine choice is made by the \nDBA, and they will choose PostgreSQL on its merits, not because of some \nWashington Post article.\n\nOpenOffice.org is a different story, as an end-user application. So we have \na Marketing Project.\n\n-- \n-Josh Berkus\nPorject Lead, OpenOffice.org Marketing\n\n\n", "msg_date": "Mon, 24 Jun 2002 14:44:37 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Re: Democracy and organisation : let's make a revolution in the\n\tDebian way" }, { "msg_contents": "Josh Berkus <josh@agliodbs.com> writes:\n> Frankly, my feeling is, as a \"geek-to-geek\" product, PostgreSQL is already \n> adequately marketed through our huge network of DBA users and code \n> contributors.\n\nWell, mumble ... it seems to me that we are definitely suffering from\na \"buzz gap\" (cf missile gap, Dr Strangelove, etc) compared to MySQL.\nThat doesn't bother me in itself, but the long-term implications are\nscary. If MySQL manages to attract a larger development community as\na consequence of more usage or better marketing, then eventually they\nwill be ahead of us on features and every other measure that counts.\nOnce we're number two with no prayer of catching up, how long will our\nproject remain viable? 
So, no matter how silly you might think\n\"MySQL is better\" is today, you've got to consider the prospect that\nit will become a self-fulfilling prophecy.\n\nSo far I have not worried about that scenario too much, because Monty\nhas always treated the MySQL sources as his personal preserve; if he\nhadn't written it or closely reviewed it, it didn't get in, and if it\ndidn't hew closely to his opinion of what's important, it didn't get in.\nBut I get the impression that he's loosened up of late. If MySQL stops\nbeing limited by what one guy can do or review, their rate of progress\ncould improve dramatically.\n\nIn short: we could use an organized marketing effort. I really\nfeel the lack of Great Bridge these days; there isn't anyone with\ncomparable willingness to expend marketing talent and dollars on\npromoting Postgres as such. Not sure what to do about it. We've\nsort of dismissed Jean-Michel's comments (and those of others in\nthe past) with \"sure, step right up and do the marketing\" responses.\nBut the truth of the matter is that a few amateurs with no budget\nwon't make much of an impression. We really need some professionals\nwith actual dollars to spend, and I don't know where to find 'em.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2002 01:21:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution in the\n\tDebian way" }, { "msg_contents": "On Tue, 25 Jun 2002, Tom Lane wrote:\n\n> Josh Berkus <josh@agliodbs.com> writes:\n> > Frankly, my feeling is, as a \"geek-to-geek\" product, PostgreSQL is already\n> > adequately marketed through our huge network of DBA users and code\n> > contributors.\n>\n> Well, mumble ... it seems to me that we are definitely suffering from\n> a \"buzz gap\" (cf missile gap, Dr Strangelove, etc) compared to MySQL.\n> That doesn't bother me in itself, but the long-term implications are\n> scary. 
If MySQL manages to attract a larger development community as\n> a consequence of more usage or better marketing, then eventually they\n> will be ahead of us on features and every other measure that counts.\n> Once we're number two with no prayer of catching up, how long will our\n> project remain viable? So, no matter how silly you might think\n> \"MySQL is better\" is today, you've got to consider the prospect that\n> it will become a self-fulfilling prophecy.\n\nActually, I'm not sure how \"viable\" MySQL *is* in a commercial environment\n... personally, I think that they just shot themselves in the foot with\ntheir recent 'law suit' with NuSphere, no? Other then MySQL AB\nthemselves, how many are going to jump onto that bandwagon if nobody is\nallowed *some* sort of competitive advantage?\n\n> So far I have not worried about that scenario too much, because Monty\n> has always treated the MySQL sources as his personal preserve; if he\n> hadn't written it or closely reviewed it, it didn't get in, and if it\n> didn't hew closely to his opinion of what's important, it didn't get in.\n> But I get the impression that he's loosened up of late. If MySQL stops\n> being limited by what one guy can do or review, their rate of progress\n> could improve dramatically.\n\nI don't know ... again, my view of Monty is extended to \"if it doesn't get\nsubmitted for review, whether it gets in or not, we'll sue you for breach\nof license\" ... really gives those considering jumping onto MySQL a warm,\nfuzzy feeling, no? :(\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 04:55:52 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "On Tue, 2002-06-25 at 07:21, Tom Lane wrote:\n> Josh Berkus <josh@agliodbs.com> writes:\n> > Frankly, my feeling is, as a \"geek-to-geek\" product, PostgreSQL is already \n> > adequately marketed through our huge network of DBA users and code \n> > contributors.\n> \n> Well, mumble ... it seems to me that we are definitely suffering from\n> a \"buzz gap\" (cf missile gap, Dr Strangelove, etc) compared to MySQL.\n> That doesn't bother me in itself, but the long-term implications are\n> scary. If MySQL manages to attract a larger development community as\n> a consequence of more usage or better marketing, then eventually they\n> will be ahead of us on features and every other measure that counts.\n> Once we're number two with no prayer of catching up, how long will our\n> project remain viable? So, no matter how silly you might think\n> \"MySQL is better\" is today, you've got to consider the prospect that\n> it will become a self-fulfilling prophecy.\n\n#pragma no-flames-please\n\nWhen/if MySQL becomes better than PostgreSQL, we'd simply have a still\nbetter open-source/free software database than we have now.\n\nIn what way exactly is this bad ?\n\nPerhaps (hypothetically speaking) the \"hot breath\" of MySQL becoming\nhotter and hotter will also induce more ideas and performance/enterprise\noptimisations for postgres ...\n\nIf MySQL were better than PostgreSQL, you could really do two things :\n\n(1) divert your development effort to MySQL, injecting it with the good\nstuff known from the postgres effort\n(2) make sure postgres becomes even better !\nor\n(3) turn away from computers and programming in disgust (not\nrecommended)\n\nOf course, as things stand now, MySQL has still a long way to run before\nit's up to par with PostgreSQL on enterprise-level database features. 
\nBut *both* are not yet at 100% !\n\nI do agree we need more publicity for PostgreSQL, I mention it every\ntime I explain a database backend system to our prospects. I really am\na great fan of postgres and use it exclusively (except to look how\npoorly other db's compare with it :-).\n\n[snip]\n> But I get the impression that he's loosened up of late. If MySQL stops\n> being limited by what one guy can do or review, their rate of progress\n> could improve dramatically.\nwhich in itself is not bad at all.\n\nLook at it this way and perhaps the situation doesn't appear too\nnegative:\n\nA guesstimate of 90% of all open-source/free software systems and\ndevelopment is GNU/Linux. Nonetheless, the remaining 10% is *very*\nactive and are in some ways even ahead of the Linux track. In other\nways they are different. In still other ways, they're behind Linux. \nFor specific tasks, the *BSD family is vastly superior to Linux. \nEveryone should use and support the tools that fit the bill.\n\n\nFor the marketing stuff, what about asking some big company's IT dept\nfor a statement, sort of \"FooBarBank chooses/switches to PostgreSQL open\nsource database\"? Then it's just a matter of making a press release\n(wording is very important, anyone proficient in making press releases\nhere ?) and time them adequately.\n\nI'll ask around here to see whether we can publicize some cases.\n\nCheers,\nTycho\n\n/* this mail protected by No-Flam(tm) fire retardant asbestos \n underwear (owwww itchy itchy) */\n\n-- \nTycho Fruru\t\t\t tycho@fruru.com\n\"Prediction is extremely difficult. Especially about the future.\"\n - Niels Bohr", "msg_date": "25 Jun 2002 16:25:24 +0200", "msg_from": "Tycho Fruru <tycho@fruru.com>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "IMO One of the big reasons that MySQL is viewed as being better is it's\npercieved simplicity. 
It has a large following because of this, and many\nof them are not experienced database users, in fact just the opposite.\n\nThis large user base is perhaps the best marketing that an open source\nproject can hope for. So I think that if we want to attract more users\nwe should try to make postgres easier to use. The hard part is how to do\nthis without sacrificing the integrity of the project. I think for\nstarters when evaluating the next feature we want to work on we ask the\nfollowing questions:\n\n1) Does it make it easier to use for a non-dba ?\n2) Does it facilitate making web-applications easier ( assuming that\nthis is the largest user base ) ?\n3) I'm sure there are others, but at the moment I can't come up with\nthem.\n\nThen if faced with a choice of implementing something which is going to\nmake postgres more technically complete or something which is going to\nappeal to more users we lean towards more users. Note I said lean!\n\n\nDave\n\nOn Tue, 2002-06-25 at 01:21, Tom Lane wrote:\n> Josh Berkus <josh@agliodbs.com> writes:\n> > Frankly, my feeling is, as a \"geek-to-geek\" product, PostgreSQL is already \n> > adequately marketed through our huge network of DBA users and code \n> > contributors.\n> \n> Well, mumble ... it seems to me that we are definitely suffering from\n> a \"buzz gap\" (cf missile gap, Dr Strangelove, etc) compared to MySQL.\n> That doesn't bother me in itself, but the long-term implications are\n> scary. If MySQL manages to attract a larger development community as\n> a consequence of more usage or better marketing, then eventually they\n> will be ahead of us on features and every other measure that counts.\n> Once we're number two with no prayer of catching up, how long will our\n> project remain viable? 
So, no matter how silly you might think\n> \"MySQL is better\" is today, you've got to consider the prospect that\n> it will become a self-fulfilling prophecy.\n> \n> So far I have not worried about that scenario too much, because Monty\n> has always treated the MySQL sources as his personal preserve; if he\n> hadn't written it or closely reviewed it, it didn't get in, and if it\n> didn't hew closely to his opinion of what's important, it didn't get in.\n> But I get the impression that he's loosened up of late. If MySQL stops\n> being limited by what one guy can do or review, their rate of progress\n> could improve dramatically.\n> \n> In short: we could use an organized marketing effort. I really\n> feel the lack of Great Bridge these days; there isn't anyone with\n> comparable willingness to expend marketing talent and dollars on\n> promoting Postgres as such. Not sure what to do about it. We've\n> sort of dismissed Jean-Michel's comments (and those of others in\n> the past) with \"sure, step right up and do the marketing\" responses.\n> But the truth of the matter is that a few amateurs with no budget\n> won't make much of an impression. 
We really need some professionals\n> with actual dollars to spend, and I don't know where to find 'em.\n> \n> \t\t\tregards, tom lane\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n> \n\n\n\n\n\n", "msg_date": "25 Jun 2002 11:12:57 -0400", "msg_from": "Dave Cramer <dave@fastcrypt.com>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "On Tue, Jun 25, 2002 at 04:25:24PM +0200, Tycho Fruru wrote:\n\n> Everyone should use and support the tools that fit the bill.\n\nI've mentioned before, however, that \"the tools that fit the bill\" is\npartly a function of network effects. The *BSD guys have the same\nproblem when facing the Linux juggernaut: as one system begins to\ndominate the minds of certain types of people who happen to make a\nlot of decisions, that system knocks other things out of the running\njust by virtue of its PH quotient [1].\n\n> For the marketing stuff, what about asking some big company's IT dept\n> for a statement, sort of \"FooBarBank chooses/switches to PostgreSQL open\n> source database\"? Then it's just a matter of making a press release\n> (wording is very important, anyone proficient in making press releases\n> here ?) and time them adequately.\n\nBest of luck. Here's the dirty secret about PostgreSQL: _lots_ of\nbig-ish companies are using it, and using it in important, central\nfunctions of their organisations. But they're not willing to admit\nit. What you always get is something like, \"Yes, we're using an\nenterprise-class system with good ANSI SQL 99 compliance, WAL, hot\nbackup, triggers, rules, an advanced, extensible datatypes system,\nand excellent scalability to high concurrency. 
The system we're\nusing, **mumble PostmmuumblehandinfrontofmouthgrSQmllL **mumble**, is\nvery similar to ORACLE in a lot of respects. We have looked\ncarefully at ORACLE, and are always aware of the constantly-changing\ndatabase marketplace. We have a history of strong relationships with\nvendors. . . .\" You can substitute your favourite big-name RDBMS. \nThe point of such utterances seems mostly to be to get the name brand\ninserted as often as possible, as though some sort of reflected glory\nis the answer.\n\nI don't know why this is. I am, to put it mildly, unbelievably\nfrustrated (not to say embarrassed) by at least one instance of it. \nBut it's nevertheless true.\n\n\n[1] Pointy-hair quotient: the tendency of a given product name to\nelicit recognition from a technical manager of dubious technical\nability.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 11:38:03 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "\n> OK, I have heard this before, but I would like to know specifically how\n> is PostgreSQL harder to administer than MySQL. Knowing that will help\n> us address the issue.\n\nA few things that get to me:\n\nPeriodic vacuum and analyze. How often is enough? What's cron? \nAutomated garbage collection makes all of the Java people happy ;)\n\n\nALTER TABLE / DROP COLUMN\nALTER TABLE / ALTER COLUMN <datatype>\nALTER TABLE / ADD COLUMN (as normal table creation)\n\nBoth are useful while putting together a quick system. Especially if\nyou project the ERD and PGAdmin processes onto a screen if working in a\ngroup for a quick application. Explain while creating.\n\n\ninitdb is a separate process. 
I've altered my Solaris startup scripts\nto automatically initdb the data directory requested if it doesn't\nalready exist. Moving this into the postmaster itself, and\nautomatically generating the space would cut about 50% of the effort out\nof the current install process. Install, run. Many installers probably\ndo this already.\n\n\nLastly, and hopefully partially fixed (soon?) is my main problem with --\nDropped that object but something else used it. Now it throws funny\nerrors.\n\n\nUpgrades are a bit annoying, but if you've stuck with Postgresql long\nenough to get to an upgrade process I'd say you're hooked anyway.\n\n\n\n", "msg_date": "25 Jun 2002 15:41:56 +0000", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "Tom,\n\n> project remain viable? So, no matter how silly you might think\n> \"MySQL is better\" is today, you've got to consider the prospect that\n> it will become a self-fulfilling prophecy.\n\nWe also don't have a couple of other things that MySQL has: A\nviciously divided community, a booby-trapped licensing situation, and a\nflagrant disregard for the SQL standard and cumulative wisdom of 25\nyears of database knowledge. (Nested tables! Sheesh!) These things\nhandicap the MySQL project quite effectively, and are not likely to be\nstraightened out in the next year.\n\nBTW, PLEASE DO NOT QUOTE the above. It's ok for the hacker's list,\nbut I do not want to fuel the MySQL/Postgres \"debate\" anywhere more\npublic. This \"debate\" does not benefit either project.\n\nAlso, I am concerned about the focus on MySQL as our \"only competitor\".\n Frankly, I have never regarded MySQL as our primary competitor; that\nspot is reserved for Microsoft SQL Server. 
Especially with the death\nof SQL Anywhere, Postgres and MS SQL are the two major databases in the\ntransaction/vertical application space for the budget-minded business\n(although MS SQL is considerably less budget-minded than it was a year\nago). \n\nWhen we've crushed MS SQL, then it's time to take on Oracle and DB2.\n\nI think there's plenty of room in the RDBMS market for both MySQL and\nPostgreSQL. If there's a marketing need, it's to educate DBA's on the\ndifferent strengths of the two databases. You think MySQL would\ncooperate in this, or do they see themselves as competing head-on with\nus?\n\n > In short: we could use an organized marketing effort. I really\n> feel the lack of Great Bridge these days; there isn't anyone with\n> comparable willingness to expend marketing talent and dollars on\n> promoting Postgres as such. Not sure what to do about it. We've\n> sort of dismissed Jean-Michel's comments (and those of others in\n> the past) with \"sure, step right up and do the marketing\" responses.\n> But the truth of the matter is that a few amateurs with no budget\n> won't make much of an impression. We really need some professionals\n> with actual dollars to spend, and I don't know where to find 'em.\n\nI disagree pretty strongly, Tom. OpenOffice.org Marketing has no\ncash, and is an all-volunteer effort. To quote journalist Amy Wohl\n\"[OpenOffice.org] have managed to put together a better buunch of\nvolunteer marketers than Sun is able to hire.\" Frankly, of the various\nmarketing techniques, only going to trade shows costs money; the rest\nis all labor which can be done by volunteers and donors.\n\nOf course, this requires somebody pretty inspired to organize it. I\nalready have my hands full with OpenOffice.org. Volunteers?\n\nAs Great Bridge should have taught you, corporate money for mmarketing\ncomes with expectations and deadlines attached. 
Landmark gave you one\nshot at \"making it\", and then yanked the carpet when that didn't pan\nout immediately. Other companies are going to be the same. One of\nthe greatest things about Postgres is that we have been able to outlast\nthe death of half a dozen companies that supported us, and replace them\nwith new.\n\nAnd isn't Red Hat doing anything to promote us?\n\nFinally, thanks to you guys, we are still advancing our project faster\nthan most commercial software. How many RDBMSs out there have DOMAIN\nsupport? How many have advanced data types that really work? How\nmany support 5 procedural languages and subselects just abotu\neverywhere?\n\n-Josh Berkus\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 08:42:13 -0700", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a" }, { "msg_contents": "I don't normally post to this list, but have a crazy suggestion that is a \nlittle farfetched.\n\nSuggestion:\nFix the portability problems so that there is a Windows native version of \nPostgreSQL. Then offer the Open Office organization PostgreSQL as the \nproject's database. This would increase the user base my leaps and bounds.\n\nThe problem is that using and administrating PostgreSQL can be complex. Also, \nsome people may automatically assume that PostgreSQL is a low end database not \ncapable of doing more than being used as a backend for a free office app. Of \ncourse we all know better.\n\nMaybe a PostgreSQL-Lite would be a better idea. One that condenses the main \ncode down to something easy, that a desktop user could use, but maintain the \nstrength of the core code. 
I suppose that means creating another project.\n\nHere are just a few links that I've come across recently:\nHow-to for using Open Office and unixODBC\nhttp://www.unixodbc.org/doc/OOoMySQL.pdf\n\nOthers are considering MySQL.\nhttp://dba.openoffice.org/proposals/MySQL_OOo.html\n\nJames Hubbard\n\nDave Cramer wrote:\n> IMO One of the big reasons that MySQL is viewed as being better is it's\n> percieved simplicity. It has a large following because of this, and many\n> of them are not experienced database users, in fact just the opposite.\n> \n> This large user base is perhaps the best marketing that an open source\n> project can hope for. So I think that if we want to attract more users\n> we should try to make postgres easier to use. The hard part is how to do\n> this without sacrificing the integrity of the project. I think for\n> starters when evaluating the next feature we want to work on we ask the\n> following questions:\n> \n> 1) Does it make it easier to use for a non-dba ?\n> 2) Does it facilitate making web-applications easier ( assuming that\n> this is the largest user base ) ?\n> 3) I'm sure there are others, but at the moment I can't come up with\n> them.\n> \n> Then if faced with a choice of implementing something which is going to\n> make postgres more technically complete or something which is going to\n> appeal to more users we lean towards more users. Note I said lean!\n> \n> \n> Dave\n> \n> On Tue, 2002-06-25 at 01:21, Tom Lane wrote:\n> \n>>Josh Berkus <josh@agliodbs.com> writes:\n>>\n>>>Frankly, my feeling is, as a \"geek-to-geek\" product, PostgreSQL is already \n>>>adequately marketed through our huge network of DBA users and code \n>>>contributors.\n>>\n>>Well, mumble ... it seems to me that we are definitely suffering from\n>>a \"buzz gap\" (cf missile gap, Dr Strangelove, etc) compared to MySQL.\n>>That doesn't bother me in itself, but the long-term implications are\n>>scary. 
If MySQL manages to attract a larger development community as\n>>a consequence of more usage or better marketing, then eventually they\n>>will be ahead of us on features and every other measure that counts.\n>>Once we're number two with no prayer of catching up, how long will our\n>>project remain viable? So, no matter how silly you might think\n>>\"MySQL is better\" is today, you've got to consider the prospect that\n>>it will become a self-fulfilling prophecy.\n>>\n>>So far I have not worried about that scenario too much, because Monty\n>>has always treated the MySQL sources as his personal preserve; if he\n>>hadn't written it or closely reviewed it, it didn't get in, and if it\n>>didn't hew closely to his opinion of what's important, it didn't get in.\n>>But I get the impression that he's loosened up of late. If MySQL stops\n>>being limited by what one guy can do or review, their rate of progress\n>>could improve dramatically.\n>>\n>>In short: we could use an organized marketing effort. I really\n>>feel the lack of Great Bridge these days; there isn't anyone with\n>>comparable willingness to expend marketing talent and dollars on\n>>promoting Postgres as such. Not sure what to do about it. We've\n>>sort of dismissed Jean-Michel's comments (and those of others in\n>>the past) with \"sure, step right up and do the marketing\" responses.\n>>But the truth of the matter is that a few amateurs with no budget\n>>won't make much of an impression. 
We really need some professionals\n>>with actual dollars to spend, and I don't know where to find 'em.\n>>\n>>\t\t\tregards, tom lane\n>>\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 2: you can get off all lists at once with the unregister command\n>> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>>\n>>\n>>\n> \n> \n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 12:08:53 -0400", "msg_from": "James Hubbard <jhubbard@mcs.uvawise.edu>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "James Hubbard wrote:\n> I don't normally post to this list, but have a crazy suggestion that is a \n> little farfetched.\n> \n> Suggestion:\n> Fix the portability problems so that there is a Windows native version of \n> PostgreSQL. Then offer the Open Office organization PostgreSQL as the \n> project's database. This would increase the user base my leaps and bounds.\n\nOK, we just started heavy discussion on this and I believe a few people\nare actively working on this. The timeframe is 3-6 months, though we\nhave the Cygwin solution right now.\n\n> The problem is that using and administrating PostgreSQL can be complex. Also, \n> some people may automatically assume that PostgreSQL is a low end database not \n> capable of doing more than being used as a backend for a free office app. Of \n> course we all know better.\n\nOK, I have heard this before, but I would like to know specifically how\nis PostgreSQL harder to administer than MySQL. Knowing that will help\nus address the issue.\n\n\n> Maybe a PostgreSQL-Lite would be a better idea. 
One that condenses the main \n> code down to something easy, that a desktop user could use, but maintain the \n> strength of the core code. I suppose that means creating another project.\n\nPerhaps a config utility that asked you questions and modified template1\nand the config files. How about that?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 25 Jun 2002 12:23:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "> And, quite frankly, until the BigO loses its grip, I really don't see them\n> coming out of the closet and admitting to using PgSQL ... why? I don't\n> know abotu you, but all I can imagine in my head is a horde of O-salesman\n> descending on the company wondering why they switched and how can they\n> convince them otherwise, etc, etc ...\n> \n> I know ... I deal with those salesman all the time, from Oracle to Sun to\n> Microsoft ...\n\nYeah, but the lunches are usually pretty good ;)\n\nAnyway, we've kept a trinket system around which covers nearly every big\nname required in order to allow marketing to push that we use the\ntechnology (PeopleSoft, Oracle, NT + Sun Clustering, etc.). Honestly,\nI'm not sure how essential to the system the CRM is. We'd notice if it\nwas missing -- but we could live with a couple hours downtime without\nany issues.\n\nFact is, even if we replaced the CRM with another solution an Oracle\nbased NT box with some application would be running distributed.net in\nthe corner in order to be able to say we use it when people ask. 
If\nthey ask what for, it's always mission critical but very vague.\n\n\n\n", "msg_date": "25 Jun 2002 16:28:05 +0000", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "Tom Lane wrote:\n> Josh Berkus <josh@agliodbs.com> writes:\n> > Frankly, my feeling is, as a \"geek-to-geek\" product, PostgreSQL is already \n> > adequately marketed through our huge network of DBA users and code \n> > contributors.\n> \n> Well, mumble ... it seems to me that we are definitely suffering from\n> a \"buzz gap\" (cf missile gap, Dr Strangelove, etc) compared to MySQL.\n> That doesn't bother me in itself, but the long-term implications are\n> scary. If MySQL manages to attract a larger development community as\n> a consequence of more usage or better marketing, then eventually they\n> will be ahead of us on features and every other measure that counts.\n> Once we're number two with no prayer of catching up, how long will our\n> project remain viable? So, no matter how silly you might think\n> \"MySQL is better\" is today, you've got to consider the prospect that\n> it will become a self-fulfilling prophecy.\n\nOK, I want to know, does anyone see MySQL gaining in market share in\ncomparison to PostgreSQL, or is MySQL gaining against other databases?\nIs MySQL gaining sites faster than we are gaining sites?\n\nEvery indication I can see is that PostgreSQL is gaining on MySQL.\n\nThe Linux/FreeBSD comparison is potent. Does PostgreSQL remain a niche\nplayer? Does *BSD remain a niche player?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 25 Jun 2002 12:29:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution in" }, { "msg_contents": "Josh Berkus wrote:\n> Also, I am concerned about the focus on MySQL as our \"only competitor\".\n> Frankly, I have never regarded MySQL as our primary competitor; that\n> spot is reserved for Microsoft SQL Server. Especially with the death\n> of SQL Anywhere, Postgres and MS SQL are the two major databases in the\n> transaction/vertical application space for the budget-minded business\n> (although MS SQL is considerably less budget-minded than it was a year\n> ago). \n> \n> When we've crushed MS SQL, then it's time to take on Oracle and DB2.\n\nI think Oracle is our main competitor. We seem to get more people\nporting from Oracle than any other database, and our feature set matches\nthere's most closely.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 25 Jun 2002 12:33:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a" }, { "msg_contents": "Tom Lane wrote:\n> Josh Berkus <josh@agliodbs.com> writes:\n> > Frankly, my feeling is, as a \"geek-to-geek\" product, PostgreSQL is already \n> > adequately marketed through our huge network of DBA users and code \n> > contributors.\n> \n> Well, mumble ... it seems to me that we are definitely suffering from\n> a \"buzz gap\" (cf missile gap, Dr Strangelove, etc) compared to MySQL.\n> That doesn't bother me in itself, but the long-term implications are\n> scary. 
If MySQL manages to attract a larger development community as\n> a consequence of more usage or better marketing, then eventually they\n> will be ahead of us on features and every other measure that counts.\n> Once we're number two with no prayer of catching up, how long will our\n> project remain viable? So, no matter how silly you might think\n> \"MySQL is better\" is today, you've got to consider the prospect that\n> it will become a self-fulfilling prophecy.\n> \n> So far I have not worried about that scenario too much, because Monty\n> has always treated the MySQL sources as his personal preserve; if he\n> hadn't written it or closely reviewed it, it didn't get in, and if it\n> didn't hew closely to his opinion of what's important, it didn't get in.\n> But I get the impression that he's loosened up of late. If MySQL stops\n> being limited by what one guy can do or review, their rate of progress\n> could improve dramatically.\n> \n> In short: we could use an organized marketing effort. I really\n> feel the lack of Great Bridge these days; there isn't anyone with\n> comparable willingness to expend marketing talent and dollars on\n> promoting Postgres as such. Not sure what to do about it. We've\n> sort of dismissed Jean-Michel's comments (and those of others in\n> the past) with \"sure, step right up and do the marketing\" responses.\n> But the truth of the matter is that a few amateurs with no budget\n> won't make much of an impression. We really need some professionals\n> with actual dollars to spend, and I don't know where to find 'em.\n\nOK, let me make some comments on this. First, Great Bridge had me doing\nsome marketing stuff while I was with them. This included trade shows,\nmagazine articles, and interviews. I am available to do all those\nagain. GB lined up the contacts and got it all started. If people want\nme to do more of that, I can find the time.\n\nI am not sure how effective that was. 
There was a lot more marketing\ndone by Great Bridge that would take lots of money to do.\n\nDo people want an advocacy article written, like \"How to choose a\ndatabase?\" I could do that.\n\nBasically, I am open to ideas. Would it help to fly me out to meet IT\nleaders? More books/articles? What does it take? What do successful\ncompanies and open source projects do that works?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 25 Jun 2002 12:41:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution in" }, { "msg_contents": "> OK, I have heard this before, but I would like to know specifically how\n> is PostgreSQL harder to administer than MySQL. Knowing that will help\n> us address the issue.\n\nI find this statement about pgsql being hard to admin a bit hard to swallow\nas well. Maybe it's cause pgsql is all I've _really_ used, and did tech\nsupport\nfor it for a while.\n\nI've had a few run-ins with mysql [for personal projects] and tried to\n approach it with an open mind, but often I find the documentation is\nunclear\nand hard to follow.\n\nPostgreSQL has GREAT documentation and it feels very straightforward to\nrun.\n\nThe only problem is convincing everyone else of that.\n\n>\n>\n> > Maybe a PostgreSQL-Lite would be a better idea. One that\n> condenses the main\n> > code down to something easy, that a desktop user could use, but\n> maintain the\n> > strength of the core code. I suppose that means creating\n> another project.\n>\n> Perhaps a config utility that asked you questions and modified template1\n> and the config files. 
How about that?\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n>\n>\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 13:42:05 -0300", "msg_from": "\"Jeff MacDonald\" <jeff@tsunamicreek.com>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "\n\nBruce Momjian wrote:\n> James Hubbard wrote:\n> \n>>I don't normally post to this list, but have a crazy suggestion that is a \n>>little farfetched.\n>>\n>>Suggestion:\n>>Fix the portability problems so that there is a Windows native version of \n>>PostgreSQL. Then offer the Open Office organization PostgreSQL as the \n>>project's database. This would increase the user base my leaps and bounds.\n> \n> \n> OK, we just started heavy discussion on this and I believe a few people\n> are actively working on this. The timeframe is 3-6 months, though we\n> have the Cygwin solution right now.\n\nI've been watching the activity around this. I have a great deal of hope that \nthis will produce something. My wife needed a database and web scripting \nsolution for use on Windows at work and my only suggestion was to use mysql and \nphp. The binaries are on the website and are easy to download and install.\n\n> \n>>The problem is that using and administrating PostgreSQL can be complex. Also, \n>>some people may automatically assume that PostgreSQL is a low end database not \n>>capable of doing more than being used as a backend for a free office app. Of \n>>course we all know better.\n> \n> \n> OK, I have heard this before, but I would like to know specifically how\n> is PostgreSQL harder to administer than MySQL. 
Knowing that will help\n> us address the issue.\n>\nI wasn't really comparing to MySQL here. I meant, in relationship to MS \nAccess. Start it up and it just works. No worries about configuration files, \netc. I've not used MySQL before except to install it on Windows NT in a VMWare \nsession. I've been meaning to get back around to playing with it to see how well \nit functions.\n\nThe company that my wife works for is almost exclusively MS. There are a few \nfile servers that are Novell. I'm sure that the only reason that they are \nusing MySQL is that it's easy to obtain, install, and use with PHP.\n\n> \n>>Maybe a PostgreSQL-Lite would be a better idea. One that condenses the main \n>>code down to something easy, that a desktop user could use, but maintain the \n>>strength of the core code. I suppose that means creating another project.\n> \n> \n> Perhaps a config utility that asked you questions and modified template1\n> and the config files. How about that?\n> \nI think that would work pretty well. A basic configuration that locks everything \ndown with the goal of a single user desktop setting, but also provides the user \nwith the capability of opening things up so that it could function as a \nmultiuser system. That then forces the issue of being able to move the \ndatabase to a server machine relatively painlessly.\n\nKeep in mind that I was primarily focusing on the potential to include it with \nsomething like OpenOffice. 
This is why I said that my post was a little far \nfetched.\n\nJames Hubbard\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 12:47:45 -0400", "msg_from": "James Hubbard <jhubbard@mcs.uvawise.edu>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "I'd have to say that personally, given a choice between expending effort \nto fix current know bugs and add known needed features, and expending \neffort to port to Windows, I'd pick the former, not the latter.\n\nI could personally care less if postgresql ever runs as a native window \napplication, since I personally don't believe windows is a suitable OS for \nhosting a dbms.\n\nNote that the \"portablility problems\" in postgresql are and were \nintroduced by Windows deciding to do everything different than every other \nOS. Postgresql is quite portable, when one is porting it to OSes that \naren't windows, like VMS, MVS, or all the different flavors of Unix.\n\nBesides, in another 5 years, Windows as a server OS will likely be the \nshrinking percentage, while Linux/BSD et. al. will be growing. focus on \nthe future, and let Windows wither and die (in the server room) as it \nshould.\n\n On Tue, 25 Jun 2002, James Hubbard wrote:\n\n> I don't normally post to this list, but have a crazy suggestion that is a \n> little farfetched.\n> \n> Suggestion:\n> Fix the portability problems so that there is a Windows native version of \n> PostgreSQL. Then offer the Open Office organization PostgreSQL as the \n> project's database. This would increase the user base my leaps and bounds.\n> \n> The problem is that using and administrating PostgreSQL can be complex. Also, \n> some people may automatically assume that PostgreSQL is a low end database not \n> capable of doing more than being used as a backend for a free office app. Of \n> course we all know better.\n> \n> Maybe a PostgreSQL-Lite would be a better idea. 
One that condenses the main \n> code down to something easy, that a desktop user could use, but maintain the \n> strength of the core code. I suppose that means creating another project.\n> \n> Here are just a few links that I've come across recently:\n> How-to for using Open Office and unixODBC\n> http://www.unixodbc.org/doc/OOoMySQL.pdf\n> \n> Others are considering MySQL.\n> http://dba.openoffice.org/proposals/MySQL_OOo.html\n> \n> James Hubbard\n> \n> Dave Cramer wrote:\n> > IMO One of the big reasons that MySQL is viewed as being better is it's\n> > percieved simplicity. It has a large following because of this, and many\n> > of them are not experienced database users, in fact just the opposite.\n> > \n> > This large user base is perhaps the best marketing that an open source\n> > project can hope for. So I think that if we want to attract more users\n> > we should try to make postgres easier to use. The hard part is how to do\n> > this without sacrificing the integrity of the project. I think for\n> > starters when evaluating the next feature we want to work on we ask the\n> > following questions:\n> > \n> > 1) Does it make it easier to use for a non-dba ?\n> > 2) Does it facilitate making web-applications easier ( assuming that\n> > this is the largest user base ) ?\n> > 3) I'm sure there are others, but at the moment I can't come up with\n> > them.\n> > \n> > Then if faced with a choice of implementing something which is going to\n> > make postgres more technically complete or something which is going to\n> > appeal to more users we lean towards more users. Note I said lean!\n> > \n> > \n> > Dave\n> > \n> > On Tue, 2002-06-25 at 01:21, Tom Lane wrote:\n> > \n> >>Josh Berkus <josh@agliodbs.com> writes:\n> >>\n> >>>Frankly, my feeling is, as a \"geek-to-geek\" product, PostgreSQL is already \n> >>>adequately marketed through our huge network of DBA users and code \n> >>>contributors.\n> >>\n> >>Well, mumble ... 
it seems to me that we are definitely suffering from\n> >>a \"buzz gap\" (cf missile gap, Dr Strangelove, etc) compared to MySQL.\n> >>That doesn't bother me in itself, but the long-term implications are\n> >>scary. If MySQL manages to attract a larger development community as\n> >>a consequence of more usage or better marketing, then eventually they\n> >>will be ahead of us on features and every other measure that counts.\n> >>Once we're number two with no prayer of catching up, how long will our\n> >>project remain viable? So, no matter how silly you might think\n> >>\"MySQL is better\" is today, you've got to consider the prospect that\n> >>it will become a self-fulfilling prophecy.\n> >>\n> >>So far I have not worried about that scenario too much, because Monty\n> >>has always treated the MySQL sources as his personal preserve; if he\n> >>hadn't written it or closely reviewed it, it didn't get in, and if it\n> >>didn't hew closely to his opinion of what's important, it didn't get in.\n> >>But I get the impression that he's loosened up of late. If MySQL stops\n> >>being limited by what one guy can do or review, their rate of progress\n> >>could improve dramatically.\n> >>\n> >>In short: we could use an organized marketing effort. I really\n> >>feel the lack of Great Bridge these days; there isn't anyone with\n> >>comparable willingness to expend marketing talent and dollars on\n> >>promoting Postgres as such. Not sure what to do about it. We've\n> >>sort of dismissed Jean-Michel's comments (and those of others in\n> >>the past) with \"sure, step right up and do the marketing\" responses.\n> >>But the truth of the matter is that a few amateurs with no budget\n> >>won't make much of an impression. 
We really need some professionals\n> >>with actual dollars to spend, and I don't know where to find 'em.\n> >>\n> >>\t\t\tregards, tom lane\n> >>\n> >>\n> >>\n> >>---------------------------(end of broadcast)---------------------------\n> >>TIP 2: you can get off all lists at once with the unregister command\n> >> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >>\n> >>\n> >>\n> > \n> > \n> > \n> > \n> > \n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> > \n> > \n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n> \n> \n\n-- \n\"Force has no place where there is need of skill.\", \"Haste in every \nbusiness brings failures.\", \"This is the bitterest pain among men, to have \nmuch knowledge but no power.\" -- Herodotus\n\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 11:11:04 -0600 (MDT)", "msg_from": "Scott Marlowe <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "> > When we've crushed MS SQL, then it's time to take on Oracle and DB2.\n> \n> I think Oracle is our main competitor. We seem to get more people\n> porting from Oracle than any other database, and our feature set matches\n> theirs most closely.\n\nBut if we had a native windows port, I think we would hear of a lot more MS \nSQL Server converts. \n\nAlso, I think that expending effort on a windows port will be a net win as it \nwill generate a new userbase and with it more developers. 
Good or bad, \nwindows has a large marketshare.\n\n\n", "msg_date": "Tue, 25 Jun 2002 13:36:32 -0400", "msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a" }, { "msg_contents": "On Tue, 2002-06-25 at 22:48, Josh Berkus wrote:\n> \n> Bruce,\n> \n> > I think Oracle is our main competitor. We seem to get more people\n> > porting from Oracle than any other database, and our feature set matches\n> > there's most closely.\n> \n> I disagree, and did as well when you were with Great Bridge. No matter how \n> Postgres core functionality compares with Oracle, they have nearly a decade \n> of building tools, accessories, and extra whiz-bang features for their \n> product.\n\nMy (perhaps a little outdated) experience has been that with Oracle\nalmost anything on the client side sucks bad.\n\nWhat they have is a solid database and good upward path to really big\niron.\n\n> Not to mention a serious reputation as the \"ultimate database if \n> you can afford it.\" \n\nOn PC server class computers we seem to be able to match them with one\nexception - prepared statements with good(?) binary fe/be protocol ?\n \n> As long as we target Oracle as our \"competition\", we will remain \"the database \n> to use if you can't afford Oracle, but to be replaced with Oracle as soon as \n> you can.\" Heck, look at DB2, which is toe-to-toe with Oracle for feature \n> set, but is only really sold to companies who use IBM's other tools. We're \n> not in a position to challenge that reputation.\n\nBut if we are seen as challenging it, it is a good marketing point when\nselling to MS SQL folks :)\n\n> On the other hand, we already outstrip MS SQL Server's feature set, as well as \n> being more reliable, lower-maintainence, multi-platform, and cheaper. 
\n\nIf only someone were to write a Transact-SQL lookalike and even better -\nif we had pluggable frontend protocols - FreeTDS compatibility on server\nside would be a big step even without native Win32.\n\n> Frankly, the only thing that MS SQL has over us is easy-but-unreliable GUI \n> admin tools (backup, user, and database management).\n\nWe almost have it in pgAdmin and Tora.\n\n---------------\nHannu\n\n\n\n\n", "msg_date": "25 Jun 2002 22:37:49 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a" }, { "msg_contents": "\nJames,\n\n> Maybe a PostgreSQL-Lite would be a better idea. One that condenses the main \n> code down to something easy, that a desktop user could use, but maintain the \n> strength of the core code. I suppose that means creating another project.\n\nPersonally, I think it's a redundant idea. There are a couple dozen \n\"lightweight\" RDBMSs available off Sourceforge. There is only one \n\"Heavy-duty\" database: Us.\n\n> Others are considering MySQL.\n> http://dba.openoffice.org/proposals/MySQL_OOo.html\n\nLet me nip this in the bud: That proposal was shot down almost immediately, \nmostly due to MySQL's poor adherence to the SQL standard and licensing \nproblems. I also shot down PostgreSQL as a possibility for inclusion with \nOpenOffice.org, since Postgres is quite firmly a *server* database, and 70% \nof OpenOffice.org installs are on Windows 95/98.\n\nCurrently, we are leaning toward HSQLDB as our included database. However, \nyou can help us decide: join the DBA.openoffice.org project \n(http://dba.openoffice.org/). \n\nSomething we could really, really use for OpenOffice.org is \"native\" (SDBC) \ndrivers for PostgreSQL. Currently, we have to use UnixODBC or MS ODBC, which \nbrings all sorts of problems with it. 
Can anyone help with a driver?\n\nOnce we get a native driver, OpenOffice.org will be available as an MS \nAccess-style tool for simple PostgreSQL database management. This should \nincrease adoption of Postgres somewhat.\n\n\n-Josh Berkus\n OpenOffice.org \n\n\n\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 10:40:30 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "\nBruce,\n\n> I think Oracle is our main competitor. We seem to get more people\n> porting from Oracle than any other database, and our feature set matches\n> theirs most closely.\n\nI disagree, and did as well when you were with Great Bridge. No matter how \nPostgres core functionality compares with Oracle, they have nearly a decade \nof building tools, accessories, and extra whiz-bang features for their \nproduct. Not to mention a serious reputation as the \"ultimate database if \nyou can afford it.\" \n\nAs long as we target Oracle as our \"competition\", we will remain \"the database \nto use if you can't afford Oracle, but to be replaced with Oracle as soon as \nyou can.\" Heck, look at DB2, which is toe-to-toe with Oracle for feature \nset, but is only really sold to companies who use IBM's other tools. We're \nnot in a position to challenge that reputation.\n\nOn the other hand, we already outstrip MS SQL Server's feature set, as well as \nbeing more reliable, lower-maintenance, multi-platform, and cheaper. \nFrankly, the only thing that MS SQL has over us is easy-but-unreliable GUI \nadmin tools (backup, user, and database management).\n\nLet's pick battles we can win. 
We'll beat Oracle eventually -- but not in the \nnext few years.\n\n-Josh Berkus\n\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 10:48:44 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Re: Democracy and organisation : let's make a" }, { "msg_contents": "\nJust a personal observation here, based on the work we've been doing\nlately ... there are *a lot* of very large companies out there, one of\nwhich we just did onsite training for that I swear has an article in just\nabout every trade magazine I read, each month ... the problem isn't\ngetting companies to adopt/use PgSQL ... the problem is getting them to\nacknowledge its usage ...\n\nAnd, quite frankly, until the BigO loses its grip, I really don't see them\ncoming out of the closet and admitting to using PgSQL ... why? I don't\nknow about you, but all I can imagine in my head is a horde of O-salesmen\ndescending on the company wondering why they switched and how can they\nconvince them otherwise, etc, etc ...\n\nI know ... I deal with those salesmen all the time, from Oracle to Sun to\nMicrosoft ...\n\nThe problem, as I see it, is everyone moaning cause we aren't the #1\ndatabase for the Web ... who cares? How many sites out there don't even\n*need* a database backend in the first place? Someone throws MySQL onto\nthat and thinks it's the best thing since sliced bread, even though the\ntable contains a single record ...\n\nThe markets that matter, enterprise databases, we are making inroads into\nand quite substantial ones, but due to 'internal politics', you aren't\ngoing to hear about them ...\n\nHow many ppl here can honestly say they know of *at least* one company, if\nnot more, that are using PgSQL, but don't advertise, or let known, that\nfact? I can think of a half dozen that we (PgSQL, Inc) have worked with\nto convert, and train, so far ... Tom, call it RedHat DB or PgSQL, it's the\nsame code base ... any numbers from that end? 
Bruce, how about from SRA?\n\nOn Tue, 25 Jun 2002, Bruce Momjian wrote:\n\n> Tom Lane wrote:\n> > Josh Berkus <josh@agliodbs.com> writes:\n> > > Frankly, my feeling is, as a \"geek-to-geek\" product, PostgreSQL is already\n> > > adequately marketed through our huge network of DBA users and code\n> > > contributors.\n> >\n> > Well, mumble ... it seems to me that we are definitely suffering from\n> > a \"buzz gap\" (cf missile gap, Dr Strangelove, etc) compared to MySQL.\n> > That doesn't bother me in itself, but the long-term implications are\n> > scary. If MySQL manages to attract a larger development community as\n> > a consequence of more usage or better marketing, then eventually they\n> > will be ahead of us on features and every other measure that counts.\n> > Once we're number two with no prayer of catching up, how long will our\n> > project remain viable? So, no matter how silly you might think\n> > \"MySQL is better\" is today, you've got to consider the prospect that\n> > it will become a self-fulfilling prophecy.\n> >\n> > So far I have not worried about that scenario too much, because Monty\n> > has always treated the MySQL sources as his personal preserve; if he\n> > hadn't written it or closely reviewed it, it didn't get in, and if it\n> > didn't hew closely to his opinion of what's important, it didn't get in.\n> > But I get the impression that he's loosened up of late. If MySQL stops\n> > being limited by what one guy can do or review, their rate of progress\n> > could improve dramatically.\n> >\n> > In short: we could use an organized marketing effort. I really\n> > feel the lack of Great Bridge these days; there isn't anyone with\n> > comparable willingness to expend marketing talent and dollars on\n> > promoting Postgres as such. Not sure what to do about it. 
We've\n> > sort of dismissed Jean-Michel's comments (and those of others in\n> > the past) with \"sure, step right up and do the marketing\" responses.\n> > But the truth of the matter is that a few amateurs with no budget\n> > won't make much of an impression. We really need some professionals\n> > with actual dollars to spend, and I don't know where to find 'em.\n>\n> OK, let me make some comments on this. First, Great Bridge had me doing\n> some marketing stuff while I was with them. This included trade shows,\n> magazine articles, and interviews. I am available to do all those\n> again. GB lined up the contacts and got it all started. If people want\n> me to do more of that, I can find the time.\n>\n> I am not sure how effective that was. There was a lot more marketing\n> done by Great Bridge that would take lots of money to do.\n>\n> Do people want an advocacy article written, like \"How to choose a\n> database?\" I could do that.\n>\n> Basically, I am open to ideas. Would it help to fly me out to meet IT\n> leaders? More books/articles? What does it take? What do successful\n> companies and open source projects do that works?\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n>\n>\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 15:30:15 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "> Tom,\n> \n> > project remain viable? 
So, no matter how silly you might think\n> > \"MySQL is better\" is today, you've got to consider the prospect that\n> > it will become a self-fulfilling prophecy.\n> \n> We also don't have a couple of other things that MySQL has: A\n> viciously divided community, a bobby-trapped licensing situation, and a\n> flagrant disredard for the SQL standard and cumulative wisdom of 25\n> years of database knowledge. (Nested tables! Sheesh!) These things\n> handicap the MySQL project quite effectively, and are not likely to be\n> straightened out in the next year.\n> \n> BTW, PLEASE DO NOT QUOTE the above. It's ok for the hacker's list,\n> but I do not want to fuel the MySQL/Postgres \"debate\" anywhere more\n> public. This \"debate\" does not benefit either project.\n\nOh, but it's _so_ tempting :-).\n\n> Also, I am concerned about the focus on MySQL as our \"only competitor\".\n> Frankly, I have never regarded MySQL as our primary competitor; that\n> spot is reserved for Microsoft SQL Server. Especially with the death\n> of SQL Anywhere, Postgres and MS SQL are the two major databases in the\n> transaction/vertical application space for the budget-minded business\n> (although MS SQL is considerably less budget-minded than it was a year\n> ago). \n> \n> When we've crushed MS SQL, then it's time to take on Oracle and DB2.\n\nTake on SQL Server, and establish a sizable useful niche. The notion that \nPostgreSQL is _necessarily_ supposed to be all things to all people promotes \nthe danger of getting over-arrogant and over-ambitious.\n\n> I think there's plenty of room in the RDBMS market for both MySQL and\n> PostgreSQL. If there's a marketing need, it's to educate DBA's on the\n> different strengths of the two databases. 
You think MySQL would\n> cooperate in this, or do they see themselves as competing head-on with\n> us?\n\nWhy _should_ they want to cooperate?\n\nTheir advantage in the marketplace is largely based on the notion that\n \"MySQL isn't quite as good as Oracle, but it's a lot cheaper!\"\n\nFor them to say, \"and by the way, PostgreSQL, SAPDB, and Firebird are all \nbasically the same that way\" would be shooting themselves in the foot.\n\nTheir model is rather like that of Microsoft Access: It's not all that great, \nbut it gets used a lot, despite its limitations, because everyone has a copy \nof it as part of MS Office.\n\nFor them to \"cooperate\" would mean compromising on what's most important to \ntheir ongoing marketing strategy:\n \"Use MySQL because it's the most popular database!\"\n\n> > In short: we could use an organized marketing effort. I really\n> > feel the lack of Great Bridge these days; there isn't anyone with\n> > comparable willingness to expend marketing talent and dollars on\n> > promoting Postgres as such. Not sure what to do about it. We've\n> > sort of dismissed Jean-Michel's comments (and those of others in\n> > the past) with \"sure, step right up and do the marketing\" responses.\n> > But the truth of the matter is that a few amateurs with no budget\n> > won't make much of an impression. We really need some professionals\n> > with actual dollars to spend, and I don't know where to find 'em.\n> \n> I disagree pretty strongly, Tom. OpenOffice.org Marketing has no\n> cash, and is an all-volunteer effort. To quote journalist Amy Wohl\n> \"[OpenOffice.org] have managed to put together a better bunch of\n> volunteer marketers than Sun is able to hire.\" Frankly, of the various\n> marketing techniques, only going to trade shows costs money; the rest\n> is all labor which can be done by volunteers and donors.\n> \n> Of course, this requires somebody pretty inspired to organize it. I\n> already have my hands full with OpenOffice.org. 
Volunteers?\n\nThe _crucial_ marketing that would need to take place is NOT to the public. \nIt would be to:\n a) ISPs\n b) Vendors of ISP support software.\n\nThe sort of thing that has allowed MySQL to get really popular is the fact \nthat there are tools like cPanel <http://www.cpanel.net/> that provide a \n\"friendly\" front end to manage web site 'stuff,' including managing MySQL.\n\n> And isn't Red Hat doing anything to promote us?\n\nThey ought to be...\n\n> Finally, thanks to you guys, we are still advancing our project faster\n> than most commercial software. How many RDBMSs out there have DOMAIN\n> support? How many have advanced data types that really work? How\n> many support 5 procedural languages and subselects just about\n> everywhere?\n\n... And this is the plausible strategy for making PostgreSQL \nincreasingly popular. _Improve it_ and people will come.\n\nMySQL won the \"basic DBMS for web-hosting\" battle, and there's no real way to \novercome that _marketing_ advantage. MySQL got there the \"fustest with the \nmostest,\" with things like cPanel allowing ISPs and web hosters to offer a \nfree DBMS.\n\nPostgreSQL can offer \"the same thing;\" to evict MySQL, it will have to offer \n_really compelling_ advantages. Price _isn't_ a compelling advantage. \nPostgreSQL may be more powerful, but people are successfully using MySQL, so \napparently it's _usable enough_ for a lot of purposes.\n\nThe other thing that can make PostgreSQL an increasingly preferable option is \nfor there to be an increasing set of _applications_ that prefer PostgreSQL.\n\nFor instance, GnuCash has an SQL interface, or, to be more precise, a \nPostgreSQL interface. The makers of GnuCash found they preferred PostgreSQL's \ncapabilities, and are uninterested in supporting a bunch of DBMSes. 
Somewhat \nsimilarly, SQL-Ledger is compatible with PostgreSQL (and Oracle), but NOT MySQL.\n\nThe thing that will make PostgreSQL the \"killer app\" that needs to be around \nis there being _applications_ that \"prefer PostgreSQL.\" THAT is the best \nmarketing.\n--\n(reverse (concatenate 'string \"ac.notelrac.teneerf@\" \"454aa\"))\nhttp://cbbrowne.com/info/lsf.html\nEveryone has a photographic memory, some don't have film.\n\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 14:34:59 -0400", "msg_from": "cbbrowne@cbbrowne.com", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a " }, { "msg_contents": "Marc,\n\n> Just a personal observation here, based on the work we've been doing\n> lately ... there are *a lot* of very large companies out there, one of\n> which we just did onsite training for that I swear has an article in just\n> about every trade magazine I read, each month ... the problem isn't\n> getting companies to adopt/use PgSQL ... the problem is getting them to\n> acknowledge its usage ...\n\nYeah. I know one database-backed application, used by about 40% of the people \nin this city, which runs on PostgreSQL. However, the company that built \nthat application won't let me publicize their usage because they are worried \nabout getting political flack from Oracle and Microsoft's lobbyists at City \nHall.\n\n-- \n-Josh Berkus\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 12:30:12 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Re: Democracy and organisation : let's make a revolution in" }, { "msg_contents": "On Tue, Jun 25, 2002 at 02:34:59PM -0400, cbbrowne@cbbrowne.com wrote:\n> The _crucial_ marketing that would need to take place is NOT to the public. 
\n> It would be to:\n> a) ISPs\n> b) Vendors of ISP support software.\n> \n> The sort of thing that has allowed MySQL to get really popular is the fact \n> that there are tools like cPanel <http://www.cpanel.net/> that provide a \n> \"friendly\" front end to manage web site 'stuff,' including managing MySQL.\n\nOne consideration is that prior to 7.3, PostgreSQL's permissions scheme\nmade it difficult or impossible to use in a shared-hosting environment (or\nat least, that's what I've heard from several different people -- I\ndon't have any personal experience).\n\nI'm aware that there are people offering PostgreSQL hosting, but the\n*perception* among the hosting techies I've talked to is that MySQL's\nfeature set is better suited for a shared hosting environment. With\nschemas and improved permissions in 7.3, that may be a thing of the\npast (at which point, ISPs might be a prime area for marketing).\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n\n", "msg_date": "Tue, 25 Jun 2002 17:19:04 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a" }, { "msg_contents": "\nFolks,\n\n> Both are useful while putting together a quick system. Especially if\n> you project the ERD and PGAdmin processes onto a screen if working in a\n> group for a quick application. Explain while creating.\n\nBTW, as nice as PG-Admin is, I do not consider it a solution to our DB \nmanagement GUI desires. It only runs on Windows, and cannot be ported to \n*nix. 
:-(\n\nWeren't we resurrecting PGAccess?\n\n-- \n-Josh Berkus\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 15:33:02 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "Hi\n\n-*- Hannu Krosing <hannu@tm.ee> [ 2002-06-25 21:34 ]:\n> On Tue, 2002-06-25 at 22:48, Josh Berkus wrote:\n> > Frankly, the only thing that MS SQL has over us is easy-but-unreliable GUI \n> > admin tools (backup, user, and database management).\n> \n> We almost have it in pgAdmin and Tora.\n\nWell, pgAdmin has come a long way -- some of my fellow admins that have Windows workstations use it, and so do the developers at my company. Don't know Tora though, must try it out sometime.\n\nHowever, I think some sort of scheduling like MS SQL offers would be very helpful for newcomers. Scheduling stuff like backup (now only dumps), vacuums and such with cron is unfortunately not straightforward enough.\n\nMerging configuration files is good. However, things like access control should IMO be configurable with SQL commands -- which would also help in development of better administration tools.\n\n\nJust my two cents.\n\n\nRegards,\nTolli,\ntolli@tol.li\n\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 22:41:39 +0000", "msg_from": "Þórhallur Hálfdánarson <tolli@tol.li>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a" }, { "msg_contents": "Hi James,\n\nJames Hubbard wrote:\n> \n<snip> \n> Keep in mind that I was primarily focusing on the potential to include it with\n> something like OpenOffice. 
This is why I said that my post was a little far\n> fetched.\n\nMy understanding of this is that the OpenOffice.org guys don't want\neither PostgreSQL or MySQL as their inbuilt database, but are instead\nlooking at an alternative Open Source database (HSQL I think, don't\nremember for sure).\n\nDoing just what you proposed (getting a Win32 version of PostgreSQL and\noffering it to the OpenOffice.org people) was suggested to NuSphere a few\nmonths ago, after Jan joined them. For some reason (not sure why) it\nwasn't something which they decided to pursue.\n\nGood suggestion though James. :-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> James Hubbard\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n\n", "msg_date": "Wed, 26 Jun 2002 11:30:21 +0930", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "Ok, a few comments on various messages that have appeared in this thread.\n\n> From: James Hubbard <jhubbard@mcs.uvawise.edu>\n>\n> I wasn't really comparing to MySQL here. I meant, in relationship\n> to MS Access. Start it up and it just works.\n\nYeah, a point-and-drool installation wizard for postgres under windows\nwould be great. I think, from looking at PGAdminII, that we've already\ngot great admin tools; it seems just as good as SQL Server Enterprise\nManager to me.\n\n> I think that would work pretty well. A basic configuration that\n> locks everything down with the goal of a single user desktop setting,\n> but also provides the user with the capability of opening things up\n> so that it could function as a multiuser system.\n\nI don't understand this. What's the difference between a \"single\nuser desktop setting\" and a low-end multi-user system? 
I don't see\nwhat would change.\n\nIf you're talking more than twenty or thirty active connections and\na couple of gig of data, yeah, then you need to change stuff. But then\nyou need a real admin and some planning, and no point-and-click tool is\ngoing to help with that.\n\n> Keep in mind that I was primarily focusing on the potential to include\n> it with something like OpenOffice. This is why I said that my post was\n> a little far fetched.\n\nThat sounds like a great idea to me.\n\n> From: Scott Marlowe <scott.marlowe@ihs.com>\n>\n> I could personally care less if postgresql ever runs as a native window\n> application, since I personally don't believe windows is a suitable OS for\n> hosting a dbms.\n\nWell, windows is fine for hosting a DBMS if you're talking about the\nfacilities the OS offers a DBMS, and efficiency. Administering windows\nboxes sucks, but cygwin can help fix that. (Not that I'd care to go back\nto a database running under Windows, but it is practical, if unpleasant.)\n\n> Postgresql is quite portable, when one is porting it to OSes that\n> aren't windows, like VMS, MVS, or all the different flavors of Unix.\n\nI'm not sure what's up with this. Windows does offer POSIX compatibility,\nafter all.\n\n> From: Josh Berkus <josh@agliodbs.com>\n>\n> > Maybe a PostgreSQL-Lite would be a better idea. One that condenses the main\n> > code down to something easy, that a desktop user could use, but maintain the\n> > strength of the core code. I suppose that means creating another project.\n>\n> Personally, I think it's a redundant idea. There are a couple dozen\n> \"lightweight\" RDBMSs available off Sourceforge. There is only one\n> \"Heavy-duty\" database: Us.\n\nAnd what on earth is the advantage of \"PostgreSQL Lite\"? I don't see how\nit would be easier to use in any way. 
The install difficulties could be\nworked around with an install wizard, and PGAdminII seems already to be\na good admin interface.\n\n> I also shot down PostgreSQL as a possibility for inclusion with\n> OpenOffice.org, since Postgres is quite firmly a *server* database, and 70%\n> of OpenOffice.org installs are on Windows 95/98.\n\nAgain, I don't see the problem. Server, schmerver; there's nothing wrong\nwith running postgres for \"non-server\" tasks. Unless it's completely\nimpossible to port to Win98, but is that really the case?\n\n> From: Josh Berkus <josh@agliodbs.com>\n>\n> On the other hand, we already outstrip MS SQL Server's feature set,\n> as well as being more reliable, lower-maintenance, multi-platform,\n> and cheaper. Frankly, the only thing that MS SQL has over us is\n> easy-but-unreliable GUI admin tools (backup, user, and database\n> management).\n\nUh...\"no way.\" I've found MS SQL Server is consistently faster when it\ncomes to the crunch, due to things like writing a heck of a lot less\nto the log files, significantly less table overhead, having clustered\nindexes, and so on. (Probably more efficient buffer management also\nhelps a bit.) Other areas where postgres can't compare is backup and\nrestore, ability to do transaction log shipping, replication, access\nrights, disk allocation (i.e., being able to determine on which disk\nyou're going to put a given table), and so on. SQL Server's optimizer\nalso seems to me to be better, though I could be wrong there.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 11:41:06 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a" }, { "msg_contents": "> BTW, as nice as PG-Admin is, I do not consider it a solution to our DB \n> management GUI desires. 
It only runs on Windows, and cannot be ported to \n> *nix. :-(\n> \n> Weren't we resurrecting PGAccess?\n\nWhat other development options do we have for something that is GUI and \nportable to all platforms that postgresql runs on? Java? wxWindows? Qt? \nGtk? I would think that Gtk is probably the most portable, and it has \nbindings to many languages, but we would probably want to use C. \n\nComments?\n\n\n", "msg_date": "Tue, 25 Jun 2002 23:15:41 -0400", "msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "Josh Berkus wrote:\n> \n> Folks,\n> \n> > Both are useful while putting together a quick system. Especially if\n> > you project the ERD and PGAdmin processes onto a screen if working in a\n> > group for a quick application. Explain while creating.\n> \n> BTW, as nice as PG-Admin is, I do not consider it a solution to our DB \n> management GUI desires. It only runs on Windows, and cannot be ported to \n> *nix. :-(\n> \n> Weren't we resurrecting PGAccess?\n\nWe are. I will grab their tar file tomorrow and update our CVS.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 25 Jun 2002 23:24:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "> How many ppl here can honestly say they know of *at least* one company, if\n> not more, that are using PgSQL, but don't advertise, or let known, that\n> fact? I can think of a half dozen that we (PgSQL, Inc) have worked with\n> to convert, and train, so far ... Tom, call it RedHat DB or PgSQL, its the\n> same code base ... any numbers from that end? 
Bruce, how about from SRA?\n\nMost of the companies are Japanese-only, so we would not have heard\nabout them, but there was one of interest. Tatsuo, can we share that one?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Tue, 25 Jun 2002 23:25:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "Comparing PGSQL to MySQL is like apples to oranges. I don't see why one\nwould want to take a great project and ORDBMS such as PGSQL and make a\ndesktop version of it. When a desktop version is completely opposite of\nwhat PGSQL is, a commercial-grade RDBMS. Sure it lacks some of the areas\nwhen compared to Oracle and SQL Server... but I don't see how the PGSQL team\nis going to get as much money as Oracle/Microsoft to develop, perform R&D,\nand compete against commercial rivals. Yet, I have never seen an\nopen-source database system as good as PGSQL, especially being as it is\ndeveloped on a volunteer basis.\n\nAs far as MySQL goes, they can have their easy-to-install and manage\n\"features\". I was on the MySQL-dev team for three months trying to convince\nMonty, Sasha, and others that MySQL needed features found in commercial\nsystems (triggers, stored procs, transactions, constraints, etc.) They\nexplicitly and rudely told me that MySQL wasn't developed to perform in\nthese areas and to go elsewhere. Ever since then, I've been using PGSQL in\na production basis. The argument for easy-to-install systems is common with\nmany MySQL users, and those who don't understand how databases work. Sure\nit would be nice to have the system do complete self-tuning but in reality,\nthe DBA should know how to make the database perform better under different\nsituations. 
And, as for ease-of-install, I can download the PGSQL package\nfor my OpenBSD boxes and it works perfectly, same on CYGWIN. If I want to\ntune it, I can.\n\nThe objective of a good RDBMS is to allow fast access to data while also\nmaintaining data integrity (ACID properties). I personally think that\ndumbing-down database systems only causes more problems. Look at Microsoft\nand NT/2K/XP. Now there are MCSEs all over the place acting like they are\nnetwork admins because they can point-and-click to start a IIS service.\nOooh, ahh. I would rather be on UNIX where I need to know exactly what's\ngoing on. And, UNIX users don't just jump up and blame the software when\nsomething goes wrong... as often happens with Windows and Access. The same\nfollows with many MySQL users I've encountered. They don't have to do\nanything with the system, but consider themselves experts. With all my\nOracle, SQL Server, and PostgreSQL boxes, I personally tune them to do what\ntasks are designated for them. I think PGSQL, as the project goes, is just\nfine as it is. A little commercial support and marketing could greatly\nassist in furthering the usage of PGSQL, true. If the group agrees that\nthis would be a good idea, then I would be willing to do this. I also think\nit would be a good idea to get a PostgreSQL foundation or similar non-profit\nthat could accept donations, etc. to further development. Don't dumb down\nthe system and create a limited version just for people that want an\nopen-source Access... they can use MySQL for that. Just my rant.\n\nCordially,\n\nJonah H. Harris, Chairman/CEO\nNightStar Corporation\n\"One company, one world, one BIG difference!\"\n\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 21:33:48 -0600", "msg_from": "\"Jonah H. 
Harris\" <jharris@nightstarcorporation.com>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : My Opinion" }, { "msg_contents": "Hi Jonah,\n\nWas just looking around your company website, and it mentions a product\ncalled \"Nextgres\" which looks interesting :\n\nhttp://www.nightstarcorporation.com/?op=products\n\nHow do you guys implement the PostgreSQL SQL parser as well as the\nInterbase and Oracle parsers? Is it like an adaption of PostgreSQL with\naddons or something? Also it mentions its compatible with PostgreSQL\n7.2.2, so I'm wondering if that's a typo or something.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n\"Jonah H. Harris\" wrote:\n> \n> Comparing PGSQL to MySQL is like apples to oranges. I don't see why one\n> would want to take a great project and ORDBMS such as PGSQL and make a\n> desktop version of it. When a desktop version is completely opposite of\n> what PGSQL is, a commercial-grade RDBMS. Sure it lacks some of the areas\n> when compared to Oracle and SQL Server... but I don't see how the PGSQL team\n> is going to get as much money as Oracle/Microsoft to develop, perform R&D,\n> and compete against commercial rivals. Yet, I have never seen an\n> open-source database system as good as PGSQL, especially being as it is\n> developed on a volunteer basis.\n> \n> As far as MySQL goes, they can have their easy-to-install and manage\n> \"features\". I was on the MySQL-dev team for three months trying to convince\n> Monty, Sasha, and others that MySQL needed features found in commercial\n> systems (triggers, stored procs, transactions, constraints, etc.) They\n> explicitly and rudely told me that MySQL wasn't developed to perform in\n> these areas and to go elsewhere. Ever since then, I've been using PGSQL in\n> a production basis. The argument for easy-to-install systems is common with\n> many MySQL users, and those who don't understand how databases work. 
Sure\n> it would be nice to have the system do complete self-tuning but in reality,\n> the DBA should know how to make the database perform better under different\n> situations. And, as for ease-of-install, I can download the PGSQL package\n> for my OpenBSD boxes and it works perfectly, same on CYGWIN. If I want to\n> tune it, I can.\n> \n> The objective of a good RDBMS is to allow fast access to data while also\n> maintaining data integrity (ACID properties). I personally think that\n> dumbing-down database systems only causes more problems. Look at Microsoft\n> and NT/2K/XP. Now there are MCSEs all over the place acting like they are\n> network admins because they can point-and-click to start a IIS service.\n> Oooh, ahh. I would rather be on UNIX where I need to know exactly what's\n> going on. And, UNIX users don't just jump up and blame the software when\n> something goes wrong... as often happens with Windows and Access. The same\n> follows with many MySQL users I've encountered. They don't have to do\n> anything with the system, but consider themselves experts. With all my\n> Oracle, SQL Server, and PostgreSQL boxes, I personally tune them to do what\n> tasks are designated for them. I think PGSQL, as the project goes, is just\n> fine as it is. A little commercial support and marketing could greatly\n> assist in furthering the usage of PGSQL, true. If the group agrees that\n> this would be a good idea, then I would be willing to do this. I also think\n> it would be a good idea to get a PostgreSQL foundation or similar non-profit\n> that could accept donations, etc. to further development. Don't dumb down\n> the system and create a limited version just for people that want an\n> open-source Access... they can use MySQL for that. Just my rant.\n> \n> Cordially,\n> \n> Jonah H. 
Harris, Chairman/CEO\n> NightStar Corporation\n> \"One company, one world, one BIG difference!\"\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n\n", "msg_date": "Wed, 26 Jun 2002 15:37:35 +0930", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Nextgres?" }, { "msg_contents": "> OK, I want to know, does anyone see MySQL gaining in market share in\n> comparison to PostgreSQL, or is MySQL gaining against other databases?\n> Is MySQL gaining sites faster than we are gaining sites?\n>\n> Every indication I can see is that PostgreSQL is gaining on MySQL.\n>\n> The Linux/FreeBSD comparison is potent. Does PostgreSQL remain a niche\n> player? Does *BSD remain a niche player?\n\nIn all honesty, I think that MySQL simply expands the market for Postgres.\nMySQL is widely promoted by every idiot out there. So everyone and their\ndog starts using MySQL. Then about 6 months later they realise it sucks\n(which is _exactly_ what happened at our business) and then they switch to\nPostgres. Every day on PHPBuilder's SQL forum there is someone asking why\ntheir subselect doesn't work in MySQL and how hard it is to migrate from\nMySQL.\n\nIn fact, probably the best thing we can offer is an _excellent_ MySQL to\nPostgreSQL conversion tool.\n\nChris\n\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 14:08:30 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution in" }, { "msg_contents": "> What other development options do we have for something that is GUI and\n> portable to all platforms that postgresql runs on? Java? wxWindows? Qt?\n> Gtk? I would think that Gtk is probably the most portable, and it has\n> bindings to many languages, but we would probably want to use C.\n\nTOra uses QT and is cool. 
Unfortunately Windows version costs money. It is\nutterly, totally awesome though. Don't know how good its Postgres support\nis working at the moment, tho.\n\nhttp://www.globecom.se/tora/\n\nChris\n\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 14:51:09 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "> > I wasn't really comparing to MySQL here. I meant, in relationship\n> > to MS Access. Start it up and it just works.\n>\n> Yeah, a point-and-drool installation wizard for postgres under windows\n> would be great. I think, from looking at PGAdminII, that we've already\n> got great admin tools; it seems just as good as SQL Server Enterprise\n> Manager to me.\n\nOnce we have a proper Win32 native version, the guy in our office who writes\nthe Win32 installers for our Palm/PocketPC software said he'll do one for us\nno sweat. We use the free WinAmp installer which is really good... Says it\nonly takes a couple of days...\n\nChris\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 15:09:39 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a" }, { "msg_contents": "> > Yeah, a point-and-drool installation wizard for postgres under windows\n> > would be great. I think, from looking at PGAdminII, that we've already\n> > got great admin tools; it seems just as good as SQL Server Enterprise\n> > Manager to me.\n>\n> Once we have a proper Win32 native version, the guy in our office who\nwrites\n> the Win32 installers for our Palm/PocketPC software said he'll do one for\nus\n> no sweat. We use the free WinAmp installer which is really good... 
Says\nit\n> only takes a couple of days...\n\nBTW - here is the URL:\n\nhttp://www.nullsoft.com/free/nsis/\n\nChris\n\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 16:46:34 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a" }, { "msg_contents": "Two points to this discussion. \n\nI hate to admit this, but to some people, a Windows version is important. \nYesterday I learned that one product developed here will have a MySQL \nimplementation because marketing wants a free implementation.\nThe biggest advantage seems to be that it's working on Windows. And the \nproject leader knows of nothing but Windows :-( \n\nNext group to impress is Database Designers. I've been looking for a design \ntool for some time, but there's no Open Source equivalent to ErWin. And \nErWin can't create and reverse engineer PostgreSQL databases. \n\n --\nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 Åben 14.00-18.00 Web: www.suse.dk\n2000 Frederiksberg Lørdag 11.00-17.00 Email: kar@kakidata.dk \n\n\n", "msg_date": "Wed, 26 Jun 2002 08:54:09 GMT", "msg_from": "\"Kaare Rasmussen\" <kar@kakidata.dk>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "I have started a java admin tool on sourceforge just 2 weeks ago\nactually, www.sf.net/jpgadmin\n\nDave\n\nOn Wed, 2002-06-26 at 02:51, Christopher Kings-Lynne wrote:\n> > What other development options do we have for something that is GUI and\n> > portable to all platforms that postgresql runs on? Java? wxWindows? Qt?\n> > Gtk? I would think that Gtk is probably the most portable, and it has\n> > bindings to many languages, but we would probably want to use C.\n> \n> TOra uses QT and is cool. 
Don't know how good its Postgres support\n> is working at the moment, tho.\n> \n> http://www.globecom.se/tora/\n> \n> Chris\n> \n> \n> \n\n\n\n\n\n", "msg_date": "26 Jun 2002 07:05:49 -0400", "msg_from": "Dave Cramer <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "I have started a java admin tool on sourceforge just 2 weeks ago\nactually, www.sf.net/jpgadmin\n\nDave\n\nOn Wed, 2002-06-26 at 02:51, Christopher Kings-Lynne wrote:\n> > What other development options do we have for soemthing that is GUI and\n> > portable to all platforms that postgresql runs on? Java? wxWindows? Qt?\n> > Gtk? I would think that Gtk is probably the most portable, and it has\n> > bindings to many languages, but we would probalby want to use C.\n> \n> TOra uses QT and is cool. Unfortunately Windows version costs money. It is\n> utterly, totally awesome though. Don't know how good its Postgres support\n> is working at the moment, tho.\n> \n> http://www.globecom.se/tora/\n> \n> Chris\n> \n> \n> \n\n\n\n\n\n\n", "msg_date": "26 Jun 2002 08:09:29 -0400", "msg_from": "Dave Cramer <dave@fastcrypt.com>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "> TOra uses QT and is cool. Unfortunately Windows version costs money. It is\n> utterly, totally awesome though. Don't know how good its Postgres support\n> is working at the moment, tho.\n\nIs that true? There is QT Free for windows. It's not open sourced at all but \nis free as in beer.\n\n\n", "msg_date": "Wed, 26 Jun 2002 09:44:07 -0400", "msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "\ncould we get this added to gborg and a link created to it? 
we're working\non marketing Gborg, and the software that is listed there, and Chris added\n(at my request) in code to the 'news' section so that whenever there are\nchanges, it automatically gets sent to the -announce list so that ppl are\naware of changes/enhancements/news ...\n\nOn 26 Jun 2002, Dave Cramer wrote:\n\n> I have started a java admin tool on sourceforge just 2 weeks ago\n> actually, www.sf.net/jpgadmin\n>\n> Dave\n>\n> On Wed, 2002-06-26 at 02:51, Christopher Kings-Lynne wrote:\n> > > What other development options do we have for soemthing that is GUI and\n> > > portable to all platforms that postgresql runs on? Java? wxWindows? Qt?\n> > > Gtk? I would think that Gtk is probably the most portable, and it has\n> > > bindings to many languages, but we would probalby want to use C.\n> >\n> > TOra uses QT and is cool. Unfortunately Windows version costs money. It is\n> > utterly, totally awesome though. Don't know how good its Postgres support\n> > is working at the moment, tho.\n> >\n> > http://www.globecom.se/tora/\n> >\n> > Chris\n> >\n> >\n> >\n>\n>\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n>\n>\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 11:21:14 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "Curt,\n\nYou do point out some good areas in which PostgreSQL needs to improve\nif we're going to go after the MS SQL market. 
The rest of this\ne-mail, though, is a refutation of your comparison.\n\nAs a professional MS SQL Server 7.0 manager, I have to disagree.\n However, I have not used MS SQL 2000 extensively, so it's possible\nthat some of these issues have been dealt with by MS in the version\nupgrade.\n\n> Uh...\"no way.\" I've found MS SQL Server is consistently faster when\n> it\n> comes to the crunch, due to things like writing a heck of a lot less\n> to the log files, significantly less table overhead, having clustered\n> indexes, and so on. \n\nUp to about a million records. For some reason, when MS SQL Server 7.0\nreaches the 1,000,000 point, it slows down to a crawl regardless of how\nmuch RAM and processor power you throw at it (such as a Proliant 7000\nwith dual processors, 2 gigs of RAM and Raid-5 ... and still only one\nperson at a time can do summaries on the 3,000,000 record timecard\ntable. Bleah!)\n\nAnd clustered indexes are only really useful on tables that don't see\nmuch write activity.\n\n> (Probably more efficient buffer management also\n> helps a bit.) \n\nAlso not in my experience. I've had quite a few occasions where MS SQL\nkeeps chewing up RAM until it runs out of available RAM ... and then\nkeeps going, locking up the NT server and forcing an emergency reboot.\n MS SQL doesn't seem to be able to cope with limited RAM, even when\nthat limit is 1gb.\n\n> Other areas where postgres can't compare is backup and\n> restore, \n\nHmmm .... MS SQL has nice GUI tools including tape management, and\nsupports incremental backup and Point-in-time recovery. 
On the other\nhand, MS SQL backup takes approximately 3x as long for a similar sized\ndatabase as PostgreSQL, the backup files are binary and can't be viewed\nor edited, sometimes the restore just fails for no good reason\ncorrupting your database and shutting down the system, restore to a\ndatabase with different security setup is sheer hell, and the database\nfiles can't be moved on the disk without destroying them. \n\nI'd say we're at a draw with MS SQL as far as backup/restore goes.\n Ours is more reliable, portable, and faster. Theirs has lots of nice\nadmin tools and features.\n\n>ability to do transaction log shipping, \n\nWell, we don't have a transaction log in the SQL Server sense, so this\nisn't relevant.\n\n>replication, \n\nThis is a missing piece for Postgres that's been much discussed on this\nlist.\n\n> access\n> rights, \n\nWe have these, especially with 7.3's new DB permissions.\n\ndisk allocation (i.e., being able to determine on which disk\n> you're going to put a given table), \n\nThis is possible with Postgres, just rather manual. And, unlike MS\nSQL, we can move the table without corrupting the database. Once\nagain, all we need is a good admin interface.\n\n> and so on. SQL Server's optimizer\n> also seems to me to be better, though I could be wrong there.\n\nHaving ported applications: You are wrong. 
There are a few things\nSQL server does faster (straight selects with lots (>40) of JOINs is\nthe only one I've proven) but on anything complex, it bogs down.\n Particularly things like nested subselects.\n\nNow, let me mention a few of MS SQL's defects that you've missed:\n Poor/nonexistent network security (the port 1433 hole, hey?), huge\nresource consumption, a byzantine authentication structure that\nfrequently requires hours of troubleshooting by an NT security expert,\nweak implementation of the SQL standard with lots of proprietary\nextensions, 8k data pages, no configuration of memory usage, and those\nstupid, stupid READ locks that make many complex updates deadlock.\n\n-Josh Berkus\n\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 09:18:06 -0700", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a" }, { "msg_contents": "what is gborg ? :)\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Marc G. Fournier\n> Sent: Wednesday, June 26, 2002 11:21 AM\n> To: Dave Cramer\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Democracy and organisation : let's make a\n> revolution\n>\n>\n>\n> could we get this added to gborg and a link created to it? 
we're working\n> on marketing Gborg, and the software that is listed there, and Chris added\n> (at my request) in code to the 'news' section so that whenever there are\n> changes, it automatically gets sent to the -announce list so that ppl are\n> aware of changes/enhancements/news ...\n>\n> On 26 Jun 2002, Dave Cramer wrote:\n>\n> > I have started a java admin tool on sourceforge just 2 weeks ago\n> > actually, www.sf.net/jpgadmin\n> >\n> > Dave\n> >\n> > On Wed, 2002-06-26 at 02:51, Christopher Kings-Lynne wrote:\n> > > > What other development options do we have for soemthing\n> that is GUI and\n> > > > portable to all platforms that postgresql runs on? Java?\n> wxWindows? Qt?\n> > > > Gtk? I would think that Gtk is probably the most portable,\n> and it has\n> > > > bindings to many languages, but we would probalby want to use C.\n> > >\n> > > TOra uses QT and is cool. Unfortunately Windows version\n> costs money. It is\n> > > utterly, totally awesome though. Don't know how good its\n> Postgres support\n> > > is working at the moment, tho.\n> > >\n> > > http://www.globecom.se/tora/\n> > >\n> > > Chris\n> > >\n> > >\n> > >\n> >\n> >\n> >\n> >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> >\n> >\n> >\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n>\n>\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 14:08:15 -0300", "msg_from": "\"Jeff MacDonald\" <jeff@tsunamicreek.com>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "Justin Clift wrote:\n> Hi Jonah,\n> \n> Was just looking around your company website, and it mentions a product\n> 
called \"Nextgres\" which looks interesting :\n> \n> http://www.nightstarcorporation.com/?op=products\n> \n> How do you guys implement the PostgreSQL SQL parser as well as the\n> Interbase and Oracle parsers? Is it like an adaption of PostgreSQL with\n> addons or something? Also it mentions its compatible with PostgreSQL\n> 7.2.2, so I'm wondering if that's a typo or something.\n\nThey are so compatible, they are compatible with releases we haven't\neven made yet. ;-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 26 Jun 2002 13:53:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Nextgres?" }, { "msg_contents": "Marc,\n\nI tried to create it on gborg originally, but could not complete the\nform ??\n\nBut to answer your question I would prefer to have it at gborg, so I\nwill try again and let you know the results.\n\nDave\nOn Wed, 2002-06-26 at 10:21, Marc G. Fournier wrote:\n> \n> could we get this added to gborg and a link created to it? we're working\n> on marketing Gborg, and the software that is listed there, and Chris added\n> (at my request) in code to the 'news' section so that whenever there are\n> changes, it automatically gets sent to the -announce list so that ppl are\n> aware of changes/enhancements/news ...\n> \n> On 26 Jun 2002, Dave Cramer wrote:\n> \n> > I have started a java admin tool on sourceforge just 2 weeks ago\n> > actually, www.sf.net/jpgadmin\n> >\n> > Dave\n> >\n> > On Wed, 2002-06-26 at 02:51, Christopher Kings-Lynne wrote:\n> > > > What other development options do we have for soemthing that is GUI and\n> > > > portable to all platforms that postgresql runs on? Java? wxWindows? Qt?\n> > > > Gtk? 
I would think that Gtk is probably the most portable, and it has\n> > > > bindings to many languages, but we would probalby want to use C.\n> > >\n> > > TOra uses QT and is cool. Unfortunately Windows version costs money. It is\n> > > utterly, totally awesome though. Don't know how good its Postgres support\n> > > is working at the moment, tho.\n> > >\n> > > http://www.globecom.se/tora/\n> > >\n> > > Chris\n> > >\n> > >\n> > >\n> >\n> >\n> >\n> >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> >\n> >\n> >\n> \n> \n\n\n\n\n\n", "msg_date": "26 Jun 2002 13:59:30 -0400", "msg_from": "Dave Cramer <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "\nPlease do ... I believe Chris was able to clean out several bugs when the\nNPgSQL project started ...\n\nOn 26 Jun 2002, Dave Cramer wrote:\n\n> Marc,\n>\n> I tried to create it on gborg originally, but could not complete the\n> form ??\n>\n> But to answer your question I would prefer to have it at gborg, so I\n> will try again and let you know the results.\n>\n> Dave\n> On Wed, 2002-06-26 at 10:21, Marc G. Fournier wrote:\n> >\n> > could we get this added to gborg and a link created to it? 
we're working\n> > on marketing Gborg, and the software that is listed there, and Chris added\n> > (at my request) in code to the 'news' section so that whenever there are\n> > changes, it automatically gets sent to the -announce list so that ppl are\n> > aware of changes/enhancements/news ...\n> >\n> > On 26 Jun 2002, Dave Cramer wrote:\n> >\n> > > I have started a java admin tool on sourceforge just 2 weeks ago\n> > > actually, www.sf.net/jpgadmin\n> > >\n> > > Dave\n> > >\n> > > On Wed, 2002-06-26 at 02:51, Christopher Kings-Lynne wrote:\n> > > > > What other development options do we have for soemthing that is GUI and\n> > > > > portable to all platforms that postgresql runs on? Java? wxWindows? Qt?\n> > > > > Gtk? I would think that Gtk is probably the most portable, and it has\n> > > > > bindings to many languages, but we would probalby want to use C.\n> > > >\n> > > > TOra uses QT and is cool. Unfortunately Windows version costs money. It is\n> > > > utterly, totally awesome though. Don't know how good its Postgres support\n> > > > is working at the moment, tho.\n> > > >\n> > > > http://www.globecom.se/tora/\n> > > >\n> > > > Chris\n> > > >\n> > > >\n> > > >\n> > >\n> > >\n> > >\n> > >\n> > >\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > subscribe-nomail command to majordomo@postgresql.org so that your\n> > > message can get through to the mailing list cleanly\n> > >\n> > >\n> > >\n> >\n> >\n>\n>\n>\n>\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 15:54:27 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "Marc G. 
Fournier writes:\n\n> How many ppl here can honestly say they know of *at least* one company, if\n> not more, that are using PgSQL, but don't advertise, or let known, that\n> fact?\n\nOf course they don't. No company advertises what software it uses\ninternally. They don't advertise that they use accounting software X,\noperating system Y, or instant message tool Z. They don't advertise that\nthey use Foo brand telephones or Bar brand furniture. They have other\nthings to do.\n\nThe advertisements for Oracle exist because Oracle is seen to be rock solid.\nBut no one advertises that they use MS SQL Server, Sybase, or Informix.\nThat creates the association (just like PostgreSQL, if PostgreSQL had any\nassociation), it's almost as good but cheaper. Why would anyone advertise\nwith that?\n\nOn the other hand, those places that do advertise that they're using a\nparticular non-Oracle database either have a marketing interest of their\nown, or they do not, in fact, have other things to do.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 22:44:32 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" }, { "msg_contents": "On Wed, 26 Jun 2002, Josh Berkus wrote:\n\n> As a professional MS SQL Server 7.0 manager....\n\nWell, I wouldn't call myself a professional at managing SQL Server, but\nI did do about two years of work on an application (database design,\nprogramming and day-to-day running of the production system) that ran on\nSQL Server 7.0 and gave it a pretty good workout. 
I've used 2000 a bit,\nbut most of my comments apply to my experience with 7.0.\n\n> > Uh...\"no way.\" I've found MS SQL Server is consistently faster when\n> > it comes to the crunch, due to things like writing a heck of a lot\n> > less to the log files, significantly less table overhead, having\n> > clustered indexes, and so on.\n>\n> Up to about a million records. For some reason, when MS SQL Server 7.0\n> reaches the 1,000,000 point, it slows down to a crawl regardless of\n> how much RAM and processor power you throw at it (such as a Proliant\n> 7000 with dual processors, 2 gigs of RAM and Raid-5 ... and still only\n> one person at a time can do summaries on the 3,000,000 record timecard\n> table. Bleah!)\n\nReally? I've dealt with 85 million row tables in SQL Server without\ndifficulty, and the machine was not that much larger than the one you\ndescribe. (2-way 800 MHz Xeon, 4 GB RAM, Clarion 12-disk RAID array.)\n\n> And clustered indexes are only really useful on tables that don't see\n> much write activity.\n\nI've not found that to be true. If the write activity is *really*\nheavy you've got a problem, but if it's moderate, but not really low,\nclustered indexes can be really helpful.\n\nTo give you an idea of what clustering can do for a query in some\ncircumstances, clustering a 500 million row table under postgres on\nthe appropriate column reduced one of my queries from 70 seconds to\n0.6 seconds. The problem with postgres is having to re-cluster it on a\nregular basis....\n\n> I'd say we're at a draw with MS SQL as far as backup/restore goes.\n> Ours is more reliable, portable, and faster. 
Theirs has lots of nice\n> admin tools and features.\n\nWhile you're right that there have been problems with restores on SQL\nserver from time to time, I've done a *lot* of large (120 GB database)\nbackups and restores (copying a production system to a development\nserver), and for large tables, I've found SQL Server's binary backups to\nbe faster to restore than postgres' \"re-create the database from COPY\nstatements\" system.\n\n> >ability to do transaction log shipping,\n>\n> Well, we don't have a transaction log in the SQL Server sense, so this\n> isn't relevant.\n\nIt is completely relevant, because log shipping allows fast, easy and\nreliable replication. Not to mention another good method of backup.\n\n> > access rights,\n>\n> We have these, especially with 7.3's new DB permissions.\n\n7.2 has extremely poor access permissions. 7.3 is not out yet.\n\n> disk allocation (i.e., being able to determine on which disk > you're\n> going to put a given table),\n>\n> This is possible with Postgres, just rather manual.\n\nNo. Run CLUSTER on the table, or drop an index and re-create it, or just\nexpand the table so that it moves into yet another 1 GB file, and watch\nthe table, or part of it, move to a different disk. (The last situation\ncan be handled by pre-creating symlinks, but ugh!)\n\n> And, unlike MS SQL, we can move the table without corrupting the\ndatabase.\n\nYou can do that in MS SQL as well, just not by moving files around .\nLetting the database deal with this is a Good Thing, IMHO .\n\n> Once again, all we need is a good admin interface.\n\nNow, this I don't understand so well. PGAdminII seems pretty much as\ngood as Enterprise Manager to me, though I admit that I've looked at it\nonly briefly.\n\n> Now, let me mention a few of MS SQL's defects that you've missed:\n> Poor/nonexistant network security (the port 1433 hole, hey?)\n\nHm? You'll have to explain this one to me.\n\n> huge resource consumption\n\nI've just not found that to be so. 
Specifics?\n\n> a byzantine authentication structure that frequently requires hours of\n> troubleshooting by an NT security expert,\n\nEasy solution: don't use NT security. Ever. It's a nightmare.\n\n> 8k data pages\n\nYou mean like postgresql? Though the row size limit can be a bit\nannoying, I'll agree. But because of SQL Server's extent management, the\npage size is not a big problem.\n\nAnd look at some of the advantages, too. Much less row overhead, for\nexample.\n\n> no configuration of memory usage,\n\nI've always been able to tell it how much memory to use. There's not\nmuch you can do beyond that, but what did you want to do beyond that?\nIt's not like you're stuck with postgres's double-buffering (postgres\nand OS) system, or limits on connections based on how much of a certain\nspecial type of memory you allocate, or things like that. (I love the\nway that SQL server can deal with five thousand connections without even\nblinking.)\n\n> and those stupid, stupid READ locks that make many complex updates\n> deadlock.\n\nI'll admit that this is one area that I usually like much better about\npostgres. Although the locking system, though harder to use, has\nits advantages. For example, with postgresql the application *must*\nbe prepared to retry a transaction if it fails during a serialized\ntransaction. Application writers don't need to do this in SQL server.\n(It just deadlocks instead, and then it's the DBA's problem. :-))\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 12:20:43 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a" }, { "msg_contents": "Justin,\n\nIt doesn't appear that my response was posted to the list. I can thank\nYANOCC for that. 
However, did you receive it?\n\nBruce,\n\nDoes make for a good joke, but nowhere is compatibility mentioned. It was\ndiscussing the SQL grammar. And, it should have read PostgreSQL 7.1.2. The\nlast update to the site was very late and there are other misspellings as\nwell. That's what four hours helping people in #C on IRC will do to you.\nAlways glad to add that extra little bit of humor. Sorry.\n\n-Jonah\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\nSent: Wednesday, June 26, 2002 11:53 AM\nTo: Justin Clift\nCc: PostgreSQL-development\nSubject: Re: [HACKERS] Nextgres?\n\n\nJustin Clift wrote:\n> Hi Jonah,\n>\n> Was just looking around your company website, and it mentions a product\n> called \"Nextgres\" which looks interesting :\n>\n> http://www.nightstarcorporation.com/?op=products\n>\n> How do you guys implement the PostgreSQL SQL parser as well as the\n> Interbase and Oracle parsers? Is it like an adaption of PostgreSQL with\n> addons or something? Also it mentions its compatible with PostgreSQL\n> 7.2.2, so I'm wondering if that's a typo or something.\n\nThey are so compatible, they are compatible with releases we haven't\neven made yet. ;-)\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 21:21:17 -0600", "msg_from": "\"Jonah H. Harris\" <jharris@nightstarcorporation.com>", "msg_from_op": false, "msg_subject": "Re: Nextgres? Corrections." }, { "msg_contents": "> > TOra uses QT and is cool. Unfortunately Windows version costs money.\nIt is\n> > utterly, totally awesome though. 
Don't know how good its Postgres\nsupport\n> > is working at the moment, tho.\n>\n> Is that true? There is QT Free for windows. It's not open sourced at all\nbut\n> is free as in beer.\n\nNo, TOra itself wants money for the windows version.\n\nChris\n\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 11:30:34 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a revolution" } ]
[ { "msg_contents": "\nIf you define a database field like this with the \"without time zone\" \nclause.....\n\ncreated timestamp(6) without time zone DEFAULT 'now' NOT NULL,\n\nThen the current postgresql jdbc driver falls over in a heap when trying \nto select and retrieve this field.\n\n\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 16:42:11 +1000", "msg_from": "Chris Bitmead <chris@bitmead.com>", "msg_from_op": true, "msg_subject": "Definite bug in JDBC" }, { "msg_contents": "Chris,\n\nThis should have been posted to pgsql-jdbc instead of hackers.\n\nCan you define what you mean by 'current postgresql jdbc driver'? There \nare some bugs in the 7.2 driver that are fixed in current sources. Have \nyou tried the latest development driver from the jdbc.postgresql.org web \nsite?\n\nCan you define what you mean by 'falls over in a heap'? The actual \nerror message would really be useful.\n\nI can't seem to be able to reproduce your problem running the latest \ndevelopment driver against a 7.2.1 database.\n\nthanks,\n--Barry\n\nChris Bitmead wrote:\n\n>\n> If you define a database field like this with the \"without time zone\" \n> clause.....\n>\n> created timestamp(6) without time zone DEFAULT 'now' NOT NULL,\n>\n> Then the current postgresql jdbc driver falls over in a heap when \n> trying to select and retrieve this field.\n>\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n>\n>\n\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 09:11:42 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Definite bug in JDBC" } ]
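The thread above concerns the 7.2-era JDBC driver's "String index out of range: 23" failure on timestamps with fractional seconds. One plausible reading of that error: a value such as `2002-06-18 12:36:45.123` is exactly 23 characters long, so any parser that assumes something (e.g. a timezone sign) at a fixed offset of 23 indexes past the end of the string. The sketch below illustrates that failure mode and a tolerant alternative — in Python, not the driver's actual Java code; the offset-23 slicing logic and sample values are assumptions for the demo.

```python
# Hypothetical illustration (not the real JDBC driver source) of a
# fixed-offset timestamp parser versus a tolerant one.  The assumed
# offset-23 check is what makes a 23-character millisecond value fail.

def naive_zone(ts: str) -> str:
    # Assumes "yyyy-MM-dd HH:mm:ss.SSS" is always followed by a +hh/-hh
    # zone starting at index 23 -- raises IndexError on a bare value.
    if ts[23] not in "+-":
        raise ValueError("expected timezone sign at offset 23")
    return ts[23:]

def tolerant_split(ts: str) -> tuple[str, str, str]:
    # Split into (date, time-with-optional-fraction, optional-zone)
    # without assuming any fixed width for the fractional part.
    date, _, rest = ts.partition(" ")
    for i, ch in enumerate(rest):
        if ch in "+-":
            return date, rest[:i], rest[i:]
    return date, rest, ""

value = "2002-06-18 12:36:45.123"        # len(value) == 23
try:
    naive_zone(value)
except IndexError:
    print("naive parser: string index out of range at 23")
print("tolerant parser:", tolerant_split(value))
print("with zone:", tolerant_split("2002-06-18 12:36:45.123-07"))
```

The fix the posters were asking about amounts to the second approach: scan for the zone marker instead of assuming a fixed-width value.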
[ { "msg_contents": "Isn't that what msync() is for? Or is this not portable?\n\n-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\nSent: Tuesday, 25 June 2002 16:30\nTo: Curt Sampson\nCc: J. R. Nield; Bruce Momjian; PostgreSQL Hacker\nSubject: Re: [HACKERS] Buffer Management \n\n\nCurt Sampson <cjs@cynic.net> writes:\n> On Tue, 25 Jun 2002, Tom Lane wrote:\n>> The other discussion seemed to be considering how to mmap individual\n>> data files right into backends' address space. I do not believe this\n>> can possibly work, because of loss of control over visibility of data\n>> changes to other backends, timing of write-backs, etc.\n\n> I don't understand why there would be any loss of visibility of changes.\n> If two backends mmap the same block of a file, and it's shared, that's\n> the same block of physical memory that they're accessing.\n\nIs it? You have a mighty narrow conception of the range of\nimplementations that's possible for mmap.\n\nBut the main problem is that mmap doesn't let us control when changes to\nthe memory buffer will get reflected back to disk --- AFAICT, the OS is\nfree to do the write-back at any instant after you dirty the page, and\nthat completely breaks the WAL algorithm. (WAL = write AHEAD log;\nthe log entry describing a change must hit disk before the data page\nchange itself does.)\n\n\t\t\tregards, tom lane\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\nhttp://archives.postgresql.org\n\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 16:45:54 +0200", "msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>", "msg_from_op": true, "msg_subject": "Re: Buffer Management " }, { "msg_contents": "\"Mario Weilguni\" <mario.weilguni@icomedias.com> writes:\n> Isn't that what msync() is for? Or is this not portable?\n\nmsync can force not-yet-written changes down to disk. 
It does not\nprevent the OS from choosing to write changes *before* you invoke msync.\nFor example, the HPUX man page for msync says:\n\n Normal system activity can cause pages to be written to disk.\n Therefore, there are no guarantees that msync() is the only control\n over when pages are or are not written to disk.\n\nOur problem is that we want to enforce the write ordering \"WAL before\ndata file\". To do that, we write and fsync (or DSYNC, or something)\na WAL entry before we issue the write() against the data file. We\ndon't really care if the kernel delays the data file write beyond that\npoint, but we can be certain that the data file write did not occur\ntoo early.\n\nmsync is designed to ensure exactly the opposite constraint: it can\nguarantee that no changes remain unwritten after time T, but it can't\nguarantee that changes aren't written before time T.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2002 10:52:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Buffer Management " }, { "msg_contents": "* Tom Lane (tgl@sss.pgh.pa.us) [020625 11:00]:\n> \n> msync can force not-yet-written changes down to disk. It does not\n> prevent the OS from choosing to write changes *before* you invoke msync.\n> \n> Our problem is that we want to enforce the write ordering \"WAL before\n> data file\". To do that, we write and fsync (or DSYNC, or something)\n> a WAL entry before we issue the write() against the data file. We\n> don't really care if the kernel delays the data file write beyond that\n> point, but we can be certain that the data file write did not occur\n> too early.\n> \n> msync is designed to ensure exactly the opposite constraint: it can\n> guarantee that no changes remain unwritten after time T, but it can't\n> guarantee that changes aren't written before time T.\n\nOkay, so instead of looking for constraints from the OS on the data file,\nuse the constraints on the WAL file. 
It would work at the cost of a buffer\ncopy? Er, maybe two:\n\nmmap the data file and WAL separately.\nCopy the data file page to the WAL mmap area.\nModify the page.\nmsync() the WAL.\nCopy the page to the data file mmap area.\nmsync() or not the data file.\n\n(This is half baked, just thought I'd see if it stirred further thought).\n\nAs another approach, how expensive is re-MMAPing portions of the files\ncompared to the copies.\n\n-Brad\n\n> \n> \t\t\tregards, tom lane\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n\n", "msg_date": "Tue, 25 Jun 2002 12:12:45 -0400", "msg_from": "Bradley McLean <brad@bradm.net>", "msg_from_op": false, "msg_subject": "Re: Buffer Management" }, { "msg_contents": "Bradley McLean <brad@bradm.net> writes:\n> Okay, so instead of looking for constraints from the OS on the data file,\n> use the constraints on the WAL file. It would work at the cost of a buffer\n> copy? Er, maybe two:\n\n> mmap the data file and WAL separately.\n> Copy the data file page to the WAL mmap area.\n> Modify the page.\n> msync() the WAL.\n> Copy the page to the data file mmap area.\n> msync() or not the data file.\n\nHuh? The primary argument in favor of mmap is to avoid buffer copies;\nseems like you are paying that price anyway. Also, we do not want to\nmsync WAL for every single WAL record, but I think you'd have to with\nthe above scheme. 
(Assuming you have adequate shared buffer space,\nthe present scheme only has to fsync WAL at transaction commit and\ncheckpoints, because it won't actually push out data pages except at\ncheckpoint time.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2002 12:42:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Buffer Management " } ]
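The exchange above turns on one invariant: the WAL record must reach stable storage (via fsync) before the write() of the data page it describes is issued — which mmap'd data files cannot guarantee, because the kernel may flush a dirty mapped page at any time, and msync() only bounds how *late* a page is written, never how early. A minimal sketch of the ordering rule, with made-up file names and record formats (this is not PostgreSQL source):

```python
# Hedged sketch of the "WAL before data" ordering discussed above.
# Not PostgreSQL source: file names and "record" contents are invented.
import os

def apply_change(wal_fd: int, data_fd: int,
                 wal_record: bytes, new_page: bytes) -> None:
    os.write(wal_fd, wal_record)  # 1. describe the change in the log
    os.fsync(wal_fd)              # 2. force the log entry to disk FIRST;
                                  #    only its latest write time matters
    os.write(data_fd, new_page)   # 3. now the kernel may flush the page
                                  #    whenever it likes -- the log entry
                                  #    already precedes it on disk

# demo on throwaway files
wal = os.open("wal.demo", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
dat = os.open("data.demo", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
apply_change(wal, dat, b"xlog: page 42, old->new\n", b"new page image\n")
os.close(wal)
os.close(dat)
print("WAL record fsync'd before data-page write")
```

Step 2 is the point of the thread: fsync constrains the *latest* moment the log hits disk, while a dirty mmap'd page has no constraint on the *earliest* moment it is written back, so msync cannot substitute for the fsync-then-write sequence.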
[ { "msg_contents": "> -----Original Message-----\n> From: Josh Berkus [mailto:josh@agliodbs.com]\n> Sent: Tuesday, June 25, 2002 10:49 AM\n> To: Bruce Momjian\n> Cc: Tom Lane; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Democracy and organisation : let's make a\n> \n> Bruce,\n> \n> > I think Oracle is our main competitor. We seem to get more people\n> > porting from Oracle than any other database, and our \n> feature set matches\n> > there's most closely.\n> \n> I disagree, and did as well when you were with Great Bridge. \n> No matter how \n> Postgres core functionality compares with Oracle, they have \n> nearly a decade \n> of building tools, accessories, and extra whiz-bang features \n> for their \n> product. Not to mention a serious reputation as the \n> \"ultimate database if \n> you can afford it.\" \n> \n> As long as we target Oracle as our \"competition\", we will \n> remain \"the database \n> to use if you can't afford Oracle, but to be replaced with \n> Oracle as soon as \n> you can.\" Heck, look at DB2, which is toe-to-toe with Oracle \n> for feature \n> set, but is only really sold to companies who use IBM's other \n> tools. We're \n> not in a position to challenge that reputation.\n> \n> On the other hand, we already outstrip MS SQL Server's \n> feature set, as well as \n> being more reliable, lower-maintainence, multi-platform, and \n> cheaper. \n> Frankly, the only thing that MS SQL has over us is \n> easy-but-unreliable GUI \n> admin tools (backup, user, and database management).\n> \n> Let's pick battles we can win. We'll beat Oracle eventually \n> -- but not in the \n> next few years.\n\nIf you want to aim high, aim at DB/2.\n;-)\n\nDB/2 is the best database on the market in terms of its features. It's\nhighly scalable, runs everywhere, and has an add-on for anything you can\nimagine. I absolutely love DB/2. 
I suspect a market share study will\nshow that DB/2 has been eating Oracle alive over the past couple of years.\n\nThere is a benefit to setting your sights high. Look at the features of\nthe really rich products. Take the ones that are truly possible to\nimplement in a feasible time frame and add them to the project goals.\nEventually, you can have every feature of the most advanced DBMS systems\nin that way.\n\nI do have a suggestion as to strategic thinking on how to improve\nPostgreSQL. Instead of adding a huge array of features on the to-do\nlist, attack the weak points of the current product. Here are the three\nmost important aspects of any DBMS system:\n1. Speed\n2. Reliability\n3. Security\n\nLook at places in the current product where these can be improved.\nThink about this for a minute --\nIf your product is faster, it will appeal to a large crowd for that\nreason alone.\nIf your product is more reliable, it will appeal to a large crowd for\nthat reason alone.\nIf your product is more secure, it will appeal to a large crowd for that\nreason alone.\nNone is less important than the others. If any of the above 3 features\nare seriously lacking, nobody will want to use the tool.\n\nBells and whistles are nice, but you still want the bicycle to go as\nfast as possible while not breaking down and keeping the rider safe.\n\nSo my suggestion for project direction would be as follows:\n2/3 effort on improving functionality of the core in the above three\nareas.\n1/3 effort on adding new features.\n\nNow, I have had very little to contribute myself, and therefore my\nsuggestions carry no weight. The opinions of the contributors are more\nimportant than someone from the outside looking in. But it is worth\nmulling over at least.\n\nThere is one feature that does seem to be missing from PostgreSQL that is\nfound in all the commercial systems that I have worked with (and so I\nwould suggest adding it to the 'new features' list). 
That is the\nability to perform stored procedures that return one or more row sets.\nThe PostgreSQL paradigm of a stored procedure seems to me to be really\nlimited to a function call. Perhaps it is already being worked on.\n\n\n", "msg_date": "Tue, 25 Jun 2002 11:28:25 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Democracy and organisation : let's make a" } ]
[ { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n> Do people want an advocacy article written, like \"How to choose a\n> database?\" I could do that.\n\nThat would be good, as would an updated \"postgres vs mysql\" article \nto point people towards. Or a \"postgres myths debunked\" page.\n\n> Basically, I am open to ideas. Would it help to fly me out to meet IT\n> leaders? More books/articles? What does it take? What do successful\n> companies and open source projects do that works?\n\nSince you asked, here are some ideas and thoughts I've been batting \naround:\n\n1. Start an advocacy mailing list, to help coordinate publicity, responses \nto mySQL FUD, ways to advertise, etc.\n\n2. Stop using the name \"postmaster\" as our daemon. Seriously. I've seen \nmany a person, some new to *nix and some not, take a look at ps -Af and \nsay \"what the heck is that?\" Whereas mysql uses \"mysqld\", cron uses \"crond\", \nssh uses \"sshd\", and apache uses \"httpd\", we (postgres) use \"postmaster.\" The \nname seems to imply something to do with email, and should be abandoned in \nfavor of postgresd or postgresqld or even pgsqld.\n\n3. Combine pg_hba.conf, pg_ident.conf, and postgresql.conf into a single \nfile, postgres.conf. Clean it up and simplify it. Have a command-line tool \nto make changes. Have a way to test out the changes, similar to \n\"apachectl configtest\"\n\n4. Fix the documentation. The interactive documentation on the website \nis particularly bad: try a search on \"sequence\", for example. The result \nis 65 matches, and each one a filename.\n\n5. The website needs lots of improvement, on layout, navigation, and \ncontent. mySQL actually has this one right.\n\n6. Moderate the lists better. There is a lot of traffic in general \nthat should be going to other lists. Keep all the high-volume, nitty-gritty \nstuff on hackers, away from everyday users looking for help.\n\n7. Stop underestimating mySQL. 
This is our competitor for the short-term \nat least, especially as both are open-source. Yes, we are better than \nmySQL on a technical level, but in all other areas they have us beat: \n\n* integration with other apps\n* mindshare\n* publicity\n* ease of install\n* ease of use\n* documentation\n* website navigation and appearance\n* coolness\n\nmySQL has the feel of a fun, open-source project. Postgres feels like \na stuffy, academic project. At least that's the impression I get from \nasking people. All mySQL has to do at this point is improve their \nproduct, by adding things such as sub-selects and transactions. A tall \norder, but they are well on their way. We need to tackle all the \nitems listed above. Not as easy, IMO, and we are not on our way.\n\n8. Stop overestimating Oracle. Postgres is not a blip on their radar \nyet. We will probably never catch up to them. Focus instead on the \nshortcomings compared to our real rival (see above). Oracle should \nbe emulated but not chased.\n\n9. Have an easily accessible \"todo\" list that not only itemizes coding \ntasks, but documentation tasks, advocacy tasks, etc. so anyone can \nget involved and make contributions, no matter how minor.\n\n10. Sign the source code (and other files) cryptographically. We are one of \nthe last open-source projects that do not do this. What's to stop someone \nfrom breaking into a mirror and replacing the tarball and md5 file? \nWhat if they did it on the main server? This is very easy to implement.\n\n11. Consider an official name change to simply Postgres. Yes, there are \nhistorical reasons for this, but everyone I know ends up abbreviating it \nto postgres eventually anyway, and postgreSQL is a mouthful. \n\n12. Offer something \"fun\": a naming contest for the elephant (I know, \nI know), a bug squashing contest with prizes, a short interactive \n\"find the best database for you\" quiz, etc.\n\n13. Solve the benchmarking problem. 
Find out what it takes to get us \nbenchmarking to the same standards as the commercial DBs. Find a neutral \nthird-party to compare Postgres and mySQL. Publicize our outstanding \nresults. Start a debate on slashdot about it. :) Put the ball in mySQL's \ncourt for once.\n\n14. Other things: Offer a bz2 download to save people time and $$. Put \na favicon.ico on the site. Put in a site map. Consider using postgres.org. \nPublicize every little change as if it were the best thing since sliced \nbread. Solicit more lists like this. Release more often, even if more minor: \nstick to beta deadlines strictly. Offer success stories. \n\n15. Don't shoot the messenger. Some of this is my opinions, some is based \non talking to \"everyday users\" and developers about Postgres.\n\n\nGreg Sabino Mullane greg@turnstep.com\nPGP Key: 0x14964AC8 200206251441\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niD8DBQE9GL2ovJuQZxSWSsgRAgKLAJ9zBJhw0SzDu0eXUhGSPuncGXGGdQCgua14\nIC2aWSjcSEHYxDU1hZXnZmA=\n=GV5l\n-----END PGP SIGNATURE-----\n\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 19:03:39 -0000", "msg_from": "\"Greg Sabino Mullane\" <greg@turnstep.com>", "msg_from_op": true, "msg_subject": "Postgres idea list" }, { "msg_contents": "On Tue, 2002-06-25 at 21:31, Neil Conway wrote:\n> On Tue, Jun 25, 2002 at 07:03:39PM -0000, Greg Sabino Mullane wrote:\n> > 3. Combine pg_hba.conf, pg_ident.conf, and postgresql.conf into a single \n> > file, postgres.conf.\n> \n> I don't see why this would be a win.\n\nWhat would be a win is an SQL like interface to editing pg_hba.conf and\npostgresql.conf. 
Once that was done PG_Admin could write a lovely\ninterface to manage them without requiring direct access to the files.\n\n\n\n\n", "msg_date": "25 Jun 2002 19:48:22 +0000", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "\nFolks,\n\n> Since you asked, here are some ideas and thoughts I've been batting \n> around:\n> \n> 1. Start an advocacy mailing list, to help coordinate publicity, responses \n> to mySQL FUD, ways to advertise, etc.\netc ....\n\nI hereby nominate Greg as PostgreSQL.org Marketing Director.\n\n-- \n-Josh Berkus\n\n\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 13:54:17 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "\"Greg Sabino Mullane\" <greg@turnstep.com> writes:\n> Since you asked, here are some ideas and thoughts I've been batting \n> around:\n\nSome of these strike me as good ideas, some not, but for the moment I\ndon't want to get dragged into debating them individually. The thought\nthat kept coming to me as I read your list is: who exactly is going to\n*do* all this stuff? I sure don't want to. Reflecting on it leads me\nto realize that our existing project leadership (core committee and\nother key people) is mostly technically-focused people. We have been\ndoing a good job of providing technical leadership, and a pretty\ngood job of providing project infrastructure, but anything to do with\nmarketing, promotion, or advocacy has been given the cold shoulder,\nI think.\n\nWe need a few volunteers with the time and inclination to do that kind\nof work.\n\nIt also occurs to me that discussing this on -hackers, which is a\ntechnically focused list, is itself somewhat wrongheaded. The best\nlist we have for it at the moment is -general, but I wonder whether we\nshouldn't create a list centered around project promotion and outreach\nconcerns.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2002 17:15:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list " }, { "msg_contents": "On Tue, Jun 25, 2002 at 07:03:39PM -0000, Greg Sabino Mullane wrote:\n> 3. Combine pg_hba.conf, pg_ident.conf, and postgresql.conf into a single \n> file, postgres.conf.\n\nI don't see why this would be a win.\n\n> Have a command-line tool to make changes.\n\nYou mean like vi(1) ? :-)\n\n> Have a way to test out the changes, similar to \"apachectl configtest\"\n\nNot sure about this -- I can see the importance of testing out\nconfiguration changes, but AFAICS \"pg_ctl configtest\" would be\nlittle more than a glorified syntax check. I think we need to rely on\nDBA's to ensure that the configuration changes they make are valid.\n\n> 4. Fix the documentation. The interactive documentation on the website \n> is particularly bad: try a search on \"sequence\", for example. The result \n> is 65 matches, and each one a filename.\n\nI've heard this from others as well.\n\n> 5. The website needs lots of improvement, on layout, navigation, and \n> content. mySQL actually has this one right.\n\nAgreed.\n\n> Release more often, even if more minor: \n\nWhy? I think the PostgreSQL release engineering process is good.\n\n> stick to beta deadlines strictly.\n\nWhy? I'd much prefer that we release code when we are (relatively)\nsure it is ready for production use, rather than shoving experimental\ncode out the door to meet an artificial and probably unrealistic\nrelease target.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n\n", "msg_date": "Tue, 25 Jun 2002 17:31:52 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "\nFolks,\n\n> What would be a win is an SQL like interface to editing pg_hba.conf and\n> postgresql.conf. Once that was done PG_Admin could write a lovely\n> interface to manage them without requiring direct access to the files.\n\nI am going to keep arguing against PG_Admin as the primary solution to any of \nour administration UI challenges. It's WINDOWS ONLY, darn it!\n\n-- \n-Josh Berkus\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 17:51:16 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "On Tue, 25 Jun 2002, Greg Sabino Mullane wrote:\n\n>\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n>\n> > Do people want an advocacy article written, like \"How to choose a\n> > database?\" I could do that.\n>\n> That would be good, as would an updated \"postgres vs mysql\" article\n> to point people towards. Or a \"postgres myths debunked\" page.\n>\n> > Basically, I am open to ideas. Would it help to fly me out to meet IT\n> > leaders? More books/articles? What does it take? What do successful\n> > companies and open source projects do that works?\n>\n> Since you asked, here are some ideas and thoughts I've been batting\n> around:\n>\n> 1. Start an advocacy mailing list, to help coordinate publicity, responses\n> to mySQL FUD, ways to advertise, etc.\n\npgsql-advocacy@postgresql.org has been around for >1year now ...\n\n> 4. Fix the documentation.
The interactive documentation on the website\n> is particularly bad: try a search on \"sequence\", for example. The result\n> is 65 matches, and each one a filename.\n\nare you volunteering your time for this?\n\n> 5. The website needs lots of improvement, on layout, navigation, and\n> content. mySQL actually has this one right.\n\nalready being worked on by a group of programmers and web designers ...\n\n> 6. Moderate the lists better. There is a lot of traffic in general that\n> should be going to other lists. Keep all the high-volume, nitty-gritty\n> stuff on hackers, away from everyday users looking for help.\n\nagain, are you volunteering your time for this?\n\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 21:56:43 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "On Tue, 25 Jun 2002, Tom Lane wrote:\n\n> \"Greg Sabino Mullane\" <greg@turnstep.com> writes:\n> > Since you asked, here are some ideas and thoughts I've been batting\n> > around:\n>\n> Some of these strike me as good ideas, some not, but for the moment I\n> don't want to get dragged into debating them individually. The thought\n> that kept coming to me as I read your list is: who exactly is going to\n> *do* all this stuff? I sure don't want to. Reflecting on it leads me\n> to realize that our existing project leadership (core committee and\n> other key people) is mostly technically-focused people. We have been\n> doing a good job of providing technical leadership, and a pretty\n> good job of providing project infrastructure, but anything to do with\n> marketing, promotion, or advocacy has been given the cold shoulder,\n> I think.\n>\n> We need a few volunteers with the time and inclination to do that kind\n> of work.\n>\n> It also occurs to me that discussing this on -hackers, which is a\n> technically focused list, is itself somewhat wrongheaded. The best\n> list we have for it at the moment is -general, but I wonder whether we\n> shouldn't create a list centered around project promotion and outreach\n> concerns.\n\nYou mean a list like ... oh, I don't know ...\npgsql-advocacy@postgresql.org? :)\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 21:57:49 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list " }, { "msg_contents": "On Tue, 25 Jun 2002 21:56:43 -0300 (ADT)\n\"Marc G. Fournier\" <scrappy@hub.org> wrote:\n> On Tue, 25 Jun 2002, Greg Sabino Mullane wrote:\n> > 1. Start an advocacy mailing list, to help coordinate publicity, responses\n> > to mySQL FUD, ways to advertise, etc.\n> \n> pgsql-advocacy@postgresql.org has been around for >1year now ...\n\nNot on archives.postgresql.org though, nor is it listed in the\nuser-oriented mailing lists at\n\nhttp://www.ca.postgresql.org/users-lounge/index.html\n\nIt's also not listed among the developer-oriented lists at\n\nhttp://developer.postgresql.org/maillist.php\n\neither (not to mention ftp.postgresql.org). While it may technically\nexist, it is well hidden.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n\n", "msg_date": "Tue, 25 Jun 2002 21:51:08 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "On Tue, 25 Jun 2002, Josh Berkus wrote:\n\n>\n> Folks,\n>\n> > What would be a win is an SQL like interface to editing pg_hba.conf and\n> > postgresql.conf. Once that was done PG_Admin could write a lovely\n> > interface to manage them without requiring direct access to the files.\n>\n> I am going to keep arguing against PG_Admin as the primary solution to any of\n> our administration UI challenges. It's WINDOWS ONLY, darn it!\n\nAgreed about specifically focusing on PGAdmin, *but*, there are other\ninterfaces that could really make use of such a feature ... PHPPgAdmin\nbeing one ...\n\n... but, the first argument against this is what happens if/when someone\nputs in an entry in a 'pg_hba' table that blocks everyone from having\naccess? Or similar changes ...\n\nIf I recall correctly, the main argument against moving pg_hba (as an\nexample) is that you would have to move the 'access restrictions' inside\nthe backend (postgres) itself, instead of the front end (postmaster),\ncreating a high probability of a DDoS attack being quite effective ...\n\n\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 22:53:45 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "\nWell hidden, but so far 86 have found it and subscribed to it *grin*\n\nBut, you are right, should be advertised better ... I have some updates to\ndo to archives over the next day or two, in order to add in -patches also\n...\n\nAs for your comments about 'filtering -general', there is nothing stopping\nanyone from doing what I'm doing with this ... CC'ng -advocacy and\nsetting a Reply-To (wonder if that holds through majordomo?) over to\n-advocacy where this sort of stuff belongs ... :)\n\nVince, we can get -advocacy listed on the web site? There has been no\ntraffic over there until now, but there are ppl subscribed to it ...\n\n\n\nOn Tue, 25 Jun 2002, Neil Conway wrote:\n\n> On Tue, 25 Jun 2002 21:56:43 -0300 (ADT)\n> \"Marc G. Fournier\" <scrappy@hub.org> wrote:\n> > On Tue, 25 Jun 2002, Greg Sabino Mullane wrote:\n> > > 1.
Start an advocacy mailing list, to help coordinate publicity, responses\n> > > to mySQL FUD, ways to advertise, etc.\n> >\n> > pgsql-advocacy@postgresql.org has been around for >1year now ...\n>\n> Not on archives.postgresql.org though, nor is it listed in the\n> user-oriented mailing lists at\n>\n> http://www.ca.postgresql.org/users-lounge/index.html\n>\n> It's also not listed among the developer-oriented lists at\n>\n> http://developer.postgresql.org/maillist.php\n>\n> either (not to mention ftp.postgresql.org). While it may technically\n> exist, it is well hidden.\n>\n> Cheers,\n>\n> Neil\n>\n> --\n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n>\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 22:58:07 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres idea list" }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> On Tue, 25 Jun 2002, Tom Lane wrote:\n>> It also occurs to me that discussing this on -hackers, which is a\n>> technically focused list, is itself somewhat wrongheaded. The best\n>> list we have for it at the moment is -general, but I wonder whether we\n>> shouldn't create a list centered around project promotion and outreach\n>> concerns.\n\n> You mean a list like ... oh, I don't know ...\n> pgsql-advocacy@postgresql.org? :)\n\nYou know, I seemed to remember that we had such a list, but I looked at \nhttp://archives.postgresql.org/\nand saw no archive for it, so I figured we didn't.\n\nIf you're about to go out and create it, may I suggest that the name\n-advocacy might not be the best thing? -advocacy lists seem (to me\nanyway) to be more often flamebait arenas than useful discussion areas.\nPerhaps pgsql-promotion would be a good name that'd avoid the aura of\nflamewars. Or maybe that's just my own perception not anyone else's.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2002 09:10:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list " }, { "msg_contents": "Hi Tom,\n\nTom Lane wrote:\n> \n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n<snip>\n> > You mean a list like ... oh, I don't know ...\n> > pgsql-advocacy@postgresql.org? :)\n> \n> You know, I seemed to remember that we had such a list, but I looked at\n> http://archives.postgresql.org/\n> and saw no archive for it, so I figured we didn't.\n> \n> If you're about to go out and create it, may I suggest that the name\n> -advocacy might not be the best thing? -advocacy lists seem (to me\n> anyway) to be more often flamebait arenas than useful discussion areas.\n> Perhaps pgsql-promotion would be a good name that'd avoid the aura of\n> flamewars. Or maybe that's just my own perception not anyone else's.\n\nThere is already a pgsql-advocacy list (as was pointed out recently),\nbut it's unused.\n\nBorrow a leaf from the OpenOffice.org project, how about a\npgsql-marketing list?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n\n", "msg_date": "Thu, 27 Jun 2002 00:09:09 +0930", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "On Tue, 25 Jun 2002, Marc G. Fournier wrote:\n\n>\n> Well hidden, but so far 86 have found it and subscribed to it *grin*\n\nIt's on the subscription form.\n\n[snip]\n\n> Vince, we can get -advocacy listed on the web site? There has been no\n> traffic over there until now, but there are ppl subscribed to it ...\n\nall done.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 17:39:51 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres idea list" }, { "msg_contents": "> > Vince, we can get -advocacy listed on the web site? There has been no\n> > traffic over there until now, but there are ppl subscribed to it ...\n> \n> all done.\n\nAny chance of getting a pgsql-patches link on archives.postgresql.org? \nI know the archives are created (I use them) but there is no obvious\nlink.\n\nSecondly, could the links that do exist be ordered alphabetically?\n\nThanks,\n\tRod\n\n\n\n", "msg_date": "27 Jun 2002 00:05:16 +0000", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "On 27 Jun 2002, Rod Taylor wrote:\n\n> > > Vince, we can get -advocacy listed on the web site? There has been no\n> > > traffic over there until now, but there are ppl subscribed to it ...\n> >\n> > all done.\n>\n> Any chance of getting a pgsql-patches link on archives.postgresql.org?\n> I know the archives are created (I use them) but there is no obvious\n> link.\n>\n> Secondly, could the links that do exist be ordered alphabetically?\n\nI have no idea who does what on archives.
I just yell at Marc if\nsomething's broke.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 20:29:48 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> Any chance of getting a pgsql-patches link on archives.postgresql.org? \n> I know the archives are created (I use them) but there is no obvious\n> link.\n\n> Secondly, could the links that do exist be ordered alphabetically?\n\nI'm for that too. Every time I go to the archives page, I have to look\ncarefully to find the list I want.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2002 22:06:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list " }, { "msg_contents": "\n-patches added ... I've gotta redo that page, as it was just a\n'quick-n-dirty' when I did it ...\n\nOn 27 Jun 2002, Rod Taylor wrote:\n\n> > > Vince, we can get -advocacy listed on the web site? There has been no\n> > > traffic over there until now, but there are ppl subscribed to it ...\n> >\n> > all done.\n>\n> Any chance of getting a pgsql-patches link on archives.postgresql.org?\n> I know the archives are created (I use them) but there is no obvious\n> link.\n>\n> Secondly, could the links that do exist be ordered alphabetically?\n>\n> Thanks,\n> \tRod\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n>\n>\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 23:06:51 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "\nwill do it tonight :)\n\nOn Wed, 26 Jun 2002, Tom Lane wrote:\n\n> Rod Taylor <rbt@zort.ca> writes:\n> > Any chance of getting a pgsql-patches link on archives.postgresql.org?\n> > I know the archives are created (I use them) but there is no obvious\n> > link.\n>\n> > Secondly, could the links that do exist be ordered alphabetically?\n>\n> I'm for that too. Every time I go to the archives page, I have to look\n> carefully to find the list I want.\n>\n> \t\t\tregards, tom lane\n>\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 23:07:09 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list " }, { "msg_contents": "\nhttp://archives.postgresql.org/ ... better?\n\nOn Wed, 26 Jun 2002, Marc G. Fournier wrote:\n\n>\n> will do it tonight :)\n>\n> On Wed, 26 Jun 2002, Tom Lane wrote:\n>\n> > Rod Taylor <rbt@zort.ca> writes:\n> > > Any chance of getting a pgsql-patches link on archives.postgresql.org?\n> > > I know the archives are created (I use them) but there is no obvious\n> > > link.\n> >\n> > > Secondly, could the links that do exist be ordered alphabetically?\n> >\n> > I'm for that too. Every time I go to the archives page, I have to look\n> > carefully to find the list I want.\n> >\n> > \t\t\tregards, tom lane\n> >\n>\n>\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 01:16:41 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list " }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> http://archives.postgresql.org/ ... better?\n\nYup, although I'd suggest making the classification line up with\nthe one on the main website --- docs and cygwin are listed as\ndeveloper lists there.\n\nAlso, someone suggested listing the by-month indexes back-to-front\n(most recent month first), which seems like a great idea if not\ndifficult.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Jun 2002 00:20:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list " }, { "msg_contents": "On Thu, 27 Jun 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > http://archives.postgresql.org/ ...
better?\n>\n> Yup, although I'd suggest making the classification line up with\n> the one on the main website --- docs and cygwin are listed as\n> developer lists there.\n>\n> Also, someone suggested listing the by-month indexes back-to-front\n> (most recent month first), which seems like a great idea if not\n> difficult.\n\nBetter?\n\nhttp://archives.postgresql.org/pgsql-jdbc\n\nThe rest will 'fall in line' once there is something for mhonarc to work\non again ...\n\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 09:28:25 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list " }, { "msg_contents": "> > Also, someone suggested listing the by-month indexes back-to-front\n> > (most recent month first), which seems like a great idea if not\n> > difficult.\n> \n> Better?\n> \n> http://archives.postgresql.org/pgsql-jdbc\n> \n> The rest will 'fall in line' once there is something for mhonarc to work\n> on again ...\n\nI don't think I've been so happy to see a webpage.\n\nMuch better.\n\nCurious how there is a 'search the archives' link going to FTS when\nthere is a form at the top of the page using another mechanism.\n\n\n\n", "msg_date": "27 Jun 2002 08:34:22 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "On 27 Jun 2002, Rod Taylor wrote:\n\n> > > Also, someone suggested listing the by-month indexes back-to-front\n> > > (most recent month first), which seems like a great idea if not\n> > > difficult.\n> >\n> > Better?\n> >\n> > http://archives.postgresql.org/pgsql-jdbc\n> >\n> > The rest will 'fall in line' once there is something for mhonarc to work\n> > on again ...\n>\n> I don't think I've been so happy to see a webpage.\n>\n> Much better.\n>\n> Curious how there is a 'search the archives' link going to FTS when\n> there is a form at the top of the page using another mechanism.\n\ntwo different methods of searching ... those pages still need one helluva\nlot of cleanups though, as I should re-word that 'Search the archives' as\nsomething more like 'Alternative methods of searching' or something like\nthat, and point to FTS and Google ...\n\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 10:00:54 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" } ]
[ { "msg_contents": "I have noticed that unlike indexes/check constrains, \"ALTER TABLE ADD\nCONSTRAINT <c_name> FOREIGN KEY ...\" statement does NOT prevent a user from\nre-creating an existing constraint more than once. Following this, a pg_dump\non the table showed multiple entries of the foreign key constraint/trigger\ndefinitions.\n\nMy concerns are:\n\nIf it ends up creating multiple triggers (doing the same task), do all these\ntriggers\nget executed for each DML operation ?.\nWill this cause a performance hit, if so is there a work-around to\nremove duplicate entries from the sys tables ?\n\n-- Rao Kumar\n\nExample: Running Postgres 7.1.3\n========\ntest=# create table emp (emp_id integer NOT NULL PRIMARY KEY, emp_name\nvarchar(20),dept_id integer);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'emp_pkey' for\ntable 'emp'\nCREATE\ntest=# create table dept (dept_id integer NOT NULL PRIMARY KEY, dept_name\nvarchar(20));\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'dept_pkey' for\ntable 'dept'\nCREATE\ntest=# alter table emp add constraint fk_emp_dept_id foreign key (dept_id)\nreferences dept (dept_id);\nNOTICE: ALTER TABLE ... ADD CONSTRAINT will create implicit trigger(s) for\nFOREIGN KEY check(s)\nCREATE\n--- TRY CREATING THE KEY AGAIN .........\ntest=# alter table emp add constraint fk_emp_dept_id foreign key (dept_id)\nreferences dept (dept_id);\nNOTICE: ALTER TABLE ... 
ADD CONSTRAINT will create implicit trigger(s) for\nFOREIGN KEY check(s)\nCREATE\ntest=#\npg_dump of \"emp\" table.\n======================\n-- Selected TOC Entries:\n--\n\\connect - raokumar\n--\n-- TOC Entry ID 2 (OID 53485)\n--\n-- Name: emp Type: TABLE Owner: raokumar\n--\n\nCREATE TABLE \"emp\" (\n \"emp_id\" integer NOT NULL,\n \"emp_name\" character varying(20),\n \"dept_id\" integer,\n Constraint \"emp_pkey\" Primary Key (\"emp_id\")\n);\n\n--\n-- Data for TOC Entry ID 3 (OID 53485)\n--\n-- Name: emp Type: TABLE DATA Owner: raokumar\n--\n\n\nCOPY \"emp\" FROM stdin;\n\\.\n--\n-- TOC Entry ID 5 (OID 53515)\n--\n-- Name: \"RI_ConstraintTrigger_53514\" Type: TRIGGER Owner: raokumar\n--\n\nCREATE CONSTRAINT TRIGGER \"fk_emp_dept_id\" AFTER INSERT OR UPDATE ON \"emp\"\nFROM \"dept\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE\nPROCEDURE \"RI_FKey_check_ins\" ('fk_emp_dept_id', 'emp', 'dept',\n'UNSPECIFIED', 'dept_id', 'dept_id');\n\n--\n-- TOC Entry ID 4 (OID 53521)\n--\n-- Name: \"RI_ConstraintTrigger_53520\" Type: TRIGGER Owner: raokumar\n--\n\nCREATE CONSTRAINT TRIGGER \"fk_emp_dept_id\" AFTER INSERT OR UPDATE ON \"emp\"\nFROM \"dept\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE\nPROCEDURE \"RI_FKey_check_ins\" ('fk_emp_dept_id', 'emp', 'dept',\n'UNSPECIFIED', 'dept_id', 'dept_id');\n\n\n\n\n\n\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 15:08:13 -0400", "msg_from": "\"Rao Kumar\" <raokumar@netwolves.com>", "msg_from_op": true, "msg_subject": "Foreign Key/ALTER TABLE Issue" }, { "msg_contents": "On Tue, 25 Jun 2002, Rao Kumar wrote:\n\n> I have noticed that unlike indexes/check constrains, \"ALTER TABLE ADD\n> CONSTRAINT <c_name> FOREIGN KEY ...\" statement does NOT prevent a user from\n> re-creating an existing constraint more than once. Following this, a pg_dump\n> on the table showed multiple entries of the foreign key constraint/trigger\n> definitions.\n\nCorrect. 
The assumption is that the user knows what he or she is doing\n(and thus that the constraints are different in some way). We might want\nto change this at some point, but this isn't only foreign keys (you can\ndo the same with unique indexes or check constraints afaik) and should\nprobably be dealt with as such.\n\n> My concerns are:\n>\n> If it ends up creating multiple triggers (doing the same task), do all these\n> triggers get executed for each DML operation ?.\n\nYes.\n\n> Will this cause a performance hit, if so is there a work-around to\n> remove duplicate entries from the sys tables ?\n\nYou can remove one set of the triggers from pg_trigger which is pretty\nmuch the only way right now to drop a foreign key constraint.\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 14:19:05 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Foreign Key/ALTER TABLE Issue" } ]
[ { "msg_contents": "\nMorning ...\n\n\tFiguring that I'd try it out, I uncommented the virtual_host entry\nin postmaster.conf and set it to an IP of 64.49.215.5 (the \"base\" machine)\n...\n\nvirtual_host = '64.49.215.5'\n\n\tI then restarted the daemon as:\n\n/usr/local/pgsql721/bin/postmaster -B 10240 -N 512 -i -p 5432 -D/v1/pgsql/5432 -S (postgres)\n\n\tbut if I try and connect to it at a different IP, it allows me to:\n\n%psql -p 5432 -h 64.49.215.6 template1\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\ntemplate1=# \\q\n%psql -p 5432 -h 64.49.215.7 template1\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\ntemplate1=# \\q\n\n\tAm I misunderstandign that feature, or is there a problem with it?\n\nThanks ...\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 23:39:00 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Using virtual_host setting in postmaster.conf file ... " } ]
[ { "msg_contents": "Hi, my name is Emil, I am a software engineer at NEC, Japan. This is not in\nany way related to my work, but I am that interested in cygwin that I\ninstalled it on my own PC and right now I am enjoying it. Your work is a\ngood one really, I appreciate what you guys are doing.\n\nI am just wondering, because I have installed PostgreSQL on my Cygwin\n(Windows NT) and this is my first time to attempt a dive into the database\nrealm. I am not sure of this, but there should be a way to access a database\nthrough say, a C++ program. Right now my Cygwin-PostgreSQL database is\nworking fine, I was able to create databases and tables, view them, modify\nthe contents, etc. But what I really want to do right now is to create a\nsimple program that would successfully connect to my PostgreSQL databases (i\nhave to specify the database i assume). Just a 5-liner maybe, just to test\nthe connection.\n\nI tried and looked for some sample codes in the internet but I cant quite\nfind a good one. I hope you guys can help me.\n\nMy PostgreSQL is running okay, i just dont know what functions should be\ncalled, what header files to include, etc.\n\nThank you for your time guys, and i hope to hear from you soon.\n\nPS\n Maybe in the future, when I get to know a lot about Cygwin, I would\ncontact you guys and join your support group probably... :-)\n\n\nSincerely,\nEmilio Uy III\n\n\n@@@@@@@@@@@@@@@@@@@\n          Emilio Uy III\n        エミリオ ウイ III    \n\n     NEC通信システム九州(株)\n <メイル>  emilio@qncos.nec.co.jp \n          e.uy@pgs.com.ph\n          emil_uy_iii@docomo.ne.jp\n <携帯電話> 090-24510266 \n <家電話>   092-8839411\n@@@@@@@@@@@@@@@@@@@\n\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 12:22:54 +0900", "msg_from": "\"Emilio Uy III\" <emilio@qncos.nec.co.jp>", "msg_from_op": true, "msg_subject": "database access via c++ program..." 
}, { "msg_contents": "On Wed, Jun 26, 2002 at 12:22:54PM +0900, Emilio Uy III wrote:\n> \n> I am just wondering, because I have installed PostgreSQL on my Cygwin\n> (Windows NT) and this is my first time to attempt a dive into the database\n> realm. I am not sure of this, but there should be a way to access a database\n> through say, a C++ program. Right now my Cygwin-PostgreSQL database is\n> working fine, I was able to create databases and tables, view them, modify\n> the contents, etc. But what I really want to do right now is to create a\n> simple program that would successfully connect to my PostgreSQL databases (i\n> have to specify the database i assume). Just a 5-liner maybe, just to test\n> the connection.\n \nIf you're using gcc or a similarly good compiler (Visual C++ will NOT work),\ntry libpqxx:\n\n\thttp://members.ams.chello.nl/j.vermeulen31/proj-libpqxx.html\n\nI don't know how well it installs under Cygwin, but a simple test program\nusing libpqxx as its C++ interface to PostgreSQL can be something like:\n\n\n\t#include <iostream>\n\n\t#include <pqxx/connection.h>\n\t#include <pqxx/transaction.h>\n\t#include <pqxx/result.h>\n\n\tusing namespace std;\n\tusing namespace pqxx;\n\n\tint main()\n\t{\n\t\ttry\n\t\t{\n\t\t\tConnection C(\"dbname=mydatabase\");\n\t\t\tTransaction T(C, \"T\");\n\t\t\tResult R = T.Exec(\"SELECT * FROM pg_tables\");\n\n\t\t\tfor (Result::const_iterator c = R.begin();\n\t\t\t\tc != R.end();\n\t\t\t\t++c)\n\t\t\t{\n\t\t\t\tcout << c[0].c_str() << endl;\n\t\t\t}\n\n\t\t\tT.Commit();\n\t\t}\n\t\tcatch (const exception &e)\n\t\t{\n\t\t\tcerr << e.what() << endl;\n\t\t\treturn 1;\n\t\t}\n\n\t\treturn 0;\n\t}\n\nHTH,\n\nJeroen\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 16:42:26 +0200", "msg_from": "jtv <jtv@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: database access via c++ program..." } ]
[ { "msg_contents": "Who originally did the TPC-C benchmarks? Is the source available for them?\n\nJonah H. Harris, Chairman/CEO\nNightStar Corporation\n\"One company, one world, one BIG difference!\"\n\n\n\n", "msg_date": "Tue, 25 Jun 2002 21:36:30 -0600", "msg_from": "\"Jonah H. Harris\" <jharris@nightstarcorporation.com>", "msg_from_op": true, "msg_subject": "TPC-C Benchmarks" }, { "msg_contents": "\"Jonah H. Harris\" wrote:\n> \n> Who originally did the TPC-C benchmarks? Is the source available for them?\n\nGreat Bridge once ran some sort of (what they thought it would be) TPC-C\nbenchmark. They used the proprietary Benchmark Factory software for\ndoing so.\n\nWhile working there I had some time to play around with that stuff. This\nbenchmark suite uses ODBC drivers to access the database remotely from\none or more Windows clients. While running queries similar to what a\ncorrect TPC-C implementation would do, the given implementation of\nBenchmark Factory is far from the specifications. It doesn't implement a\nsystem under test and doesn't use the specified thinking- and keying\ntimes and does not bla, bla, bla ... in short, forget it.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Wed, 26 Jun 2002 09:09:45 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: TPC-C Benchmarks" } ]
[ { "msg_contents": "This mail was sent to pgadmin-hackers list by Mark Radulovich \n<radulovich@yahoo.com>. It is quite interesting:\n\n*************************************************************************************\nI've been following these mailing lists for over two years, and I guess now\n is the time to chime in. I agree with Tom that an organized effort is\n necessary. As such, I'd recommend the following:\n\n1 - Revamp the website. It's not bad, but it should be better designed to\nhighlight things for new users as well as all of the documentation that is on\nit (I usually go to google instead of clicking around the website). I think\n the basic site should have four key sections - Application Developers,\n Database Admins, PG Core Developers, News & Downloads.\n\n2 - Get a list of people who can help with benchmarking efforts. This should\n be for magazines/websites that want to benchmark PG against the competition,\n as well as benchmarking PG on various hardware with various options. This\n could even start out as a simple \"Benchmarking PG FAQ\"\n\n3 - Revamp the \"Developer's Corner\". I'm a web developer, not a PG developer,\nbut I still went here looking for info on building apps in Java, PHP, Perl,\n etc that need to connect to PG on the backend. This is probably a simple\n rename, but application developers need a more prominent area.\n\n4 - Reach out and talk to authors & developers. We have a great database here\n - let's tell the world. This can be simple - identify the major magazines &\n web sites, rank order them by relevant audience. Then, make sure we contact\n someone at each site once a month, and that they get press releases via\n email. (email is essentially free, so why not send them out to all the\n magazines/web sites?)\n\n5 - Show off PGAdmin!!! You'd think it was just an afterthought when looking\naround the web site. We should promote that as a great tool to manage PG, so\nthat MS users can get the courage to try it out. 
We can't market it like MS\n can (unless someone around here has $40 billion lying around), but we can\n sure make PGAdmin more prominent on the site.\n\n6 - Improve the Windows port. I am convinced that mySQL is popular because a\nwindows user can download Apache, PHP, and mySQL onto his machine and learn\n how it works. When he's ready, he can move to *nix. PG doesn't have that\n advantage (no newbie is going to mess with cygwin setup on his Windows 98\n machine). Also, just because Windows is not an optimal database platform\n doesn't mean we shouldn't serve it better - a lot of people (myself\n included) cut their teeth on Windows computers, simply because they cannot\n afford the time or money to learn another OS just to be able to use a\n database.\n\n7 - A simple thing, really. Can someone change the order of the months on the\nmailing list archive home page? Scrolling down for 66 months, just to click\n on the \"by date\" or \"by thread\" link for the current month just bugs me.\n Whether this is possible or not, I don't know - I just wanted to comment\n about it because I'm sure there are others with the same complaint.\n\n\nAnyway, these are just a couple of ideas I have. I have used PG since 7.0,\n and have been incredibly happy with it. As for any competition with MySQL,\n so what? Let's learn from what they do better than us, and use that to\n increase our visibility.\n\n\nOn a side note, I'd like to thank *all* of the people that have contributed\n to PG. I started out in the open source database world with MySQL, but have\n grown to love the reliability of PG. For the last several years, I have been\n responsible for several MS SQL Server 2000 (and 7.0) servers. They have an\n easy to use database, in that Enterprise Manager is almost as simple as\n Access (no flames, please!). They also market the heck out of it. I never\n knew that PG would ever be as easy to use - until I used PGAdmin. I can only\n say one thing - WOW! 
(although I still use the command line - old habits die\n hard...) Anyway, thanks to all of you for allowing me to play (and work!)\n with such a great database.\n\nRegards,\nMark Radulovich\n\nPS - I'm willing to donate time to the website and the other items listed\nabove.\n\n======================\nTom Lane wrote:\n\nJosh Berkus <josh@agliodbs.com> writes:\nFrankly, my feeling is, as a \"geek-to-geek\" product, PostgreSQL is already\nadequately marketed through our huge network of DBA users and code\ncontributors.\n\nWell, mumble ... it seems to me that we are definitely suffering from\na \"buzz gap\" (cf missile gap, Dr Strangelove, etc) compared to MySQL.\nThat doesn't bother me in itself, but the long-term implications are\nscary. If MySQL manages to attract a larger development community as\na consequence of more usage or better marketing, then eventually they\nwill be ahead of us on features and every other measure that counts.\nOnce we're number two with no prayer of catching up, how long will our\nproject remain viable? So, no matter how silly you might think\n\"MySQL is better\" is today, you've got to consider the prospect that\nit will become a self-fulfilling prophecy.\n\nSo far I have not worried about that scenario too much, because Monty\nhas always treated the MySQL sources as his personal preserve; if he\nhadn't written it or closely reviewed it, it didn't get in, and if it\ndidn't hew closely to his opinion of what's important, it didn't get in.\nBut I get the impression that he's loosened up of late. If MySQL stops\nbeing limited by what one guy can do or review, their rate of progress\ncould improve dramatically.\n\nIn short: we could use an organized marketing effort. I really\nfeel the lack of Great Bridge these days; there isn't anyone with\ncomparable willingness to expend marketing talent and dollars on\npromoting Postgres as such. Not sure what to do about it. 
We've\nsort of dismissed Jean-Michel's comments (and those of others in\nthe past) with \"sure, step right up and do the marketing\" responses.\nBut the truth of the matter is that a few amateurs with no budget\nwon't make much of an impression. We really need some professionals\nwith actual dollars to spend, and I don't know where to find 'em.\n\n regards, tom lane\n\n\n__________________________________________________\nDo You Yahoo!?\nYahoo! - Official partner of 2002 FIFA World Cup\nhttp://fifaworldcup.yahoo.com\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 11:55:57 +0200", "msg_from": "Mark Radulovich <radulovich@yahoo.com>(by way of Jean-Michel POURE\n\t<jm.poure@freesurf.fr>)", "msg_from_op": true, "msg_subject": "Marketing PostgreSQL" }, { "msg_contents": "I guess the website is really good. The only thing I'd do is to add a \nsection listing the core features of PostgreSQL - I think this could be \nan important point.\n\nIn my opinion MySQL is not a competitor and we should not benchmark \nPostgreSQL and compare it with MySQL. Those features which are really \nimportant are not supported and so it is just not possible to make a \nserious comparison unless you want to benchmark databases containing 300 \nrecords or so ...\nSpeed is not the only thing and it can lead to false results. Let's \nthink of an example:\nLet's think of a table containing 300 records and you want to:\n\nSELECT cosh(x) FROM y HAVING cosh(x) > z;\n\nHow can anybody implement cosh for MySQL??? In this case the benchmark \ncannot be done - in the case of MySQL the problem has to be solved on an \napplication level. These are the REAL ADVANTAGES of PostgreSQL and this is \nthe reason why people are using it. I think it is not worth talking about \nsmall databases and simple queries. 
We should focus on stability and \nextensibility and not on \"SELECT 'micky mouse' FROM smalltable\".\n\nPostgreSQL is the most advanced database on earth and that counts.\nThese days we are negotiating with a potential customer who wants to \nsubstitute Oracle on AIX for PostgreSQL on AIX because of costs and \nextensibility. We did NOT choose MySQL because he has the impression \nthat MySQL people focus on speed rather than on reliability and \nextensibility. I think that shows what things are all about.\n\nI am not happy about the Windows port - maybe it will cause a lot of \ntroubles in real applications due to problems related to Windows.\n\nLet's do enterprise computing and don't let us build a database for \nminor Web databases.\n\n Hans\n\n\nMark Radulovich (by way of Jean-Michel POURE ) wrote:\n\n>This mail was sent to pgadmin-hackers list by Mark Radulovich \n><radulovich@yahoo.com>. It is quite interesting:\n>\n>*************************************************************************************\n>I've been following these mailing lists for over two years, and I guess now\n> is the time to chime in. I agree with Tom that an organized effort is\n> necessary. As such, I'd recommend the following:\n>\n>1 - Revamp the website. It's not bad, but it should be better designed to\n>highlight things for new users as well as all of the documentation that is on\n>it (I usually go to google instead of clicking around the website). I think\n> the basic site should have four key sections - Application Developers,\n> Database Admins, PG Core Developers, News & Downloads.\n>\n>2 - Get a list of people who can help with benchmarking efforts. This should\n> be for magazines/websites that want to benchmark PG against the competition,\n> as well as benchmarking PG on various hardware with various options. This\n> could even start out as a simple \"Benchmarking PG FAQ\"\n>\n>3 - Revamp the \"Developer's Corner\". 
I'm a web developer, not a PG developer,\n>but I still went here looking for info on building apps in Java, PHP, Perl,\n> etc that need to connect to PG on the backend. This is probably a simple\n> rename, but application developers need a more prominent area.\n>\n>4 - Reach out and talk to authors & developers. We have a great database here\n> - let's tell the world. This can be simple - identify the major magazines &\n> web sites, rank order them by relevant audience. Then, make sure we contact\n> someone at each site once a month, and that they get press releases via\n> email. (email is essentially free, so why not send them out to all the\n> magazines/web sites?)\n>\n>5 - Show off PGAdmin!!! You'd think it was just an afterthought when looking\n>around the web site. We should promote that as a great tool to manage PG, so\n>that MS users can get the courage to try it out. We can't market it like MS\n> can (unless someone around here dhas $40 billion lying around), but we can\n> sure make PGAdmin more prominent on the site.\n>\n>6 - Improve the Windows port. I am convinced that mySQL is popular because a\n>windows user can download Apache, PHP, and mySQL onto his machine and learn\n> how it works. When he's ready, he can move to *nix. PG doesn't have that\n> advantage (no newbie is going to mess with cygwin setup on his Windows 98\n> machine). Also, just because Windows is not an optimal database platform\n> doesn't mean we shouldn't serve it better - a lot of people (myself\n> included) cut their teeth on Windows computers, simply because they cannot\n> afford the time or money to learn another OS just to be able to use a\n> database.\n>\n>7 - A simple thing, really. Can someone change the order of the months on the\n>mailing list archive home page? 
Scrolling down for 66 months, just to click\n> on the \"by date\" or \"by thread\" link for the current month just bugs me.\n> Whether this is possible or not, I don't know - I just wanted to comment\n> about it because I'm sure there are others with the same complaint.\n>\n>\n>Anyway, these are just a couple of ideas I have. I have used PG since 7.0,\n> and have been incredibly happy with it. As for any competition with MySQL,\n> so what? Let's learn from what they do better than us, and use that to\n> increase our visibility.\n>\n> \n>\n\n-- \nCybertec Geschwinde &. Schoenig\nLudo-Hartmannplatz 1/14; A-1160 Wien\nTel.: +43/1/913 68 09 oder +43/664/233 90 75\nURL: www.postgresql.at, www.cybertec.at, www.python.co.at, www.openldap.at\n\n\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 17:23:19 +0200", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <postgres@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Marketing PostgreSQL" } ]
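A note on the cosh example in this thread: the point being made is that PostgreSQL can evaluate such a function inside the database (via user-defined functions), whereas a database without that extensibility pushes the computation into the application layer. That application-side fallback is just the hyperbolic cosine, cosh(x) = (e^x + e^-x)/2, applied as a filter — sketched here in Python for illustration only, with made-up rows standing in for the table y:

```python
import math

# Stand-in for the rows of table y from the thread's query:
#   SELECT cosh(x) FROM y HAVING cosh(x) > z;
rows = [{"x": 0.0}, {"x": 1.0}, {"x": 2.5}]
z = 2.0

# Application-level equivalent: compute cosh in the client and filter there.
# cosh(x) = (e**x + e**-x) / 2
selected = [math.cosh(r["x"]) for r in rows if math.cosh(r["x"]) > z]

# cosh(0.0) = 1.0 and cosh(1.0) ~ 1.54 are filtered out; cosh(2.5) ~ 6.13 survives
print(selected)
```

With PostgreSQL the same computation could instead live server-side as a user-defined function, which is the extensibility argument the message is making.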
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Josh Berkus [mailto:josh@agliodbs.com] \n> Sent: 26 June 2002 01:51\n> To: pgsql-hackers@postgresql.org\n> Cc: Rod Taylor\n> Subject: Re: [HACKERS] Postgres idea list\n> \n> \n> \n> Folks,\n> \n> > What would be a win is an SQL like interface to editing pg_hba.conf \n> > and postgresql.conf. Once that was done PG_Admin could \n> write a lovely \n> > interface to manage them without requiring direct access to \n> the files.\n> \n> I am going to keep arguing against PG_Admin as the primary \n> solution to any of \n> our administration UI challenges. It's WINDOWS ONLY, darn it!\n\nJust for info, *absolute* number 1 priority for the next major release\nof pgAdmin is to take the 5+ years of experience and rewrite the code in\na more platform independent language.\n\nRegards, Dave.\n\n\n", "msg_date": "Wed, 26 Jun 2002 14:45:40 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "Dave,\n\nWould you consider java as a platform independant language? I have\nstarted a project on sf.net called jpgadmin, but I see the duplication\nof effort as a waste of time.\n\nDave\n\nOn Wed, 2002-06-26 at 09:45, Dave Page wrote:\n> \n> \n> > -----Original Message-----\n> > From: Josh Berkus [mailto:josh@agliodbs.com] \n> > Sent: 26 June 2002 01:51\n> > To: pgsql-hackers@postgresql.org\n> > Cc: Rod Taylor\n> > Subject: Re: [HACKERS] Postgres idea list\n> > \n> > \n> > \n> > Folks,\n> > \n> > > What would be a win is an SQL like interface to editing pg_hba.conf \n> > > and postgresql.conf. Once that was done PG_Admin could \n> > write a lovely \n> > > interface to manage them without requiring direct access to \n> > the files.\n> > \n> > I am going to keep arguing against PG_Admin as the primary \n> > solution to any of \n> > our administration UI challenges. 
It's WINDOWS ONLY, darn it!\n> \n> Just for info, *absolute* number 1 priority for the next major release\n> of pgAdmin is to take the 5+ years of experience and rewrite the code in\n> a more platform independent language.\n> \n> Regards, Dave.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n> \n> \n\n\n\n\n\n", "msg_date": "26 Jun 2002 14:00:58 -0400", "msg_from": "Dave Cramer <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "Daves,\n\n> Would you consider java as a platform independant language? I have\n> started a project on sf.net called jpgadmin, but I see the duplication\n> of effort as a waste of time.\n\nJava has its drawbacks, but a JPgAdmin tool would significantly encourage \nPostgres-OpenOffice.org integration.\n\n-- \n-Josh Berkus\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 11:08:31 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "Josh,\n\nWhat do you see as the drawbacks with java, and how can they be\ncircumvented?\n\nDave\nOn Wed, 2002-06-26 at 14:08, Josh Berkus wrote:\n> Daves,\n> \n> > Would you consider java as a platform independant language? 
I have\n> > started a project on sf.net called jpgadmin, but I see the duplication\n> > of effort as a waste of time.\n> \n> Java has its drawbacks, but a JPgAdmin tool would significantly encourage \n> Postgres-OpenOffice.org integration.\n> \n> -- \n> -Josh Berkus\n> \n> \n\n\n\n\n", "msg_date": "26 Jun 2002 14:17:58 -0400", "msg_from": "Dave Cramer <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "\nDave,\n\n> What do you see as the drawbacks with java, and how can they be\n> circumvented?\n\n1. Java is not Open Source. It's an open standard, but not OS.\n\n2. I understand that there are some serious limitations to the current \nPostgres JDBC drivers. I have not used them, so I'm reporting rumor, here.\n\n3. There are compatiblity issues between the various JVMs. We'd have to pick \na particular JVM and stick with it, and get a lot of complaints from users on \nother JVMs. I don't know how serious the issues are.\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \tjosh@agliodbs.com\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 11:35:07 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "Josh,\n\n1) There is an open source implementation of java\n2) The jdbc driver is much better than it was recently we have made lots\nof improvements, and it won't affect jpgadmin anyway. I actually think\nwriting the admin tool in java will make the driver better.\n3) Don't see this as a big issue we aren't writing something esoteric\nhere.\n\nDave\nOn Wed, 2002-06-26 at 14:35, Josh Berkus wrote:\n> \n> Dave,\n> \n> > What do you see as the drawbacks with java, and how can they be\n> > circumvented?\n> \n> 1. 
Java is not Open Source. It's an open standard, but not OS.\n> \n> 2. I understand that there are some serious limitations to the current \n> Postgres JDBC drivers. I have not used them, so I'm reporting rumor, here.\n> \n> 3. There are compatiblity issues between the various JVMs. We'd have to pick \n> a particular JVM and stick with it, and get a lot of complaints from users on \n> other JVMs. I don't know how serious the issues are.\n> \n> -- \n> -Josh Berkus\n> \n> ______AGLIO DATABASE SOLUTIONS___________________________\n> Josh Berkus\n> Complete information technology \tjosh@agliodbs.com\n> and data management solutions \t(415) 565-7293\n> for law firms, small businesses \t fax 621-2533\n> and non-profit organizations. \tSan Francisco\n> \n> \n\n\n\n\n", "msg_date": "26 Jun 2002 14:36:15 -0400", "msg_from": "Dave Cramer <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "\nDave,\n\n> 1) There is an open source implementation of java\n\nReally? I thought Sun had a patent.\n\n> 2) The jdbc driver is much better than it was recently we have made lots\n> of improvements, and it won't affect jpgadmin anyway. I actually think\n> writing the admin tool in java will make the driver better.\n\nThat's great news, especially as we are planning to write a small business \naccounting package using Postgres, OpenOffice.org, and Java.\n\n> 3) Don't see this as a big issue we aren't writing something esoteric\n> here.\n\nCool. As I said, I don't think that any of the issues are prohibitive. \n\nBTW, does anyone on this list know about Command Prompt, Inc.'s tools? 
There \nseems to be a lot of duplicte development going on in the commercial space.\n\n-Josh Berkus\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 11:50:05 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "On Wed, 2002-06-26 at 14:50, Josh Berkus wrote:\n> \n> Dave,\n> \n> > 1) There is an open source implementation of java\n> \n> Really? I thought Sun had a patent.\nwww.blackdown.org\n> \n> > 2) The jdbc driver is much better than it was recently we have made lots\n> > of improvements, and it won't affect jpgadmin anyway. I actually think\n> > writing the admin tool in java will make the driver better.\n> \n> That's great news, especially as we are planning to write a small business \n> accounting package using Postgres, OpenOffice.org, and Java.\n\nThat's awesome, have you looked at compiere?\nwww.sf.net/projects/compiere\n> \n> > 3) Don't see this as a big issue we aren't writing something esoteric\n> > here.\n> \n> Cool. As I said, I don't think that any of the issues are prohibitive. \n> \n> BTW, does anyone on this list know about Command Prompt, Inc.'s tools? There \n> seems to be a lot of duplicte development going on in the commercial space.\n\nYa, they're on the list\n> \n> -Josh Berkus\n> \n> \n\n\n\n\n", "msg_date": "26 Jun 2002 14:50:21 -0400", "msg_from": "Dave Cramer <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "On Wed, 26 Jun 2002 11:35:07 PDT, the world broke into rejoicing as\nJosh Berkus <josh@agliodbs.com> said:\n> > What do you see as the drawbacks with java, and how can they be\n> > circumvented?\n> \n> 1. Java is not Open Source. 
It's an open standard, but not OS.\n\nThe problem is not with the language; it is with the layered libraries\non top.\n\nIt's reasonably usable as a server side language; the _real_ serious\nproblems come in if you want to build a GUIed application, when your\nchoice is between:\n\n a) A really klunky AWT UI that will be unacceptable to all, and\n\n b) A SWING UI that makes your application critically dependent on\n non-\"open source\" software.\n\nThe old Java 1.01 stuff is fairly successfully \"freely usable,\" but\nthat's not what anyone wants to develop with. They want the cool new\nJ2EE stuff, and it takes some serious research to figure out that you\naren't going to be doing that with 'free software,' despite the\nexistence of stuff like JBoss. You still need components that are\nDefinitely Not Free.\n\nThe answer is that someone has to implement a complete set of\nreplacements for the SunSoft components under free licenses. That\n\"circumvention\" is a distinctly non-trivial task.\n\n> 2. I understand that there are some serious limitations to the current\n> Postgres JDBC drivers. I have not used them, so I'm reporting rumor,\n> here.\n\nI've not run into problems with them, but maybe my use hasn't been\nextensive enough :-).\n--\n(concatenate 'string \"cbbrowne\" \"@acm.org\")\nhttp://cbbrowne.com/info/rdbms.html\n\"Remember folks. Street lights timed for 35 mph are also timed for 70\nmph.\" -- Jim Samuels\n\n\n", "msg_date": "Wed, 26 Jun 2002 17:38:34 -0400", "msg_from": "cbbrowne@acm.org", "msg_from_op": false, "msg_subject": "Re: Postgres idea list " }, { "msg_contents": "On Wed, 2002-06-26 at 13:50, Josh Berkus wrote:\n> \n\n> BTW, does anyone on this list know about Command Prompt, Inc.'s tools? There \n> seems to be a lot of duplicte development going on in the commercial space.\nI know for my PERSONAL stuff, commercial tools ($$) mean I don't even\nbother. 
I have some consulting clients, but using pay for stuff\ngenerally won't work for them, plus I can't usually afford the fees for\nmy own use, so therefore are not conversant with the commercial tools. \n\nNothing against them, but...\n\nJust my $.02 worth. \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n\n\n", "msg_date": "26 Jun 2002 16:55:13 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "On 26 Jun 2002 14:36:15 EDT, the world broke into rejoicing as\nDave Cramer <Dave@micro-automation.net> said:\n> Josh,\n> \n> 1) There is an open source implementation of java\n> 2) The jdbc driver is much better than it was recently we have made lots\n> of improvements, and it won't affect jpgadmin anyway. I actually think\n> writing the admin tool in java will make the driver better.\n> 3) Don't see this as a big issue we aren't writing something esoteric\n> here.\n\nThere are \"free software\" implementations of Java compilers and of Java\nVirtual Machines.\n\nAre there suitable \"free software\" implementations of _all_ the\nlibraries that you will be needing to construct the admin tool? \n\nIn particular, can you direct us to a free software implementation of\nSwing?\n\nI doubt that you can, and _that_ is the characteristic problem with\nJava. 
The language is \"free enough,\" but the libraries you will want to\nuse aren't...\n--\n(concatenate 'string \"chris\" \"@cbbrowne.com\")\nhttp://www3.sympatico.ca/cbbrowne/spreadsheets.html\nHAKMEM ITEM 163 (Sussman):\nTo exchange two variables in LISP without using a third variable:\n(SETQ X (PROG2 0 Y (SETQ Y X))) \n\n\n", "msg_date": "Thu, 27 Jun 2002 08:20:20 -0400", "msg_from": "cbbrowne@cbbrowne.com", "msg_from_op": false, "msg_subject": "Re: Postgres idea list " }, { "msg_contents": "On Wed, Jun 26, 2002 at 02:50:21PM -0400, Dave Cramer wrote:\n> On Wed, 2002-06-26 at 14:50, Josh Berkus wrote:\n> > \n> > Dave,\n> > \n> > > 1) There is an open source implementation of java\n> > \n> > Really? I thought Sun had a patent.\n> www.blackdown.org\n\nI'd rather not call this open source. From the source tree:\n\nCopyright 2001 Sun Microsystems, Inc., 901 San Antonio Road, Palo Alto,\nCalifornia 94303, U.S.A. All rights reserved.\n\nThis product or document is protected by copyright and distributed under\nlicenses restricting its use, copying, distribution, and decompilation.\nNo part\nof this product or document may be reproduced in any form by any means\nwithout\nprior written authorization of Sun and its licensors, if any.\nThird-party\nsoftware, including font technology, is copyrighted and licensed from\nSun\nsuppliers.\n\nSun, Sun Microsystems, the Sun Logo, Java, JDK, the Java Coffee Cup\nlogo, JavaBeans,\nand JDBC\nare trademarks or registered trademarks of Sun Microsystems, Inc. in the\nU.S.\nand other countries.\n\nAll SPARC trademarks are used under license and are\ntrademarks or registered trademarks of SPARC International, Inc. in the\nU.S. and other countries.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! 
Use PostgreSQL!\n\n\n", "msg_date": "Fri, 28 Jun 2002 09:08:38 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "\nhttp://www.kaffe.org/\n\n\nOn Fri, 28 Jun 2002, Michael Meskes wrote:\n\n> On Wed, Jun 26, 2002 at 02:50:21PM -0400, Dave Cramer wrote:\n> > On Wed, 2002-06-26 at 14:50, Josh Berkus wrote:\n> > >\n> > > Dave,\n> > >\n> > > > 1) There is an open source implementation of java\n> > >\n> > > Really? I thought Sun had a patent.\n> > www.blackdown.org\n>\n> I'd rather not call this open source. From the source tree:\n>\n> Copyright 2001 Sun Microsystems, Inc., 901 San Antonio Road, Palo Alto,\n> California 94303, U.S.A. All rights reserved.\n>\n> This product or document is protected by copyright and distributed under\n> licenses restricting its use, copying, distribution, and decompilation.\n> No part\n> of this product or document may be reproduced in any form by any means\n> without\n> prior written authorization of Sun and its licensors, if any.\n> Third-party\n> software, including font technology, is copyrighted and licensed from\n> Sun\n> suppliers.\n>\n> Sun, Sun Microsystems, the Sun Logo, Java, JDK, the Java Coffee Cup\n> logo, JavaBeans,\n> and JDBC\n> are trademarks or registered trademarks of Sun Microsystems, Inc. in the\n> U.S.\n> and other countries.\n>\n> All SPARC trademarks are used under license and are\n> trademarks or registered trademarks of SPARC International, Inc. in the\n> U.S. and other countries.\n>\n> Michael\n> --\n> Michael Meskes\n> Michael@Fam-Meskes.De\n> Go SF 49ers! Go Rhein Fire!\n> Use Debian GNU/Linux! Use PostgreSQL!\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n>\n>\n\n\n\n", "msg_date": "Fri, 28 Jun 2002 09:17:04 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Greg Sabino Mullane [mailto:greg@turnstep.com] \n> Sent: 25 June 2002 20:04\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] Postgres idea list\n> \n> \n> 12. Offer something \"fun\": a naming contest for the elephant (I know, \n> I know),\n\nIsn't the elephant called Slonik? I vaguely remember picking that up\nfrom an existing alt tag when I redesigned the odbc site...\n\nRegards, Dave.\n\n\n", "msg_date": "Wed, 26 Jun 2002 14:48:09 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Postgres idea list" }, { "msg_contents": "On Wed, 26 Jun 2002, Dave Page wrote:\n\n>\n>\n> > -----Original Message-----\n> > From: Greg Sabino Mullane [mailto:greg@turnstep.com]\n> > Sent: 25 June 2002 20:04\n> > To: pgsql-hackers@postgresql.org\n> > Subject: [HACKERS] Postgres idea list\n> >\n> >\n> > 12. Offer something \"fun\": a naming contest for the elephant (I know,\n> > I know),\n>\n> Isn't the elephant called Slonik? I vaguely remember picking that up\n\nit's fine.\nit's transliteration of russian translation of elephant (diminutive).\n\n\n> from an existing alt tag when I redesigned the odbc site...\n>\n> Regards, Dave.\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 22:08:09 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Postgres idea list" } ]
[ { "msg_contents": "\nHackers,\n\nas some of you figured already, Katie Ward and I are working fulltime on\nPostgreSQL and are actually doing a native Win32 port. This port is not\nbased on CygWIN, Apache or any other compatibility library but uses 100%\nnative Windows functionality only.\n\nWe already have it far enough to create and drop databases, tables and\nof course do the usual stuff (like INSERT, UPDATE, DELETE and SELECT).\nBut there is still plenty of work, so don't worry, all of you will have\na chance to leave their finger- and/or footprints.\n\nWhat I want to start today is discussion about project coordination and\ncode management. Our proposal is to provide a diff first. I have no clue\nwhen exactly this will happen, but assuming the usual PostgreSQL\nschedule behaviour I would say it's measured in weeks :-). A given is\nthat we will contribute this work under the BSD license.\n\nWe will upload the diff to developer.postgresql.org and post a link\ntogether with build instructions to hackers. After some discussion we\ncan create a CVS branch and apply that patch to there. Everyone who\nwants to contribute to the Win32 port can then work in that branch.\nKatie and I will take care that changes in trunk will periodically get\nmerged into the Win32 branch.\n\nThis model guarantees that we don't change the mainstream PostgreSQL\nuntil the developers community decides to follow this road and choose\nthis implementation as the PostgreSQL Win32 port. At that point we can\nmerge the Win32 port into the trunk and ship it with the next release. \n\nAs for project coordination, I am willing to setup and maintain a page\nsimilar to the (horribly outdated) ones that I did for Toast and RI.\nSummarizing project status, pointing to resources, instructions, maybe a\nroadmap, TODO, you name it.\n\nComments? 
Suggestions?\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Wed, 26 Jun 2002 10:44:57 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": true, "msg_subject": "(A) native Windows port" }, { "msg_contents": "Jan Wieck wrote:\n> As for project coordination, I am willing to setup and maintain a page\n> similar to the (horribly outdated) ones that I did for Toast and RI.\n> Summarizing project status, pointing to resources, instructions, maybe a\n> roadmap, TODO, you name it.\n\nGreat. Please see roadmap in TODO.detail/win32 for a list of items and\npossible approaches.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 26 Jun 2002 18:07:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "> As for project coordination, I am willing to setup and maintain a page\n> similar to the (horribly outdated) ones that I did for Toast and RI.\n> Summarizing project status, pointing to resources, instructions, maybe a\n> roadmap, TODO, you name it.\n\nI am willing to supply a complete, friendly, powerful and pretty installer\nprogram, based on NSIS.\n\nhttp://www.winamp.com/nsdn/nsis/index.jhtml\n\nI suggest that pgAdmin is included in the install process. Imagine it - a\nwin32 person downloads a single .exe, with contents bzip2'd. They run the\ninstaller, it asks them to agree to license, shows splash screen, asks them\nwhere to install it, gets them to supply an installation password and\ninstalls pgadmin. 
It could set up a folder in their start menu with\nstart/stop, edit configs, uninstall and run pgadmin.\n\nIt would all work out of the box and would do wonderful things for the\nPostgres community.\n\nChris\n\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 11:48:10 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Wednesday 26 June 2002 11:48 pm, Christopher Kings-Lynne wrote:\n> I suggest that pgAdmin is included in the install process. Imagine it - a\n> win32 person downloads a single .exe, with contents bzip2'd. They run the\n> installer, it asks them to agree to license, shows splash screen, asks them\n> where to install it, gets them to supply an installation password and\n> installs pgadmin. It could set up a folder in their start menu with\n> start/stop, edit configs, uninstall and run pgadmin.\n\n> It would all work out of the box and would do wonderful things for the\n> Postgres community.\n\nI like this idea, but let me just bring one little issue to note: are you \ngoing to handle upgrades, and if so, how? How are you going to do a major \nversion upgrade?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n\n", "msg_date": "Mon, 1 Jul 2002 12:36:56 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "> > It would all work out of the box and would do wonderful things for the\n> > Postgres community.\n>\n> I like this idea, but let me just bring one little issue to note: are you\n> going to handle upgrades, and if so, how? How are you going to\n> do a major\n> version upgrade?\n\nWell, the easiest way would be to get them to uninstall the old version\nfirst, but I'm sure it can be worked out. 
Perhaps even we shouldn't\noverwrite the old version anyway?\n\nChris\n\n\n\n", "msg_date": "Tue, 2 Jul 2002 10:48:26 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "How does the upgrade work on UNIX? Is there anything available apart from\nreading the release note?\n----- Original Message -----\nFrom: \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>\nTo: \"Lamar Owen\" <lamar.owen@wgcr.org>; \"Jan Wieck\" <JanWieck@Yahoo.com>;\n\"HACKERS\" <pgsql-hackers@postgresql.org>\nSent: Tuesday, July 02, 2002 12:48 PM\nSubject: Re: [HACKERS] (A) native Windows port\n\n\n> > > It would all work out of the box and would do wonderful things for the\n> > > Postgres community.\n> >\n> > I like this idea, but let me just bring one little issue to note: are\nyou\n> > going to handle upgrades, and if so, how? How are you going to\n> > do a major\n> > version upgrade?\n>\n> Well, the easiest way would be to get them to uninstall the old version\n> first, but I'm sure it can be worked out. Perhaps even we shouldn't\n> overwrite the old version anyway?\n>\n> Chris\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n>\n>\n\n\n\n\n", "msg_date": "Tue, 2 Jul 2002 13:19:33 +1000", "msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> \n> > > It would all work out of the box and would do wonderful things for the\n> > > Postgres community.\n> >\n> > I like this idea, but let me just bring one little issue to note: are you\n> > going to handle upgrades, and if so, how? 
How are you going to\n> > do a major\n> > version upgrade?\n> \n> Well, the easiest way would be to get them to uninstall the old version\n> first, but I'm sure it can be worked out. Perhaps even we shouldn't\n> overwrite the old version anyway?\n\nThe question is not how to replace some .EXE and .DLL files or modify\nsomething in the registry. The question is what to do with the existing\ndatabases in the case of a catalog version change. You have to dump and\nrestore. \n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Tue, 02 Jul 2002 09:52:18 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": true, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Tuesday 02 July 2002 09:52 am, Jan Wieck wrote:\n> Christopher Kings-Lynne wrote:\n> > > > It would all work out of the box and would do wonderful things for\n> > > > the Postgres community.\n\n> > > I like this idea, but let me just bring one little issue to note: are\n> > > you going to handle upgrades, and if so, how? How are you going to do\n> > > a major\n> > > version upgrade?\n\n> > Well, the easiest way would be to get them to uninstall the old version\n> > first, but I'm sure it can be worked out. Perhaps even we shouldn't\n> > overwrite the old version anyway?\n\n> The question is not how to replace some .EXE and .DLL files or modify\n> something in the registry. The question is what to do with the existing\n> databases in the case of a catalog version change. 
You have to dump and\n> restore.\n\nNow, riddle me this: we're going to explain the vagaries of \ndump/initdb/restore to a typical Windows user, and further explain why the \ndump won't necessarily restore because of a bug in the older version's \ndump....\n\nThe typical Windows user is going to barf when confronted with our extant \n'upgrade' process. While I really could not care less if PostgreSQL goes to \nWindows or not, I am of a mind to support the Win32 effort if it gets an \nupgrade path done so that everyone can upgrade sanely. At least the Windows \ninstaller can check for existing database structures and ask what to do -- \nthe RPM install cannot do this. In fact, the Windows installer *must* check \nfor an existing database installation, or we're going to get fried by typical \nWindows users.\n\nAnd if having a working, usable, Win32 native port gets the subject of good \nupgrading higher up the priority list, BY ALL MEANS LET'S SUPPORT WIN32 \nNATIVELY! :-) (and I despise Win32....)\n\nBut it shouldn't be an installer issue -- this is an issue which cause pain \nfor all of our users, not just Windows or RPM (or Debian) users. Upgrading \n(pg_upgrade is a start -- but it's not going to work as written on Windows) \nneeds to be core functionality. If I can't easily upgrade my database, what \ngood are new features going to do for me?\n\nMartin O has come up with a 'pg_fsck' utility that, IMHO, holds a great deal \nof promise for seamless binary 'in place' upgrading. He has been able to \nwrite code to read multiple versions' database structures -- proving that it \nCAN be done.\n\nWindows programs such as Lotus Organizer, Microsoft Access, Lotus Approach, \nand others, allow you to convert the old to the new as part of initial \nstartup. 
This will be a prerequisite for wide acceptance in the Windows \nworld, methinks.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n\n", "msg_date": "Tue, 2 Jul 2002 11:41:04 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "> The question is not how to replace some .EXE and .DLL files or modify\n> something in the registry. The question is what to do with the existing\n> databases in the case of a catalog version change. You have to dump and\n> restore. \n\npg_upgrade?\n\nOtherwise: no upgrades persay, but you can intall the new version into a new \ndirectory and then have an automated pg_dump / restore between the old and \nthe new. This would require a lot of disk space, but I don't see any other \nclean way to automate it.\n\n\n", "msg_date": "Tue, 2 Jul 2002 12:04:05 -0400", "msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "Lamar Owen wrote:\n> [...]\n> \n> And if having a working, usable, Win32 native port gets the subject of good\n> upgrading higher up the priority list, BY ALL MEANS LET'S SUPPORT WIN32\n> NATIVELY! :-) (and I despise Win32....)\n\nHehehe :-)\n\n> [...]\n> Martin O has come up with a 'pg_fsck' utility that, IMHO, holds a great deal\n> of promise for seamless binary 'in place' upgrading. He has been able to\n> write code to read multiple versions' database structures -- proving that it\n> CAN be done.\n\nUnfortunately it's not the on-disk binary format of files that causes\nthe big problems. Our dump/initdb/restore sequence is also the solution\nfor system catalog changes. If we add/remove internal functions, there\nwill be changes to pg_proc. When the representation of parsetrees\nchanges, there will be changes to pg_rewrite (dunno how to convert\nthat). Consider adding another attribute to pg_class. 
You'd have to add\na row in pg_attribute, possibly (because it likely isn't added at the\nend) increment the attno for 50% of all pg_attribute entries, and of\ncourse insert an attribute in the middle of all existing pg_class rows\n... ewe.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Tue, 02 Jul 2002 15:14:35 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": true, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Tuesday 02 July 2002 03:14 pm, Jan Wieck wrote:\n> Lamar Owen wrote:\n> > [...]\n> > Martin O has come up with a 'pg_fsck' utility that, IMHO, holds a great\n> > deal of promise for seamless binary 'in place' upgrading. He has been\n> > able to write code to read multiple versions' database structures --\n> > proving that it CAN be done.\n\n> Unfortunately it's not the on-disk binary format of files that causes\n> the big problems. Our dump/initdb/restore sequence is also the solution\n> for system catalog changes.\n\nHmmm. They get in there via the bki interface, right? Is there an OID issue \nwith these? Could differential BKI files be possible, with known system \ncatalog changes that can be applied via a 'patchdb' utility? I know pretty \nmuch how pg_upgrade is doing things now -- and, frankly, it's a little bit of \na kludge.\n\nYes, I do understand the things a dump restore does on somewhat of a detailed \nlevel. I know the restore repopulates the entries in the system catalogs for \nthe restored data, etc, etc.\n\nCurrently dump/restore handles the catalog changes. But by what other means \ncould we upgrade the system catalog in place?\n\nOur very extensibility is our weakness for upgrades. Can it be worked around? 
\nAnyone have any ideas?\n\nImproving pg_upgrade may be the ticket -- but if the on-disk binary format \nchanges (like it has before), then something will have to do the binary \nformat translation -- something like pg_fsck. \n\nIncidentally, pg_fsck, or a program like it, should be in the core \ndistribution. Maybe not named pg_fsck, as our database isn't a filesystem, \nbut pg_dbck, or pg_dbcheck, pr pg_dbfix, or similar. Although pg_fsck is \nmore of a pg_dbdump.\n\nI've seen too many people bitten by upgrades gone awry. The more we can do in \nthe regard, the better.\n\nAnd the Windows user will likely demand it. I never thought I'd be grateful \nfor a Win32 native PostgreSQL port... :-)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n\n", "msg_date": "Tue, 2 Jul 2002 15:50:05 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "Le Jeudi 27 Juin 2002 05:48, Christopher Kings-Lynne a écrit :\n> I am willing to supply a complete, friendly, powerful and pretty installer\n> program, based on NSIS.\n\nMaybe you should contact Dave Page, who wrote pgAdmin2 and the ODBC \ninstallers. Maybe you can both work on the installer.\n\nBy the way, when will Dave be added to the main developper list? He wrote 99% \nof pgAdmin on his own.\n\nCheers, Jean-Michel POURE\n\n\n", "msg_date": "Wed, 3 Jul 2002 10:43:59 +0200", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Tue, 2002-07-02 at 21:50, Lamar Owen wrote:\n> On Tuesday 02 July 2002 03:14 pm, Jan Wieck wrote:\n> > Lamar Owen wrote:\n> > > [...]\n> > > Martin O has come up with a 'pg_fsck' utility that, IMHO, holds a great\n> > > deal of promise for seamless binary 'in place' upgrading. 
He has been\n> > > able to write code to read multiple versions' database structures --\n> > > proving that it CAN be done.\n> \n> > Unfortunately it's not the on-disk binary format of files that causes\n> > the big problems. Our dump/initdb/restore sequence is also the solution\n> > for system catalog changes.\n> \n> Hmmm. They get in there via the bki interface, right? Is there an OID issue \n> with these? Could differential BKI files be possible, with known system \n> catalog changes that can be applied via a 'patchdb' utility? I know pretty \n> much how pg_upgrade is doing things now -- and, frankly, it's a little bit of \n> a kludge.\n> \n> Yes, I do understand the things a dump restore does on somewhat of a detailed \n> level. I know the restore repopulates the entries in the system catalogs for \n> the restored data, etc, etc.\n> \n> Currently dump/restore handles the catalog changes. But by what other means \n> could we upgrade the system catalog in place?\n> \n> Our very extensibility is our weakness for upgrades. Can it be worked around? 
\n> Anyone have any ideas?\n\nPerhaps we can keep an old postgres binary + old backend around and then\nuse it in single-user mode to do a pg_dump into our running backend.\n\nIIRC Access does its upgrade databse by copying old databse to new.\n\nOur approach could be like\n\n$OLD/postgres -D $OLD_DATA <pg_dump_cmds | $NEW/postgres -D NEW_BACKEND\n\nor perhaps, while old backend is still running:\n\npg_dumpall | path_to_new_backend/bin/postgres\n\n\nI dont think we should assume that we will be able to do an upgrade\nwhile we have less free space than currently used by databases (or at\nleast by data - indexes can be added later)\n\nTrying to do an in-place upgrade is an interesting CS project, but any\nserious DBA will have backups, so they can do\n$ psql < dumpfile\n\nSpeeding up COPY FROM could be a good thing (perhaps enabling it to run\nwithout any checks and outside transactions when used in loading dumps)\n\nAnd home users will have databases small enough that they should have\nenough free space to have both old and new version for some time.\n\nWhat we do need is more-or-less solid upgrade path using pg_dump\n\nBTW, how hard would it be to move pg_dump inside the backend (perhaps\nusing a dynamically loaded function to save space when not used) so that\nit could be used like COPY ?\n\npg> DUMP table [ WITH 'other cmdline options' ] TO stdout ;\n\npg> DUMP * [ WITH 'other cmdline options' ] TO stdout ;\n\n \n----------------\nHannu\n\n\n\n", "msg_date": "03 Jul 2002 14:06:13 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "Lamar Owen wrote:\n> On Tuesday 02 July 2002 03:14 pm, Jan Wieck wrote:\n> > Lamar Owen wrote:\n> > > [...]\n> > > Martin O has come up with a 'pg_fsck' utility that, IMHO, holds a great\n> > > deal of promise for seamless binary 'in place' upgrading. 
He has been\n> > > able to write code to read multiple versions' database structures --\n> > > proving that it CAN be done.\n> \n> > Unfortunately it's not the on-disk binary format of files that causes\n> > the big problems. Our dump/initdb/restore sequence is also the solution\n> > for system catalog changes.\n> \n> Hmmm. They get in there via the bki interface, right? Is there an OID issue \n> with these? Could differential BKI files be possible, with known system \n> catalog changes that can be applied via a 'patchdb' utility? I know pretty \n> much how pg_upgrade is doing things now -- and, frankly, it's a little bit of \n> a kludge.\n\nSure, if it wasn't a kludge, I wouldn't have written it. ;-)\n\nDoes everyone remember my LIKE indexing kludge in gram.y? Until people\nfound a way to get it into the optimizer, it did its job. I guess\nthat's where pg_upgrade is at this point.\n\nActually, how can pg_upgrade be improved? \n\nAlso, we have committed to making file format changes for 7.3, so it\nseems pg_upgrade will not be useful for that release unless we get some\nbinary conversion tool working.\n\n\n> Yes, I do understand the things a dump restore does on somewhat of a detailed \n> level. I know the restore repopulates the entries in the system catalogs for \n> the restored data, etc, etc.\n> \n> Currently dump/restore handles the catalog changes. But by what other means \n> could we upgrade the system catalog in place?\n> \n> Our very extensibility is our weakness for upgrades. Can it be worked around? \n> Anyone have any ideas?\n> \n> Improving pg_upgrade may be the ticket -- but if the on-disk binary format \n> changes (like it has before), then something will have to do the binary \n> format translation -- something like pg_fsck. \n\nYep.\n\n> Incidentally, pg_fsck, or a program like it, should be in the core \n> distribution. Maybe not named pg_fsck, as our database isn't a filesystem, \n> but pg_dbck, or pg_dbcheck, pr pg_dbfix, or similar. 
Although pg_fsck is \n> more of a pg_dbdump.\n> \n> I've seen too many people bitten by upgrades gone awry. The more we can do in \n> the regard, the better.\n\nI should mention that 7.3 will have pg_depend, which should make our\npost-7.3 reload process much cleaner because we will not have dangling\nobjects as often.\n\n> And the Windows user will likely demand it. I never thought I'd be grateful \n> for a Win32 native PostgreSQL port... :-)\n\nYea, the trick is to get an something working that will require minimal\nchange from release to release.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 3 Jul 2002 10:43:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "Hannu Krosing wrote:\n> > Our very extensibility is our weakness for upgrades. Can it be worked around? \n> > Anyone have any ideas?\n> \n> Perhaps we can keep an old postgres binary + old backend around and then\n> use it in single-user mode to do a pg_dump into our running backend.\n\nThat brings up an interesting idea. Right now we dump the entire\ndatabase out to a file, delete the old database, and load in the file.\n\nWhat if we could move over one table at a time? Copy out the table,\nload it into the new database, then delete the old table and move on to\nthe next. That would allow use to upgrade having free space for just\nthe largest table. Another idea would be to record and remove all\nindexes in the old database. That certainly would save disk space\nduring the upgrade.\n\nHowever, the limiting factor is that we don't have a mechanism to have\nboth databases running at the same time currently. 
Seems this may be\nthe direction to head in.\n\n> BTW, how hard would it be to move pg_dump inside the backend (perhaps\n> using a dynamically loaded function to save space when not used) so that\n> it could be used like COPY ?\n> \n> pg> DUMP table [ WITH 'other cmdline options' ] TO stdout ;\n> \n> pg> DUMP * [ WITH 'other cmdline options' ] TO stdout ;\n\nIntersting idea, but I am not sure what that buys us. Having pg_dump\nseparate makes maintenance easier.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 3 Jul 2002 12:09:40 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Wed, 2002-07-03 at 17:28, Bruce Momjian wrote:\n> Hannu Krosing wrote:\n> > > Our very extensibility is our weakness for upgrades. Can it be worked around? \n> > > Anyone have any ideas?\n> > \n> > Perhaps we can keep an old postgres binary + old backend around and then\n> > use it in single-user mode to do a pg_dump into our running backend.\n> \n> That brings up an interesting idea. Right now we dump the entire\n> database out to a file, delete the old database, and load in the file.\n> \n> What if we could move over one table at a time? Copy out the table,\n> load it into the new database, then delete the old table and move on to\n> the next. That would allow use to upgrade having free space for just\n> the largest table. Another idea would be to record and remove all\n> indexes in the old database. That certainly would save disk space\n> during the upgrade.\n> \n> However, the limiting factor is that we don't have a mechanism to have\n> both databases running at the same time currently. \n\nHow so ?\n\nAFAIK I can run as many backends as I like (up to some practical limit)\non the same comuter at the same time, as long as they use different\nports and different data directories.\n\n> Seems this may be\n> the direction to head in.\n> \n> > BTW, how hard would it be to move pg_dump inside the backend (perhaps\n> > using a dynamically loaded function to save space when not used) so that\n> > it could be used like COPY ?\n> > \n> > pg> DUMP table [ WITH 'other cmdline options' ] TO stdout ;\n> > \n> > pg> DUMP * [ WITH 'other cmdline options' ] TO stdout ;\n> \n> Intersting idea, but I am not sure what that buys us. 
Having pg_dump\n> separate makes maintenance easier.\n\ncan pg_dump connect to single-user-mode backend ?\n\n--------------------\nHannu\n\n\n\n", "msg_date": "03 Jul 2002 18:35:51 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Wednesday 03 July 2002 12:09 pm, Bruce Momjian wrote:\n> Hannu Krosing wrote:\n> > AFAIK I can run as many backends as I like (up to some practical limit)\n> > on the same comuter at the same time, as long as they use different\n> > ports and different data directories.\n\n> We don't have an automated system for doing this. Certainly it is done\n> all the time.\n\nGood. Dialog. This is better than what I am used to when I bring up \nupgrading. :-)\n\nBruce, pg_upgrade isn't as kludgey as what I have been doing with the RPMset \nfor these nearly three years.\n\nNo, what I envisioned was a standalone dumper that can produce dump output \nwithout having a backend at all. If this dumper knows about the various \nbinary formats, and knows how to get my data into a form I can then restore \nreliably, I will be satisfied. If it can be easily automated so much the \nbetter. Doing it table by table would be ok as well.\n\nI'm looking for a sequence such as:\n\n----\nPGDATA=location/of/data/base\nTEMPDATA=location/of/temp/space/on/same/file/system\n\nmv $PGDATA/* $TEMPDATA\ninitdb -D $PGDATA\npg_dbdump $TEMPDATA |pg_restore {with its associated options, etc}\n\nWith an rm -rf of $TEMPDATA much further down the pike.....\n\nKeys to this working:\n1.)\tMust not require the old version executable backend. There are a number \nof reasons why this might be, but the biggest is due to the way much \nupgrading works in practice -- the old executables are typically gone by the \ntime the new package is installed.\n\n2.)\tUses pg_dbdump of the new version. This dumper can be tailored to provide \nthe input pg_restore wants to see. 
The dump-restore sequence has always had \ndumped-data version mismatch as its biggest problem -- there have been issues \nbefore where you would have to install the new version of pg_dump to run \nagainst the old backend. This is unacceptable in the real world of binary \npackages.\n\nOne other usability note: why can't postmaster perform the steps of an initdb \nif -D points to an empty directory? It's not that much code, is it? (I know \nthat one extra step isn't backbreaking, but I'm looking at this from a rank \nnewbie's point of view -- or at least I'm trying to look at it in that way, \nas it's been a while since I was a rank newbie at PostgreSQL) Oh well, just \na random thought.\n\nBut I believe a backend-independent data dumper would be very useful in many \ncontexts, particularly those where a backend cannot be run for whatever \nreason, but you need your data (corrupted system catalogs, high system load, \nwhatever). Upgrading is just one of those contexts.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 12:39:13 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "Lamar Owen wrote:\n> On Wednesday 03 July 2002 12:09 pm, Bruce Momjian wrote:\n> > Hannu Krosing wrote:\n> > > AFAIK I can run as many backends as I like (up to some practical limit)\n> > > on the same comuter at the same time, as long as they use different\n> > > ports and different data directories.\n> \n> > We don't have an automated system for doing this. Certainly it is done\n> > all the time.\n> \n> Good. Dialog. This is better than what I am used to when I bring up \n> upgrading. :-)\n> \n> Bruce, pg_upgrade isn't as kludgey as what I have been doing with the RPMset \n> for these nearly three years.\n> \n> No, what I envisioned was a standalone dumper that can produce dump output \n> without having a backend at all. 
If this dumper knows about the various \n> binary formats, and knows how to get my data into a form I can then restore \n> reliably, I will be satisfied. If it can be easily automated so much the \n> better. Doing it table by table would be ok as well.\n\nThe problem with a standalone dumper is that you would have to recode\nthis for every release, with little testing possible. Having the old\nbackend active saves us that step. If we get it working, we can use it\nover and over again for each release with little work on our part.\n\n> Keys to this working:\n> 1.)\tMust not require the old version executable backend. There are a number \n> of reasons why this might be, but the biggest is due to the way much \n> upgrading works in practice -- the old executables are typically gone by the \n> time the new package is installed.\n\nOh, that is a problem. We would have to require the old executables.\n\n> 2.)\tUses pg_dbdump of the new version. This dumper can be tailored to provide \n> the input pg_restore wants to see. The dump-restore sequence has always had \n> dumped-data version mismatch as its biggest problem -- there have been issues \n> before where you would have to install the new version of pg_dump to run \n> against the old backend. This is unacceptable in the real world of binary \n> packages.\n> \n> One other usability note: why can't postmaster perform the steps of an initdb \n> if -D points to an empty directory? It's not that much code, is it? (I know \n> that one extra step isn't backbreaking, but I'm looking at this from a rank \n> newbie's point of view -- or at least I'm trying to look at it in that way, \n> as it's been a while since I was a rank newbie at PostgreSQL) Oh well, just \n> a random thought.\n\nThe issue is that if you have PGDATA pointed to the wrong place, it\ncreates a new instance automatically. 
Could be strange for people, but\nwe could prompt them to run initdb I guess.\n\n> But I believe a backend-independent data dumper would be very useful in many \n> contexts, particularly those where a backend cannot be run for whatever \n> reason, but you need your data (corrupted system catalogs, high system load, \n> whatever). Upgrading is just one of those contexts.\n\nYes, but who wants to write one of those for every release? That is\nwhere we get stuck, and with our limited resources, is it desirable to\nencourage people to work on it?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 5 Jul 2002 14:59:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Fri, 2002-07-05 at 17:39, Lamar Owen wrote:\n> No, what I envisioned was a standalone dumper that can produce dump output \n> without having a backend at all. If this dumper knows about the various \n> binary formats, and knows how to get my data into a form I can then restore \n> reliably, I will be satisfied. If it can be easily automated so much the \n> better. Doing it table by table would be ok as well.\n...\n> 1.)\tMust not require the old version executable backend. There are a number \n> of reasons why this might be, but the biggest is due to the way much \n> upgrading works in practice -- the old executables are typically gone by the \n> time the new package is installed.\n> \n> 2.)\tUses pg_dbdump of the new version. This dumper can be tailored to provide \n> the input pg_restore wants to see.
The dump-restore sequence has always had \n> dumped-data version mismatch as its biggest problem -- there have been issues \n> before where you would have to install the new version of pg_dump to run \n> against the old backend. This is unacceptable in the real world of binary \n> packages.\n\nI concur completely!\n\nAs a package maintainer, this would remove my biggest problem.\n\n\nOliver Elphick\n(Debian maintainer)\n\n\n\n\n", "msg_date": "05 Jul 2002 22:18:44 +0100", "msg_from": "Oliver Elphick <olly@lfix.co.uk>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Fri, Jul 05, 2002 at 12:39:13PM -0400, Lamar Owen wrote:\n\n> One other usability note: why can't postmaster perform the steps of\n> an initdb if -D points to an empty directory? It's not that much\n> code, is it? (I know that one extra step isn't backbreaking, but\n> I'm looking at this from a rank newbie's point of view -- or at\n> least I'm trying to look at it in that way, as it's been a while\n> since I was a rank newbie at PostgreSQL) Oh well, just a random\n> thought.\n\nRank newbies shouldn't be protected in this way, partly because if\nsomething goes wrong, _they won't know what to do_. Please, please,\ndon't be putting automagic, database destroying functions like that\ninto the postmaster. 
It's a sure way to cause a disaster at some\npoint.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n\n\n", "msg_date": "Fri, 5 Jul 2002 17:33:45 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "Andrew Sullivan <andrew@libertyrms.info> writes:\n> On Fri, Jul 05, 2002 at 12:39:13PM -0400, Lamar Owen wrote:\n>> One other usability note: why can't postmaster perform the steps of\n>> an initdb if -D points to an empty directory?\n\n> Rank newbies shouldn't be protected in this way, partly because if\n> something goes wrong, _they won't know what to do_. Please, please,\n> don't be putting automagic, database destroying functions like that\n> into the postmaster.\n\nI agree completely with Andrew, even though an auto-initdb on an empty\ndirectory presumably won't destroy any data. What it *does* do is\neffectively mask a DBA error. We'll be getting panic-stricken support\ncalls/emails saying \"all my databases are gone!
Postgres sucks!\" when\nthe problem is just that PG was restarted with the wrong -D pointer.\nThe existing behavior points that out loud and clear, in a context\nwhere the DBA shouldn't have too much trouble figuring out what he\ndid wrong.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Jul 2002 11:15:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port " }, { "msg_contents": "On Saturday 06 July 2002 11:15 am, Tom Lane wrote:\n> Andrew Sullivan <andrew@libertyrms.info> writes:\n> > On Fri, Jul 05, 2002 at 12:39:13PM -0400, Lamar Owen wrote:\n> >> One other usability note: why can't postmaster perform the steps of\n> >> an initdb if -D points to an empty directory?\n\n> > Rank newbies shouldn't be protected in this way, partly because if\n> > something goes wrong, _they won't know what to do_.\n\n> I agree completely with Andrew, even though an auto-initdb on an empty\n> directory presumably won't destroy any data.\n\nGood grief, I was just asking a question. :-)\n\n> What it *does* do is\n> effectively mask a DBA error.\n\nThis is a satisfactory answer. In the context of the RPM distribution, if the \ninitscript is used the DBA error probability is greatly reduced, thus the \ninitscript can safely initdb.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n\n", "msg_date": "Sat, 6 Jul 2002 21:46:28 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Sat, 6 Jul 2002, Tom Lane wrote:\n\n> Andrew Sullivan <andrew@libertyrms.info> writes:\n> > On Fri, Jul 05, 2002 at 12:39:13PM -0400, Lamar Owen wrote:\n> >> One other usability note: why can't postmaster perform the steps of\n> >> an initdb if -D points to an empty directory?\n>\n> > Rank newbies shouldn't be protected in this way, partly because if\n> > something goes wrong, _they won't know what to do_. 
Please, please,\n> > don't be putting automagic, database destroying functions like that\n> > into the postmaster.\n>\n> I agree completely with Andrew, even though an auto-initdb on an empty\n> directory presumably won't destroy any data. What it *does* do is\n> effectively mask a DBA error. We'll be getting panic-stricken support\n> calls/emails saying \"all my databases are gone! Postgres sucks!\" when\n> the problem is just that PG was restarted with the wrong -D pointer. The\n> existing behavior points that out loud and clear, in a context where the\n> DBA shouldn't have too much trouble figuring out what he did wrong.\n\nOkay, I'm sitting on the fence on this one ... but, as DBA for several\nPgSQL installs on at least a half dozen machines or more, if someone\nrestarts PG with the wrong -D pointer, they haven't setup their machine to\nlive through a reboot ... first thing any DBA *should* be doing after they\nhave 'initdb'd their system is add the appropriate start-up scripts for\nafter the reboot ...\n\nAlso, what is the difference between forgetting where you put it in an\ninitdb or on the first postmaster? Why not put in a 'safety'? If you\nstart up postmaster with -D on a directory that doesn't yet exist, it\nprompts the DBA as to whether they are certain that they wish to do this?\n\nJust thoughts ... I'm happy enough with initdb *shrug*\n\n\n\n", "msg_date": "Sat, 6 Jul 2002 23:15:28 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port " }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n>> What it *does* do is effectively mask a DBA error.\n\n> This is a satisfactory answer. In the context of the RPM distribution, if the \n> initscript is used the DBA error probability is greatly reduced, thus the \n> initscript can safely initdb.\n\nFair enough --- if the upper-layer script thinks it has enough safeties\nin place, let it auto-initdb. 
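As an aside, the kind of safety an upper-layer initscript can apply is easy to sketch. The following is only an illustration of the idea, not the actual RPM initscript, and the default data-directory path is an assumption:

```shell
#!/bin/sh
# Sketch: run initdb only when the target is demonstrably empty, so a
# wrong -D/PGDATA pointer still fails loudly instead of silently
# creating a brand-new cluster. The path below is hypothetical.
maybe_initdb() {
    dir=${1:-/var/lib/pgsql/data}
    if [ -f "$dir/PG_VERSION" ]; then
        return 0                    # cluster already exists; nothing to do
    elif [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ]; then
        echo "$dir is non-empty but is not a cluster; refusing" >&2
        return 1                    # the masked-DBA-error case
    else
        initdb -D "$dir"            # genuinely fresh location
    fi
}
```

The middle branch is the important one: it is exactly where a postmaster-internal version of this feature would have masked the mistake.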
I just don't think the postmaster should\ndo that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Jul 2002 23:56:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port " }, { "msg_contents": "> > Keys to this working:\n> > 1.)\tMust not require the old version executable backend. There are a \nnumber \n> > of reasons why this might be, but the biggest is due to the way much \n> > upgrading works in practice -- the old executables are typically gone by \nthe \n> > time the new package is installed.\n> \n> Oh, that is a problem. We would have to require the old executables.\n\nCould this be solved with packaging? Meaning can postmasters from old versions \nbe packaged with a new release strictly for the purpose of upgrading? It is my \nunderstanding that the only old executable needed is the postmaster -- is that \ncorrect? Perhaps this also requires adding functionality so that pg_dump can \nrun against a single-user postmaster.\n\nExample: When PG 7.3 is released, the RPM / deb / setup.exe include the \npostmaster binary for v 7.2 (perhaps two or three older versions...). An \nupgrade script is included that does the automatic dump / restore described \nearlier in this thread. Effectively, you are using old versions of the \npostmaster as your standalone dumper. \n\nI think this could sidestep the problem of having to create / test / maintain \na new version of a dumper or pg_upgrade for every release.\n\nBy default perhaps the postmaster for the previous version of postgres is \nincluded, and postmasters from older versions are distributed in separate \npackages, so if I am still running 6.5.3 and I want to upgrade to 7.3, I have \nto install the 6.5.3 upgrade package. Or perhaps there is one pg_upgrade rpm \npackage that includes every postmaster since 6.4.
This would allow the \nupgrade script to know that all backends are available to it depending on \nwhat it finds in PG_VERSION, and it also allows the admin to remove them all \neasily once they are no longer needed.\n\n\n", "msg_date": "Mon, 8 Jul 2002 20:30:37 -0400", "msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Tue, 2002-07-09 at 01:30, Matthew T. O'Connor wrote:\n> > Oh, that is a problem. We would have to require the old executables.\n> \n> Could this be solved with packaging? Meaning can postmasters from old versions \n> be packaged with a new release strictly for the purpose of upgrading? It is my \n> understanding that the only old executable needed is the postmaster -- is that \n> correct? Perhaps this also requires adding functionality so that pg_dump can \n> run against a single-user postmaster.\n> \n> Example: When PG 7.3 is released, the RPM / deb / setup.exe include the \n> postmaster binary for v 7.2 (perhaps two or three older versions...). \n\nThat isn't usable for Debian. A package must be buildable from source;\nso I would have to include separate (though possibly cut-down) source\nfor n previous packages. It's a horrid prospect and a dreadful kludge\nof a solution - a maintainer's nightmare.\n\nOliver \n\n\n\n", "msg_date": "09 Jul 2002 12:48:14 +0100", "msg_from": "Oliver Elphick <olly@lfix.co.uk>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Tuesday 09 July 2002 11:41 am, Hannu Krosing wrote:\n> The old postmaster should not be built/distributed. As it is for\n> _upgrading_ only, you just have to _keep_ it when doing an upgrade, not\n> build a new \"old\" one ;)\n\nLet me reiterate one thing about this. In the midst of a total OS upgrade, \nduring which PostgreSQL is being upgraded as well (the new OS release \nincludes a 'better' PostgreSQL), you also get library upgrades.
If the \nupgrade is from an old enough version of the OS, the old postmaster/postgres \nmay not even be able to execute AT ALL.\n\nSome may say that this is a problem for the vendor. Well I know of one vendor \nthat has thrown up its hands in disgust over our lack of upgradability that \nthey have now quit supporting even the kludgy semi-automatic upgrade process \nI did up three years ago. They will refuse to support any mechanism that \nrequires any portion of an old package to remain around. The new package \nmust be self-contained and must be able to upgrade the old data, or they will \nnot accept it.\n\nTheir statement now is simply that PostgreSQL upgrading is broken; dump before \nupgrading and complain to the PostgreSQL developers.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 9 Jul 2002 11:04:15 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Tue, 2002-07-09 at 13:48, Oliver Elphick wrote:\n> On Tue, 2002-07-09 at 01:30, Matthew T. O'Connor wrote:\n> > > Oh, that is a problem. We would have to require the old executables.\n> > \n> > Could this be solved with packaging? Meaning can postmasters from old versions \n> > be packed with a new release strictly for the purpose of upgrading? It is my \n> > understanding that the only old executable needed is the postmaster is that \n> > correct? Perhaps this also requires adding functionality so that pg_dump can \n> > run against a singer user postmaster.\n> > \n> > Example: When PG 7.3 is released, the RPM / deb / setup.exe include the \n> > postmaster binary for v 7.2 (perhaps two or three older versions...). \n> \n> That isn't usable for Debian. A package must be buildable from source;\n> so I would have to include separate (though possibly cut-down) source\n> for n previous packages. 
It's a horrid prospect and a dreadful kludge\n> of a solution - a maintainer's nightmare.\n\nThe old postmaster should not be built/distributed. As it is for\n_upgrading_ only, you just have to _keep_ it when doing an upgrade, not\nbuild a new \"old\" one ;)\n\n--------------\nHannu\n\n\n\n", "msg_date": "09 Jul 2002 17:41:58 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Tue, 2002-07-09 at 16:41, Hannu Krosing wrote:\n> On Tue, 2002-07-09 at 13:48, Oliver Elphick wrote:\n> > On Tue, 2002-07-09 at 01:30, Matthew T. O'Connor wrote:\n> > > Example: When PG 7.3 is released, the RPM / deb / setup.exe include the \n> > > postmaster binary for v 7.2 (perhaps two or three older versions...). \n> > \n> > That isn't usable for Debian. A package must be buildable from source;\n> > so I would have to include separate (though possibly cut-down) source\n> > for n previous packages. It's a horrid prospect and a dreadful kludge\n> > of a solution - a maintainer's nightmare.\n> \n> The old postmaster should not be built/distributed. As it is for\n> _upgrading_ only, you just have to _keep_ it when doing an upgrade, not\n> build a new \"old\" one ;)\n\nNo, it doesn't work like that. You cannot rely on anything's being left\nfrom an old distribution; apt is quite likely to delete it altogether\nbefore installing the new version (to enable dependencies to be\nsatisfied). 
At present I have the preremoval script copy the old\nbinaries to a special location in case they will be needed, but that\nfails if the version is very old (and doesn't contain that code), and\nit's a very fragile mechanism.\n\nI never have understood why the basic table structure changes so much\nthat it can't be read; just what is involved in getting the ability to\nread old versions?\n\n\n", "msg_date": "09 Jul 2002 16:49:47 +0100", "msg_from": "Oliver Elphick <olly@lfix.co.uk>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Tue, 2002-07-09 at 18:05, Hannu Krosing wrote:\n> The big change was from 6.x to 7.x where a chunk of data moved from end\n> of page to start of page and tableoid column was added. Otherways the\n> table structure is quite simple. The difficulties with user _data_ can\n> be mainly because of binary format changes for some types and such.\n> \n> But I still can't see how will having a binary dumper that does mostly\n> the work of [ old_backend -c \"COPY tablex TO STDOUT\" ] help us here. \n> \n> IIRC the main difficulties in upgrading have always been elsewhere, like\n> migrating always changing system table data.\n\nThe main problem is getting access to the user data after an upgrade. \nThere's no particular problem in having to do an initdb, though it is an\ninconvenience; the difficulty is simply that any packaged distribution\n(rpm, deb, xxx) is going to have to replace all the old binaries. So by\nthe time the package is ready to do the database upgrade, it has\ndestroyed the means of dumping the old data. Lamar and I have to jump\nthrough hoops to get round this -- small hoops with flaming rags round\nthem!\n\nThe current upgrade process for PostgreSQL is founded on the idea that\npeople build from source. 
With binary distributions, half the users\nwouldn't know what to do with source; they expect (and are entitled to\nexpect) that an upgrade will progress without the need for significant\nintervention on their part. PostgreSQL makes this really difficult for\nthe package maintainers, and this has a knock-on effect on the\nreliability of the upgrade process and thus on PostgreSQL itself.\n\n\n", "msg_date": "09 Jul 2002 17:30:52 +0100", "msg_from": "Oliver Elphick <olly@lfix.co.uk>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Tue, 2002-07-09 at 17:49, Oliver Elphick wrote:\n> On Tue, 2002-07-09 at 16:41, Hannu Krosing wrote:\n> > On Tue, 2002-07-09 at 13:48, Oliver Elphick wrote:\n> > > On Tue, 2002-07-09 at 01:30, Matthew T. O'Connor wrote:\n> > > > Example: When PG 7.3 is released, the RPM / deb / setup.exe include the \n> > > > postmaster binary for v 7.2 (perhaps two or three older versions...). \n> > > \n> > > That isn't usable for Debian. A package must be buildable from source;\n> > > so I would have to include separate (though possibly cut-down) source\n> > > for n previous packages. It's a horrid prospect and a dreadful kludge\n> > > of a solution - a maintainer's nightmare.\n> > \n> > The old postmaster should not be built/distributed. As it is for\n> > _upgrading_ only, you just have to _keep_ it when doing an upgrade, not\n> > build a new \"old\" one ;)\n> \n> No, it doesn't work like that. You cannot rely on anything's being left\n> from an old distribution; apt is quite likely to delete it altogether\n> before installing the new version (to enable dependencies to be\n> satisfied). 
At present I have the preremoval script copy the old\n> binaries to a special location in case they will be needed, but that\n> fails if the version is very old (and doesn't contain that code), and\n> it's a very fragile mechanism.\n> \n> I never have understood why the basic table structure changes so much\n> that it can't be read; just what is involved in getting the ability to\n> read old versions?\n\nThe big change was from 6.x to 7.x where a chunk of data moved from end\nof page to start of page and the tableoid column was added. Otherwise the\ntable structure is quite simple. The difficulties with user _data_ are\nmainly because of binary format changes for some types and such.\n\nBut I still can't see how having a binary dumper that does mostly\nthe work of [ old_backend -c \"COPY tablex TO STDOUT\" ] will help us here. \n\nIIRC the main difficulties in upgrading have always been elsewhere, like\nmigrating always-changing system table data.\n\n----------\nHannu\n", "msg_date": "09 Jul 2002 19:05:55 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Tuesday 09 July 2002 01:46 pm, Hannu Krosing wrote:\n> On Tue, 2002-07-09 at 18:30, Oliver Elphick wrote:\n> > The main problem is getting access to the user data after an upgrade.\n\n> Can't it be dumped in pre-upgrade script ?\n\nThe pre-upgrade script is run in an environment that isn't robust enough to \nhandle that. What if you run out of disk space during the dump? What if a \npostmaster is running -- and many people stop their postmaster before \nupgrading their version of PostgreSQL?\n\nBesides, at least in the case of the RPM, during OS upgrade time the %pre \nscriptlet (the one you allude to) isn't running in a system with all the \nnormal tools available. Nor is there a postmaster running.
Due to a largish \nRAMdisk, a postmaster running might cause all manners of problems.\n\nAnd an error in the scriptlet could potentially cause the OS upgrade to abort \nin midstream -- not a nice thing to do to users, having a package during \nupgrade abort their OS upgrade when it is a little over half through, and in \nan unbootable state.... No, any dumping of data cannot happen during the %pre \nscript -- too many issues there.\n\n> IMHO, if rpm and apt can't run a pre-install script before deleting the\n> old binaries they are going to replace/upgrade then you should complain\n> to authors of rpm and apt.\n\nOh, so it's RPM's and APT's problem that we require so many resources during \nupgrade.... :-)\n\n> The right order should of course be\n\n> 1) run pre-upgrade (pg_dumpall >dumpfile)\n> 2) upgrade\n> 3) run post-upgrade (initdb; psql < dumpfile)\n\nAll but the first step works fine. The first step is impossible in the \nenvironment in which the %pre script runs.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 9 Jul 2002 13:10:10 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Tue, 2002-07-09 at 18:30, Oliver Elphick wrote:\n> On Tue, 2002-07-09 at 18:05, Hannu Krosing wrote:\n> > The big change was from 6.x to 7.x where a chunk of data moved from end\n> > of page to start of page and tableoid column was added. Otherways the\n> > table structure is quite simple. The difficulties with user _data_ can\n> > be mainly because of binary format changes for some types and such.\n> > \n> > But I still can't see how will having a binary dumper that does mostly\n> > the work of [ old_backend -c \"COPY tablex TO STDOUT\" ] help us here. \n> > \n> > IIRC the main difficulties in upgrading have always been elsewhere, like\n> > migrating always changing system table data.\n> \n> The main problem is getting access to the user data after an upgrade. 
\n\nCan't it be dumped in pre-upgrade script ?\n\n> There's no particular problem in having to do an initdb, though it is an\n> inconvenience; the difficulty is simply that any packaged distribution\n> (rpm, deb, xxx) is going to have to replace all the old binaries. So by\n> the time the package is ready to do the database upgrade, it has\n> destroyed the means of dumping the old data. Lamar and I have to jump\n> through hoops to get round this -- small hoops with flaming rags round\n> them!\n\nIMHO, if rpm and apt can't run a pre-install script before deleting the\nold binaries they are going to replace/upgrade then you should complain\nto authors of rpm and apt. \n\nIt seems that they are doing things in wrong order. \n\nThe right order should of course be\n\n1) run pre-upgrade (pg_dumpall >dumpfile)\n2) upgrade\n3) run post-upgrade (initdb; psql < dumpfile)\n\n---------------\nHannu\n\n", "msg_date": "09 Jul 2002 19:46:10 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Tue, 2002-07-09 at 22:10, Lamar Owen wrote:\n> On Tuesday 09 July 2002 01:46 pm, Hannu Krosing wrote:\n> > On Tue, 2002-07-09 at 18:30, Oliver Elphick wrote:\n> > > The main problem is getting access to the user data after an upgrade.\n> \n> > Can't it be dumped in pre-upgrade script ?\n> \n> The pre-upgrade script is run in an environment that isn't robust enough to \n> handle that. What if you run out of disk space during the dump? 
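Hannu's three-step order can be sketched as a pair of maintainer scriptlets, with abort-and-delete handling answering the disk-space question directly. The paths are hypothetical, and whether the %pre environment can run this at all is exactly what is in dispute here:

```shell
#!/bin/sh
# Step 1 runs while the OLD binaries are still installed; step 3 runs
# after the package manager has swapped them for the new ones.
PGDATA=${PGDATA:-/var/lib/pgsql/data}
DUMPFILE=${DUMPFILE:-/var/lib/pgsql/pre-upgrade.sql}

pre_upgrade() {
    # Abort and delete the offending dumpfile on any failure,
    # including a full disk.
    pg_dumpall > "$DUMPFILE" || { rm -f "$DUMPFILE"; return 1; }
}

post_upgrade() {
    initdb -D "$PGDATA" && psql -f "$DUMPFILE" template1
}
```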
\n\nYou can either check beforehand or abort and delete the offending\ndumpfile.\n\n> What if a postmaster is running -- and many people stop their postmaster before \n> upgrading their version of PostgreSQL?\n\nIt is quite easy to both check for a running postmaster and start/stop\none.\n \n> Besides, at least in the case of the RPM, during OS upgrade time the %pre \n> scriptlet (the one you allude to) isn't running in a system with all the \n> normal tools available.\n\nI don't think that postmaster needs very many normal tools - it should\nbe quite independent, except for compat libs for larger version\nupgrades\n\n> Nor is there a postmaster running. Due to a largish \n> RAMdisk, a postmaster running might cause all manners of problems.\n\nI don't know anything about the largish RAMdisk,what I meant was that\npostmaster (a 2.7 MB program with ~4 MB RAM footprint) could include the\nfunctionality of pg_dump and be runnable in single-user mode for dumping\nold databases. \n \n> And an error in the scriptlet could potentially cause the OS upgrade to abort \n> in midstream -- not a nice thing to do to users, having a package during \n> upgrade abort their OS upgrade when it is a little over half through, and in \n> an unbootable state.... No, any dumping of data cannot happen during the %pre \n> script -- too many issues there.\n\nBut is it not the same with _every_ package ? Is there any actual\nupgrading done in the pre/post scripts or are they generally not to be\ntrusted ?\n\n> > IMHO, if rpm and apt can't run a pre-install script before deleting the\n> > old binaries they are going to replace/upgrade then you should complain\n> > to authors of rpm and apt.\n> \n> Oh, so it's RPM's and APT's problem that we require so many resources during \n> upgrade.... :-)\n\nAs you said: \"The pre-upgrade script is run in an environment that isn't\nrobust enough to handle that\". 
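For what it's worth, the check Hannu calls easy really can be done without ps or /proc (neither of which, per Lamar, exists in the anaconda chroot), by leaning on the postmaster.pid convention. The default data-directory path here is an assumption:

```shell
#!/bin/sh
# Probe for a live postmaster using only its pidfile; kill -0 sends no
# signal, it merely asks whether the process exists.
postmaster_running() {
    pidfile="${1:-/var/lib/pgsql/data}/postmaster.pid"
    [ -f "$pidfile" ] || return 1          # no pidfile: not running
    pid=$(head -n 1 "$pidfile")
    kill -0 "$pid" 2>/dev/null
}
```

A stale pidfile left behind by a crash makes this a heuristic rather than a guarantee.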
Ok, maybe it's the environmental issue\nthen ;)\n\nBut more seriously - it is a DATAbase upgrade, not a usual program\nupgrade which has a minuscule data part, usually not more than a\nconfiguration file. Postgres, as a very extensible database, has an\nability to keep much of its functionality in the database.\n\nWe already do a pretty good job with pg_dump, but I would still not\ntrust it to do everything automatically and erase the originals.\n\nIf we start claiming that postgresql can do automatic \"binary\" upgrades\nthere will be much fun with people who have some application that runs\nfine on 7.0.3 but barfs on 7.1.2, even if it is due to stricter\nadherence to SQL99 and the SQL is completely out of the control of rpm/apt. \n\nThere may be even some lazy people who will think that now is the time\nto auto-upgrade from 6.x ;/\n\n> > The right order should of course be\n> \n> > 1) run pre-upgrade (pg_dumpall >dumpfile)\n> > 2) upgrade\n> > 3) run post-upgrade (initdb; psql < dumpfile)\n> \n> All but the first step works fine. The first step is impossible in the \n> environment in which the %pre script runs.\n\nOk. But would it be impossible to move the old postmaster to some other\nplace, or is the environment too fragile even for that ?\n\nIf we move the old postmaster instead of copying then there will be a\nlot fewer issues about running out of disk space :)\n\nWhat we are facing here is a problem similar to trying to upgrade all users'\nC programs when upgrading gcc. While it would be a good thing, nobody\nactually tries to do it - we require them to have source code and to do\nthe \"upgrade\" manually.\n\nThat's what I propose - dump all databases in pre-upgrade (if you are\nconcerned about disk usage, run it twice, first to | wc and then to a\nfile) and try to load in post-upgrade.
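The "run it twice" idea in the preceding paragraph generalizes to a small helper: pass the dump command once through wc -c to measure it, and only write the file if it fits in the free space. The interface here is invented for illustration:

```shell
#!/bin/sh
# $1 = command producing the dump on stdout, $2 = target file.
dump_if_room() {
    bytes=$(( $($1 | wc -c) ))                 # first pass: count only
    free_kb=$(df -kP "$(dirname "$2")" | awk 'NR==2 {print $4}')
    [ $(( free_kb * 1024 )) -gt "$bytes" ] || return 1
    $1 > "$2"                                  # second pass: real dump
}
# intended (untested) use: dump_if_room pg_dumpall /var/tmp/pg-upgrade.sql
```

Running the dump twice doubles the load on the old backend, which is the price of never materializing a file you cannot store.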
\n\nThere will still be some things that are impossible to \"upgrade\" like\nupgrading a.out \"C\" functions to an elf format backend.\n\nPerhaps we will be able to detect what we can actually upgrade and bail\nout if we find something unupgradable ?\n\n-------------------\nHannu\n\n\n", "msg_date": "10 Jul 2002 01:17:22 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "Oliver Elphick writes:\n\n> I never have understood why the basic table structure changes so much\n> that it can't be read; just what is involved in getting the ability to\n> read old versions?\n\nThe problem in an extensible system such as PostgreSQL is that virtually\nevery feature change is reflected by a change in the structure of the\nsystem catalogs. It wouldn't be such a terribly big problem in theory to\nmake the backend handle these changes, but you'd end up with a huge bunch\nof\n\nif (dataVersion == 1)\n do this;\nelse if (dataVersion == 2)\n do that;\n...\n\nwhich would become slow and unwieldy, and would scare away developers.\nThat would of course be a self-serving scheme, because if the development\nprogress slowed down, you would have to update less frequently.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 10 Jul 2002 00:20:21 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Tuesday 09 July 2002 04:17 pm, Hannu Krosing wrote:\n> On Tue, 2002-07-09 at 22:10, Lamar Owen wrote:\n> > The pre-upgrade script is run in an environment that isn't robust enough\n> > to handle that.
What if you run out of disk space during the dump?\n\n> You can either check beforehand or abort and delete the offending\n> dumpfile.\n\nAnd what if you have enough disk space to do the dump, but then that causes \nthe OS upgrade to abort because there wasn't enough space left to finish \nupgrading (larger packages, perhaps)? The system's hosed, and it's our \nfault.\n\n> > What if a postmaster is running -- and many people stop their postmaster\n> > before upgrading their version of PostgreSQL?\n\n> It is quite easy to both check for a running postmaster and start/stop\n> one.\n\nNot when there is no ps in your path. Or pg_ctl for that matter. Nor is \nthere necessarily a /proc tree waiting to be exploited. We're talking the \nanaconda environment, which is tailored for OS installation and upgrading. \nYou cannot start a postmaster; you cannot check to see if one is running -- \nyou can't even check to see if you're in the anaconda chroot or not, so that \nyou can use more tools if not in the OS installation mode. Again -- the \ntotal OS upgrade path is a big part of this scenario, as far as the RPM's are \nconcerned. The Debian package may or may not have as grievous a structure.\n\nThe only tool you can really use under the anaconda chroot is busybox, and it \nmay not do what you want it to.\n\n> > Besides, at least in the case of the RPM, during OS upgrade time the %pre\n> > scriptlet (the one you allude to) isn't running in a system with all the\n> > normal tools available.\n\n> I don't think that postmaster needs very many normal tools - it should\n> be quite independent, except for compat libs for larger version\n> upgrades\n\nThe problem there is that you really have no way to tell the system which sets \nof libraries you want. More to the point: RPM dependencies cannot take \nconditionals and have no concept of if..then. 
Nor can you tell the system to \n_install_ the new postgresql instead of _upgrade_ (incidentally, in the RPM \ncontext an upgrade is an install of the new version followed by an uninstall \nof the old one -- if the new one overwrote files their traces are just wiped \nfrom the RPM database, if they weren't overwritten, the files get wiped along \nwith their respective database entries). If I could _force_ no upgrades, it \nwould be much easier -- but I can't. Nor can I be sure the %pre scriptlet \nwill be run -- some people are so paranoid that they use rpm -U --no-scripts \nreligiously.\n\nThus, when the old postgresql rpm's database entries (in practice virtually \nevery old executable gets overwritten) are removed, its dependency \ninformation is also removed. As the install/upgrade path builds a complete \ndependency tree of the final installation as part of the process, it knows \nwhether the compat libs are needed or not. If no other program needs them, \nyou don't get them, even if you kept an old backend around that does need \nthem. But you really can't make the -server subpackage Require the compat \npackages, because you don't necessarily know what they will be named, or \nanything else they will provide. If compat libs are even available for the \nversion you're upgrading from.\n\n> > Nor is there a postmaster running. 
Due to a largish\n> > RAMdisk, a postmaster running might cause all manners of problems.\n\n> I don't know anything about the largish RAMdisk, what I meant was that\n> postmaster (a 2.7 MB program with ~4 MB RAM footprint) could include the\n> functionality of pg_dump and be runnable in single-user mode for dumping\n> old databases.\n\nIf a standalone backend could reliably dump the database without needing \nnetworking and many of the other things we take for granted (the install mode \nis a cut-down single-user mode of sorts, running in a chroot of a sort), then \nit might be worth looking at.\n\n> > And an error in the scriptlet could potentially cause the OS upgrade to\n> > abort in midstream -- not a nice thing to do to users, having a package\n\n> But is it not the same with _every_ package ? Is there any actual\n> upgrading done in the pre/post scripts or are they generally not to be\n> trusted ?\n\nNo other package is so *different* to require such a complicated upgrade \nprocess. Some packages do more with their scriptlets than others, but no \npackage does anything near as complicated as dumping a database. \n\n> We already do a pretty good job with pg_dump, but I would still not\n> trust it to do everything automatically and erase the originals.\n\nAnd that's a big problem. We shouldn't have that ambivalence. IOW, I think \nwe need more upgrade testing. I don't think I've seen a cycle yet that \ndidn't have upgrade problems.\n\n> If we start claiming that postgresql can do automatic \"binary\" upgrades\n> there will be much fun with people who have some application that runs\n> fine on 7.0.3 but barfs on 7.1.2, even if it is due to stricter\n> adherence to SQL99 and the SQL is completely out of control of rpm/apt.\n\nThat's just us not being backward compatible. I'm impacted by those things, \nbeing that I'm running OpenACS here on 7.2.1, when OACS is optimized for 7.1. 
\nCertain things are very broken.\n\n> There may be even some lazy people who will think that now is the time\n> to auto-upgrade from 6.x ;/\n\nAnd why not? If Red Hat Linux can upgrade a whole operating environment from \nversion 2.0 all the way up to 7.3 (which they claim), why can't we? If we \ncan just give people the tools to deal with potential problems after the \nupgrade, then I think we can do it. Such a tool as a old-version dumper \nwould be a lifesaver to people, I believe.\n\n> Ok. But would it be impossible to move the old postmaster to some other\n> place, or is the environment too fragile even for that ?\n\nThat is what I have done in the past -- the old backend got copied over (the \nexecutable), then a special script was run (after upgrade, manually, by the \nuser) that tried to pull a dump using the old backend. It wasn't reliable. \nThe biggest problem is that I have no way of insuring that the old backend's \ndependencies stay satisfied -- 'satisfied' meaning that the old glibc stays \ninstalled for compatibility. Glibc, after all, is being upgraded out from \nunder us, and I can't stop it or even slow it down.\n\nAnd this could even be that most pathological of cases, where an a.out based \nsystem is being upgraded to an elf system without a.out kernel support. \n(point of note: PostgreSQL first appeared in official Red Hat Linux as \nversion 6.2.1, released with Red Hat Linux 5.0, which was ELF/glibc \n(contrasted to 3.0.3 which was a.out/libc4 and 4.x which was ELF/libc5) -- \nbut I don't know about the Debian situation and its pathology.)\n\n> If we move the old postmaster instead of copying then there will be a\n> lot less issues about running out of disk space :)\n\nThe disk space issue is with the ASCII dump file itself. Furthermore, what \nhappens if the dumpfile is greater than MAXFILESIZE? 
Again, wc isn't in the \npath (because it too is being upgraded out from under us -- nothing is left \nuntouched by the upgrade EXCEPT that install image RAMdisk, which has a very \nlimited set of tools (and a nonstandard kernel to boot)). Networking might or \nmight not be available. Unix domain sockets might or might not be available.\n\nBut the crux is that the OS upgrade environment is designed to do one thing \nand one thing alone -- get the OS installed and/or upgraded. General purpose \ntools just take up space on the install media, a place where space is at a \nvery high premium.\n\n> What we are facing here is a problem similar to trying upgrade all users\n> C programs when upgrading gcc. While it would be a good thing, nobody\n> actually tries to do it - we require them to have source code and to do\n> the \"upgrade\" manually.\n\nIs that directly comparable? If you have a lot of user functions written in C \nthen possibly. But I'm not interested in pathological cases -- I'm \ninterested in something that works OK for the majority of users. As long as \nit works properly for users who aren't sophisticated enough to need the \npathological cases handled, then it should be available. Besides, one can \nalways dump and restore if one wants to. And just how well does the \nvenerable dump/restore cycle work in the presence of these pathological \ncases?\n\nRed Hat Linux doesn't claim upgradability in the presence of highly \npathological cases (such as rogue software installed from non-RPM sources, or \nnon-Red Hat RPM's installed (particularly Ximian Gnome)). So you have to go \nthrough a process with that. But it is something you can recover from after \nthe upgrade is complete. That's what I'm after. I don't hold out hope for a \nfully automatic upgrade -- it would be nice, but we are too extensible for it \nto be practical. No -- I want tools to be able to recover my old data \nwithout the old version backend held-over from the previous install. 
And I \nthink this is a very reasonable expectation.\n\n> That's what I propose - dump all databases in pre-upgrade (if you are\n> concerned about disk usage, run it twice, first to | wc and then to a\n> file) and try to load in post-upgrade.\n\nThe wc utility isn't in the path in an OS install situation. The df utility \nisn't in the path, either. You can use python, though. :-) Not that that \nwould be a good thing in this context, however.\n\n> There will still be some things that are impossible to \"upgrade\" like\n> upgrading a.out \"C\" functions to elf format backend.\n\nIf a user is sophisticated enough to write such, that user is sophisticated \nenough to take responsibility for the upgrade. I'm not talking about users \nof that level here. But even then, it would be nice to at least get the data \nback out -- the function can then be rebuilt easily enough from source.\n\n> Perhaps we will be able to detect what we can actually upgrade and bail\n> out if we find something unupgradable ?\n\nAll is alleviated if I can run a utility after the fact to read in my old \ndata, without requiring the old packaged binaries. I don't have to \nwork around ANYTHING.\n\nAgain I say -- would such a data dumper not be useful in cases of system \ncatalog corruption that prevents a postmaster from starting? I'm talking \nabout a multipurpose utility here, not just something to make my life as RPM \nmaintainer easy.\n\nThe pg_fsck program is a good beginning to such a program.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 9 Jul 2002 19:09:19 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Tue, 2002-07-09 at 19:09, Lamar Owen wrote:\n> On Tuesday 09 July 2002 04:17 pm, Hannu Krosing wrote:\n> > On Tue, 2002-07-09 at 22:10, Lamar Owen wrote:\n> > > The pre-upgrade script is run in an environment that isn't robust enough\n> > > to handle that. 
What if you run out of disk space during the dump?\n> \n> > You can either check beforehand or abort and delete the offending\n> > dumpfile.\n> \n> And what if you have enough disk space to do the dump, but then that causes \n> the OS upgrade to abort because there wasn't enough space left to finish \n> upgrading (larger packages, perhaps)? The system's hosed, and it's our \n> fault.\n\nWhat normally happens when you have low amounts of free diskspace and\nattempt to upgrade the system?\n\nOn FreeBSD (portupgrade) it rolls back any changes it was attempting. I\ndon't know other systems to be able to say.\n\n\nPostgresql may require more diskspace to upgrade than most packages, \nbut if the tools cannot fail cleanly it is already a problem that needs\nto be addressed.\n\n", "msg_date": "09 Jul 2002 19:19:49 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Tuesday 09 July 2002 06:20 pm, Peter Eisentraut wrote:\n> The problem in an extensible system such as PostgreSQL is that virtually\n> every feature change is reflected by a change in the structure of the\n> system catalogs. It wouldn't be such a terribly big problem in theory to\n> make the backend handle these changes, but you'd end up with a huge bunch\n> of\n\n> if (dataVersion == 1)\n> do this;\n> else if (dataVersion == 2)\n> do that;\n\nOk, pardon me while I take a moment to braindump here. And Peter, you of all \npeople caused this braindump, so, 'hold on to your hat' :-).\n\nYou know, it occurs to me that we are indeed an Object RDBMS, but not in the \nconventional sense. Our whole system is object oriented -- we are extensible \nby the data and the methods (functions) that operate on that data. In fact, \nthe base system is simply a set of objects, all the way down to the base data \ntypes and their functions. 
So the problem jells down to this:\n\nHow does one upgrade the method portion of the object, bringing in new object \ndata if necessary, while leaving non-impacted data alone? Is there a way of \npartitioning the method-dependent object data from the non-object data? This \nwould require a complete system catalog redesign -- or would it? \n\nCan such a migration be object-oriented in itself, with the new version \ninheriting the old version and extending it.... (like I said, I'm \nbraindumping here -- this may not be at all coherent -- but my stream of \nconsciousness rarely is [coherent]). Can our core be written/rewritten in \nsuch a way as to be _completely_ object driven? Someone steeped a little \nbetter in object theory please take over now....\n\nOr am I totally out in left field here?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 9 Jul 2002 19:24:10 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Tuesday 09 July 2002 07:19 pm, Rod Taylor wrote:\n> On Tue, 2002-07-09 at 19:09, Lamar Owen wrote:\n> > And what if you have enough disk space to do the dump, but then that\n> > causes the OS upgrade to abort because there wasn't enough space left to\n> > finish upgrading (larger packages, perhaps)? The system's hosed, and\n> > it's our fault.\n\n> What normally happens when you have low amounts of free diskspace and\n> attempt to upgrade the system?\n\nAnaconda calculates (internally -- it's a Python program) the space required \nby the upgrade and won't let you proceed if you don't have enough space as \nreported by the RPM headers. It's impossible to know ahead of time how much \nspace will be required by an ASCII dump of the PostgreSQL database, and thus \nit cannot be taken into account by that algorithm.\n\nAs to failing cleanly, work is underway to allow RPM to rollback entire OS \nupgrades. 
But again the disk space requirement shoots through the ceiling if \nyou do this. Already RPM can roll back the transaction being done on the RPM \ndatabase (it's a db3 database system), but rolling back the filesystem is a \nlittle different.\n\nBut anaconda (which doesn't use the command line RPM anymore, it uses librpm \nto do its own RPM processing) checks beforehand how much space is needed and \nwon't let you overspend disk space during the system upgrade.\n\nThe command-line RPM will also do this, and won't let you upgrade RPM's if \nthere's not enough disk space, as calculated by reading the RPM header, which \nhas the amount of space the uncompressed package takes (calculated as part of \nthe RPM build process).\n\nBut if you throw in an unknown increase in space that anaconda/rpm cannot \ngrok, then you cause a situation.\n\nCan the ports system take into account the space required for a dumpfile?? :-)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 9 Jul 2002 19:34:31 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "[replying to myself]\nOn Tuesday 09 July 2002 07:34 pm, Lamar Owen wrote:\n> if you do this. Already RPM can roll back the transaction being done on the\n> RPM database (it's a db3 database system), but rolling back the filesystem\n> is a little different.\n\nAs a note of interest, RPM itself is backed by a database, db3. Prior to \nversion 4.x, it was backed by db1. Upgrading between the versions of RPM is \nsimply -- installing db3 and dependencies, upgrade RPM, and run 'rpm \n--rebuilddb' -- which works most of the time, but there are pathological \ncases.....\n\nYou now are running db3 instead of db1, if you didn't get bit by a \npathological case. 
:-)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 9 Jul 2002 19:40:55 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "\n> Can the ports system take into account the space required for a dumpfile?? :-)\n\nIt cheats by keeping a backup of the old version -- makes an installable\npackage out of the currently installed version. This is removed once\nthe package has been successfully upgraded (including dependencies).\n\nOn failure, it rolls back any packages (and those that depend on it) to\nprior versions it backed up and continues on trying to upgrade other\nparts of the system which don't depend on the rolled back portion.\n\nPortupgrade regularly upgrades part of the system if the ports tree is\nbroken, won't build (architecture issues), couldn't download XYZ item,\nor has run into other problems. PostgreSQL in this case simply wouldn't\nget upgraded with everything else -- reporting errors at the end. That\nsaid, Postgresql also may no longer work after the upgrade -- but I\nguess that's what the 'test' mode is used to prevent.\n\n\n\n\n\n\n", "msg_date": "09 Jul 2002 19:54:17 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Wed, 2002-07-10 at 00:09, Lamar Owen wrote:\n> On Tuesday 09 July 2002 04:17 pm, Hannu Krosing \n> > It is quite easy to both check for a running postmaster and start/stop\n> > one.\n> \n> Not when there is no ps in your path. Or pg_ctl for that matter. Nor is \n> there necessarily a /proc tree waiting to be exploited. We're talking the \n> anaconda environment, which is tailored for OS installation and upgrading. 
\n> You cannot start a postmaster; you cannot check to see if one is running -- \n> you can't even check to see if you're in the anaconda chroot or not, so that \n> you can use more tools if not in the OS installation mode. Again -- the \n> total OS upgrade path is a big part of this scenario, as far as the RPM's are \n> concerned. The Debian package may or may not have as grievous a structure.\n\nNo. I don't have anything like your problems to contend with!\n\nI can and do copy the old binaries and libraries in the pre-removal\nscript, which means I have a pretty good chance of accomplishing an\nupgrade without user intervention. If I had your problems I'd give up!\n\n\n", "msg_date": "10 Jul 2002 05:37:03 +0100", "msg_from": "Oliver Elphick <olly@lfix.co.uk>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "Oliver Elphick wrote:\n> \n> The current upgrade process for PostgreSQL is founded on the idea that\n> people build from source. With binary distributions, half the users\n> wouldn't know what to do with source; they expect (and are entitled to\n> expect) that an upgrade will progress without the need for significant\n> intervention on their part. PostgreSQL makes this really difficult for\n> the package maintainers, and this has a knock-on effect on the\n> reliability of the upgrade process and thus on PostgreSQL itself.\n\nI have to object here. The PostgreSQL upgrade process is based on\nthe idea of dump, install, initdb, restore. That has nothing to\ndo with building from source or installing from binaries.\n\nThe problem why this conflicts with these package managers is,\nbecause they work package per package, instead of looking at the\nbig picture. Who said you can replace package A before running\nthe pre-upgrade script of dependent package B? Somehow this looks\nlike a foreign key violation to me. Oh, I forgot, RI constraints\nare for documentation purposes only ... 
Greetings from the MySQL\ndocumentation ;-)\n\n\nJan\n\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being\nright. #\n# Let's break this rule - forgive\nme. #\n#==================================================\nJanWieck@Yahoo.com #\n", "msg_date": "Wed, 10 Jul 2002 03:24:04 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": true, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "Lamar Owen wrote:\n> \n> [replying to myself]\n> On Tuesday 09 July 2002 07:34 pm, Lamar Owen wrote:\n> > if you do this. Already RPM can roll back the transaction being done on the\n> > RPM database (it's a db3 database system), but rolling back the filesystem\n> > is a little different.\n> \n> As a note of interest, RPM itself is backed by a database, db3. Prior to\n> version 4.x, it was backed by db1. Upgrading between the versions of RPM is\n> simply -- installing db3 and dependencies, upgrade RPM, and run 'rpm\n> --rebuilddb' -- which works most of the time, but there are pathological\n> cases.....\n> \n> You now are running db3 instead of db1, if you didn't get bit by a\n> pathological case. :-)\n\nAnd how big/complex is the db1/3 system catalog we're talking\nabout exactly? How many rewrite rules have to be converted into\nthe new parsetree format during an RPM upgrade? \n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being\nright. #\n# Let's break this rule - forgive\nme. 
#\n#==================================================\nJanWieck@Yahoo.com #\n", "msg_date": "Wed, 10 Jul 2002 03:42:32 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": true, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Wed, 2002-07-10 at 01:09, Lamar Owen wrote:\n> On Tuesday 09 July 2002 04:17 pm, Hannu Krosing wrote:\n> > On Tue, 2002-07-09 at 22:10, Lamar Owen wrote:\n> > > The pre-upgrade script is run in an environment that isn't robust enough\n> > > to handle that. What if you run out of disk space during the dump?\n> \n> > You can either check beforehand or abort and delete the offending\n> > dumpfile.\n> \n...\n> > That's what I propose - dump all databases in pre-upgrade (if you are\n> > concerned about disk usage, run it twice, first to | wc and then to a\n> > file) and try to load in post-upgrade.\n> \n> The wc utility isn't in the path in an OS install situation. The df utility \n> isn't in the path, either. You can use python, though. :-) Not that that \n> would be a good thing in this context, however.\n\nWhy not ? \n\nThe following is wc in python\n\n#!/usr/bin/python\nimport sys, string\nbytes,words,lines = 0,0,0\nwhile 1:\n s = sys.stdin.readline()\n if not s: break\n bytes = bytes + len(s)\n words = words + len(string.split(s))\n lines = lines + 1\nsys.stdout.write('%7d %7d %7d\\n' % (lines,words,bytes))\n\n\nAnd I have written custom postgres table dumpers in python without too\nmuch effort (except reverse-engineering the page structure ;) for both\n6.x and 7.x database tables, so we could actually use python here too. 
\n\nThe basic user_data extractor part is done in about 50 lines - I did not\nneed much else as I wrote custom datatype converters for the specific\ntable I needed.\n\nThe generic part ( conversions and determining if tuples are live)\nshould also not bee too difficult.\n\nThe only part I can see right away as hard to re-implement in python is\nTOAST.\n\nStill I guess that the basic db_dump.py app will be somewhere between\n500 and 5000 lines long, with possibly the toast compression module done\nas c-language module modtoast.so\n\n\nThe only problem with this approach is that it needs maintaining\nseparately from postgres proper. OTOH, this may also be a good thing, as\na separate reimplementation is only known working guarantee that we\nactually know what our page format is ;) as the docs have always been\nwrong about this.\n\n> Again I say -- would such a data dumper not be useful in cases of system \n> catalog corruption that prevents a postmaster from starting? I'm talking \n> about a multipurpose utility here, not just something to make my life as RPM \n> maintainer easy.\n> \n> The pg_fsck program is a good beginning to such a program.\n\nWhere can I fing pg_fsck ? \n\nIt is not in recent CVS snapshots.\n\n-------------\nHannu\n\n", "msg_date": "10 Jul 2002 15:11:34 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Wednesday 10 July 2002 03:24 am, Jan Wieck wrote:\n> Oliver Elphick wrote:\n> > The current upgrade process for PostgreSQL is founded on the idea that\n> > people build from source. With binary distributions, half the users\n> > wouldn't know what to do with source; they expect (and are entitled to\n\n> I have to object here. The PostgreSQL upgrade process is based on\n> the idea of dump, install, initdb, restore. That has nothing to\n> do with building from source or installing from binaries.\n\nLet me interject a minor point here. 
I recall upgrade cycles where I had to \ninstall a newer pg_dump in order to get my data out of the old system due to \nbugs in the prior pg_dump. Getting two versions of PostgreSQL to coexist \npeacefully in a binary packaged environment is a completely different problem \nthan the typical 'from source' installation path -- which almost implies two \nversions available concurrently. I believe this is the artifact Oliver was \nalluding to. \n\nI personally have not had the luxury of having two complete installations \navailable at one instant during RPM upgrades. Nor will any users of \nprepackaged binaries.\n\n> The problem why this conflicts with these package managers is,\n> because they work package per package, instead of looking at the\n> big picture. Who said you can replace package A before running\n> the pre-upgrade script of dependent package B?\n\nHow does this create the problem? The postgresql-server subpackages of two \nversions are 'Package A' above. There is no package B. \n\nDefine 'the big picture' for all possible permutations of installed packages, \nplease.\n\n> Somehow this looks\n> like a foreign key violation to me. Oh, I forgot, RI constraints\n> are for documentation purposes only ... Greetings from the MySQL\n> documentation ;-)\n\nIs sarcasm really necessary?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 10 Jul 2002 10:12:11 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "[cc: trimmed]\n\nOn Wednesday 10 July 2002 03:42 am, Jan Wieck wrote:\n> Lamar Owen wrote:\n> > As a note of interest, RPM itself is backed by a database, db3. Prior to\n> > version 4.x, it was backed by db1. 
Upgrading between the versions of RPM\n> > is simply -- installing db3 and dependencies, upgrade RPM, and run 'rpm\n> > --rebuilddb' -- which works most of the time, but there are pathological\n> > cases.....\n\n> > You now are running db3 instead of db1, if you didn't get bit by a\n> > pathological case. :-)\n\n> And how big/complex is the db1/3 system catalog we're talking\n> about exactly? \n\nWell, on a fully installed system it's about 44MB. The RPM database isn't \nterribly complicated, but it's not trivial, either.\n\nHowever, unless I am mistaken the generic db3 situation is easy migration.\n\n>How many rewrite rules have to be converted into\n> the new parsetree format during an RPM upgrade?\n\nDon't know if anything comparable exists.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 10 Jul 2002 10:15:44 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Wednesday 10 July 2002 09:11 am, Hannu Krosing wrote:\n> On Wed, 2002-07-10 at 01:09, Lamar Owen wrote:\n> > The wc utility isn't in the path in an OS install situation. The df\n> > utility isn't in the path, either. You can use python, though. :-) Not\n> > that that would be a good thing in this context, however.\n\n> Why not ?\n\n> The following is wc in python\n\n[snip]\n\n> And I have written custom postgres table dumpers in python without too
OTOH, this may also be a good thing, as\n> a separate reimplementation is only known working guarantee that we\n> actually know what our page format is ;) as the docs have always been\n> wrong about this.\n\nWell, I could deal with that.\n\n> > The pg_fsck program is a good beginning to such a program.\n\n> Where can I fing pg_fsck ?\n\n[looking in my bookmarks....]\nhttp://svana.org/kleptog/pgsql/pgfsck.html\n\n> It is not in recent CVS snapshots.\n\nMartijn hasn't submitted it yet (AFAICT) for inclusion. I believe if nothing \nelse it should be in contrib.\n\nContrary to some people's apparent perception, I'm actually fairly flexible on \nthis as long as the basic points can be dealt with.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 10 Jul 2002 10:20:52 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Wed, 2002-07-10 at 19:56, Lamar Owen wrote:\n> On Wednesday 10 July 2002 11:48 am, Hannu Krosing wrote:\n> > On Wed, 2002-07-10 at 16:20, Lamar Owen wrote:\n> > > On Wednesday 10 July 2002 09:11 am, Hannu Krosing wrote:\n> > > > And I have written custom postgres table dumpers in python without too\n> > > > much effort (except reverse-engineering the page structure ;) for both\n> > > > 6.x and 7.x database tables, so we could actually use python here too.\n> \n> > > I'm willing to look into this. However, the dump still has to be pulled\n> > > with a standalone backend -- no networking availability can be assumed.\n> \n> > Actually it works on raw table file ;)\n> \n> > the script is meant for quick and dirty resque operations, and requires\n> > that one writes their own data-field extractor code. I have used it\n> > mainly to resurrect accidentally deleted data.\n> \n> > it is for 7.x style pagefile layout\n> \n> Hmmm. This is interesting stuff. 
I'll have to take a look at it once I'm \n> finished re-learning Fortran 77 for a project I'm doing (34MB of DEC Fortran \n> source that g77 doesn't like very well) for work. I have a hard time \n> switching language gears. Particularly the Fortran 77 -> Python gear... :-) \n> Although at least the fixed-form paradigm stays there in the transition. :-) \n> It's been a very long time since I've done Fortran of this complexity. \n> Actually, I've never done Fortran of _this_ complexity -- this is serious \n> number-crunching stuff that uses all manners of higher math (tensors, even). \n> There is no direct C equivalent to some of the stuff this code is doing -- \n> which is part of the reason g77 is having problems. But I digress.\n\nOnce you understand what the code is doing you can port it to python\nusing Numerical Python (http://www.pfdubois.com/numpy/) and/or\nScientific Python (http://starship.python.net/~hinsen/ScientificPython/)\nto get a head-start in total conversion to python ;)\n\nYou may even try using F2PY - Fortran to Python Interface Generator \n(http://cens.ioc.ee/projects/f2py2e/).\n \n> Getting the %pre scriptlet to use a non-sh interpreter is undocumented, but \n> not hard. :-) (actually, I stumbled upon it by accident one time -- that \n> time it was a bug....) 
Now to see if it can be done consistently in both the \n> anaconda chroot as well as a standard rpm command line invocation.\n\nActually, if the python dumper can be made to work somewhat reliably it\ncan be run after install/upgrade without too much trouble.\n\n--------------\nHannu\n\n", "msg_date": "10 Jul 2002 19:26:55 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Wednesday 10 July 2002 11:48 am, Hannu Krosing wrote:\n> On Wed, 2002-07-10 at 16:20, Lamar Owen wrote:\n> > On Wednesday 10 July 2002 09:11 am, Hannu Krosing wrote:\n> > > And I have written custom postgres table dumpers in python without too\n> > > much effort (except reverse-engineering the page structure ;) for both\n> > > 6.x and 7.x database tables, so we could actually use python here too.\n\n> > I'm willing to look into this. However, the dump still has to be pulled\n> > with a standalone backend -- no networking availability can be assumed.\n\n> Actually it works on raw table file ;)\n\n> the script is meant for quick and dirty rescue operations, and requires\n> that one writes their own data-field extractor code. I have used it\n> mainly to resurrect accidentally deleted data.\n\n> it is for 7.x style pagefile layout\n\nHmmm. This is interesting stuff. I'll have to take a look at it once I'm \nfinished re-learning Fortran 77 for a project I'm doing (34MB of DEC Fortran \nsource that g77 doesn't like very well) for work. I have a hard time \nswitching language gears. Particularly the Fortran 77 -> Python gear... :-) \nAlthough at least the fixed-form paradigm stays there in the transition. :-) \nIt's been a very long time since I've done Fortran of this complexity. \nActually, I've never done Fortran of _this_ complexity -- this is serious \nnumber-crunching stuff that uses all manners of higher math (tensors, even). 
\nThere is no direct C equivalent to some of the stuff this code is doing -- \nwhich is part of the reason g77 is having problems. But I digress.\n\nGetting the %pre scriptlet to use a non-sh interpreter is undocumented, but \nnot hard. :-) (actually, I stumbled upon it by accident one time -- that \ntime it was a bug....) Now to see if it can be done consistently in both the \nanaconda chroot as well as a standard rpm command line invocation.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 10 Jul 2002 10:56:06 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Wed, 2002-07-10 at 16:20, Lamar Owen wrote:\n> On Wednesday 10 July 2002 09:11 am, Hannu Krosing wrote:\n> > On Wed, 2002-07-10 at 01:09, Lamar Owen wrote:\n> > > The wc utility isn't in the path in an OS install situation. The df\n> > > utility isn't in the path, either. You can use python, though. :-) Not\n> > > that that would be a good thing in this context, however.\n> \n> > Why not ?\n> \n> > The following is wc in python\n> \n> [snip]\n> \n> > And I have written custom postgres table dumpers in python without too\n> > much effort (except reverse-engineering the page structure ;) for both\n> > 6.x and 7.x database tables, so we could actually use python here too.\n> \n> I'm willing to look into this. However, the dump still has to be pulled with \n> a standalone backend -- no networking availability can be assumed.\n\nActually it works on raw table file ;)\n\nI attach code that dumps data from page file for table of 4 ints all NOT\nNULL, like\n\ncreate table fourints(\n i1 int not null,\n i2 int not null,\n i3 int not null,\n i4 int not null\n);\n\nthe script is meant for quick and dirty rescue operations, and requires\nthat one writes their own data-field extractor code. 
I have used it\nmainly to resurrect accidentally deleted data.\n\nit is for 7.x style pagefile layout\n\n-------------------\nHannu", "msg_date": "10 Jul 2002 17:48:21 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Wed, 2002-07-10 at 16:15, Lamar Owen wrote:\n> [cc: trimmed]\n> \n> On Wednesday 10 July 2002 03:42 am, Jan Wieck wrote:\n> >How many rewrite rules have to be converted into\n> > the new parsetree format during an RPM upgrade?\n> \n> Don't know if anything comparable exists.\n>\n\nIMHO the best solution here is to also keep the source code for anything\nthat is usually kept as parse trees or somesuch so that one can get at\nit without a full backend running.\n\n-------------\nHannu\n\n", "msg_date": "10 Jul 2002 17:54:15 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Wed, 2002-07-10 at 16:20, Lamar Owen wrote:\n> On Wednesday 10 July 2002 09:11 am, Hannu Krosing wrote:\n> \n> > The only problem with this approach is that it needs maintaining\n> > separately from postgres proper. OTOH, this may also be a good thing, as\n> > a separate reimplementation is the only known working guarantee that we\n> > actually know what our page format is ;) as the docs have always been\n> > wrong about this.\n> \n> Well, I could deal with that.\n\nalso we must be aware that the page format is most likely\nplatform-dependent \n\n-----------\nHannu\n", "msg_date": "10 Jul 2002 17:56:27 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Wednesday 10 July 2002 10:26 am, Hannu Krosing wrote:\n> Actually, if the python dumper can be made to work somewhat reliably it\n> can be run after install/upgrade without too much trouble.\n\nYes, yes, of course. My bad -- brain needs oil change... 
:-)\n\nThanks for the links to the python stuff, particularly the fortran to python \ntranslator.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 10 Jul 2002 13:14:22 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "Lamar Owen wrote:\n\n> On Wednesday 10 July 2002 03:24 am, Jan Wieck wrote:\n> \n> > The problem why this conflicts with these package managers is,\n> > because they work package per package, instead of looking at the\n> > big picture. Who said you can replace package A before running\n> > the pre-upgrade script of dependent package B?\n> \n> How does this create the problem? The postgresql-server subpackages of two\n> versions are 'Package A' above. There is no package B.\n\nSomeone was talking about doing a complete OS upgrade and updating\nsomething the new PG release (that is scheduled for update later) needs\nbut that makes the current old release not functional any more. Maybe I\nmisunderstood something.\n\n> \n> Define 'the big picture' for all possible permutations of installed packages,\n> please.\n\nGot me on that. Sure, with all the possible permutations there is\nalways an unsolvable dependency. What I think is, that knowing all\npackages that are installed, that are to be added/removed/updated, it\nwould be possible to run pre-install, pre-update, pre-remove scripts for\nall packages first. They have to clean up, save info and the like (dump\nin our case, maybe install a new version of pg_dump runnable in the old\nenvironment), but NOT disable functionality of any package. Second\ninstall all binaries. Third run a second round of scripts for all\npackages, finalizing each package's action.\n\n> \n> > Somehow this looks\n> > like a foreign key violation to me. Oh, I forgot, RI constraints\n> > are for documentation purposes only ... 
Greetings from the MySQL\n> documentation ;-)\n> \n> Is sarcasm really necessary?\n\nReally really! I am dependent on it. If I don't get my daily dose of\nsarcasm, I become extremely ironic or sometimes cynical.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Wed, 10 Jul 2002 16:42:54 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": true, "msg_subject": "Re: (A) native Windows port" }, { "msg_contents": "On Wednesday 10 July 2002 04:42 pm, Jan Wieck wrote:\n> Lamar Owen wrote:\n> > On Wednesday 10 July 2002 03:24 am, Jan Wieck wrote:\n> > > The problem why this conflicts with these package managers is,\n> > > because they work package per package, instead of looking at the\n> > > big picture. Who said you can replace package A before running\n> > > the pre-upgrade script of dependent package B?\n\n> > The postgresql-server subpackages of\n> > two versions are 'Package A' above. There is no package B.\n\n> Someone was talking about doing a complete OS upgrade and updating\n> something the new PG release (that is scheduled for update later) needs\n> but that makes the current old release not functional any more. Maybe I\n> misunderstood something.\n\nYes, you misunderstood. The whole release is upgraded, and it's the database \nitself that breaks. How is the package manager supposed to know you had to \nmake a backup copy of an executable in order to cater to the broken upgrade \ncycle? (Is that sarcastic enough :-)... being that you like your daily dose \n:-)).\n\nThe backup executable no longer 'belongs' to any package as far as the rpm \ndatabase is concerned. 
\n\nSuppose the upgrade in question was from PostgreSQL 7.0.3-2 to 7.2.1-5 (the \n'-2' and '-5' are the release numbers of that particular RPMset -- a version \nnumber for the package independent of the upstream program). The backend \nitself belongs to package 'postgresql-server' in both versions. After \nchecking that postinstallation dependencies will be satisfied by its actions, \nthe upgrade proceeds to install postgresql-server-7.2.1-5, which has a %pre \nscriptlet that makes a copy of /usr/bin/postgres and links, along with libpq \nand pg_dump for that version, into /usr/lib/pgsql/backups (IIRC -- it's been \na long day and I haven't checked the accuracy of that detail). \n\nSaid %pre scriptlet runs, then rpm unpacks its payload, a cpio archive \ncontaining the files of the package. /usr/bin/postgres is one of the files \noverwritten in this process. There could be trigger scripts installed by \nother packages run at this time. Then the %post scriptlet is run, which in \npractice creates a postgres user and group, chowns a few directories, and \nruns ldconfig to get any new shared libraries. \n\nNow the postgresql-server-7.0.3-2 package gets uninstalled. First, the \n%preuninst scriptlet runs. Note that a conditional is available to \ndistinguish between an upgrade 'uninstall' and a real uninstall. Then any \nregistered triggers are run. Then any non-overwritten files are removed, and \nthe database entries for 7.0.3-2 are removed. Finally, the %postuninst \nscriptlet runs.\n\nYou now have the new package in place.\n\nDuring an OS upgrade the dependencies are finagled in such a way that the \n'satisfied dependencies' for postgresql-server-7.0.3-2, which is going to be \nreplaced by postgresql-server-7.2.1-5's, won't be required any more. 
Unless \nanother package requires the various shared libraries the 7.0.3-2 backend \nrequired, those shared libraries may get 'upgraded' out of the way -- the \nscriptlets have no way of communicating to the upgrade process 'hey! hold on \nto the dependency information for postgresql-server-7.0.3-2, even though that \npackage is no longer marked as being installed.'\n\nWhew.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 10 Jul 2002 18:08:51 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: (A) native Windows port" } ]
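The scriptlet ordering Lamar walks through above is the crux of the problem: the new package's payload replaces /usr/bin/postgres before any of the old package's uninstall scriptlets run, so the new package's %pre is the only phase that still sees the old backend. A toy simulation (purely illustrative -- the phase names, paths, and package contents here are a simplified model, not rpm's real machinery) demonstrates why the backup has to happen in step 1:

```python
def rpm_upgrade(fs, old_pkg, new_pkg):
    """Simplified model of rpm's upgrade phase ordering."""
    new_pkg["pre"](fs)            # 1. new %pre: old files still intact
    fs.update(new_pkg["files"])   # 2. unpack payload: overwrites old binary
    new_pkg["post"](fs)           # 3. new %post
    old_pkg["preun"](fs)          # 4. old pre-uninstall: backend already gone
    for path, content in old_pkg["files"].items():
        if fs.get(path) == content:
            del fs[path]          # 5. remove only files not overwritten above
    old_pkg["postun"](fs)         # 6. old post-uninstall

def backup_old_backend(fs):
    # What the new package's %pre does: squirrel away the old backend
    # (backup path is illustrative) so a later dump/restore is possible.
    fs["/usr/lib/pgsql/backups/postgres"] = fs["/usr/bin/postgres"]

fs = {"/usr/bin/postgres": "postgres-7.0.3"}
old_pkg = {"files": {"/usr/bin/postgres": "postgres-7.0.3"},
           "preun": lambda fs: None, "postun": lambda fs: None}
new_pkg = {"files": {"/usr/bin/postgres": "postgres-7.2.1"},
           "pre": backup_old_backend, "post": lambda fs: None}

rpm_upgrade(fs, old_pkg, new_pkg)
assert fs["/usr/bin/postgres"] == "postgres-7.2.1"
assert fs["/usr/lib/pgsql/backups/postgres"] == "postgres-7.0.3"
```

Had the backup been attempted in the old package's pre-uninstall scriptlet instead (step 4), it would have copied the already-replaced 7.2.1 binary.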
[ { "msg_contents": "I have been reviewing Rod Taylor's pg_depend patch, which among other\nthings adds SQL-compliant DROP RESTRICT/CASCADE syntax and prevents\nyou from dropping things that other things depend on, as in ye olde\nnovice error of dropping a function used by a trigger.\n\nAs submitted, the patch gives elog(ERROR) as soon as it finds any\ndependency, if you've specified (or defaulted to) DROP RESTRICT\nbehavior. This means you only find out about one randomly-chosen\ndependency of the target object, and have no easy way to know what\nelse might get dropped if you say DROP CASCADE.\n\nI am thinking of changing the behavior so that it reports *all* the\ndependencies via NOTICEs before finally failing. So instead of this:\n\nDROP TYPE widget RESTRICT; -- fail\nERROR: Drop Restricted as Operator <% Depends on Type widget\n\nyou might see this:\n\nDROP TYPE widget RESTRICT; -- fail\nNOTICE: operator <% depends on type widget\nNOTICE: operator >% depends on type widget\nNOTICE: operator >=% depends on type widget\nERROR: Cannot drop type widget because other objects depend on it\n\tUse DROP ... CASCADE to drop the dependent objects too\n\nAny objections?\n\nAlso, would it be a good idea to make it *recursively* report all\nthe indirect as well as direct dependencies? The output might get\na little bulky, but if you really want to know what DROP CASCADE\nwill get you into, seems like that is the only way to know.\n\nTo work recursively without getting into an infinite loop in the case of\ncircular dependencies, we'd need to make DROP actually drop each object\nand CommandCounterIncrement, even in the RESTRICT case; it would rely on\nrolling back the entire transaction when we finally elog(ERROR). This\nmight make things a tad slow, too, for something with many dependencies\n... 
but I don't think we need to worry about making an error case fast.\n\nComments?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2002 11:18:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "User-friendliness for DROP RESTRICT/CASCADE" }, { "msg_contents": "Tom Lane wrote:\n> \n> DROP TYPE widget RESTRICT; -- fail\n> NOTICE: operator <% depends on type widget\n> NOTICE: operator >% depends on type widget\n> NOTICE: operator >=% depends on type widget\n> ERROR: Cannot drop type widget because other objects depend on it\n> \tUse DROP ... CASCADE to drop the dependent objects too\n> \n> Any objections?\n> \n> Also, would it be a good idea to make it *recursively* report all\n> the indirect as well as direct dependencies? The output might get\n> a little bulky, but if you really want to know what DROP CASCADE\n> will get you into, seems like that is the only way to know.\n> \n> To work recursively without getting into an infinite loop in the case of\n> circular dependencies, we'd need to make DROP actually drop each object\n> and CommandCounterIncrement, even in the RESTRICT case; it would rely on\n> rolling back the entire transaction when we finally elog(ERROR). This\n> might make things a tad slow, too, for something with many dependencies\n> ... but I don't think we need to worry about making an error case fast.\n> \n> Comments?\n\nIt would be nice if it is easy to do.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 26 Jun 2002 14:06:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: User-friendliness for DROP RESTRICT/CASCADE" }, { "msg_contents": "Tom Lane wrote:\n> Also, would it be a good idea to make it *recursively* report all\n> the indirect as well as direct dependencies? The output might get\n> a little bulky, but if you really want to know what DROP CASCADE\n> will get you into, seems like that is the only way to know.\n> \n> To work recursively without getting into an infinite loop in the case of\n> circular dependencies, we'd need to make DROP actually drop each object\n> and CommandCounterIncrement, even in the RESTRICT case; it would rely on\n> rolling back the entire transaction when we finally elog(ERROR). This\n> might make things a tad slow, too, for something with many dependencies\n> ... but I don't think we need to worry about making an error case fast.\n> \n> Comments?\n> \n\nSeems like the best approach to me. 
There's nothing more annoying than \nfixing errors one at a time, just to see what the next one is.\n\nIt would be nice if the recursive dependency checking function was \navailable as an end user function too, so you could analyze dependencies \nbefore even trying to drop something, or even just to understand a \ndatabase schema you've inherited from someone else.\n\nJoe\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 11:10:25 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: User-friendliness for DROP RESTRICT/CASCADE" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> It would be nice if the recursive dependency checking function was \n> available as an end user function too, so you could analyze dependencies \n> before even trying to drop something, or even just to understand a \n> database schema you've inherited from someone else.\n\nIt'd be a pretty trivial exercise to build something that looks at the\npg_depend entries and generates whatever kind of display you want.\n\nDavid Kaplan reminded me that there is another UI issue to be\nconsidered: when we *are* doing a DROP CASCADE, should the dropped\ndependent objects be reported somehow? 
As it stands, Rod's patch emits\nelog(NOTICE) messages in this case, but I am wondering whether that will\nbe seen as useful or merely annoying chatter.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2002 18:30:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: User-friendliness for DROP RESTRICT/CASCADE " }, { "msg_contents": "On Wed, 2002-06-26 at 22:30, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> > It would be nice if the recursive dependency checking function was \n> > available as an end user function too, so you could analyze dependencies \n> > before even trying to drop something, or even just to understand a \n> > database schema you've inherited from someone else.\n> \n> It'd be a pretty trivial exercise to build something that looks at the\n> pg_depend entries and generates whatever kind of display you want.\n> \n> David Kaplan reminded me that there is another UI issue to be\n> considered: when we *are* doing a DROP CASCADE, should the dropped\n> dependent objects be reported somehow? As it stands, Rod's patch emits\n> elog(NOTICE) messages in this case, but I am wondering whether that will\n> be seen as useful or merely annoying chatter.\n\nIf the notices about implicit drops (triggers on tables, etc.) 
have been\nfound to be useful in both creation and destruction then I would assume\nthat this information would be wanted as well.\n\nIf the above information has not been found to be useful in the past,\nthen I would expect it to continue as chatter.\n\nPersonally, I find it to be chatter and turn off NOTICES in general, but\nbelieve it to be consistent with similar messages in the past.\n\n\n\n", "msg_date": "26 Jun 2002 23:51:44 +0000", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: User-friendliness for DROP RESTRICT/CASCADE" }, { "msg_contents": "Rod Taylor wrote:\n> > David Kaplan reminded me that there is another UI issue to be\n> > considered: when we *are* doing a DROP CASCADE, should the dropped\n> > dependent objects be reported somehow? As it stands, Rod's patch emits\n> > elog(NOTICE) messages in this case, but I am wondering whether that will\n> > be seen as useful or merely annoying chatter.\n> \n> If the notices about implicit drops (triggers on tables, etc.) have been\n> found to be useful in both creation and destruction then I would assume\n> that this information would be wanted as well.\n> \n> If the above information has not been found to be useful in the past,\n> then I would expect it to continue as chatter.\n> \n> Personally, I find it to be chatter and turn off NOTICES in general, but\n> believe it to be consistent with similar messages in the past.\n\nAgreed. If you issue a single DROP that hits other objects, I think\npeople would want to see that, but then again, if you drop the table,\nyou would expect triggers and sequences to disappear with no mention.\n\nTough one.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 26 Jun 2002 21:50:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: User-friendliness for DROP RESTRICT/CASCADE" }, { "msg_contents": "> DROP TYPE widget RESTRICT; -- fail\n> NOTICE: operator <% depends on type widget\n> NOTICE: operator >% depends on type widget\n> NOTICE: operator >=% depends on type widget\n> ERROR: Cannot drop type widget because other objects depend on it\n> Use DROP ... CASCADE to drop the dependent objects too\n> \n> Any objections?\n\nThat looks pretty sweet to me...\n\nChris\n\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 11:42:03 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: User-friendliness for DROP RESTRICT/CASCADE" } ]
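The cycle problem Tom raises in the thread above -- recursing through the dependency graph without looping forever on circular dependencies -- can be sketched outside the backend. Here pg_depend is stood in for by a plain dict mapping each object to the objects that depend on it (no real catalog access), and marking an object dropped *before* visiting its dependents plays the role Tom describes for actually dropping each object plus CommandCounterIncrement, with the whole transaction rolled back on error in the RESTRICT case:

```python
def collect_cascade(depend, target):
    """Return, in drop order, the objects a DROP ... CASCADE would take down.

    `depend` maps an object to the list of objects that depend on it
    (an in-memory stand-in for scanning pg_depend).  Marking an object
    as already dropped before recursing into its dependents is what
    keeps a circular dependency from recursing forever.
    """
    dropped = []
    seen = set()

    def drop(obj):
        if obj in seen:          # already dropped earlier in this pass
            return
        seen.add(obj)            # "drop" first ...
        for dependent in depend.get(obj, ()):
            drop(dependent)      # ... then cascade to whatever depends on it
        dropped.append(obj)      # dependents come out before the object itself

    drop(target)
    return dropped

# A circular dependency: widget's operators depend on the type, and a
# (hypothetical) view and one operator depend on each other.
depend = {
    "type widget": ["operator <%", "operator >%"],
    "operator <%": ["view v1"],
    "view v1": ["operator <%"],   # the cycle
}
order = collect_cascade(depend, "type widget")
assert set(order) == {"type widget", "operator <%", "operator >%", "view v1"}
```

The same traversal, run in report-only mode, would also serve as the end-user dependency viewer Joe asks for: walk pg_depend and print each object instead of dropping it.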
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Matthew T. O'Connor [mailto:matthew@zeut.net] \n> Sent: 26 June 2002 14:44\n> To: Christopher Kings-Lynne; josh@agliodbs.com; Rod Taylor; \n> Bruce Momjian\n> Cc: James Hubbard; Dave Cramer; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Democracy and organisation : let's \n> make a revolution\n> \n> \n> > TOra uses QT and is cool. Unfortunately Windows version \n> costs money. \n> > It is utterly, totally awesome though. Don't know how good its \n> > Postgres support is working at the moment, tho.\n> \n> Is that true? There is QT Free for windows. It's not open \n> sourced at all but \n> is free as in beer.\n\nI just looked at that 5 minutes ago. The licence is truly horrible - in\nshort, if I were to rewrite pgAdmin using it, I would *not* be allowed\nto *use* (or develop) pgAdmin at work under the QT Free licence.\n\nThe free X version is another licence again but is not Windows :-(\n\nI'm currently playing with wxWindows which claims to support Windows, OS\nX, and *nix with more to come (Beos???).\n\nRegards, Dave.\n\n\n", "msg_date": "Wed, 26 Jun 2002 16:49:04 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Democracy and organisation : let's make a revolution" } ]
[ { "msg_contents": "I find myself repeatedly arguing for partial solutions, and having to\nstruggle with other developers who feel these solutions are hacks.\nLet me explain why I like these hacks.\n\nWhen we have a feature that users want, often we can't get it\nimplemented promptly in a clean way. It can take several releases for\nsomeone to focus on the problem, re-factor the code, and get the feature\nin there properly.\n\nWhile we are waiting months, often years, for a feature to be properly\nimplemented, our users have no solution except to wait for us to\nimplement it.\n\nAt the same time, there often is a way to implement the feature\npartially, often unattractively, so that users can use the feature until\nwe get around to implementing it properly.\n\nWhen I think back on many of my code contributions, a lot of them were\nsuch hacks: temp tables, optimizer statistics, indexed LIKE in gram.y. \nMany people hated that last one, and I got all sorts of grief because it\nwas done in such an ugly way, but it was used for years until the\nfeature was properly implemented in the optimizer. Same for temp tables\nand optimizer statistics. That code is gone now, and that is fine. It\nwas easy to rip out once a proper solution was made, but it served its\npurpose.\n\nSo, when we review patches, we shouldn't be turning up our noses at\nimperfect solutions if the solution meets the needs of our users. We had\nDROP COLUMN and NO CREATE TABLE solutions suggested many years ago, and\nbecause the solutions weren't perfect, we don't have those features, and\nusers who needed those features have had to move to other databases. 
\nHow many users have we lost just on those two features?\n\nSure, now we will have schemas in 7.3, but we could have given users _a_\nsolution years ago; not a perfect solution, but enough of a solution to\nkeep them using PostgreSQL until we implemented it properly.\n\nMaybe this is marketing, but when people repeatedly ask for a feature,\nand we can implement it with a partial solution now, I think we should\ndo it, rather than saying \"Oh, we can't do that properly so we will just\ndo nothing.\"\n\nIf we want to grow PostgreSQL, we need to meet users' needs, even if that\nrequires stomaching some hack solutions from time to time. That's why I\nlike partial solutions.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 26 Jun 2002 14:21:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Why I like partial solutions" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> So, when we review patches, we shouldn't be turning up our noses at\n> imperfect solutions if the solution meets the needs of our users.\n\nI think our standards have gone up over the years, and properly so.\nThe fact that we put in hacks some years ago doesn't mean that we\nstill should.\n\nI don't really mind hacks^H^H^Hpartial solutions that are clean subsets\nof the functionality we want to have eventually. 
I do object to hacks\nthat will create a backwards-compatibility problem when we want to do it\nright.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2002 19:16:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why I like partial solutions " }, { "msg_contents": "On Wed, 26 Jun 2002, Tom Lane wrote:\n\n> I don't really mind hacks^H^H^Hpartial solutions that are clean subsets\n> of the functionality we want to have eventually. I do object to hacks\n> that will create a backwards-compatibility problem when we want to do it\n> right.\n\nIf the backwards compatibility problem is just related to stuff from the\nusers (i.e., this keyword works in this release, but will not work in\nfuture releases), I don't see the problem. Just document it and move on.\nThe user can either use it and deal with the compatibility pain later,\nor not use it and be just where he would be if the hack were never\nimplemented in the first place.\n\nOtherwise you only have to leave the feature in until the next major\nrelease, anyway, right? Because for major releases it's expected that\nyou will have to dump and restore your database anyway, hmm?\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 19:45:23 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Why I like partial solutions " }, { "msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > So, when we review patches, we shouldn't be turning up our noses at\n> > imperfect solutions if the solution meets the needs of our users.\n> \n> I think our standards have gone up over the years, and properly so.\n> The fact that we put in hacks some years ago doesn't mean that we\n> still should.\n> \n> I don't really mind hacks^H^H^Hpartial solutions that are clean subsets\n> of the functionality we want to have eventually. I do object to hacks\n> that will create a backwards-compatibility problem when we want to do it\n> right.\n\nI absolutely agree on that. If we at some point want to have a given\nfeature, we need to avoid backward compatibility problems.\n\nAs for features that are independent, don't break anything, just\nadd-ons that can happily swim around in contrib (but stay out of the\ndeep water), we might want to become a bit more relaxed again.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Thu, 27 Jun 2002 10:13:06 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Why I like partial solutions" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Josh Berkus [mailto:josh@agliodbs.com]\n> Sent: Wednesday, June 26, 2002 9:18 AM\n> To: Curt Sampson; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Democracy and organisation : let's make a\n> \n> \n> Curt,\n> \n> You do point out some good areas in which PostgreSQL needs to improve\n> if we're going to go after the MS SQL market. The rest of this\n> e-mail, though, is a refutation of your comparison.\n> \n> As a professional MS SQL Server 7.0 manager, I have to disagree.\n> However, I have not used MS SQL 2000 extensively, so it's possible\n> that some of these issues have been dealt with by MS in the version\n> upgrade.\n> \n> > Uh...\"no way.\" I've found MS SQL Server is consistently faster when\n> > it\n> > comes to the crunch, due to things like writing a heck of a lot less\n> > to the log files, significantly less table overhead, having \n> clustered\n> > indexes, and so on. \n> \n> Up to about a million records. For some reason, when MS SQL \n> Server 7.0\n> reaches the 1,000,000 point, it slows down to a crawl \n> regardless of how\n> much RAM and processor power you throw at it (such as a Proliant 7000\n> with dual processors, 2 gigs of RAM and Raid-5 ... and still only one\n> person at a time can do summaries on the 3,000,000 record timecard\n> table. Bleah!)\n\nTotally false:\nhttp://www.microsoft.com/sql/evaluation/compare/benchmarks.asp\n \n> And clustered indexes are only really useful on tables that don't see\n> much write activity.\n\nFalse again. There is a problem if the clustered objects are added always\nto the end of the file, or are constantly hitting the same data page.\nThis is often solved by creation of a hashed index that is clustered.\nThen, the new writes are going to different pages.\n \n> > (Probably more efficient buffer management also\n> > helps a bit.) \n> \n> Also not in my experience. 
I've had quite a few occasions \n> where MS SQL\n> keeps chewing up RAM until it runs out of available RAM ... and then\n> keeps going, locking up the NT server and forcing an emergency reboot.\n\nThat's a configuration error.\n\n> MS SQL doesn't seem to be able to cope with limited RAM, even when\n> that limit is 1gb.\n> \n> > Other areas where postgres can't compare is backup and\n> > restore, \n> \n> Hmmm .... MS SQL has nice GUI tools including tape management, and\n> supports incremental backup and Point-in-time recovery. On the other\n> hand, MS SQL backup takes approximately 3x as long for a similar sized\n> database as PostgreSQL, the backup files are binary and can't \n> be viewed\n> or edited, sometimes the restore just fails for no good reason\n> corrupting your database and shutting down the system, restore to a\n> database with different security setup is sheer hell, and the database\n> files can't be moved on the disk without destroying them. \n> \n> I'd say we're at a draw with MS SQL as far as backup/restore goes.\n> Ours is more reliable, portable, and faster. Theirs has \n> lots of nice\n> admin tools and features.\n> \n> >ability to do transaction log shipping, \n> \n> Well, we don't have a transaction log in the SQL Server sense, so this\n> isn't relevant.\n> \n> >replication, \n> \n> This is a missing piece for Postgres that's been much \n> discussed on this\n> list.\n> \n> > access\n> > rights, \n> \n> We have these, especially with 7.3's new DB permissions.\n> \n> disk allocation (i.e., being able to determine on which disk\n> > you're going to put a given table), \n> \n> This is possible with Postgres, just rather manual. And, unlike MS\n> SQL, we can move the table without corrupting the database. Once\n> again, all we need is a good admin interface.\n> \n> > and so on. SQL Server's optimizer\n> > also seems to me to be better, though I could be wrong there.\n> \n> Having ported applications: You are wrong. 
There are a few things\n> SQL server does faster (straight selects with lots (>40) of JOINs is\n> the only one I've proven) but on anything complex, it bogs down.\n> Particularly things like nested subselects.\n\nCan you provide an example of a complex query containing subselects\nwhere PostgreSQL will outperform SQL Server 2000?\nI would like to see it.\n \n> Now, let me mention a few of MS SQL's defects that you've missed:\n> Poor/nonexistent network security (the port 1433 hole, hey?), huge\n> resource consumption, a byzantine authentication structure that\n> frequently requires hours of troubleshooting by an NT security expert,\n> weak implementation of the SQL standard with lots of proprietary\n> extensions, 8k data pages, no configuration of memory usage, and those\n> stupid, stupid READ locks that make many complex updates deadlock.\n\nIt's still a great product with better features than PostgreSQL.\nHowever, PostgreSQL is definitely catching up and could easily pass MS\nSQL Server. DB/2 is another story.\n\nI have worked as an MS SQL Server DBA (also database designer and\nprogrammer along with just about anything else that could be done with\nit) and am aware of the difficulties associated with SQL Server. It's a\nvery good product.\n\nCustomer support is also a big issue comparing free database systems\nwith commercial ones. I know that there are a couple groups that do\nthis, but that genre of business does not have a good track record of\nstaying in business. MS, Oracle, and IBM will be there five years down\nthe road to help.\n\nOne area where there is a monumental difference is in license fees. For\na single corporation, it does not matter. 
But for someone who writes\ndatabase applications that will be delivered to thousands of customers,\nit is an enormous advantage.\n\n\n", "msg_date": "Wed, 26 Jun 2002 12:01:18 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Democracy and organisation : let's make a" }, { "msg_contents": "\nDann,\n\n> Totally false:\n> http://www.microsoft.com/sql/evaluation/compare/benchmarks.asp\n\nThe Microsoft benchmarks aren't worth the screen space they take up. I don't \nconsider these \"evidence\". I'm basing this on real experience of working \nwith real production databases, not some idealized benchmark database \ndirectly admined by the SQL Server developers in Redmond.\n\n> False again. There is a problem if the clustered objects are added always\n> to the end of the file, or are constantly hitting the same data page.\n> This is often solved by creation of a hashed index that is clustered.\n> Then, the new writes are going to different pages.\n\nDepends on your level of write activity, and the size of the records. \nClustered indexes work nicely for some tables. Not for others.\n\n> That's a configuration error.\n\nYes? And you're going to tell me how to fix it? I've tinkered with the \nmemory allocation in the SQL server config; the best I seem to be able to do \nis make SQL server crash instead of NT.\n\n> It's still a great product with better features than PostgreSQL.\n\nOnce again, I disagree. It has *different* features, and if your focus is \nGUI tools or Win32 tool integration, you might say it's \"better\" than \nPostgres. But you'd have to admit that Postgres has some features and \noptions that MS SQL can't match, such as SQL standard compliance, TOAST, \nhackable system tables, etc.\n\nAlso, as I said, I've not worked much with SQL Server 2000. 
MS may have \nimproved the product's reliability since 7.0.\n\n> I have worked as an MS SQL Server DBA (also database designer and\n> programmer along with just about anything else that could be done with\n> it) and am aware of the difficulties associated with SQL Server. It's a\n> very good product.\n\nUntil it crashes. Unrecoverably. Don't scoff. I've had it happen, and the \nonly thing that saved me was triplicate backup of the database.\n\n> Customer support is also a big issue comparing free database systems\n> with commercial ones. I know that there are a couple groups that do\n> this, but that genre of businesses do not have a good track record of\n> staying in business. MS, Oracle, and IBM will be there five years down\n> the road to help.\n\nIf you can afford the fees. Personally, I've received more help from the \nPGSQL-SQL list than I ever got out of my $3000/year MSDN subscription.\n\nAlso, PostgreSQL Inc. offers some great support. I've used it.\n\n> One area where there is a monumental difference is in license fees. For\n> a single corporation, it does not matter. But for someone who writes\n> database applications that will be delivered to thousands of customers,\n> it is an enormous advantage.\n\nYup.\n\n-- \n-Josh Berkus\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 13:22:00 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a" }, { "msg_contents": "Followup set to -advocacy\n\nOn Wed, Jun 26, 2002 at 12:01:18PM -0700, Dann Corbit wrote:\n\n> Customer support is also a big issue comparing free database systems\n> with commercial ones. I know that there are a couple groups that do\n> this, but that genre of businesses do not have a good track record of\n> staying in business. MS, Oracle, and IBM will be there five years down\n> the road to help.\n\nI normally wouldn't get involved in this one, since it's the sort of\nthing that turns into a flamefest. 
And anyway, I'm not sure -hackers\nis the place for it (hence the followup). But as a lowly user, I\ncannot let such a comment go unanswered.\n\nI've used several commercial products of different kinds. I've\nsupported various kinds of databases. I've worked (and, in fact,\ncurrently work) in shops with all kinds of different support\nagreements, including the magic-high-availability, we'll have it in 4\nhours ones. I've had contracts for support that were up for renewal,\nand ones that had been freshly signed with a six-month trial.\n\nBut I have never, _never_ had the sort of support that I get from the\nPostgreSQL community and developers. And it has been this way ever\nsince I started playing with PostgreSQL some time ago, when I didn't\neven know how SQL worked. I like to have commercial support, and to\nbe able to call on it -- we use the services of PostgreSQL, Inc. But\nyou cannot beat the PostgreSQL lists, nor the support directly from\nthe developers and other users. Everyone is unvarnished in their\nassessments of flaws and their plans for what is actually going to get\nprogrammed in. And they tell you when you're doing things wrong, and\nwhat they are.\n\nYou cannot, from _any_ commercial enterprise, no matter how much you\nare willing to pay, buy that kind of service. People find major,\nshowstopper bugs in the offerings of the companies you mention, and\nare brushed off until some time later, when the company is good and\nready. (I had one rep of a company I won't mention actually tell me,\n\"Oh, so you found that bug, eh?\" The way I found it was by\ndiscovering a hole in my network so big that Hannibal and his\nelephants could have walked through. But the company in question did\nnot think it necessary to mention this little bug until people found\nit. And our NDA prevented us from mentioning it.)\n\nAdditionally, I would counsel anyone who thinks they are protected by\na large company to consider the fate of the poor Informix users these\ndays. 
Informix was once a power-house. It was a Safe Choice. But if\nI were an Informix user today, I'd be spending much of my days trying\nto learn DB2, or whatever. Because I would know that, sooner or\nlater, IBM is going to pull out the dreaded \"EOL\" stamp. And I'd\nhave to change my platform.\n\nThe \"company supported\" argument might make some people in suits\ncomfortable, but I don't believe that they have any justification for\nthat comfort. I'd rather talk to the guy who wrote the code.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 18:36:58 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Support (was: Democracy and organisation)" }, { "msg_contents": "On Wed, 26 Jun 2002, Dann Corbit wrote:\n\n> I have worked as an MS SQL Server DBA (also database designer and\n> programmer along with just about anything else that could be done with\n> it) and am aware of the difficulties associated with SQL Server. It's a\n> very good product.\n\nYeah, I agree. Maybe it's good because it was originally built by\nSybase, not MS. :-)\n\n> Customer support is also a big issue comparing free database systems\n> with commercial ones.\n\nHa ha ha ha ha! I've dealt with MS customer support. If you like\nspending your first day or two trying to escalate your trouble\nticket past complete losers who don't know half what you do about\nSQL Server, their support is OK.\n\nIn the end, there's really no substitute for an extremely competent\nDBA. And having source code and direct access to the developers\nis a godsend.\n\n> One area where there is a monumental difference is in license fees. For\n> a single corporation, it does not matter.\n\nEven for a single corporation, it can matter. 
Deploying, say, ten\nsmallish Oracle servers is not exactly cheap.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 13:13:03 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a" }, { "msg_contents": "On Wed, 26 Jun 2002, Josh Berkus wrote:\n\n> Depends on your level of write activity, and the size of the records.\n> Clustered indexes work nicely for some tables. Not for others.\n\nWell, I'm sure everyone would agree with that. The point is that\nSQL Server gives you the option, posgres doesn't.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 13:14:12 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Democracy and organisation : let's make a" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Dave Cramer [mailto:Dave@micro-automation.net] \n> Sent: 26 June 2002 19:01\n> To: Dave Page\n> Cc: josh@agliodbs.com; pgsql-hackers@postgresql.org; Rod Taylor\n> Subject: Re: [HACKERS] Postgres idea list\n> \n> \n> Dave,\n> \n> Would you consider java as a platform independant language? I \n> have started a project on sf.net called jpgadmin, but I see \n> the duplication of effort as a waste of time.\n\nI do, but I've had nothing but bad experiences with Java though I'm open\nto new evidence/persuasion. I do agree that duplication of effort is not\na good idea and I'm certainly not against collaborating on a new version\nthough I must point out that having written pgAdmin from scratch twice\nnow (three times if you cound my original proof of concept) over the\nlast 5-6 years, I have *very* specific ideas on how pgAdmin should work.\n\n\nI guess what I'm trying to say is that as it's been *my* project for\nyears (not forgetting the contributions from Jean-Michel & others),\nchanging that and working as another member of a team would be *very*\ndifficult for me.\n\nI hope you can understand this, having spent hundreds of hours and\nwritten 100,000+ lines of production code _almost_ single handedly it\ngets kinda personnal :-)\n\nLet me say now though, even if I do stay with my own version, if you\never need help don't hesitate to ask.\n\nRegards, Dave.\n\n\n", "msg_date": "Wed, 26 Jun 2002 20:21:32 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Postgres idea list" } ]
[ { "msg_contents": "I think that the people on this list tend to make a mistake.\n\nThey try to pick apart the competition by focusing on their weak points.\n\nFrom a growth standpoint, I think it is a much better idea to focus on\ntheir strong points. Look at the things each competitor can do best.\nTry to think of ways to get the same functionality from PostgreSQL. If\nit is impossible [or currently infeasible] to meet the functionality,\nthen close the gap.\n\nSuppose (for instance) that MySQL were faster at some particular\noperation by a factor of 5. If the difference cannot be eliminated or\novercome, can the gap be narrowed so that it is a factor of 2? If DB/2\nhas some special security feature, can the same feature be added to\nPostgreSQL? If there is an administrative tool for Oracle that provides\nessential functionality, can the same tool be created for PostgreSQL?\nBy careful examination of the *strong* points of the competition, you\ncan form a strategy to close the gap. By focusing on what they do\npoorly, how will progress be made?\n\nThe weak points are always going to be there, for any database system.\nBut the way to expand the functionality of PostgreSQL best would be to\nfocus on the *strong* points of the competition and try to achieve the\nsame level. For weak points, it is better to focus on the weak points\nof PostgreSQL than that of the competition. Admit they exist, and form\na plan to eliminate them. I would like to see the day when PostgreSQL\nis on every desktop in the world, as a superior replacement for Foxpro,\nMS Access, etc. I would also like to see the day when Postgresql is on\nevery server in the world as a superior replacement for DB/2, Oracle,\netc. 
I think the best way to meet those goals is to be realistic and\naim in the right place for strategic decisions.\n\nIMO-YMMV.\n\n\n", "msg_date": "Wed, 26 Jun 2002 12:25:05 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Database comparison ideas" }, { "msg_contents": "\nDann,\n\n> From a growth standpoint, I think it is a much better idea to focus on\n> their strong points. Look at the things each competitor can do best.\n> Try to think of ways to get the same functionality from PostgreSQL. If\n> it is impossible [or currently infeasible] to meet the functionality,\n> then close the gap.\n\nYou are, of course, correct. We will have to prioritize which \"gaps\" mean \nthe most to us. For example, if I was to make a \"top six list\":\n\n-- Lack of comprehensive GUI admin tools\n-- Lack of replication and point in time recovery\n-- PL/pgSQL does not 100% replace PL/SQL or T-SQL Stored Procedures\n-- Miscellaneous speed/optimization issues\n-- Need good GUI installer, including installer for Postgres+PHP+Apache\n-- Win32 Port\n\nBut what order would we want to tackle these in? For that matter, don't \nforget about Postgres goals to acheive features that nobody else has:\n\n-- 98% SQL-99 Compliance, including Schema, Domain, etc.\n-- 100% support of all data types and operators\n-- etc.\n\nAll of this is a moot point, though. Programmers work on what they want to \nwork on ... so even if, say, a GUI installer is really important to *me*, it \nain't gonna get done unless I do it myself.\n\n\n-- \n-Josh Berkus\n\n\n", "msg_date": "Wed, 26 Jun 2002 18:54:58 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Database comparison ideas" }, { "msg_contents": "On Wed, 2002-06-26 at 20:54, Josh Berkus wrote:\n> \n> Dann,\n> \n> > From a growth standpoint, I think it is a much better idea to focus on\n> > their strong points. 
Look at the things each competitor can do best.\n> > Try to think of ways to get the same functionality from PostgreSQL. If\n> > it is impossible [or currently infeasible] to meet the functionality,\n> > then close the gap.\n> \n> You are, of course, correct. We will have to prioritize which \"gaps\" mean \n> the most to us. For example, if I was to make a \"top six list\":\n> \n> -- Lack of comprehensive GUI admin tools\n> -- Lack of replication and point in time recovery\n> -- PL/pgSQL does not 100% replace PL/SQL or T-SQL Stored Procedures\n> -- Miscellaneous speed/optimization issues\n> -- Need good GUI installer, including installer for Postgres+PHP+Apache\n> -- Win32 Port\nI know I (not knowing Oracle PL/SQL) have a hard time find enough docs\non PL/pgSQL, even with buying Bruce's and the German PG Developers\nbooks. \n\nI'm personally having a hard time learning all the in's and out's of the\ntrigger/rule stuff. I know I can use more of them, but have a hard\ntime. \n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n\n\n", "msg_date": "26 Jun 2002 22:25:47 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Database comparison ideas" } ]
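The trigger difficulty raised in the thread above is easier to get past with one complete worked example. Below is a minimal PL/pgSQL audit-trigger sketch in the style the 7.2-era backend required (single-quoted function body, `opaque` return type); the `accounts` tables and the function name are invented purely for illustration:

```sql
-- Hypothetical tables, purely for illustration.
CREATE TABLE accounts (id integer PRIMARY KEY, balance numeric);
CREATE TABLE accounts_log (
    id          integer,
    old_balance numeric,
    new_balance numeric,
    changed_at  timestamp
);

-- In 7.2 a trigger function returns OPAQUE (later releases use the
-- TRIGGER pseudo-type), and the body is a single-quoted string, so any
-- embedded quote marks would have to be doubled.
CREATE FUNCTION log_balance_change() RETURNS opaque AS '
BEGIN
    IF NEW.balance <> OLD.balance THEN
        INSERT INTO accounts_log
        VALUES (OLD.id, OLD.balance, NEW.balance, now());
    END IF;
    RETURN NEW;  -- returning NULL instead would suppress the UPDATE
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER accounts_audit
    BEFORE UPDATE ON accounts
    FOR EACH ROW EXECUTE PROCEDURE log_balance_change();
```

A rule, by contrast, rewrites the query tree itself before execution; a row-level trigger like the one above fires once per affected row, which is usually the easier of the two mechanisms to reason about.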
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Jan Wieck [mailto:JanWieck@Yahoo.com] \n> Sent: 26 June 2002 15:45\n> To: HACKERS\n> Subject: [HACKERS] (A) native Windows port\n> \n> \n> As for project coordination, I am willing to setup and \n> maintain a page similar to the (horribly outdated) ones that \n> I did for Toast and RI. Summarizing project status, pointing \n> to resources, instructions, maybe a roadmap, TODO, you name it.\n> \n> Comments? Suggestions?\n\nGreat, can't wait to see your work. \n\nI can probably sort out an installer shortly after you have the first\ncode available - that way we can work out kinks in a binary\ndistribution, as well as hopefully get some more testers who may not\nhave compilers etc on their windows boxes. Let me know if you'd like me\nto work on that...\n\nRegards, Dave.\n\n\n\n", "msg_date": "Wed, 26 Jun 2002 20:31:43 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: (A) native Windows port" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Jeff MacDonald [mailto:jeff@tsunamicreek.com] \n> Sent: 26 June 2002 18:08\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Democracy and organisation : let's \n> make a revolution\n> \n> \n> what is gborg ? :)\n\nhttp://gborg.postgresql.org.\n\nIt is the resurrected project site that was originally provided by Great\nBridge (Great Bridge dot ORG). It's a kind of sourceforge type site only\nnot quite so horrendously slow and a bit more simple to use (which is a\ngood thing imho).\n\nRegards, Dave.\n\n\n", "msg_date": "Wed, 26 Jun 2002 21:47:16 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Democracy and organisation : let's make a revolution" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \n> Sent: 26 June 2002 19:21\n> To: PostgreSQL-development\n> Subject: [HACKERS] Why I like partial solutions\n> \n>\n> If we want to grow PostgreSQL, we need to meet users needs, \n> even if that requires stomaching some hack solutions from \n> time to time. That's why I like partial solutions.\n\nApologies for the breach of netiquette:\n\nI agree.\n\nRegards, Dave.\n\n\n", "msg_date": "Wed, 26 Jun 2002 22:03:55 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Why I like partial solutions" } ]
[ { "msg_contents": "Hi everyone,\n\nThis is Jonah's explanation of what Nextgres is, as his response didn't\nmake it to the list (some kind of software or network problem).\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n-------- Original Message --------\nSubject: Re: [HACKERS] Nextgres?\nDate: Wed, 26 Jun 2002 12:18:57 -0400 (EDT)\nFrom: <jharris@nightstarcorporation.com>\nReply-To: <jharris@nightstarcorporation.com>\nTo: \"Justin Clift\"\n<justin@postgresql.org>,<pgsql-hackers@postgresql.org>\n\nJustin,\n\nNEXTGRES is not an extension of PGSQL and is not based on PGSQL code. \nHowever, many of the methods used in PGSQL, Oracle, and similar systems\nhave been merged into it. The MVCC engine is similar to PGSQL's and\nOracle's in its statement/transaction-level read consistency and other\nitems... whereas have implemented a LRU engine more closely resembling\nOracle's. We are taking the best of all systems and incorporating it\ninto one system.\n\nAs for the SQL, you are correct. There are a few misspellings on the\npage, and for that I'm sorry. It was last updated *very* early in the\nmorning. We have added support for PostgreSQL's SQL grammar (from\n7.1.2). We have a SQL Dialect Interface (SDI) that allows the DBA or\nuser at run-time to execute a SQL block using SQL specific to different\ndatabase systems. What we have done is take the SQL parser and\nabstracted it so that during query-rewrite we convert it into our native\nsyntax, which more closely resembles PostgreSQL's with function\nextensions from Oracle (i.e. NVL, decode, etc.) System catalogs and\ntables are also aliased so that you could execute an Oracle \"SELECT\nobject_name FROM all_objects WHERE rownum < 50\" and it is converted to\nread the similar attribute from our aliased \"all_objects\" relation. \n\nI would be glad to support the open-source effort of PostgreSQL. It has\nperformed much better than any other open-source database I�ve ever\nused. I have personally thanked M. 
Stonebraker for the original\nPOSTGRES and would like to thank you all for the hard work put into\nPGSQL over time.\n\nI know you guys have representatives for the system, but if you ever\nneed another person to join you at a conference or similar event, just\nlet me know. I look forward to using PGSQL in the future and am sorry\nthat I'm too busy to assist you on the backend.\n\n-Jonah\n\n\n-- Original Message --\nFrom: Justin Clift <justin@postgresql.org>\nTo: \"Jonah H. Harris\" <jharris@nightstarcorporation.com>\nSend: 12:07 AM\nSubject: [HACKERS] Nextgres?\n\nHi Jonah,\n\nWas just looking around your company website, and it mentions a product\ncalled \"Nextgres\" which looks interesting :\n\nhttp://www.nightstarcorporation.com/?op=products\n\nHow do you guys implement the PostgreSQL SQL parser as well as the\nInterbase and Oracle parsers? Is it like an adaptation of PostgreSQL with\naddons or something? Also it mentions it's compatible with PostgreSQL\n7.2.2, so I'm wondering if that's a typo or something.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n\"Jonah H. Harris\" wrote:\n> \n> Comparing PGSQL to MySQL is like apples to oranges. I don't see why\none\n> would want to take a great project and ORDBMS such as PGSQL and make a\n> desktop version of it. When a desktop version is completely opposite\nof\n> what PGSQL is, a commercial-grade RDBMS. Sure it lacks some of the\nareas\n> when compared to Oracle and SQL Server... but I don't see how the PGSQL\nteam\n> is going to get as much money as Oracle/Microsoft to develop, perform\nR&D,\n> and compete against commercial rivals. Yet, I have never seen an\n> open-source database system as good as PGSQL, especially being as it is\n> developed on a volunteer basis.\n> \n> As far as MySQL goes, they can have their easy-to-install and manage\n> \"features\". 
I was on the MySQL-dev team for three months trying\nto convince\n> Monty, Sasha, and others that MySQL needed features found in commercial\n> systems (triggers, stored procs, transactions, constraints, etc.) They\n> explicitly and rudely told me that MySQL wasn't developed to perform in\n> these areas and to go elsewhere. Ever since then, I've been using PGSQL\nin\n> a production basis. The argument for easy-to-install systems is common\nwith\n> many MySQL users, and those who don't understand how databases work. \nSure\n> it would be nice to have the system do complete self-tuning but in\nreality,\n> the DBA should know how to make the database perform better under\ndifferent\n> situations. And, as for ease-of-install, I can download the PGSQL\npackage\n> for my OpenBSD boxes and it works perfectly, same on CYGWIN. If I want\nto\n> tune it, I can.\n> \n> The objective of a good RDBMS is to allow fast access to data while\nalso\n> maintaining data integrity (ACID properties). I personally think that\n> dumbing-down database systems only causes more problems. Look at\nMicrosoft\n> and NT/2K/XP. Now there are MCSEs all over the place acting like they\nare\n> network admins because they can point-and-click to start a IIS service.\n> Oooh, ahh. I would rather be on UNIX where I need to know exactly\nwhat's\n> going on. And, UNIX users don't just jump up and blame the software\nwhen\n> something goes wrong... as often happens with Windows and Access. The\nsame\n> follows with many MySQL users I've encountered. They don't have to do\n> anything with the system, but consider themselves experts. With all my\n> Oracle, SQL Server, and PostgreSQL boxes, I personally tune them to do\nwhat\n> tasks are designated for them. I think PGSQL, as the project goes, is\njust\n> fine as it is. A little commercial support and marketing could greatly\n> assist in furthering the usage of PGSQL, true. 
If the group agrees\nthat\n> this would be a good idea, then I would be willing to do this. I also\nthink\n> it would be a good idea to get a PostgreSQL foundation or similar\nnon-profit\n> that could accept donations, etc. to further development. Don't dumb\ndown\n> the system and create a limited version just for people that want an\n> open-source Access... they can use MySQL for that. Just my rant.\n> \n> Cordially,\n> \n> Jonah H. Harris, Chairman/CEO\n> NightStar Corporation\n> \"One company, one world, one BIG difference!\"\n___________________________________\nNightStar Corporate Web-mailer.\nOpen Source for Open Minds.\n\n\n", "msg_date": "Thu, 27 Jun 2002 13:14:13 +0930", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: Nextgres?" } ]
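NEXTGRES's exact rewrite rules are not public, but the dialect translation Jonah describes (NVL, DECODE, ROWNUM aliasing) presumably maps Oracle idioms onto constructs PostgreSQL already has. A sketch of the correspondence, using an invented `customers` table:

```sql
-- Oracle-flavoured original (the form an SDI would accept):
--   SELECT NVL(phone, 'n/a'),
--          DECODE(status, 'A', 'active', 'inactive')
--     FROM customers
--    WHERE ROWNUM < 50;

-- Native PostgreSQL equivalent after rewriting:
SELECT COALESCE(phone, 'n/a'),
       CASE WHEN status = 'A' THEN 'active' ELSE 'inactive' END
  FROM customers
 LIMIT 49;   -- ROWNUM < 50 yields at most 49 rows
```

NVL and DECODE are Oracle shorthands for the SQL-standard COALESCE and CASE expressions, which is what makes this kind of mechanical translation feasible.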
[ { "msg_contents": "\nI am a student doing my graduation in India. I want to know what are the \nother OODBMS features ( other than inheritance ) available \nin PostGreSQL. It would be great if you can help me out with some \ninformation regarding this.\n\nThanks,\nNishkala \n\n-- \nBeing yourself in the world which is constantly trying to change you to something else is the biggest challenge\n\n\n\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 10:13:26 +0530 (IST)", "msg_from": "Nishkala <nishkala@gdit.iiit.net>", "msg_from_op": true, "msg_subject": "Object Oriented Features " }, { "msg_contents": "On Thu, Jun 27, 2002 at 10:13:26AM +0530, Nishkala wrote:\n> \n> I am a student doing my graduation in India. I want to know what are the \n> other OODBMS features ( other than inheritance ) available \n> in PostGreSQL. It would be great if you can help me out with some \n> information regarding this.\n\n The PostgreSQL is \"Object-Relational DBMS\" and not clean \"Object Oriented\".\n The good and short description about DBs types you can read at\n\n http://wwwinfo.cern.ch/db/aboutdbs/classification/\n\n I think most of the current used SQL DBs are \"Object-Relational\".\n \n\n OO in PostgreSQL means that you can create own operators, datetypes, functions...\n \n Something about really Object Oriented you can found at:\n\n http://www.odbmsfacts.com/\n\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n\n\n", "msg_date": "Thu, 27 Jun 2002 10:22:13 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Object Oriented Features" }, { "msg_contents": "On Fri, 2002-06-28 at 03:21, Josh Berkus wrote:\n> \n> Karel,\n> \n> > \n> > OO in PostgreSQL means that you can create own operators, datetypes, \n> functions...\n> \n> Last I checked, all of these things were part of the SQL spec. 
I believe our \n> only \"OO\" functionality is inheritance ...\n\nActually _single_ inheritance is also part of SQL99 \n\ncreate table ... under ...\n\n> which I have yet to find a use for.\n\nIt will become much more useful once implemented more thoroughly ;)\n\n---------------\nHannu\n\n\n\n", "msg_date": "28 Jun 2002 01:44:51 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Object Oriented Features" }, { "msg_contents": "\nKarel,\n\n> \n> OO in PostgreSQL means that you can create own operators, datetypes, \nfunctions...\n\nLast I checked, all of these things were part of the SQL spec. I believe our \nonly \"OO\" functionality is inheritance ... which I have yet to find a use \nfor.\n\nOf course, I agree with Fabian Pascal, who claims that every OODBMS \"feature\" \nhas an answer in the SQL spec that is more consistent and better thought out.\n\n-- \n-Josh Berkus\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 15:21:33 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Object Oriented Features" }, { "msg_contents": "> Last I checked, all of these things were part of the SQL spec. I believe\nour\n> only \"OO\" functionality is inheritance ... which I have yet to find a use\n> for.\n\n Well, it's lower maintenance than the 14-clause SELECT...UNION...UNION...\nI'd have to write for ``correct'' code, in my current project. :-)\n\n--\nChristopher Clark <clark@compudata-systems.com>\nPongidae, and proud of it.\n\n Darn it, who spiked my coffee with water?\n -- Larry Wall\n\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 22:44:54 -0000", "msg_from": "\"Christopher Clark\" <clark@compudata-systems.com>", "msg_from_op": false, "msg_subject": "Re: Object Oriented Features" }, { "msg_contents": "> > OO in PostgreSQL means that you can create own operators, datetypes, \n> functions...\n> \n> Last I checked, all of these things were part of the SQL spec. 
I believe our \n> only \"OO\" functionality is inheritance ... which I have yet to find a use \n> for.\n\nCan you tell me what the SQL99 spec says regarding creation of\noperators? I couldn't find them.\n--\nTatsuo Ishii\n\n\n", "msg_date": "Fri, 28 Jun 2002 10:46:54 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Object Oriented Features" }, { "msg_contents": "> > > OO in PostgreSQL means that you can create own operators, datetypes,\n> > functions...\n> > Last I checked, all of these things were part of the SQL spec. I believe our\n> > only \"OO\" functionality is inheritance ... which I have yet to find a use\n> > for.\n> Can you tell me what the SQL99 spec says regarding creation of\n> operators? I couldn't find them.\n\nI haven't gone back and looked, but I recall that the spec makes some\nmention of operators in the context of defining new functions. I don't\nthink there is anything about defining operators not already in SQL, but\nonly (if anything at all) about extending existing operators to new data\ntypes.\n\n - Thomas\n\n\n", "msg_date": "Fri, 28 Jun 2002 18:14:17 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Object Oriented Features" } ]
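For readers wondering what the inheritance debated above actually looks like, this is the classic PostgreSQL form (SQL99 spells its single-inheritance variant `CREATE TABLE ... UNDER ...` instead):

```sql
CREATE TABLE cities   (name text, population integer);
CREATE TABLE capitals (state char(2)) INHERITS (cities);

-- A query on the parent scans the child tables too:
SELECT name FROM cities;        -- includes rows from capitals
SELECT name FROM ONLY cities;   -- restricts the scan to cities itself
```

The `capitals` table gets all of `cities`' columns plus its own, which is roughly the 14-clause UNION query Christopher alludes to, collapsed into one statement.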
[ { "msg_contents": "Kinda like the blind leading the blind, but:\n\nI'm assuming you will need to do something similar to what you did in\nthe previous version in gram.y. That is, create an expression node with\nthe appropriate equations and pass that through ExecEvalExpr().\n\nYou can probably return the true or false from ExecEvalExpr directly for\nthe between case, and simply invert for the not null case. Don't forget\nabout the NULL.\n\nSee ExecEvalNull, ExecEvalBool, ExecEvalAnd, ExecEvalOr tests for\nexample calls.\n\n\nOn Thu, 2002-06-27 at 06:31, Christopher Kings-Lynne wrote:\n> Hi,\n> \n> Based on recent discussion, I went thru and got together the work I'd done\n> on the BETWEEN node. It's not as far along as I thought. I ran into a few\n> hurdles:\n> \n> * ExecEvalBetweenExpr is probably beyond my powers - I've done my best and\n> marked my hopelessness with '@@' symbols. I don't know how to actually\n> evaluate the node properly, I don't know how to check that all the 3 types\n> are coercible to the same type and I don't know how to make it take rowvars\n> (sic?)instead of scalars, as per spec.\n> \n> Copy and Equal are done, I think.\n> \n> Out I've guessed at how to do it based on other examples, but I need\n> feedback. Read I haven't done at all cos I don't quite understand when/why\n> it's used or how to do it.\n> \n> The grammar has been updated to use the new BetweenExpr node, with new\n> syntax options.\n> \n> The new keywords have been added in the relevant places, and they are\n> reserved.\n> \n> nodes.h and parsenodes.h are aware of the new node.\n> \n> I have added a full regression test that I used in my old gram.y only\n> implementation, that didn't use a new node - it will be helpful!\n> \n> Where do we go from here?\n> \n> Chris\n> \n> ----\n> \n\n> ? GNUmakefile\n> ? between.diff.txt\n> ? config.log\n> ? config.status\n> ? contrib/spi/.deps\n> ? src/Makefile.global\n> ? src/backend/postgres\n> ? src/backend/access/common/.deps\n> ? 
src/backend/access/gist/.deps\n> ? src/backend/access/hash/.deps\n> ? src/backend/access/heap/.deps\n> ? src/backend/access/index/.deps\n> ? src/backend/access/nbtree/.deps\n> ? src/backend/access/rtree/.deps\n> ? src/backend/access/transam/.deps\n> ? src/backend/bootstrap/.deps\n> ? src/backend/catalog/.deps\n> ? src/backend/catalog/postgres.bki\n> ? src/backend/catalog/postgres.description\n> ? src/backend/commands/.deps\n> ? src/backend/commands/tablecmds.c.mystuff\n> ? src/backend/executor/.deps\n> ? src/backend/lib/.deps\n> ? src/backend/libpq/.deps\n> ? src/backend/main/.deps\n> ? src/backend/nodes/.deps\n> ? src/backend/optimizer/geqo/.deps\n> ? src/backend/optimizer/path/.deps\n> ? src/backend/optimizer/plan/.deps\n> ? src/backend/optimizer/prep/.deps\n> ? src/backend/optimizer/util/.deps\n> ? src/backend/parser/.deps\n> ? src/backend/port/.deps\n> ? src/backend/postmaster/.deps\n> ? src/backend/regex/.deps\n> ? src/backend/rewrite/.deps\n> ? src/backend/storage/buffer/.deps\n> ? src/backend/storage/file/.deps\n> ? src/backend/storage/freespace/.deps\n> ? src/backend/storage/ipc/.deps\n> ? src/backend/storage/large_object/.deps\n> ? src/backend/storage/lmgr/.deps\n> ? src/backend/storage/page/.deps\n> ? src/backend/storage/smgr/.deps\n> ? src/backend/tcop/.deps\n> ? src/backend/utils/.deps\n> ? src/backend/utils/adt/.deps\n> ? src/backend/utils/cache/.deps\n> ? src/backend/utils/error/.deps\n> ? src/backend/utils/fmgr/.deps\n> ? src/backend/utils/hash/.deps\n> ? src/backend/utils/init/.deps\n> ? src/backend/utils/mb/.deps\n> ? src/backend/utils/misc/.deps\n> ? src/backend/utils/mmgr/.deps\n> ? src/backend/utils/sort/.deps\n> ? src/backend/utils/time/.deps\n> ? src/bin/initdb/initdb\n> ? src/bin/initlocation/initlocation\n> ? src/bin/ipcclean/ipcclean\n> ? src/bin/pg_config/pg_config\n> ? src/bin/pg_ctl/pg_ctl\n> ? src/bin/pg_dump/.deps\n> ? src/bin/pg_dump/pg_dump\n> ? src/bin/pg_dump/pg_dumpall\n> ? src/bin/pg_dump/pg_restore\n> ? 
src/bin/pg_encoding/.deps\n> ? src/bin/pg_encoding/pg_encoding\n> ? src/bin/pg_id/.deps\n> ? src/bin/pg_id/pg_id\n> ? src/bin/psql/.deps\n> ? src/bin/psql/psql\n> ? src/bin/scripts/createlang\n> ? src/include/pg_config.h\n> ? src/include/stamp-h\n> ? src/interfaces/ecpg/lib/.deps\n> ? src/interfaces/ecpg/lib/libecpg.so.3\n> ? src/interfaces/ecpg/preproc/.deps\n> ? src/interfaces/ecpg/preproc/ecpg\n> ? src/interfaces/libpgeasy/.deps\n> ? src/interfaces/libpgeasy/libpgeasy.so.2\n> ? src/interfaces/libpq/.deps\n> ? src/interfaces/libpq/libpq.so.2\n> ? src/interfaces/libpq++/.deps\n> ? src/interfaces/libpq++/libpq++.so.4\n> ? src/pl/plpgsql/src/.deps\n> ? src/pl/plpgsql/src/libplpgsql.so.1\n> ? src/test/regress/.deps\n> ? src/test/regress/log\n> ? src/test/regress/pg_regress\n> ? src/test/regress/regression.diffs\n> ? src/test/regress/regression.out\n> ? src/test/regress/results\n> ? src/test/regress/tmp_check\n> ? src/test/regress/expected/constraints.out\n> ? src/test/regress/expected/copy.out\n> ? src/test/regress/expected/create_function_1.out\n> ? src/test/regress/expected/create_function_2.out\n> ? src/test/regress/expected/misc.out\n> ? src/test/regress/sql/constraints.sql\n> ? src/test/regress/sql/copy.sql\n> ? src/test/regress/sql/create_function_1.sql\n> ? src/test/regress/sql/create_function_2.sql\n> ? 
src/test/regress/sql/misc.sql\n> Index: src/backend/executor/execQual.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/backend/executor/execQual.c,v\n> retrieving revision 1.94\n> diff -c -r1.94 execQual.c\n> *** src/backend/executor/execQual.c\t2002/06/20 20:29:27\t1.94\n> --- src/backend/executor/execQual.c\t2002/06/27 10:27:30\n> ***************\n> *** 60,65 ****\n> --- 60,67 ----\n> static Datum ExecEvalOr(Expr *orExpr, ExprContext *econtext, bool *isNull);\n> static Datum ExecEvalCase(CaseExpr *caseExpr, ExprContext *econtext,\n> \t\t\t bool *isNull, ExprDoneCond *isDone);\n> + static Datum ExecEvalBetweenExpr(BetweenExpr *btest, ExprContext *econtext,\n> + \t\t\t\t bool *isNull, ExprDoneCond *isDone);\n> static Datum ExecEvalNullTest(NullTest *ntest, ExprContext *econtext,\n> \t\t\t\t bool *isNull, ExprDoneCond *isDone);\n> static Datum ExecEvalBooleanTest(BooleanTest *btest, ExprContext *econtext,\n> ***************\n> *** 1110,1115 ****\n> --- 1112,1182 ----\n> }\n> \n> /* ----------------------------------------------------------------\n> + *\t\tExecEvalBetweenExpr\n> + *\n> + *\t\tEvaluate a BetweenExpr node. Result is\n> + *\t\ta boolean. 
If any of the three expression\n> + *\t\tparameters are NULL, result is NULL.\n> + * ----------------------------------------------------------------\n> + */\n> + static Datum\n> + ExecEvalBetweenExpr(BetweenExpr *btest,\n> + \t\t\t\t ExprContext *econtext,\n> + \t\t\t\t bool *isNull,\n> + \t\t\t\t ExprDoneCond *isDone)\n> + {\n> + \tDatum\t\texpr_result;\n> + \tDatum\t\tlexpr_result;\n> + \tDatum\t\trexpr_result;\n> + \n> + \t/* Evaluate subexpressons and test for NULL parameters */\n> + \texpr_result = ExecEvalExpr(btest->expr, econtext, isNull, isDone);\n> + \tif (*isNull) {\n> + \t\t*isNull = true;\n> + \t\treturn (Datum) 0;\n> + \t}\n> + \tlexpr_result = ExecEvalExpr(btest->lexpr, econtext, isNull, isDone);\n> + \tif (*isNull) {\n> + \t\t*isNull = true;\n> + \t\treturn (Datum) 0;\n> + \t}\n> + \trexpr_result = ExecEvalExpr(btest->rexpr, econtext, isNull, isDone);\n> + \tif (*isNull) {\n> + \t\t*isNull = true;\n> + \t\treturn (Datum) 0;\n> + \t}\n> + \n> + \t/* Make sure return value is what we think it is */\n> + \t*isNull = false;\n> + \n> + \t/* Now, depending on the symmetry, evaluate the\n> + \t BETWEEN expression */\n> + \n> + \tif (btest->symmetric)\n> + \t{\n> + \t\t/* @@ This is pseudocode - how do I copare the results? 
@@ */\n> + \t\tif ((expr_result >= lexpr_result &&\n> + \t\t\texpr_result <= rexpr_result) ||\n> + \t\t\t(expr_result >= rexpr_result &&\n> + \t\t\texpr_result <= lexpr_result))\n> + \t\t{\n> + \t\t\treturn BoolGetDatum(true);\n> + \t\t}\n> + \t\telse\n> + \t\t\treturn BoolGetDatum(false);\n> + \t}\n> + \telse {\n> + \t\tif (expr_result >= lexpr_result &&\n> + \t\t\texpr_result <= rexpr_result)\n> + \t\t{\n> + \t\t\treturn BoolGetDatum(true);\n> + \t\t}\n> + \t\telse\n> + \t\t\treturn BoolGetDatum(false);\n> + \t}\n> + }\n> + \n> + /* ----------------------------------------------------------------\n> *\t\tExecEvalNullTest\n> *\n> *\t\tEvaluate a NullTest node.\n> ***************\n> *** 1397,1402 ****\n> --- 1464,1475 ----\n> \t\t\t\t\t\t\t\t\tecontext,\n> \t\t\t\t\t\t\t\t\tisNull,\n> \t\t\t\t\t\t\t\t\tisDone);\n> + \t\t\tbreak;\n> + \t\tcase T_BetweenExpr:\n> + \t\t\tretDatum = ExecEvalBetweenExpr((BetweenExpr *) expression,\n> + \t\t\t\t\t\t\t\t\t\tecontext,\n> + \t\t\t\t\t\t\t\t\t\tisNull,\n> + \t\t\t\t\t\t\t\t\t\tisDone);\n> \t\t\tbreak;\n> \t\tcase T_NullTest:\n> \t\t\tretDatum = ExecEvalNullTest((NullTest *) expression,\n> Index: src/backend/nodes/copyfuncs.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/backend/nodes/copyfuncs.c,v\n> retrieving revision 1.191\n> diff -c -r1.191 copyfuncs.c\n> *** src/backend/nodes/copyfuncs.c\t2002/06/20 20:29:29\t1.191\n> --- src/backend/nodes/copyfuncs.c\t2002/06/27 10:27:31\n> ***************\n> *** 1000,1005 ****\n> --- 1000,1025 ----\n> }\n> \n> /* ----------------\n> + * \t\t_copyBetweenExpr\n> + * ----------------\n> + */\n> + static BetweenExpr *\n> + _copyBetweenExpr(BetweenExpr *from)\n> + {\n> + \tBetweenExpr *newnode = makeNode(BetweenExpr);\n> + \n> + \t/*\n> + \t * copy remainder of node\n> + \t */\n> + \tNode_Copy(from, newnode, expr);\n> + \tnewnode->symmetric = from->symmetric;\n> + \tNode_Copy(from, newnode, lexpr);\n> + \tNode_Copy(from, 
newnode, rexpr);\n> + \n> + \treturn newnode;\n> + }\n> + \n> + /* ----------------\n> *\t\t_copyCaseWhen\n> * ----------------\n> */\n> ***************\n> *** 3043,3048 ****\n> --- 3063,3071 ----\n> \t\t\tbreak;\n> \t\tcase T_CaseExpr:\n> \t\t\tretval = _copyCaseExpr(from);\n> + \t\t\tbreak;\n> + \t\tcase T_BetweenExpr:\n> + \t\t\tretval = _copyBetweenExpr(from);\n> \t\t\tbreak;\n> \t\tcase T_CaseWhen:\n> \t\t\tretval = _copyCaseWhen(from);\n> Index: src/backend/nodes/equalfuncs.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/backend/nodes/equalfuncs.c,v\n> retrieving revision 1.138\n> diff -c -r1.138 equalfuncs.c\n> *** src/backend/nodes/equalfuncs.c\t2002/06/20 20:29:29\t1.138\n> --- src/backend/nodes/equalfuncs.c\t2002/06/27 10:27:31\n> ***************\n> *** 1752,1757 ****\n> --- 1752,1772 ----\n> }\n> \n> static bool\n> + _equalBetweenExpr(BetweenExpr *a, BetweenExpr *b)\n> + {\n> + if (!equal(a->expr, b->expr))\n> + return false;\n> + if (a->symmetric != b->symmetric)\n> + return false;\n> + if (!equal(a->lexpr, b->lexpr))\n> + return false;\n> + if (!equal(a->rexpr, b->rexpr))\n> + return false;\n> + \n> + return true;\n> + }\n> + \n> + static bool\n> _equalCaseWhen(CaseWhen *a, CaseWhen *b)\n> {\n> \tif (!equal(a->expr, b->expr))\n> ***************\n> *** 2198,2203 ****\n> --- 2213,2221 ----\n> \t\t\tbreak;\n> \t\tcase T_CaseExpr:\n> \t\t\tretval = _equalCaseExpr(a, b);\n> + \t\t\tbreak;\n> + \t\tcase T_BetweenExpr:\n> + \t\t\tretval = _equalBetweenExpr(a, b);\n> \t\t\tbreak;\n> \t\tcase T_CaseWhen:\n> \t\t\tretval = _equalCaseWhen(a, b);\n> Index: src/backend/nodes/outfuncs.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/backend/nodes/outfuncs.c,v\n> retrieving revision 1.160\n> diff -c -r1.160 outfuncs.c\n> *** src/backend/nodes/outfuncs.c\t2002/06/20 20:29:29\t1.160\n> --- src/backend/nodes/outfuncs.c\t2002/06/27 
10:27:32\n> ***************\n> *** 1466,1471 ****\n> --- 1466,1494 ----\n> \t_outNode(str, node->defresult);\n> }\n> \n> + /*\n> + * \tBetweenExpr\n> + */\n> + static void\n> + _outBetweenExpr(StringInfo str, BetweenExpr *node)\n> + {\n> + \tappendStringInfo(str, \" :expr \");\n> + \t_outNode(str, node->expr);\n> + \n> + \tappendStringInfo(str, \" BETWEEN \");\n> + \tif (node->symmetric)\n> + \t\tappendStringInfo(str, \"SYMMETRIC \");\n> + \t/* We don't write out ASYMMETRIC, as it's the default */\n> + \n> + \tappendStringInfo(str, \" :lexpr \");\n> + \t_outNode(str, node->lexpr);\n> + \n> + \tappendStringInfo(str, \" AND \");\n> + \n> + \tappendStringInfo(str, \" :rexpr \");\n> + \t_outNode(str, node->rexpr);\n> + }\n> + \n> static void\n> _outCaseWhen(StringInfo str, CaseWhen *node)\n> {\n> ***************\n> *** 1759,1764 ****\n> --- 1782,1790 ----\n> \t\t\t\tbreak;\n> \t\t\tcase T_CaseExpr:\n> \t\t\t\t_outCaseExpr(str, obj);\n> + \t\t\t\tbreak;\n> + \t\t\tcase T_BetweenExpr:\n> + \t\t\t\t_outBetweenExpr(str, obj);\n> \t\t\t\tbreak;\n> \t\t\tcase T_CaseWhen:\n> \t\t\t\t_outCaseWhen(str, obj);\n> Index: src/backend/parser/gram.y\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/backend/parser/gram.y,v\n> retrieving revision 2.334\n> diff -c -r2.334 gram.y\n> *** src/backend/parser/gram.y\t2002/06/22 02:04:45\t2.334\n> --- src/backend/parser/gram.y\t2002/06/27 10:27:35\n> ***************\n> *** 234,240 ****\n> \n> %type <list>\textract_list, overlay_list, position_list\n> %type <list>\tsubstr_list, trim_list\n> ! %type <ival>\topt_interval\n> %type <node>\toverlay_placing, substr_from, substr_for\n> \n> %type <boolean> opt_instead, opt_cursor\n> --- 234,240 ----\n> \n> %type <list>\textract_list, overlay_list, position_list\n> %type <list>\tsubstr_list, trim_list\n> ! 
%type <ival>\topt_interval, opt_symmetry\n> %type <node>\toverlay_placing, substr_from, substr_for\n> \n> %type <boolean> opt_instead, opt_cursor\n> ***************\n> *** 321,327 ****\n> /* ordinary key words in alphabetical order */\n> %token <keyword> ABORT_TRANS, ABSOLUTE, ACCESS, ACTION, ADD, AFTER,\n> \tAGGREGATE, ALL, ALTER, ANALYSE, ANALYZE, AND, ANY, AS, ASC,\n> ! \tASSERTION, ASSIGNMENT, AT, AUTHORIZATION,\n> \n> \tBACKWARD, BEFORE, BEGIN_TRANS, BETWEEN, BIGINT, BINARY, BIT, BOTH,\n> \tBOOLEAN, BY,\n> --- 321,327 ----\n> /* ordinary key words in alphabetical order */\n> %token <keyword> ABORT_TRANS, ABSOLUTE, ACCESS, ACTION, ADD, AFTER,\n> \tAGGREGATE, ALL, ALTER, ANALYSE, ANALYZE, AND, ANY, AS, ASC,\n> ! \tASSERTION, ASSIGNMENT, ASYMMETRIC, AT, AUTHORIZATION,\n> \n> \tBACKWARD, BEFORE, BEGIN_TRANS, BETWEEN, BIGINT, BINARY, BIT, BOTH,\n> \tBOOLEAN, BY,\n> ***************\n> *** 380,386 ****\n> \tSERIALIZABLE, SESSION, SESSION_USER, SET, SETOF, SHARE,\n> \tSHOW, SIMILAR, SIMPLE, SMALLINT, SOME, STABLE, START, STATEMENT,\n> \tSTATISTICS, STDIN, STDOUT, STORAGE, STRICT, SUBSTRING,\n> ! \tSYSID,\n> \n> \tTABLE, TEMP, TEMPLATE, TEMPORARY, THEN, TIME, TIMESTAMP,\n> \tTO, TOAST, TRAILING, TRANSACTION, TRIGGER, TRIM, TRUE_P,\n> --- 380,386 ----\n> \tSERIALIZABLE, SESSION, SESSION_USER, SET, SETOF, SHARE,\n> \tSHOW, SIMILAR, SIMPLE, SMALLINT, SOME, STABLE, START, STATEMENT,\n> \tSTATISTICS, STDIN, STDOUT, STORAGE, STRICT, SUBSTRING,\n> ! \tSYMMETRIC, SYSID,\n> \n> \tTABLE, TEMP, TEMPLATE, TEMPORARY, THEN, TIME, TIMESTAMP,\n> \tTO, TOAST, TRAILING, TRANSACTION, TRIGGER, TRIM, TRUE_P,\n> ***************\n> *** 5433,5449 ****\n> \t\t\t\t\tb->booltesttype = IS_NOT_UNKNOWN;\n> \t\t\t\t\t$$ = (Node *)b;\n> \t\t\t\t}\n> ! \t\t\t| a_expr BETWEEN b_expr AND b_expr\t\t\t%prec BETWEEN\n> \t\t\t\t{\n> ! \t\t\t\t\t$$ = (Node *) makeA_Expr(AND, NIL,\n> ! \t\t\t\t\t\t(Node *) makeSimpleA_Expr(OP, \">=\", $1, $3),\n> ! 
\t\t\t\t\t\t(Node *) makeSimpleA_Expr(OP, \"<=\", $1, $5));\n> ! \t\t\t\t}\n> ! \t\t\t| a_expr NOT BETWEEN b_expr AND b_expr\t\t%prec BETWEEN\n> ! \t\t\t\t{\n> ! \t\t\t\t\t$$ = (Node *) makeA_Expr(OR, NIL,\n> ! \t\t\t\t\t\t(Node *) makeSimpleA_Expr(OP, \"<\", $1, $4),\n> ! \t\t\t\t\t\t(Node *) makeSimpleA_Expr(OP, \">\", $1, $6));\n> \t\t\t\t}\n> \t\t\t| a_expr IN_P in_expr\n> \t\t\t\t{\n> --- 5433,5446 ----\n> \t\t\t\t\tb->booltesttype = IS_NOT_UNKNOWN;\n> \t\t\t\t\t$$ = (Node *)b;\n> \t\t\t\t}\n> ! \t\t\t| a_expr BETWEEN opt_symmetry b_expr AND b_expr\t\t\t%prec BETWEEN\n> \t\t\t\t{\n> ! \t\t\t\t\tBetweenExpr *n = makeNode(BetweenExpr);\n> ! \t\t\t\t\tn->expr = $1;\n> ! \t\t\t\t\tn->symmetric = $3;\n> ! \t\t\t\t\tn->lexpr = $4;\n> ! \t\t\t\t\tn->rexpr = $6;\n> ! \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> \t\t\t| a_expr IN_P in_expr\n> \t\t\t\t{\n> ***************\n> *** 5519,5524 ****\n> --- 5516,5526 ----\n> \t\t\t\t{ $$ = $1; }\n> \t\t;\n> \n> + opt_symmetry: SYMMETRIC\t\t\t\t{ $$ = TRUE; }\n> + \t\t| ASYMMETRIC\t\t\t\t{ $$ = FALSE; }\n> + \t\t| /* EMPTY */\t\t\t\t{ $$ = FALSE; /* default */ }\n> + \t\t;\n> + \n> /*\n> * Restricted expressions\n> *\n> ***************\n> *** 6844,6849 ****\n> --- 6846,6852 ----\n> \t\t\t| ANY\n> \t\t\t| AS\n> \t\t\t| ASC\n> + \t\t\t| ASYMMETRIC\n> \t\t\t| BOTH\n> \t\t\t| CASE\n> \t\t\t| CAST\n> ***************\n> *** 6868,6882 ****\n> \t\t\t| FOR\n> \t\t\t| FOREIGN\n> \t\t\t| FROM\n> ! \t\t\t| GRANT\n> ! \t\t\t| GROUP_P\n> ! \t\t\t| HAVING\n> ! \t\t\t| INITIALLY\n> ! \t\t\t| INTERSECT\n> ! \t\t\t| INTO\n> ! \t\t\t| LEADING\n> ! \t\t\t| LIMIT\n> ! \t\t\t| LOCALTIME\n> \t\t\t| LOCALTIMESTAMP\n> \t\t\t| NEW\n> \t\t\t| NOT\n> --- 6871,6885 ----\n> \t\t\t| FOR\n> \t\t\t| FOREIGN\n> \t\t\t| FROM\n> ! \t\t\t| GRANT \n> ! \t\t\t| GROUP_P \n> ! \t\t\t| HAVING \n> ! \t\t\t| INITIALLY \n> ! \t\t\t| INTERSECT \n> ! \t\t\t| INTO \n> ! \t\t\t| LEADING \n> ! \t\t\t| LIMIT \n> ! 
\t\t\t| LOCALTIME \n> \t\t\t| LOCALTIMESTAMP\n> \t\t\t| NEW\n> \t\t\t| NOT\n> ***************\n> *** 6894,6899 ****\n> --- 6897,6903 ----\n> \t\t\t| SELECT\n> \t\t\t| SESSION_USER\n> \t\t\t| SOME\n> + \t\t\t| SYMMETRIC\n> \t\t\t| TABLE\n> \t\t\t| THEN\n> \t\t\t| TO\n> Index: src/backend/parser/keywords.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/backend/parser/keywords.c,v\n> retrieving revision 1.117\n> diff -c -r1.117 keywords.c\n> *** src/backend/parser/keywords.c\t2002/06/22 02:04:45\t1.117\n> --- src/backend/parser/keywords.c\t2002/06/27 10:27:35\n> ***************\n> *** 45,50 ****\n> --- 45,51 ----\n> \t{\"asc\", ASC},\n> \t{\"assertion\", ASSERTION},\n> \t{\"assignment\", ASSIGNMENT},\n> + \t{\"asymmetric\", ASYMMETRIC},\n> \t{\"at\", AT},\n> \t{\"authorization\", AUTHORIZATION},\n> \t{\"backward\", BACKWARD},\n> ***************\n> *** 271,276 ****\n> --- 272,278 ----\n> \t{\"storage\", STORAGE},\n> \t{\"strict\", STRICT},\n> \t{\"substring\", SUBSTRING},\n> + \t{\"symmetric\", SYMMETRIC},\n> \t{\"sysid\", SYSID},\n> \t{\"table\", TABLE},\n> \t{\"temp\", TEMP},\n> Index: src/include/nodes/nodes.h\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/include/nodes/nodes.h,v\n> retrieving revision 1.109\n> diff -c -r1.109 nodes.h\n> *** src/include/nodes/nodes.h\t2002/06/20 20:29:51\t1.109\n> --- src/include/nodes/nodes.h\t2002/06/27 10:27:36\n> ***************\n> *** 225,230 ****\n> --- 225,231 ----\n> \tT_GroupClause,\n> \tT_NullTest,\n> \tT_BooleanTest,\n> + \tT_BetweenExpr,\n> \tT_CaseExpr,\n> \tT_CaseWhen,\n> \tT_FkConstraint,\n> Index: src/include/nodes/parsenodes.h\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\n> retrieving revision 1.182\n> diff -c -r1.182 parsenodes.h\n> *** src/include/nodes/parsenodes.h\t2002/06/20 
20:29:51\t1.182\n> --- src/include/nodes/parsenodes.h\t2002/06/27 10:27:37\n> ***************\n> *** 174,179 ****\n> --- 174,192 ----\n> } A_Const;\n> \n> /*\n> + * BetweenExpr - an SQL99 BETWEEN expression\n> + */\n> + \n> + typedef struct BetweenExpr\n> + {\n> + \tNodeTag\t\ttype;\n> + \tNode\t\t*expr;\t\t\t/* Expression to check */\n> + \tint\t\tsymmetric;\t\t/* True if SYMMETRIC, false if ASYMMETRIC */\n> + \tNode\t\t*lexpr;\t\t\t/* First bound */\n> + \tNode\t\t*rexpr;\t\t\t/* Second bound */\n> + } BetweenExpr;\n> + \n> + /*\n> * TypeCast - a CAST expression\n> *\n> * NOTE: for mostly historical reasons, A_Const parsenodes contain\n> Index: src/test/regress/expected/select.out\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/test/regress/expected/select.out,v\n> retrieving revision 1.10\n> diff -c -r1.10 select.out\n> *** src/test/regress/expected/select.out\t2001/07/16 05:07:00\t1.10\n> --- src/test/regress/expected/select.out\t2002/06/27 10:27:38\n> ***************\n> *** 430,432 ****\n> --- 430,579 ----\n> mary | 8\n> (58 rows)\n> \n> + -- \n> + -- Test between syntax\n> + --\n> + SELECT 2 BETWEEN 1 AND 3;\n> + ?column? \n> + ----------\n> + t\n> + (1 row)\n> + \n> + SELECT 2 BETWEEN 3 AND 1;\n> + ?column? \n> + ----------\n> + f\n> + (1 row)\n> + \n> + SELECT 2 BETWEEN ASYMMETRIC 1 AND 3;\n> + ?column? \n> + ----------\n> + t\n> + (1 row)\n> + \n> + SELECT 2 BETWEEN ASYMMETRIC 3 AND 1;\n> + ?column? \n> + ----------\n> + f\n> + (1 row)\n> + \n> + SELECT 2 BETWEEN SYMMETRIC 1 AND 3;\n> + ?column? \n> + ----------\n> + t\n> + (1 row)\n> + \n> + SELECT 2 BETWEEN SYMMETRIC 3 AND 1;\n> + ?column? \n> + ----------\n> + t\n> + (1 row)\n> + \n> + SELECT 2 NOT BETWEEN 1 AND 3;\n> + ?column? \n> + ----------\n> + f\n> + (1 row)\n> + \n> + SELECT 2 NOT BETWEEN 3 AND 1;\n> + ?column? \n> + ----------\n> + t\n> + (1 row)\n> + \n> + SELECT 2 NOT BETWEEN ASYMMETRIC 1 AND 3;\n> + ?column? 
\n> + ----------\n> + f\n> + (1 row)\n> + \n> + SELECT 2 NOT BETWEEN ASYMMETRIC 3 AND 1;\n> + ?column? \n> + ----------\n> + t\n> + (1 row)\n> + \n> + SELECT 2 NOT BETWEEN SYMMETRIC 1 AND 3;\n> + ?column? \n> + ----------\n> + f\n> + (1 row)\n> + \n> + SELECT 2 NOT BETWEEN SYMMETRIC 3 AND 1;\n> + ?column? \n> + ----------\n> + f\n> + (1 row)\n> + \n> + SELECT -4 BETWEEN -1 AND -3;\n> + ?column? \n> + ----------\n> + f\n> + (1 row)\n> + \n> + SELECT -4 BETWEEN -3 AND -1;\n> + ?column? \n> + ----------\n> + f\n> + (1 row)\n> + \n> + SELECT -4 BETWEEN ASYMMETRIC -1 AND -3;\n> + ?column? \n> + ----------\n> + f\n> + (1 row)\n> + \n> + SELECT -4 BETWEEN ASYMMETRIC -3 AND -1;\n> + ?column? \n> + ----------\n> + f\n> + (1 row)\n> + \n> + SELECT -4 BETWEEN SYMMETRIC -1 AND -3;\n> + ?column? \n> + ----------\n> + f\n> + (1 row)\n> + \n> + SELECT -4 BETWEEN SYMMETRIC -3 AND -1;\n> + ?column? \n> + ----------\n> + f\n> + (1 row)\n> + \n> + SELECT -4 NOT BETWEEN -1 AND -3;\n> + ?column? \n> + ----------\n> + t\n> + (1 row)\n> + \n> + SELECT -4 NOT BETWEEN -3 AND -1;\n> + ?column? \n> + ----------\n> + t\n> + (1 row)\n> + \n> + SELECT -4 NOT BETWEEN ASYMMETRIC -1 AND -3;\n> + ?column? \n> + ----------\n> + t\n> + (1 row)\n> + \n> + SELECT -4 NOT BETWEEN ASYMMETRIC -3 AND -1;\n> + ?column? \n> + ----------\n> + t\n> + (1 row)\n> + \n> + SELECT -4 NOT BETWEEN SYMMETRIC -1 AND -3;\n> + ?column? \n> + ----------\n> + t\n> + (1 row)\n> + \n> + SELECT -4 NOT BETWEEN SYMMETRIC -3 AND -1;\n> + ?column? 
\n> + ----------\n> + t\n> + (1 row)\n> + \n> Index: src/test/regress/sql/select.sql\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/test/regress/sql/select.sql,v\n> retrieving revision 1.6\n> diff -c -r1.6 select.sql\n> *** src/test/regress/sql/select.sql\t2001/07/16 05:07:00\t1.6\n> --- src/test/regress/sql/select.sql\t2002/06/27 10:27:39\n> ***************\n> *** 103,105 ****\n> --- 103,133 ----\n> --\n> SELECT p.name, p.age FROM person* p ORDER BY age using >, name;\n> \n> + -- \n> + -- Test between syntax\n> + --\n> + SELECT 2 BETWEEN 1 AND 3;\n> + SELECT 2 BETWEEN 3 AND 1;\n> + SELECT 2 BETWEEN ASYMMETRIC 1 AND 3;\n> + SELECT 2 BETWEEN ASYMMETRIC 3 AND 1;\n> + SELECT 2 BETWEEN SYMMETRIC 1 AND 3;\n> + SELECT 2 BETWEEN SYMMETRIC 3 AND 1;\n> + SELECT 2 NOT BETWEEN 1 AND 3;\n> + SELECT 2 NOT BETWEEN 3 AND 1;\n> + SELECT 2 NOT BETWEEN ASYMMETRIC 1 AND 3;\n> + SELECT 2 NOT BETWEEN ASYMMETRIC 3 AND 1;\n> + SELECT 2 NOT BETWEEN SYMMETRIC 1 AND 3;\n> + SELECT 2 NOT BETWEEN SYMMETRIC 3 AND 1;\n> + SELECT -4 BETWEEN -1 AND -3;\n> + SELECT -4 BETWEEN -3 AND -1;\n> + SELECT -4 BETWEEN ASYMMETRIC -1 AND -3;\n> + SELECT -4 BETWEEN ASYMMETRIC -3 AND -1;\n> + SELECT -4 BETWEEN SYMMETRIC -1 AND -3;\n> + SELECT -4 BETWEEN SYMMETRIC -3 AND -1;\n> + SELECT -4 NOT BETWEEN -1 AND -3;\n> + SELECT -4 NOT BETWEEN -3 AND -1;\n> + SELECT -4 NOT BETWEEN ASYMMETRIC -1 AND -3;\n> + SELECT -4 NOT BETWEEN ASYMMETRIC -3 AND -1;\n> + SELECT -4 NOT BETWEEN SYMMETRIC -1 AND -3;\n> + SELECT -4 NOT BETWEEN SYMMETRIC -3 AND -1;\n> + \n> ----\n> \n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n\n\n\n", "msg_date": "27 Jun 2002 00:43:51 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": true, 
"msg_subject": "Re: BETWEEN SYMMETRIC" }, { "msg_contents": "Hi,\n\nBased on recent discussion, I went thru and got together the work I'd done\non the BETWEEN node. It's not as far along as I thought. I ran into a few\nhurdles:\n\n* ExecEvalBetweenExpr is probably beyond my powers - I've done my best and\nmarked my hopelessness with '@@' symbols. I don't know how to actually\nevaluate the node properly, I don't know how to check that all the 3 types\nare coercible to the same type and I don't know how to make it take rowvars\n(sic?) instead of scalars, as per spec.\n\nCopy and Equal are done, I think.\n\nOut I've guessed at how to do it based on other examples, but I need\nfeedback. Read I haven't done at all cos I don't quite understand when/why\nit's used or how to do it.\n\nThe grammar has been updated to use the new BetweenExpr node, with new\nsyntax options.\n\nThe new keywords have been added in the relevant places, and they are\nreserved.\n\nnodes.h and parsenodes.h are aware of the new node.\n\nI have added a full regression test that I used in my old gram.y-only\nimplementation, that didn't use a new node - it will be helpful!\n\nWhere do we go from here?\n\nChris", "msg_date": "Thu, 27 Jun 2002 18:31:10 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "BETWEEN SYMMETRIC" } ]
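The BETWEEN [SYMMETRIC] semantics exercised by the regression tests in the patch above can be sketched in Python. The `sql_between` helper name is hypothetical (this is not the patch's C code); the NULL propagation and the symmetric bound swap follow the ExecEvalBetweenExpr logic described in the thread.

```python
def sql_between(x, lo, hi, symmetric=False):
    """Sketch of SQL's `x BETWEEN [SYMMETRIC] lo AND hi`.

    ASYMMETRIC (the default) requires lo <= x <= hi; SYMMETRIC also
    accepts the bounds given in reversed order. Any NULL (None)
    operand makes the result NULL, as in ExecEvalBetweenExpr.
    """
    if x is None or lo is None or hi is None:
        return None  # NULL in, NULL out
    if symmetric:
        # Accept the range whichever way the bounds were written.
        return (lo <= x <= hi) or (hi <= x <= lo)
    return lo <= x <= hi
```

NOT BETWEEN is just the boolean negation of a non-NULL result; for instance `sql_between(2, 3, 1, symmetric=True)` is true, matching `SELECT 2 BETWEEN SYMMETRIC 3 AND 1` in the expected regression output.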
[ { "msg_contents": "I have a slightly different perspective on this. I hope it will be a \nbit useful.\n\nBackground:\nI'm a senior developer for a consulting firm. I too have experience with \nDB/2, Oracle, Sybase, Adabase, and M$ SQL.\nIn the last few years of work I've been moving from the technical side \nof things to the business side ( all together now: <eewwwww> ).\n\nI've been following PostgreSQL for a couple of years now. Absolutely \nlove it. I have never implemented it on a business project, though. Not \nby any personal desire to use or not to use it. Usually the db choice is \nout of my hands. I cannot say personally that PostgreSQL support is \namazing - ( once again, no experience at all to draw on ), however, I've \nbeen following the lists closely enough over the last few years that I \nbelieve the statement to be accurate. I can say that support services \nfrom the other vendors really aren't all that spectacular.\n\nPerspective:\nThere is one factor in database choice that I haven't seen listed here. \nCulpability & legal retribution. I'm not a lawyer, and don't claim to \nbe - so I welcome any corrections to the accuracy of the following. \nRegardless of its legal accuracy, I can vouch for the common belief in \nthe following thought by corporate I.T. management.\n\nAny corporation, whether privately or publicly held, has various legal \nobligations to its shareholders. Executive officers share in both the \nfinancial rewards of a successful company and in the legal \nresponsibility that the corporation has to its shareholders.\n\nIf a catastrophic software failure results in a high percentage of lost \nrevenue, a corporation might be able to seek monetary compensation from \na commercial vendor. They could even be taken to court - depending upon \nlicensing, product descriptions, promises made in product literature, \netc. 
For cases like open source projects, like PostgreSQL, there is no \nlegal recourse available.\n\nSo - in the extreme case, suppose commercial Vendor V's database blows \nchunks and causes Company B to lose a lot of money. If Company B can \nprove that the fault lies squarely on the shoulders of Vendor V, Company \nB can sue Vendor V's a** off. Executive management isn't at fault - \nbecause they have performed due diligence and have forged a partnership \nwith Vendor V, who has a legal responsibility for the claims of their \nproduct.\n\nIf, however, the database was PostgreSQL, then Company B has no legal \nrecourse. Executive management has personally taken all responsibility \nfor any catastrophic software failures, and therefore have put \nthemselves in quite a precarious situation. No one else to take the \nblame but them!\n\nNow frankly I know that the above scenario is extreme. I was rolling my \neyes while *writing* it. But the truth is that these are the kinds of \nthings that technical auditors would report to a Board of Directors. \nThere is nothing wrong with executive management choosing to assume risk \n(outside of corporate politics, that is ). Many savvy members of \nmanagement realize that the real risk is quite low. Of course, the \ncomfort level goes way up when the database is supporting a non-vital \nbusiness process - or a process that is several steps away from the \nrevenue stream.\n\nStill - imagine a database system with data and transactional volume the \nsize of Google. In this case the volume of updates & inserts is much \nhigher. Now this database is a company's main source of revenue \n( again, extreme, but we're talking examples ). Would you blame a \ncorporate exec if he wasn't willing to place his own personal assets on \nthe line by choosing PostgreSQL over Oracle?\n\nBTW - Oracle & other commercial vendors handle these contingencies by \nbuying insurance policies. 
If the above situation had occurred and \nOracle was the vendor, then the two companies would most likely settle \nout of court by dealing with the insurer. I dunno exactly how the claims \nprocess works on such a beast, but I know that such policies are \npurchased ( and you thought the annual support fee was just to cover the \nsupport staff's salaries?). Maybe Oracle would file a claim, an adjuster \nwould visit Oracle's customer, etc?\n\nClosing:\nI think PostgreSQL is a great database. I haven't explored its good and \nbad points thoroughly enough to know what applications it serves best, \nand where it's weakest. I do hope to use it in enough scenarios to find \nout. I hope a lawyer reads this and tells me that regardless of what \nmanagement thinks is true, the above is hog-wash. Until someone does, I \ncan't ignore the fact that a commercial vendor has a legal \nresponsibility to support the claims of their product, while an open \nsource group does not. I think PostgreSQL specifically keeps all of \ntheir claims legitimate and reasonable, but that doesn't change the fact \nthat if someone makes an honest mistake, there is nothing that can be \ndone *legally* to make you correct your mistake or pay for the damage it \ncaused.\n\nAndrew Sullivan wrote:\n> Followup set to -advocacy\n>\n>> On Wed, Jun 26, 2002 at 12:01:18PM -0700, Dann Corbit wrote:\n>>\n>> Customer support is also a big issue comparing free database systems\n>> with commercial ones. I know that there are a couple groups that do\n>> this, but that genre of businesses do not have a good track record of\n>> staying in business. MS, Oracle, and IBM will be there five years down\n>> the road to help.\n>\n> I normally wouldn't get involved in this one, since it's the sort of\n> thing that turns into a flamefest. And anyway, I'm not sure -hackers\n> is the place for it (hence the followup). 
But as a lowly user, I\n> cannot let such a comment go unanswered.\n>\n> I've used several commercial products of different kinds. I've\n> supported various kinds of databases. I've worked (and, in fact,\n> currently work) in shops with all kinds of different support\n> agreements, including the magic-high-availability, we'll have it in 4\n> hours ones. I've had contracts for support that were up for renewal,\n> and ones that had been freshly signed with a six-month trial.\n>\n> But I have never, _never_ had the sort of support that I get from the\n> PostgreSQL community and developers. And it has been this way ever\n> since I started playing with PostgreSQL some time ago, when I didn't\n> even know how SQL worked. I like to have commercial support, and to\n> be able to call on it -- we use the services of PostgreSQL, Inc. But\n> you cannot beat the PostgreSQL lists, nor the support directly from\n> the developers and other users. Everyone is unvarnished in their\n> assessments of flaws and their plans for what is actually going to get\n> programmed in. And they tell you when you're doing things wrong, and\n> what they are.\n>\n> You cannot, from _any_ commercial enterprise, no matter how much you\n> are willing to pay, buy that kind of service. People find major,\n> showstopper bugs in the offerings of the companies you mention, and\n> are brushed off until some time later, when the company is good and\n> ready. (I had one rep of a company I won't mention actually tell me,\n> \"Oh, so you found that bug, eh?\" The way I found it was by\n> discovering a hole in my network so big that Hannibal and his\n> elephants could have walked through. But the company in question did\n> not think it necessary to mention this little bug until people found\n> it. And our NDA prevented us from mentioning it.)\n>\n> Additionally, I would counsel anyone who thinks they are protected by\n> a large company to consider the fate of the poor Informix users these\n> days. 
Informix was once a power-house. It was a Safe Choice. But if\n> I were an Informix user today, I'd be spending much of my days trying\n> to learn DB2, or whatever. Because I would know that, sooner or\n> later, IBM is going to pull out the dreaded \"EOL\" stamp. And I'd\n> have to change my platform.\n>\n> The \"company supported\" argument might make some people in suits\n> comfortable, but I don't believe that they have any justification for\n> that comfort. I'd rather talk to the guy who wrote the code.\n>\n> A\n>\n> --\n> ----\n> Andrew Sullivan 87 Mowat Avenue\n> Liberty RMS Toronto, Ontario Canada\n> <andrew@libertyrms.info> M6K 3E3\n> +1 416 646 3304 x110", "msg_date": "Thu, 27 Jun 2002 00:41:26 -0600", "msg_from": "Tim Hart <tjhart@mac.com>", "msg_from_op": true, "msg_subject": "Re: Support (was: Democracy and organisation)" }, { "msg_contents": "Hmmm...\n\nI think this is a common fallacy. It's like arguing that if windoze crashes\nand you lose important data then you have some sort of legal recourse\nagainst Microsoft. Ever read one of their EULAs? $10 says that Oracle's\nlicense grants them absolute immunity to any kind of damages claim.\n\nChris\n\n-------------------\n\nTim Hart Wrote:\n\nIf a catastrophic software failure results in a high percentage of lost\nrevenue, a corporation might be able to seek monetary compensation from a\ncommercial vendor. They could even be taken to court - depending upon\nlicensing, product descriptions, promises made in product literature, etc.\nFor cases like open source projects, like PostgreSQL, there is no legal\nrecourse available.\n\nSo - in the extreme case, if commercial Vendor V's database blows chunks,\nand causes company B to loose a lot of money. If Company B can prove that\nthe fault lies squarely on the shoulders of Vendor V, Company C can sue\nVendor V's a** off. 
Executive management isn't at fault - because they have\nperformed due diligence and have forged a partnership with vendor V who has\na legal responsibility for the claims of their product.\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 15:08:14 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Support (was: Democracy and organisation)" }, { "msg_contents": "On Thu, Jun 27, 2002 at 12:41:26AM -0600, Tim Hart wrote:\n\n> If a catastrophic software failure results in a high percentage of lost \n> revenue, a corporation might be able to seek monetary compensation from \n> a commercial vendor. They could even be taken to court - depending upon \n> licensing, product descriptions, promises made in product literature, \n> etc. For cases like open source projects, like PostgreSQL, there is no \n> legal recourse available. \n\nThat is only sort of true. IANAL, though, so you should still get a\nlegal opinion.\n\nFirst, read the EULA of the commercial packages. I've never seen one\nthat didn't have something very similar to the following, which is\ntaken verbatim from the PostgreSQL license:\n\nTHE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES,\nINCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF\nMERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. 
THE SOFTWARE\nPROVIDED HEREUNDER IS ON AN \"AS IS\" BASIS, AND THE UNIVERSITY OF\nCALIFORNIA HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT,\nUPDATES, ENHANCEMENTS, OR MODIFICATIONS.\n\nEvery company puts that in their warranties, precisely to head off\nsuch lawsuits in the first place.\n\nThe problem is that (a) in some states or other locales, such\ndisclaimers are illegal, and (b) disclaiming an implied warranty of\nfitness is tricky if employees of your company have made explicit\npromises that a system will do such-and-thus (in casual parlance,\nsuch promises are called \"sales calls\").\n\nSo you're right that it is sometimes possible to try to sue the\nlicensor of the software for damages. Whether you have a hope of\nwinning, or (more importantly) winning anything other than a Pyrrhic\nvictory, is another question. I suspect that imagining one might sue\na software vendor is silly not because it is impossible, but because\nit would almost always be totally impractical. IBM fought the\nJustice Department for 30 years. Microsoft has been doing the same\nfor at least 10. And they'd have _way more_ interest in fighting any\nattempt to make them liable for flaws in their programs, and would be\nbeing sued by people with much shallower pockets than the DoJ. \n\nIt's also true that several bits of legislation (UCITA most\nobviously) have attempted to protect software publishers from\n_explicit malfeasance_, not just incompetence. There is currently a\nmove afoot by some of the security community to make it possible to\nhold companies legally liable for consequential damages of their\nsoftware's behaviour. 
Both of these items suggest that a lawsuit\nwould have next to no chance of winning.\n\nFinally, I note that, in spite of the suggestions of a lawyer back\nwhen Great Bridge was starting up (see\n<http://archives.postgresql.org/pgsql-general/2000-07/msg00024.php>),\nthe \"exculpatory language\" of the PostgreSQL license never was\nextended to the PostgreSQL Global Development Group. Therefore, it\nstrikes me that PostgreSQL developers could still be sued under the\ncurrent license, but I haven't read through that whole thread again\n(I remember when it happened the first time, and I've little wish to\nre-read all the UCITA arguments again), so maybe there was some\nconclusion that the exculpatory language was extended by implication.\n\nA \n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 10:41:00 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Support (was: Democracy and organisation)" }, { "msg_contents": "Tim,\n\n> If a catastrophic software failure results in a high percentage of\n> lost revenue, a corporation might be able to seek monetary\n> compensation from a commercial vendor. They could even be taken to\n> court - depending upon licensing, product descriptions, promises made\n> in product literature, etc. For cases like open source projects, like\n> PostgreSQL, there is no legal recourse available.\n\nWell, there's the perception and the reality. I can't argue that\ncompany lawyers and auditors will *not* make the above argument; they\nvery well may, especially if they are personally pro-MS or pro-Oracle.\n You may be on to something there.\n\nHowever, the argument is hogwash from a practical perspective. In\npractice, it is nearly impossible to sue a company for bad software\n(witness various class actions against Microsoft). 
So much so that one\nof the hottest-debated portions of the vastly flawed UCITA is software\nliability and \"lemon laws\". Plus in some states, the vendor's EULA\n(which always disclaims secondary liability) is more powerful than\nlocal consumer law.\n\nOr from a financial perspective: An enterprise MS SQL 2000 user can\nexpect to pay, under Licensing 6.0, about $10,000 - $20,000 a year in\nlicensing fees -- *not including any support*. Just $2000-$5000 buys\nyou a pretty good $10 million software failure insurance policy. Do\nthe math.\n\nAs I said, I don't disregard your argument. Just because it's hogwash\ndoesn't mean that people don't believe it.\n\n-Josh Berkus\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 09:07:30 -0700", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Support (was: Democracy and organisation)" } ]
[ { "msg_contents": "\n\nBegin forwarded message:\n\nI said:\n> BTW - Oracle & other commercial vendors handle these contingencies by \n> buying insurance policies.\n\nI think I should probably correct the above statement. I think Oracle \nspecifically has a large enough revenue stream that they have no need to \npurchase an insurance policy. It is technically possible for them, or \nany other vendor, to do so if they chose to. Many insurance companies \noffer insurance products to offset the legal responsibility for the \nperformance of a software package. Many such policies are sold each year.\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 00:52:22 -0600", "msg_from": "Tim Hart <timjhart@shaw.ca>", "msg_from_op": true, "msg_subject": "Fwd: Support (was: Democracy and organisation)" }, { "msg_contents": "On Thu, 2002-06-27 at 02:52, Tim Hart wrote:\n> \n> \n> Begin forwarded message:\n> \n> I said:\n> > BTW - Oracle & other commercial vendors handle these contingencies by \n> > buying insurance policies.\n> \n> I think I should probably correct the above statement. I think Oracle \n> specifically has a large enough revenue stream that they have no need to \n> purchase an insurance policy. It is technically possible for them, or \n> any other vendor, to do so if they chose to. Many insurance companies \n> offer insurance products to offset the legal responsibility for the \n> performance of a software package. Many such policies are sold each year.\n\nPerhaps, but in this case who protects Oracle from the insurance company\nwhen the insurance agency Oracle based database corrupts and loses the\nOracle policy?\n\nThis is why I think Oracle should promote PostgreSQL for instances where\na database issue could be conflicting ;)\n\n\n\n", "msg_date": "27 Jun 2002 08:40:53 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Support (was: Democracy and organisation)" } ]
[ { "msg_contents": "Begin forwarded message:\n\nI said:\n> BTW - Oracle & other commercial vendors handle these contingencies by \n> buying insurance policies.\n\nI think I should probably correct the above statement. I think Oracle \nspecifically has a large enough revenue stream that they have no need to \npurchase an insurance policy. It is technically possible for them, or \nany other vendor, to do so if they chose to. Many insurance companies \noffer insurance products to offset the legal responsibility for the \nperformance of a software package. Many such policies are sold each year.\n\n\n\n\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 00:58:41 -0600", "msg_from": "Tim Hart <tjhart@mac.com>", "msg_from_op": true, "msg_subject": "Re: Stalled post to pgsql-hackers" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Christopher Kings-Lynne [mailto:chriskl@familyhealth.com.au] \n> Sent: 27 June 2002 08:08\n> To: pgsql-hackers@postgresql.org; Tim Hart\n> Cc: Andrew Sullivan; pgsql-advocacy@postgresql.org\n> Subject: Re: [HACKERS] Support (was: Democracy and organisation)\n> \n> \n> Hmmm...\n> \n> I think this is a common fallacy. It's like arguing that if \n> windoze crashes and you lose important data then you have \n> some sort of legal recourse against Microsoft. Ever read one \n> of their EULAs? $10 says that Oracle's license grants them \n> absolute immunity to any kind of damages claim.\n\nI'm inclined to agree, though if it were the case, just buy Red Hat\nDatabase.\n\nRegards, Dave.\n\n\n", "msg_date": "Thu, 27 Jun 2002 08:30:47 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Support (was: Democracy and organisation)" } ]
[ { "msg_contents": "I think PostgreSQL's standards are a bit too high. From my point of \nview, the team as a whole has no desire to build the world's best open \nsource database from the point of view of functionality. They seem more \ninterested in writing the open source database with the world's most \naesthetically pleasing source code.\n\nNow - in all fairness, I do software architecture for a living, and I \ncan't stand hacks. I fight against them at *almost* every opportunity that \nI get, because I'm loath to produce such slop. I know that the more \nslop gets in my code, the harder it is to enhance and maintain, and the \nmore likely it is to actually break code & slow down the pace of \ndevelopment.\n\nI also must admit that aesthetically pleasing source code almost \n*always* means that the functionality that is there is rock solid. That \nfunctionality was also 'purchased' at the highest price possible.\n\nBut I also know that functionality has value to the customer. Customers \nhave very little concern for the aesthetics of proper design and \nimplementation. The customer I work with right now has a slogan that I \nthink summed it up well for all customers in general: ( I want it all, \nand I want it now ). All the valid technical arguments I have don't mean \na thing. To the customer, functionality A translates to work savings B. \nThe process can be well defined. Implement it. When I tell her that the \ncost of implementation is some high value 'X' ( cost in terms of time \nand/or $$ ), she doesn't say 'I'll wait'. She says, \"Hmm... what can I \nget for X/4?\" When I tell her, she then says: \"Can I get A/4 now, and \ncan you give me most of the rest of A in 4 months? That's more important \nto me than functionality Y, and I can do without this bit of spit and \npolish that was part of A.\"\n\nSo I deliver A/4 now, and she uses it now. She receives immediate \nbenefit. She uses the product. She's happy. 
I clean up my hack while I \ndeliver the other portion of A that she wanted.\n\nNow I know that business processes are a far cry from database features. \nThey are less complex and adding a new feature doesn't always carry the \npotential repercussions that a poorly thought out database feature could \ncause.\n\nNonetheless, you tell me today that I can shrink indexes with tool X, \nbut tool X is a hack and likely to change, and I'll use tool X because \nthe value of shrinking outweighs the cost of changing to the \nchrome-plated tool Y when it comes out next year. I may choose not to \nuse another tool because it's also a hack and not that important to my \nimplementation. My choice. In fact, I've found it less costly to deal \nwith vendors cleaning up their hacks( i.e., breaking backwards \ncompatibility ) than in trying to implement my own solution for said \nfeature and trying to replace it when the database finally implements \nthe feature.\n\nI'm not advocating that you put in every hack. There's always a balance \nbetween judging a whim and a genuine need. A good development effort can \nalso tolerate only a limited number of 'unresolved hacks' at a time. \nFair enough. But an application developer with a need for a database \nfeature is going to pick the database solution with that feature set \nimplemented *today*. Whether or not it's a hack will not keep them from \nusing it. It will keep a seasoned developer from relying *too heavily* \non it. But there's only so much you can do to protect the users from \nthemselves. 
Warning labels on tools is fair warning.\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> So, when we review patches, we shouldn't be turning up our noses at\n>> imperfect solutions if the solution meets needs of our users.\n>\n> I think our standards have gone up over the years, and properly so.\n> The fact that we put in hacks some years ago doesn't mean that we\n> still should.\n>\n> I don't really mind hacks^H^H^Hpartial solutions that are clean subsets\n> of the functionality we want to have eventually. I do object to hacks\n> that will create a backwards-compatibility problem when we want to do it\n> right.\n>\n> \t\t\tregards, tom lane", "msg_date": "Thu, 27 Jun 2002 01:32:53 -0600", "msg_from": "Tim Hart <tjhart@mac.com>", "msg_from_op": true, "msg_subject": "Re: Why I like partial solutions" } ]
[ { "msg_contents": "The cvs docs say that we support the 'WITH CHECK OPTION' on views, but the\nTODO says we don't...\n\nChris\n\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 17:40:00 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "mistake in sql99 compatibility?" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> The cvs docs say that we support the 'WITH CHECK OPTION' on views, but the\n> TODO says we don't...\n\nTODO updated. Not sure when it was added but I see it in SGML docs.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 28 Jun 2002 14:57:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: mistake in sql99 compatibility?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Christopher Kings-Lynne wrote:\n>> The cvs docs say that we support the 'WITH CHECK OPTION' on views, but the\n>> TODO says we don't...\n\n> TODO updated. Not sure when it was added but I see it in SGML docs.\n\nA moment's examination of gram.y would have convinced you that the\ndocs are wrong ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 28 Jun 2002 15:21:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: mistake in sql99 compatibility? " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Christopher Kings-Lynne wrote:\n> >> The cvs docs say that we support the 'WITH CHECK OPTION' on views, but the\n> >> TODO says we don't...\n> \n> > TODO updated. Not sure when it was added but I see it in SGML docs.\n> \n> A moment's examination of gram.y would have convinced you that the\n> docs are wrong ...\n\nOh, OK. 
In the future, the \"A moment's examination\" swipe isn't\nrequired. :-( I suppose it makes you feel better.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Fri, 28 Jun 2002 15:29:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: mistake in sql99 compatibility?" }, { "msg_contents": "On Fri, Jun 28, 2002 at 02:57:27PM -0400, Bruce Momjian wrote:\n> Christopher Kings-Lynne wrote:\n> > The cvs docs say that we support the 'WITH CHECK OPTION' on views, but the\n> > TODO says we don't...\n> \n> TODO updated. Not sure when it was added but I see it in SGML docs.\n\nOn a related note, the SQL99 feature list in the development docs says\nthat we support the SQL99 UNIQUE predicate. AFAIK we don't -- should\nthe docs be updated?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n\n", "msg_date": "Fri, 28 Jun 2002 15:35:21 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": false, "msg_subject": "Re: mistake in sql99 compatibility?" }, { "msg_contents": "Sure? I don't see it. In fact, I only see it in the 'SQL92 features we\ndon't have section'.\n\nhttp://developer.postgresql.org/docs/postgres/sql-createview.html\n\nChris\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>\nCc: <pgsql-hackers@postgresql.org>\nSent: Saturday, June 29, 2002 2:57 AM\nSubject: Re: [HACKERS] mistake in sql99 compatibility?\n\n\n> Christopher Kings-Lynne wrote:\n> > The cvs docs say that we support the 'WITH CHECK OPTION' on views, but\nthe\n> > TODO says we don't...\n>\n> TODO updated. 
Not sure when it was added but I see it in SGML docs.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n\n\n", "msg_date": "Sat, 29 Jun 2002 12:14:56 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: mistake in sql99 compatibility?" } ]
[ { "msg_contents": "I've just come across a case in Oracle 8.0.6 where important queries\ncould have been several orders of magnitude faster if only the optimizer\nhad realized that it was doing case-insensitive comparisons against a\nconstant that wasn't affected by case (a string of all digits).\n\nThe query was of the general form\n\n\tSELECT * FROM table\n\tWHERE upper(id) = '001234'\n\n...causing a full index scan (there was a non-unique index on id). What\nthe optimizer could perhaps have done was something like\n\n\tif (upper('001234') == lower('001234'))\n\t\tSELECT * FROM table\n\t\tWHERE id = '001234';\n\telse\n\t\tSELECT * FROM table\n\t\tWHERE upper(id) = '001234';\n\nEven without the index I guess that would have saved it a lot of work.\nIn this case, of course, the user wasn't doing the smartest thing by\ngiving millions of records a numerical id but storing it as varchar.\nOTOH there may also be a lot of cases like\n\n\tSELECT * FROM table\n\tWHERE upper(name) LIKE '%'\n\nbeing generated by not-too-bright applications out there.\n\nDoes PostgreSQL do this kind of optimization? If not, how easy and how\nuseful would it be to build it? I suppose this sort of thing ought to\nbe in src/backend/optimizer/prep/ somewhere, but I couldn't find\nanything like it.\n\n\nJeroen\n\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 12:05:21 +0200", "msg_from": "\"Jeroen T. Vermeulen\" <jtv@xs4all.nl>", "msg_from_op": true, "msg_subject": "Case sensitive searches" } ]
[ { "msg_contents": "Hi\ni just upgrading postgres from 7.0 to 7.2 and i have the following error in\nconfiguration process\nwhat happen ? how to fix this problem ?\n\nEnter default encoding (SQL_ASCII):\nNow installing the PostgreSQL database files in /var/lib/postgres/data\nsu - postgres -c cd /var/lib/postgres; . ./.profile; LANG= initdb --encoding\nSQL_ASCII --pgdata /var/lib/postgres/data\n/usr/lib/postgresql/bin/pg_encoding: relocation error:\n/usr/lib/postgresql/bin/pg_encoding: undefined symbol: pg_char_to_encoding\ninitdb: pg_encoding failed\n\n\n--\n_______________________________\nFouad Fezzi\nIngenieur Réseau\nIUP Institut Universitaire Professionnalisé\nUniversite d'Avignon et des Pays de Vaucluse\n339 ch. des Meinajaries\ntel : (+33/0) 4 90 84 35 50\nBP 1228 - 84911 AVIGNON CEDEX 9\nfax : (+33/0) 4 90 84 35 01\nhttp://www.iup.univ-avignon.fr\n_________________________________\n\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 12:57:54 +0200", "msg_from": "\"Fouad Fezzi\" <fezzi@iup.univ-avignon.fr>", "msg_from_op": true, "msg_subject": "encoding problem" }, { "msg_contents": "\"Fouad Fezzi\" <fezzi@iup.univ-avignon.fr> writes:\n> i just upgrading postgres from 7.0 to 7.2 and i have the following error in\n> configuration process\n> what happen ? how to fix this problem ?\n\n> Enter default encoding (SQL_ASCII):\n> Now installing the PostgreSQL database files in /var/lib/postgres/data\n> su - postgres -c cd /var/lib/postgres; . ./.profile; LANG= initdb --encoding\n> SQL_ASCII --pgdata /var/lib/postgres/data\n> /usr/lib/postgresql/bin/pg_encoding: relocation error:\n> /usr/lib/postgresql/bin/pg_encoding: undefined symbol: pg_char_to_encoding\n> initdb: pg_encoding failed\n\nI think that you configured 7.2 with multibyte support but that the old\n7.0 installation didn't have it, and for some reason the dynamic linker\nis trying to bind the old libpq.so instead of the new one. 
Check where\nyou've installed the library, check ldconfig path, etc.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Jun 2002 09:40:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: encoding problem " } ]
[ { "msg_contents": ">...\n>\tif (upper('001234') == lower('001234'))\n>\t\tSELECT * FROM table\n>\t\tWHERE id = '001234';\n>\telse\n>\t\tSELECT * FROM table\n>\t\tWHERE upper(id) = '001234';\n>\n>Even without the index I guess that would have saved it a lot of work.\n\nI'm no expert, but I can't image this will be easy, because the optimizer\ndoes not know any relation between lower() and upper().\nI think an index on upper(id) (create index idxname on table(upper(id)))\nshould work well.\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 13:10:57 +0200", "msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>", "msg_from_op": true, "msg_subject": "Re: Case sensitive searches" } ]
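The rewrite Jeroen proposes in this thread hinges on one test: the comparison constant cannot be changed by case folding, so the `upper()` wrapper can be dropped and a plain index on the column used. The sketch below is a hypothetical illustration of that idea, not PostgreSQL optimizer code; the function names and the simplistic (quoting-free) string building are assumptions made for clarity:

```python
def case_invariant(s: str) -> bool:
    # A constant is unaffected by case folding (e.g. it is all digits
    # or punctuation) exactly when upper- and lower-casing it agree.
    return s.upper() == s.lower()


def rewrite_predicate(column: str, constant: str) -> str:
    # Hypothetical rewrite of  WHERE upper(column) = 'constant':
    # when the constant is case-invariant, upper(x) = constant can only
    # hold if x = constant, so compare the bare column instead and let
    # an ordinary index on it be used.
    if case_invariant(constant):
        return f"{column} = '{constant}'"
    return f"upper({column}) = '{constant}'"
```

Mario's suggestion in the reply above, a functional index (`create index idxname on table(upper(id))`), sidesteps the rewrite entirely, at the cost of maintaining an extra index.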
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Dave Cramer [mailto:Dave@micro-automation.net] \n> Sent: 27 June 2002 12:12\n> To: Dave Page\n> Subject: RE: [HACKERS] Postgres idea list\n> \n> \n> Dave,\n> \n> Thanks for the response.\n> \n> On Wed, 2002-06-26 at 15:21, Dave Page wrote:\n> > \n> > \n> > I do, but I've had nothing but bad experiences with Java though I'm \n> > open to new evidence/persuasion. I do agree that \n> duplication of effort \n> > is not a good idea and I'm certainly not against collaborating on a \n> > new version though I must point out that having written \n> pgAdmin from \n> > scratch twice now (three times if you count my original proof of \n> > concept) over the last 5-6 years, I have *very* specific \n> ideas on how \n> > pgAdmin should work.\n> \n> I've heard this \"bad experience\" thing a few times and I \n> would like to understand this better. I have been developing \n> in java for quite some time now, and have no worse, or better \n> time with it.\n\nMost recently, the Cisco Visual Switch manager app that's in the\nfirmware of my 2950-24 switches which won't run on any Linux or Win32\nsystem I've got within 6 feet of me. You'd think they'd get it right.\n\nI have often found that applets from various places give exception\nerrors and refuse to run. Others are extremely slow.\n\nOn the plus side, there is a Java Telnet app that I used to use that was\n*very* good.\n\n\n> > Let me say now though, even if I do stay with my own \n> version, if you \n> > ever need help don't hesitate to ask.\n> > \n> Thanks very much for the offer, actually your code is quite helpful.\n\n:-)\n\nRegards, Dave.\n\n\n", "msg_date": "Thu, 27 Jun 2002 14:30:30 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Postgres idea list" } ]
[ { "msg_contents": "Marc, did you do anything to the format of the individual archive\nmessage pages? The top index pages look great, but when I go to,\nsay,\nhttp://archives.postgresql.org/pgsql-hackers/2002-05/index.php\nI see only a blank page. \"View Source\" shows there is stuff there,\nbut my browser ain't coping. Maybe a missing end-tag or something?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Jun 2002 10:25:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Can't read archives anymore :-(" }, { "msg_contents": "\nshows up fine for me ... browser issue? :( is there a tag missing that\nyou can pick out in view source? *raised eyebrow*\n\nOn Thu, 27 Jun 2002, Tom Lane wrote:\n\n> Marc, did you do anything to the format of the individual archive\n> message pages? The top index pages look great, but when I go to,\n> say,\n> http://archives.postgresql.org/pgsql-hackers/2002-05/index.php\n> I see only a blank page. \"View Source\" shows there is stuff there,\n> but my browser ain't coping. Maybe a missing end-tag or something?\n>\n> \t\t\tregards, tom lane\n>\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 11:54:24 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Can't read archives anymore :-(" }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> shows up fine for me ... browser issue? :( is there a tag missing that\n> you can pick out in view source? *raised eyebrow*\n\nLooks like table problems. 
I tried W3C's validator on it, and it had a ton\nof minor gripes, but the missing table end-tag is probably the killer:\n\nhttp://validator.w3.org/check?uri=http%3A%2F%2Farchives.postgresql.org%2Fpgsql-hackers%2F2002-05%2Findex.php&charset=iso-8859-1+%28Western+Europe%29&doctype=HTML+4.01+Strict\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Jun 2002 11:07:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Can't read archives anymore :-( " }, { "msg_contents": "On Thu, 27 Jun 2002, Tom Lane wrote:\n\n> ...but the missing table end-tag is probably the killer:\n\nThat kills me in Netscape 4.78 all the time. It's a well known problem.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n\n", "msg_date": "Fri, 28 Jun 2002 12:28:00 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Can't read archives anymore :-( " }, { "msg_contents": "On Fri, 28 Jun 2002, Curt Sampson wrote:\n\n> On Thu, 27 Jun 2002, Tom Lane wrote:\n> \n> > ...but the missing table end-tag is probably the killer:\n> \n> That kills me in Netscape 4.78 all the time. It's a well known problem.\n\nAlways toss your pages here to see if they're valid:\n\nhttp://validator.w3.org/\n\n-- \n\"Force has no place where there is need of skill.\", \"Haste in every \nbusiness brings failures.\", \"This is the bitterest pain among men, to have \nmuch knowledge but no power.\" -- Herodotus\n\n\n\n\n", "msg_date": "Fri, 28 Jun 2002 11:03:31 -0600 (MDT)", "msg_from": "Scott Marlowe <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Can't read archives anymore :-( " } ]
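The unbalanced-table problem the validator flagged in this thread is easy to detect mechanically. Below is a hedged sketch using Python's standard `html.parser`, not the tooling the archives actually used; it only counts `<table>` open/close tags, which is the specific failure (a missing table end-tag) that left old Netscape 4.x rendering a blank page:

```python
from html.parser import HTMLParser


class TableBalanceChecker(HTMLParser):
    # Tracks <table> nesting depth; a positive depth at end-of-document
    # means at least one table was never closed.
    def __init__(self):
        super().__init__()
        self.depth = 0

    def handle_starttag(self, tag, attrs):
        if tag == "table":
            self.depth += 1

    def handle_endtag(self, tag):
        if tag == "table":
            self.depth -= 1


def unclosed_tables(html: str) -> int:
    checker = TableBalanceChecker()
    checker.feed(html)
    checker.close()
    return checker.depth
```

A full validation (as the W3C service performs) checks far more than this, but a one-tag counter like the above is enough to catch the class of bug reported here.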
[ { "msg_contents": "Could very well be. As I said, I'm not a lawyer. I do know that depending upon the laws in a region, EULAs can be proven to be legally invalid.\n\nI do personally find it hard to believe that Oracle could be legally immune from *all* damages claims. In practice proving fault could be very hard to do ( \"It was the DBA's fault - incorrect configuration\", or \"The OS has a bug in it\"), but in general when a fee is paid for a good or service, there is an implied legal contract that at times can supersede any EULA. The good or service provider has some legal responsibility for the accuracy of their claims regarding the service provided, or the functionality of the project delivered. For example, the only clause that Ford Motor company could use in a sales contract that would absolve them from lemon laws is basically \"The product you are buying is a lemon\".\n\nYour point is taken, though - I don't think one could successfully sue Microsoft if Windows crashes from time to time. However, if M$ promises that product X is a complete COTS datacenter, and you buy X and find that X is nowhere near as stable as the industry norm, you have a legal case - both for the cost of the product and in the resulting lost revenue.\n\nI probably failed to convey in my initial post that I don't think the scenario is likely. Building and maintaining a db app involves technical talent on the part of the client, reliable hardware, networking, appropriate facilities, blah, blah, blah. So it's likely that blame can't be placed on one thing - and no single fault is probably large enough to be outside the industry norms for reliability of the product. I was merely trying to convey management's mindset. I feel the thinking is flawed as well.\n\nOn Thursday, 27, 2002, at 01:08AM, Christopher Kings-Lynne <chriskl@familyhealth.com.au> wrote:\n\n>Hmmm...\n>\n>I think this is a common fallacy. 
It's like arguing that if windoze crashes\n>and you lose important data then you have some sort of legal recourse\n>against Microsoft. Ever read one of their EULAs? $10 says that Oracle's\n>license grants them absolute immunity to any kind of damages claim.\n>\n>Chris\n>\n>-------------------\n>\n>Tim Hart Wrote:\n>\n>If a catastrophic software failure results in a high percentage of lost\n>revenue, a corporation might be able to seek monetary compensation from a\n>commercial vendor. They could even be taken to court - depending upon\n>licensing, product descriptions, promises made in product literature, etc.\n>For cases like open source projects, like PostgreSQL, there is no legal\n>recourse available.\n>\n>So - in the extreme case, if commercial Vendor V's database blows chunks,\n>and causes company B to loose a lot of money. If Company B can prove that\n>the fault lies squarely on the shoulders of Vendor V, Company C can sue\n>Vendor V's a** off. Executive management isn't at fault - because they have\n>performed due diligence and have forged a partnership with vendor V who has\n>a legal responsibility for the claims of their product.\n>\n>\n>\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 09:01:45 -0700 (PDT)", "msg_from": "Tim Hart <tjhart@mac.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Support (was: Democracy and organisation)" }, { "msg_contents": "\nIs this sort of like Oracle guaranteeing its uncrackable, but as soon as\nsomeone comes to them to prove it is, Oracle's response is \"but DBA didn't\nenable the obscure security feature that can be found here, that is\ndisabled by default?\"\n\nOn Thu, 27 Jun 2002, Tim Hart wrote:\n\n> Could very well be. As I said, I'm not a lawyer. I do know that depending upon the laws in a region, EULAs can be proven to be legally invalid.\n>\n> I do personally find it hard to believe that Oracle could be legally immune from *all* damages claims. 
In practice proving fault could be very hard to do ( \"It was the DBA's fault - incorrect configuration\", or \"The OS has a bug in it\"), but in general when a fee is paid for a good or service, there is an implied legal contract that at times can supercede any EULA. The good or service provider has some legal responsibility for the accuracy of their claims regarding the service provided, or the functionality of the project delivered. For example, the only clause that Ford Motor company could use in a sales contract that would absolve them from lemon laws is basically \"The product you are buying is a lemon\".\n>\n> Your point is taken, though - I don't think one could succesfully sue Microsoft if Windows crashes from time to time. However, if M$ promises that product X is a complete COTS datacenter, and you buy X and find that X is nowhere near stable as the industry norm, you have a legal case - both for the cost of the product and in the resulting lost revenue.\n>\n> I probably failed to convey in my initial post that I don't think the scenario is likely. Building and maintaining a db app involves technical talent on the part of the client, reliable hardware, networking, appropriate facilities, blah, blah, blah. So it's likely that blame can't be placed on one thing - and no single fault is probably large enough to be outside the industry norms for reliability of the product. I was merely trying to convey managements mindset. I feel the thinking is flawed as well.\n>\n> On Thursday, 27, 2002, at 01:08AM, Christopher Kings-Lynne <chriskl@familyhealth.com.au> wrote:\n>\n> >Hmmm...\n> >\n> >I think this is a common fallacy. It's like arguing that if windoze crashes\n> >and you lose important data then you have some sort of legal recourse\n> >against Microsoft. Ever read one of their EULAs? 
$10 says that Oracle's\n> >license grants them absolute immunity to any kind of damages claim.\n> >\n> >Chris\n> >\n> >-------------------\n> >\n> >Tim Hart Wrote:\n> >\n> >If a catastrophic software failure results in a high percentage of lost\n> >revenue, a corporation might be able to seek monetary compensation from a\n> >commercial vendor. They could even be taken to court - depending upon\n> >licensing, product descriptions, promises made in product literature, etc.\n> >For cases like open source projects, like PostgreSQL, there is no legal\n> >recourse available.\n> >\n> >So - in the extreme case, if commercial Vendor V's database blows chunks,\n> >and causes company B to loose a lot of money. If Company B can prove that\n> >the fault lies squarely on the shoulders of Vendor V, Company C can sue\n> >Vendor V's a** off. Executive management isn't at fault - because they have\n> >performed due diligence and have forged a partnership with vendor V who has\n> >a legal responsibility for the claims of their product.\n> >\n> >\n> >\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n>\n>\n\n\n\n", "msg_date": "Thu, 27 Jun 2002 15:23:04 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Support (was: Democracy and organisation)" } ]
[ { "msg_contents": " \nOn Thursday, 27, 2002, at 10:07AM, Josh Berkus <josh@agliodbs.com> wrote:\n\n>Or from a financial perspective: An enterprise MS SQL 2000 user can\n>expect to pay, under Licensing 6.0, about $10,000 - $20,000 a year in\n>licnesing fees -- *not including any support*. Just $2000-$5000 buys\n>you a pretty good $10 million software failure insurance policy. Do\n>the math.\n>\n>-Josh Berkus\n\nThe statement above has brought something to light that I had never really considered...\nWill an insurance company issue a software failure policy against PostgreSQL? If so, that may help me in my own struggles to convince managment that they're current approach to mitigating their risk is not only flawed, but *financially impracticle*.\n\n\n", "msg_date": "Thu, 27 Jun 2002 09:57:57 -0700 (PDT)", "msg_from": "Tim Hart <tjhart@mac.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Support (was: Democracy and organisation)" } ]
[ { "msg_contents": "Hello PostgreSQL developers and Admins of news.postgresql.org,\n\nrecently I located messages of this PostgreSQL groups on\ngroups.google.com.\n\nHowever, after asking the local newsserver admin of my company to host\nthose groups as well, I was told that those groups here are\n*unauthorized* and clearly violating current Usenet rules for the\nnamespace of the \"Big 8\" hierarchies, that is \"comp.*\" and others, thus\nthe hosting of the groups had to be refused.\n\nApparently, no RfD (Request for discussion) and voting has ever been\nmade to officially introduce those groups to the Usenet.\n\nMay I kindly ask, why this procedure has not been followed?\n\nOr, if you did not intend to do that, why have the groups not been given\na name outside of the forbidden namespace of the \"Big 8\"?\n\nI am sure, a lot of people would be happy, if those groups were\nofficially introduced and hosted on many international newservers.\n\nRegards,\n\nGuido\n\n\n", "msg_date": "Thu, 27 Jun 2002 20:32:25 +0200", "msg_from": "Guido Ostkamp <Guido.Ostkamp@gmx.de>", "msg_from_op": true, "msg_subject": "Are these groups \"unauthorized\"?" }, { "msg_contents": "Guido Ostkamp <Guido.Ostkamp@gmx.de> writes:\n> I am sure, a lot of people would be happy, if those groups were\n> officially introduced and hosted on many international newservers.\n\nYup.
Are you volunteering to be the proponent who shepherds a vote\nthrough the official process?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 28 Jun 2002 10:46:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Are these groups \"unauthorized\"? " }, { "msg_contents": "On Friday 28 June 2002 10:46 am, Tom Lane wrote:\n> Guido Ostkamp <Guido.Ostkamp@gmx.de> writes:\n> > I am sure, a lot of people would be happy, if those groups were\n> > officially introduced and hosted on many international newservers.\n\n> Yup.
Are you volunteering to be the proponent who shepherds a vote\n> through the official process?\n\nMaybe more like 'martyr' who shepherds a vote through. I remember going \nthrough a few votes years ago, and the memories are not fond ones.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n\n", "msg_date": "Fri, 28 Jun 2002 12:17:00 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Are these groups \"unauthorized\"?" }, { "msg_contents": "On Fri, 28 Jun 2002, Lamar Owen wrote:\n\n> On Friday 28 June 2002 10:46 am, Tom Lane wrote:\n> > Guido Ostkamp <Guido.Ostkamp@gmx.de> writes:\n> > > I am sure, a lot of people would be happy, if those groups were\n> > > officially introduced and hosted on many international newservers.\n>\n> > Yup. Are you volunteering to be the proponent who shepherds a vote\n> > through the official process?\n>\n> Maybe more like 'martyr' who shepherds a vote through. I remember going\n> through a few votes years ago, and the memories are not fond ones.\n\nThat's why I never bothered ... I've been admin'ng Usenet for enough years\nnow to know that if you create them and propogate them to your neighbors,\nthey will eventually get propogaated out and created ... there are a few\nadmin out there that are anal about creating 'unauthorized' groups, but\nmost out there just let them pass ...\n\nThat said, if anyone wants to provide an open NNTP server for the\nc.d.postgresql.* hierarchy, please let me know and we'll add ou on ...\n\n\n\n", "msg_date": "Sat, 29 Jun 2002 14:49:56 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Are these groups \"unauthorized\"?" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Guido Ostkamp <Guido.Ostkamp@gmx.de> writes:\n>> I am sure, a lot of people would be happy, if those groups were\n>> officially introduced and hosted on many international newservers.\n> \n> Yup.
Are you volunteering to be the proponent who shepherds a vote\n> > through the official process?\n>\n> No.\n>\n> If you look closely at the 'comp.databases.*' hierarchy you will find\n> that most of the databases listed have only one group, with the\n> exception of the big players like Oracle. That means, the maximum you\n> would be able to get is a 'comp.databases.postgresql', but not the bunch\n> of groups which is available here. I don't believe admins here would\n> agree to throw away all others.\n>\n> What I recommend to do, is that the names of the groups here gets\n> changed by stripping of the 'comp.databases' prefix. The group names\n> would then make up their own main hierarchy ('postgres.*') like it\n> exists for other stuff or companies as well (like 'microsoft.*') etc.\n>\n> That would AFAIK no longer violate any rules, and allow webmasters from\n> outside to host these groups. Only the people reading these groups\n> would need a small and easy reconfiguration of their subscribed lists\n> which could be announced by a posting before its done, that's all.\n>\n> What do you think?\n>\n> BTW: I see you belong to the core development team. Are you responsible\n> for running this server news.postgresql.org?\n\nNope, I am ... and no, we won't be changing the group names ...\n\n\n\n\n", "msg_date": "Mon, 1 Jul 2002 00:44:41 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Are these groups \"unauthorized\"?" } ]
[ { "msg_contents": "The attached patch implements per-backend prepareable statements.\n\nThe syntax is:\n\n PREPARE name_of_stmt(param_types) FROM <some query>;\n\n EXECUTE name_of_stmt [INTO relation] [USING args];\n\n DEALLOCATE [PREPARE] name_of_stmt;\n\nI don't really like the 'FROM' keyword in PREPARE (I was planning to\nuse 'AS'), but that's what SQL92 specifies.\n\nThe PREPARE keyword in DEALLOCATE is ignored, for SQL92 compliance.\n\nYou can specify EXECUTE ... INTO, using the same syntax as SELECT\nINTO, to store the result set from the EXECUTE in a relation.\n\nThe syntax is largely SQL92 compliant, but not totally. I'm not sure how\nthe SQL spec expects parameters to be set up in PREPARE, but I doubt\nit's the same way I used. And the SQL92 spec for EXECUTE is functionally\nsimilar, but uses a different syntax (EXECUTE ... USING INTO <rel>, I\nthink). If someone can decipher the spec on these two points and\ncan suggest what the proper syntax should be, let me know.\n\nParameters are fully supported -- for example:\n\n PREPARE q1(text) FROM SELECT * FROM pg_class WHERE relname = $1;\n\n EXECUTE q1 USING 'abc';\n\nFor simple queries such as the preceding one, using PREPARE followed\nby EXECUTE is about 10% faster than continuosly using SELECT (when\nexecuting 100,000 statements). When executing more complex statements\n(such as the monstrous 12 table join used by the JDBC driver for\ngetting some meta-data), the performance improvement is more drastic\n(IIRC it was about 100x in that case, when executing 75 statements).\n\nI've included some regression tests for the work -- when/if the\npatch is applied I'll write the documentation.\n\nThe patch stores queries in a hash table in TopMemoryContext.
I\nconsidered replacing the hash table with a linked list and\nsearching through that linearly, but I decided it wasn't worth\nthe bother (since the # of prepared statements is likely to be\nvery small, I would expect a linked list to outperform a hash\ntable in the common case). If you feel strongly one way or another,\nlet me know.\n\nAlso, I'm not entirely sure my approach to memory management is\ncorrect. Each entry in the hash table stores its data in its\nown MemoryContext, which is deleted when the statement is\nDEALLOCATE'd. When actually running the prepared statement\nthrough the executor, CurrentMemoryContext is used. Let me know\nif there's a better way to do this.\n\nThis patch is based on Karel Zak's qCache patch for 7.0, but it's\ncompletely new code (it's also a lot simpler, and doesn't bother\nwith caching plans in shared memory, as discussed on -hackers).\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC", "msg_date": "Fri, 28 Jun 2002 13:41:54 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "prepareable statements" }, { "msg_contents": "> The syntax is largely SQL92 compliant, but not totally. I'm not sure how\n> the SQL spec expects parameters to be set up in PREPARE, but I doubt\n> it's the same way I used. And the SQL92 spec for EXECUTE is functionally\n> similar, but uses a different syntax (EXECUTE ... USING INTO <rel>, I\n> think). If someone can decipher the spec on these two points and\n> can suggest what the proper syntax should be, let me know.\n\nI'll have a read of the spec for you to see if I can decode something out of\nit!
I think it's pretty essential we have full standard compliance on this\none!\n\nChris\n\n\n\n", "msg_date": "Mon, 1 Jul 2002 09:31:55 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: prepareable statements" }, { "msg_contents": "On Fri, Jun 28, 2002 at 01:41:54PM -0400, Neil Conway wrote:\n> The attached patch implements per-backend prepareable statements.\n\nCan someone comment on when this will be reviewed and/or applied?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Fri, 19 Jul 2002 16:17:52 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "Re: prepareable statements" }, { "msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n> On Fri, Jun 28, 2002 at 01:41:54PM -0400, Neil Conway wrote:\n>> The attached patch implements per-backend prepareable statements.\n\n> Can someone comment on when this will be reviewed and/or applied?\n\nIt's on my to-look-at list, but I'm deathly behind on reviewing patches.\n\nI guess the good news is that lots of great stuff is coming in from a\nlot of fairly new contributors. The bad news is that we're getting way\nbehind on reviewing it.
I think I've spent all my reviewing time this\nmonth just on stuff from Rod Taylor...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Jul 2002 16:32:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: prepareable statements " }, { "msg_contents": "Tom Lane wrote:\n> nconway@klamath.dyndns.org (Neil Conway) writes:\n> > On Fri, Jun 28, 2002 at 01:41:54PM -0400, Neil Conway wrote:\n> >> The attached patch implements per-backend prepareable statements.\n> \n> > Can someone comment on when this will be reviewed and/or applied?\n> \n> It's on my to-look-at list, but I'm deathly behind on reviewing patches.\n> \n> I guess the good news is that lots of great stuff is coming in from a\n> lot of fairly new contributors. The bad news is that we're getting way\n> behind on reviewing it. I think I've spent all my reviewing time this\n> month just on stuff from Rod Taylor...\n\nYes, we are backed up. I am applying stuff that Tom doesn't claim after\na few days, but even then Tom will go back and review them. Not sure\nwhat we can do except to say everything will be in before 7.3 beta, and\nwe regret that a few items can't get in sooner.\n\nThe good news is that it is only a few patches that are held up. The\nothers are getting applied in a timely manner.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup.
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 Jul 2002 20:17:14 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: prepareable statements" }, { "msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n> The attached patch implements per-backend prepareable statements.\n\nFinally some feedback:\n\n> The syntax is:\n> PREPARE name_of_stmt(param_types) FROM <some query>;\n> EXECUTE name_of_stmt [INTO relation] [USING args];\n> DEALLOCATE [PREPARE] name_of_stmt;\n\n> I don't really like the 'FROM' keyword in PREPARE (I was planning to\n> use 'AS'), but that's what SQL92 specifies.\n\nActually not. SQL92 defines this command as\n\n <prepare statement> ::=\n PREPARE <SQL statement name> FROM <SQL statement variable>\n\n <SQL statement variable> ::= <simple value specification>\n\nwhere\n\n <simple value specification> ::=\n <parameter name>\n | <embedded variable name>\n\n(the normal <literal> case for <simple value specification> is\ndisallowed). So what they are really truly defining here is an\nembedded-SQL operation in which the statement-to-prepare comes from\nsome kind of string variable in the client program. (SQL99 makes this\neven clearer by moving PREPARE into Part 5, Host Language Bindings.)\n\nAFAICT, the syntax we are setting up with actual SQL following the\nPREPARE keyword is *not* valid SQL92 nor SQL99. It would be a good\nidea to look and see whether any other DBMSes implement syntax that\nis directly comparable to the feature we want. (Oracle manuals handy,\nanyone?)\n\nAssuming we do not find any comparable syntax to steal, my inclination\nwould be to go back to your original syntax and use \"AS\" as the\ndelimiter. That way we're not creating problems for ourselves if we\never want to implement the truly spec-compliant syntax (in ecpg, say).\n\n> The syntax is largely SQL92 compliant, but not totally.
I'm not sure how\n> the SQL spec expects parameters to be set up in PREPARE, but I doubt\n> it's the same way I used.\n\nI can't see any hint of specifying parameter types in SQL's PREPARE at \nall. So we're on our own there, unless we can take some guidance\nfrom other systems.\n\n> And the SQL92 spec for EXECUTE is functionally\n> similar, but uses a different syntax (EXECUTE ... USING INTO <rel>, I\n> think).\n\nIt's not really similar at all. Again, the assumed context is an\nembedded SQL program, and the real targets of INTO are supposed to be\nhost-program variable names. (plpgsql's use of SELECT INTO is a lot\nmore similar to the spec than our main grammar's use of it.)\n\nWhile I won't strongly object to implementing EXECUTE INTO as you've\nshown it, I think a good case could be made for leaving it out, on the\ngrounds that our form of SELECT INTO is a mistake and a compatibility\nproblem, and we shouldn't propagate it further. Any opinions out there?\n\nIn general, this is only vaguely similar to what SQL92 contemplates,\nand you're probably better off not getting too close to their syntax...\n\n\n\nMoving on to coding issues of varying significance:\n\n> The patch stores queries in a hash table in TopMemoryContext.\n\nFine with me. No reason to change to a linked list. (But see note below.)\n\n> Also, I'm not entirely sure my approach to memory management is\n> correct. Each entry in the hash table stores its data in its\n> own MemoryContext, which is deleted when the statement is\n> DEALLOCATE'd. When actually running the prepared statement\n> through the executor, CurrentMemoryContext is used. Let me know\n> if there's a better way to do this.\n\nI think it's all right. On entry to ExecuteQuery, current context\nshould be TransactionCommandContext, which is a perfectly fine place\nfor constructing the querytree-to-execute.
You do need to copy the\nquerytree as you're doing because of our lamentable tendency to scribble\non querytrees in the executor.\n\n\n* In PrepareQuery: plan_list must be same len as query list (indeed you\nhave an Assert for that later); this code will blow it if a UTILITY_CMD\nis produced by the rewriter. (Can happen: consider a NOTIFY produced\nby a rule.) Insert a NULL into the plan list to keep the lists in step.\n\n* In StoreQuery, the MemoryContextSwitchTo(TopMemoryContext) should be\nunnecessary. The hashtable code stuffs its stuff into its own context.\nYou aren't actually storing anything into TopMemoryContext, only into\nchildren thereof.\n\n* DeallocateQuery is not prepared for uninitialized hashtable.\n\n* RunQuery should NOT do BeginCommand; that was done by postgres.c.\n\n* Sending output only for last query is wrong; this makes incorrect\nassumptions about what the rewriter will produce. AFAIK there is no\ngood reason you should not execute all queries with the passed-in dest;\nthat's what postgres.c does.\n\n* Is it really appropriate to be doing Show_executor_stats stuff here?\nI think only postgres.c should do that.\n\n* This is certainly not legal C:\n\n+ \t\t\tif (Show_executor_stats)\n+ \t\t\t\tResetUsage();\n+ \n+ \t\t\tQueryDesc *qdesc = CreateQueryDesc(query, plan, dest, NULL);\n+ \t\t\tEState *state = CreateExecutorState();\n\nYou must be using a C++ compiler.\n\n* The couple of pfrees at the bottom of ExecuteQuery are kinda silly\nconsidering how much else got allocated and not freed there.\n\n* transformPrepareStmt is not doing the right thing with extras_before\nand extras_after. Since you only allow an OptimizableStmt in the\nsyntax, probably these will always remain NIL, but I'd suggest throwing\nin a test and elog.\n\n* What if the stored query is replaced between the time that\ntransformExecuteStmt runs and the time the EXECUTE stmt is actually\nexecuted?
All your careful checking of the parameters could be totally\nwrong --- and ExecuteQuery contains absolutely no defenses against a\nmismatch. One answer is to store the expected parameter typelist\n(array) in the ExecuteStmt node during transformExecuteStmt, and then\nverify that this matches after you look up the statement in\nExecuteQuery.\n\n* transformExecuteStmt must disallow subselects and aggregate functions\nin the parameter expressions, since you aren't prepared to generate\nquery plans for them. Compare the processing of default or\ncheck-constraint expressions. BTW, you might as well do the fix_opids\ncall at transform time not runtime, too.\n\n* In gram.y: put the added keywords in the appropriate keyword-list\nproduction (hopefully the unreserved one).\n\n* Syntax for prepare_type_list is not good; it allows\n\t\t\t( , int )\nProbably best to push the () case into prepare_type_clause.\n\n* typeidToString is bogus. Use format_type_be instead.\n\n* Why does QueryData contain a context field?\n\n* prepare.h should contain a standard header comment.\n\n* You missed copyfuncs/equalfuncs support for the three added node types.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Jul 2002 22:00:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] prepareable statements " }, { "msg_contents": "On Sat, Jul 20, 2002 at 10:00:01PM -0400, Tom Lane wrote:\n> AFAICT, the syntax we are setting up with actual SQL following the\n> PREPARE keyword is *not* valid SQL92 nor SQL99. It would be a good\n> idea to look and see whether any other DBMSes implement syntax that\n> is directly comparable to the feature we want. (Oracle manuals handy,\n> anyone?)\n\nI couldn't find anything on the subject in the Oracle docs -- they have\nPREPARE for use in embedded SQL, but I couldn't see a reference to\nPREPARE for usage in regular SQL.
Does anyone else know of an Oracle\nequivalent?\n\n> Assuming we do not find any comparable syntax to steal, my inclination\n> would be to go back to your original syntax and use \"AS\" as the\n> delimiter. That way we're not creating problems for ourselves if we\n> ever want to implement the truly spec-compliant syntax (in ecpg, say).\n\nOk, sounds good to me.\n\n> * This is certainly not legal C:\n> \n> + \t\t\tif (Show_executor_stats)\n> + \t\t\t\tResetUsage();\n> + \n> + \t\t\tQueryDesc *qdesc = CreateQueryDesc(query, plan, dest, NULL);\n> + \t\t\tEState *state = CreateExecutorState();\n> \n> You must be using a C++ compiler.\n\nWell, it's legal C99 I believe. I'm using gcc 3.1 with the default\nCFLAGS, not a C++ compiler -- I guess it's a GNU extension... In any\ncase, I've fixed this.\n\n> * What if the stored query is replaced between the time that\n> transformExecuteStmt runs and the time the EXECUTE stmt is actually\n> executed?\n\nGood point ... perhaps the easiest solution would be to remove\nDEALLOCATE. Since the backend's prepared statements are flushed when the\nbackend dies, there is little need for deleting prepared statements\nearlier than that.
Users who need to prevent name clashes for\nplan names can easily achieve that without using DEALLOCATE.\n\nRegarding the syntax for EXECUTE, it occurs to me that it could be made\nto be more similar to the PREPARE syntax -- i.e.\n\nPREPARE foo(text, int) AS ...;\n\nEXECUTE foo('a', 1);\n\n(rather than EXECUTE USING -- the effect being that prepared statements\nnow look more like function calls on a syntactical level, which I think\nis okay.)\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Mon, 22 Jul 2002 17:39:13 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "Re: [PATCHES] prepareable statements" }, { "msg_contents": "\n\nNeil Conway wrote:\n\n>On Sat, Jul 20, 2002 at 10:00:01PM -0400, Tom Lane wrote:\n> \n>\n>>AFAICT, the syntax we are setting up with actual SQL following the\n>>PREPARE keyword is *not* valid SQL92 nor SQL99. It would be a good\n>>idea to look and see whether any other DBMSes implement syntax that\n>>is directly comparable to the feature we want. (Oracle manuals handy,\n>>anyone?)\n>> \n>>\n>\n>I couldn't find anything on the subject in the Oracle docs -- they have\n>PREPARE for use in embedded SQL, but I couldn't see a reference to\n>PREPARE for usage in regular SQL. Does anyone else know of an Oracle\n>equivalent?\n> \n>\nOracle doesn't have this functionality exposed at the SQL level. In \nOracle the implementation is at the protocol level (i.e. sqlnet). \n Therefore the SQL syntax is the same when using prepared statements or \nwhen not using them. The client implementation of the sqlnet protocol \ndecides to use prepared statements or not. As of Oracle 8, I think \npretty much all of the Oracle clients use prepared statements for all \nthe sql statements. The sqlnet protocol exposes 'open', 'prepare', \n 'describe', 'bind', 'fetch' and 'close'.
None of these are exposed out \ninto the SQL syntax.\n\nthanks,\n--Barry\n\n\n", "msg_date": "Mon, 22 Jul 2002 15:47:56 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] prepareable statements" }, { "msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n> Regarding the syntax for EXECUTE, it occurs to me that it could be made\n> to be more similar to the PREPARE syntax -- i.e.\n\n> PREPARE foo(text, int) AS ...;\n\n> EXECUTE foo('a', 1);\n\n> (rather than EXECUTE USING -- the effect being that prepared statements\n> now look more like function calls on a syntactical level, which I think\n> is okay.)\n\nHmm, maybe *too* much like a function call. Is there any risk of a\nconflict with syntax that we might want to use to invoke stored\nprocedures? If not, this is fine with me.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Jul 2002 11:34:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] prepareable statements " }, { "msg_contents": "On Tue, 2002-07-23 at 11:34, Tom Lane wrote:\n> nconway@klamath.dyndns.org (Neil Conway) writes:\n> > Regarding the syntax for EXECUTE, it occurs to me that it could be made\n> > to be more similar to the PREPARE syntax -- i.e.\n> \n> > PREPARE foo(text, int) AS ...;\n> \n> > EXECUTE foo('a', 1);\n> \n> > (rather than EXECUTE USING -- the effect being that prepared statements\n> > now look more like function calls on a syntactical level, which I think\n> > is okay.)\n> \n> Hmm, maybe *too* much like a function call. Is there any risk of a\n> conflict with syntax that we might want to use to invoke stored\n> procedures? If not, this is fine with me.\n\nStored procedures would use PERFORM would they not?\n\nI like the function syntax.
It looks and acts like a temporary 'sql'\nfunction.\n\n\n\n", "msg_date": "23 Jul 2002 11:47:57 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] prepareable statements" }, { "msg_contents": "On Sat, Jul 20, 2002 at 10:00:01PM -0400, Tom Lane wrote:\n> * In gram.y: put the added keywords in the appropriate keyword-list\n> production (hopefully the unreserved one).\n\nI think the patch already does this, doesn't it? If not, what else\nneeds to be modified?\n\n> * Syntax for prepare_type_list is not good; it allows\n> \t\t\t( , int )\n\nErm, I don't see that it does. The syntax is:\n\nprep_type_list: Typename { $$ = makeList1($1); }\n | prep_type_list ',' Typename \n { $$ = lappend($1, $3); }\n ;\n\n(i.e. there's no ' /* EMPTY */ ' case)\n\n> * Why does QueryData contain a context field?\n\nBecause the context in which the query data is stored needs to be\nremembered so that it can be deleted by DeallocateQuery(). If\nDEALLOCATE goes away, this should also be removed.\n\nI've attached a revised patch, which includes most of Tom's suggestions,\nwith the exception of the three mentioned above. The syntax is now:\n\nPREPARE q1(int, float, text) AS ...;\n\nEXECUTE q1(5, 10.0, 'foo');\n\nDEALLOCATE q1;\n\nI'll post an updated patch to -patches tomorrow that gets rid of\nDEALLOCATE. I also need to check if there is a need for executor_stats.\nFinally, should the syntax for EXECUTE INTO be:\n\nEXECUTE q1(...)
INTO foo;\n\nor\n\nEXECUTE INTO foo q1(...);\n\nThe current patch uses the former, which I personally prefer, but\nI'm not adamant about it.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC", "msg_date": "Tue, 23 Jul 2002 12:46:15 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "Re: [PATCHES] prepareable statements" }, { "msg_contents": "Rod Taylor wrote:\n> \n> On Tue, 2002-07-23 at 11:34, Tom Lane wrote:\n> > nconway@klamath.dyndns.org (Neil Conway) writes:\n> > > Regarding the syntax for EXECUTE, it occurs to me that it could be made\n> > > to be more similar to the PREPARE syntax -- i.e.\n> >\n> > > PREPARE foo(text, int) AS ...;\n> >\n> > > EXECUTE foo('a', 1);\n> >\n> > > (rather than EXECUTE USING -- the effect being that prepared statements\n> > > now look more like function calls on a syntactical level, which I think\n> > > is okay.)\n> >\n> > Hmm, maybe *too* much like a function call. Is there any risk of a\n> > conflict with syntax that we might want to use to invoke stored\n> > procedures? If not, this is fine with me.\n> \n> Stored procedures would use PERFORM would they not?\n> \n> I like the function syntax.
It is not apart\n> of the SQL language, but a SQL*Plus command:\n> \n> EXECUTE my_procedure();\n> \n\nAlso with Transact SQL (i.e. MSSQL and Sybase)\n\nSyntax\nExecute a stored procedure:\n[[EXEC[UTE]]\n\t{\n\t\t[@return_status =]\n\t\t\t{procedure_name [;number] | @procedure_name_var\n\t}\n\t[[@parameter =] {value | @variable [OUTPUT] | [DEFAULT]]\n\t\t[,...n]\n[WITH RECOMPILE]\n\n\nHowever, as Peter E. has pointed out, SQL99 uses the keyword CALL:\n\n15.1 <call statement>\nFunction\nInvoke an SQL-invoked routine.\nFormat\n<call statement> ::= CALL <routine invocation>\n\nFWIW,\n\nJoe\n\n", "msg_date": "Tue, 23 Jul 2002 16:23:30 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] prepareable statements" }, { "msg_contents": "To expand on the Oracle implementation, the EXECUTE command in SQL*Plus \nresults in an anonymous pl/sql block (as opposed to a named procedure). \nbeing sent over the wire such as the following:\n\nbegin\nmy_procedure();\nend;\n\nAs mentioned in the previous post, the EXECUTE command is only a \nSQL*Plus keyword (well, Server Manager too but that was killed in 9i).\n\nMike Mascari wrote:\n> Rod Taylor wrote:\n> \n>>On Tue, 2002-07-23 at 11:34, Tom Lane wrote:\n>>\n>>>nconway@klamath.dyndns.org (Neil Conway) writes:\n>>>\n>>>>Regarding the syntax for EXECUTE, it occurs to me that it could be made\n>>>>to be more similar to the PREPARE syntax -- i.e.\n>>>\n>>>>PREPARE foo(text, int) AS ...;\n>>>\n>>>>EXECUTE foo('a', 1);\n>>>\n>>>>(rather than EXECUTE USING -- the effect being that prepared statements\n>>>>now look more like function calls on a syntactical level, which I think\n>>>>is okay.)\n>>>\n>>>Hmm, maybe *too* much like a function call. Is there any risk of a\n>>>conflict with syntax that we might want to use to invoke stored\n>>>procedures? If not, this is fine with me.\n>>\n>>Stored procedures would use PERFORM would they not?\n>>\n>>I like the function syntax. 
It looks and acts like a temporary 'sql'\n>>function.\n> \n> \n> FWIW, Oracle uses EXECUTE to execute stored procedures. It is not a part\n> of the SQL language, but a SQL*Plus command:\n> \n> EXECUTE my_procedure();\n> \n> The Oracle call interface defines a function to call stored procedures:\n> \n> OCIStmtExecute();\n> \n> Likewise, the privilege necessary to execute a stored procedure is\n> 'EXECUTE' as in:\n> \n> GRANT EXECUTE ON my_procedure TO mascarm;\n> \n> Again, FWIW.\n> \n> Mike Mascari\n> mascarm@mascari.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n\n", "msg_date": "Wed, 24 Jul 2002 02:05:57 -0400", "msg_from": "Marc Lavergne <mlavergne-pub@richlava.com>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] prepareable statements" }, { "msg_contents": "I've two queries -\n\n1. emrxdbs=# explain select * from patient A where exists (select NULL from\npatient B where B.mrn=A.mrn and B.dob=A.dob and B.sex=A.sex and\nB.lastname=A.lastname and B.firstname=A.firstname group by B.mrn, B.dob,\nB.sex, B.lastname, B.firstname having A.patseq < max(B.patseq)) limit 10;\nNOTICE: QUERY PLAN:\n\nLimit (cost=0.00..121.50 rows=10 width=141)\n -> Seq Scan on patient a (cost=0.00..6955296.53 rows=572430 width=141)\n SubPlan\n -> Aggregate (cost=6.03..6.05 rows=1 width=42)\n -> Group (cost=6.03..6.05 rows=1 width=42)\n -> Sort (cost=6.03..6.03 rows=1 width=42)\n -> Index Scan using patient_name_idx on patient\nb (cost=0.00..6.02 rows=1 width=42)\n\n2. 
emrxdbs=# explain select * from patient A where exists (select NULL from\npatient B where B.mrn=A.mrn and B.dob=A.dob and B.sex=A.sex and\nB.lastname=A.lastname and B.firstname=A.firstname and B.mrn='3471585' group\nby B.mrn, B.dob, B.sex, B.lastname, B.firstname having A.patseq <\nmax(B.patseq)) limit 10;\nNOTICE: QUERY PLAN:\n\nLimit (cost=0.00..121.45 rows=10 width=141)\n -> Seq Scan on patient a (cost=0.00..6951997.59 rows=572430 width=141)\n SubPlan\n -> Aggregate (cost=6.03..6.05 rows=1 width=42)\n -> Group (cost=6.03..6.04 rows=1 width=42)\n -> Sort (cost=6.03..6.03 rows=1 width=42)\n -> Index Scan using patient_mrnfac_idx on\npatient b (cost=0.00..6.02 rows=1 width=42)\n\nThe first query results come back fairly quickly; the 2nd one just sits there\nforever.\nIt looks similar in the two query plans.\n\nLet me know.\n\nthanks.\njohnl\n\n", "msg_date": "Thu, 25 Jul 2002 08:55:53 -0500", "msg_from": "\"John Liu\" <johnl@synthesys.com>", "msg_from_op": false, "msg_subject": "why?" }, { "msg_contents": "On Thu, 2002-07-25 at 15:55, John Liu wrote:\n> I've two queries -\n> \n> 1. emrxdbs=# explain select * from patient A where exists (select NULL from\n> patient B where B.mrn=A.mrn and B.dob=A.dob and B.sex=A.sex and\n> B.lastname=A.lastname and B.firstname=A.firstname group by B.mrn, B.dob,\n> B.sex, B.lastname, B.firstname having A.patseq < max(B.patseq)) limit 10;\n> NOTICE: QUERY PLAN:\n> \n> Limit (cost=0.00..121.50 rows=10 width=141)\n> -> Seq Scan on patient a (cost=0.00..6955296.53 rows=572430 width=141)\n> SubPlan\n> -> Aggregate (cost=6.03..6.05 rows=1 width=42)\n> -> Group (cost=6.03..6.05 rows=1 width=42)\n> -> Sort (cost=6.03..6.03 rows=1 width=42)\n> -> Index Scan using patient_name_idx on patient\n> b (cost=0.00..6.02 rows=1 width=42)\n> \n> 2. 
emrxdbs=# explain select * from patient A where exists (select NULL from\n> patient B where B.mrn=A.mrn and B.dob=A.dob and B.sex=A.sex and\n> B.lastname=A.lastname and B.firstname=A.firstname and B.mrn='3471585' group\n> by B.mrn, B.dob, B.sex, B.lastname, B.firstname having A.patseq <\n> max(B.patseq)) limit 10;\n> NOTICE: QUERY PLAN:\n> \n> Limit (cost=0.00..121.45 rows=10 width=141)\n> -> Seq Scan on patient a (cost=0.00..6951997.59 rows=572430 width=141)\n> SubPlan\n> -> Aggregate (cost=6.03..6.05 rows=1 width=42)\n> -> Group (cost=6.03..6.04 rows=1 width=42)\n> -> Sort (cost=6.03..6.03 rows=1 width=42)\n> -> Index Scan using patient_mrnfac_idx on\n> patient b (cost=0.00..6.02 rows=1 width=42)\n> \n> The first query results come back fairly quick, the 2nd one just sits there\n> forever.\n> It looks similar in the two query plans.\n\nIt seems that using patient_mrnfac_idx instead of patient_name_idx is\nnot a good choice in your case ;(\n\ntry moving the B.mrn='3471585' from FROM to HAVING and hope that this\nmakes the DB use the same plan as for the first query\n\nselect *\n from patient A \n where exists (\n select NULL\n from patient B\n where B.mrn=A.mrn\n and B.dob=A.dob\n and B.sex=A.sex\n and B.lastname=A.lastname\n and B.firstname=A.firstname\n group by B.mrn, B.dob, B.sex, B.lastname, B.firstname\n having A.patseq < max(B.patseq)\n and B.mrn='3471585'\n ) limit 10;\n\n-----------\nHannu\n\n\n", "msg_date": "25 Jul 2002 19:09:08 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: why?" }, { "msg_contents": "Neil Conway writes:\n\n> Regarding the syntax for EXECUTE, it occurs to me that it could be made\n> to be more similar to the PREPARE syntax -- i.e.\n>\n> PREPARE foo(text, int) AS ...;\n>\n> EXECUTE foo('a', 1);\n>\n> (rather than EXECUTE USING -- the effect being that prepared statements\n> now look more like function calls on a syntactical level, which I think\n> is okay.)\n\nI'm not sure I like that. 
It seems too confusing. Why not keep it as the\nstandard says? (After all, it is the PREPARE part that we're adjusting,\nnot EXECUTE.)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 25 Jul 2002 22:54:04 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] prepareable statements" }, { "msg_contents": "On Thu, Jul 25, 2002 at 10:54:04PM +0200, Peter Eisentraut wrote:\n> I'm not sure I like that. It seems too confusing. Why not keep\n> it as the standard says? (After all, it is the PREPARE part that\n> we're adjusting, not EXECUTE.)\n\nI think it's both, isn't it? My understanding of Tom's post is that the\nfeatures described by SQL92 are somewhat similar to the patch, but not\ndirectly related.\n\nOn the other hand, if other people also find it confusing, that would be\na good justification for changing it. Personally, I think it's pretty\nclear, but I'm not adamant about it.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Thu, 25 Jul 2002 17:00:24 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "Re: [PATCHES] prepareable statements" }, { "msg_contents": "Neil Conway writes:\n\n> On Thu, Jul 25, 2002 at 10:54:04PM +0200, Peter Eisentraut wrote:\n> > I'm not sure I like that. It seems too confusing. Why not keep\n> > it as the standard says? (After all, it is the PREPARE part that\n> > we're adjusting, not EXECUTE.)\n>\n> I think it's both, isn't it? My understanding of Tom's post is that the\n> features described by SQL92 are somewhat similar to the patch, but not\n> directly related.\n\nWhat I was trying to say is this: There is one \"prepared statement\"\nfacility in the standards that allows you to prepare a statement defined\nin a host variable, whereas you are proposing one that specifies the\nstatement explicitly. 
However, both of these are variants of the same\nconcept, so the EXECUTE command doesn't need to be different.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 28 Jul 2002 17:21:31 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] prepareable statements" } ]
[ { "msg_contents": "The following query crashes the backend if compiled with\n\"--enable-integer-datetimes\":\n\nselect 'T0.405065555555533333333333333333333333333333355555555555555555555555333333333335555555555555789 -08'::time with time zone;\n\nBacktrace is:\n\n#0 0x08179304 in DecodeTimeOnly (field=0xbfffebb0, ftype=0xbfffeb30,\nnf=3, dtype=0xbfffebac, tm=0x33333333, fsec=0xbfffeca0, tzp=0xbfffec68)\nat datetime.c:1529\n#1 0x081765f9 in timetz_in (fcinfo=0xbfffecf0) at date.c:1272\n#2 0x081cdf44 in OidFunctionCall3 (functionId=1350, arg1=140379844,\narg2=0, arg3=4294967295) at fmgr.c:1250\n#3 0x080d459d in stringTypeDatum (tp=0x403abe10, string=0x85e06c4\n\"T0.4050655555555\", '3' <repeats 30 times>, '5' <repeats 23 times>, '3'\n<repeats 11 times>, '5' <repeats 13 times>, \"789 -08\", atttypmod=-1) at\nparse_type.c:389\n#4 0x080ce9a6 in parser_typecast_constant (expr=0x85e0754,\ntypename=0x85e076c) at parse_expr.c:1113\n#5 0x080cd1bd in transformExpr (pstate=0x85e08e8, expr=0x85e0750) at\nparse_expr.c:149\n#6 0x080d56c0 in transformTargetEntry (pstate=0x85e08e8,\nnode=0x85e0750, expr=0x0, colname=0x0, resjunk=0 '\\0') at\nparse_target.c:59\n#7 0x080d59e3 in transformTargetList (pstate=0x85e08e8,\ntargetlist=0x85e0824) at parse_target.c:191\n#8 0x080bbd4a in transformSelectStmt (pstate=0x85e08e8, stmt=0x85e0840)\nat analyze.c:2004\n#9 0x080b8ea4 in transformStmt (pstate=0x85e08e8, parseTree=0x85e0840,\nextras_before=0xbfffef6c, extras_after=0xbfffef68) at analyze.c:301\n#10 0x080b8ab3 in parse_analyze (parseTree=0x85e0840,\nparentParseState=0x0) at analyze.c:145\n#11 0x0816b1b9 in pg_analyze_and_rewrite (parsetree=0x85e0840) at\npostgres.c:413\n#12 0x0816b4bc in pg_exec_query_string (query_string=0x85e0404,\ndest=Remote, parse_context=0x85b6a38) at postgres.c:698\n#13 0x0816c877 in PostgresMain (argc=4, argv=0xbffff220,\nusername=0x85b6299 \"nconway\") at postgres.c:1916\n#14 0x08148d91 in DoBackend (port=0x85b6168) at postmaster.c:2229\n#15 0x081484f5 in 
BackendStartup (port=0x85b6168) at postmaster.c:1863\n#16 0x081473bf in ServerLoop () at postmaster.c:972\n#17 0x08146ea8 in PostmasterMain (argc=3, argv=0x859b460) at\npostmaster.c:754\n#18 0x081183a7 in main (argc=3, argv=0xbffffba4) at main.c:204\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n\n", "msg_date": "Fri, 28 Jun 2002 18:14:43 -0400", "msg_from": "nconway@klamath.dyndns.org (Neil Conway)", "msg_from_op": true, "msg_subject": "bug in new timestamp code" } ]
[ { "msg_contents": " From the ToDo list:\nVacuum: \t* Provide automatic running of vacuum in the background (Tom)\n\nAs of 7.2 we have lazy vacuum. The next logical step is setting up vacuum to \nrun automatically in the background either as some type of daemon or as \nsomething kicked off by the postmaster.\n\nI am interested in working on this to do item, although I see it is assigned \nto Tom right now. \n\nFirst: is this something we still want (I assume it is since it's in the \ntodo.). \n\nSecond: There was some discussion \n(http://archives.postgresql.org/pgsql-hackers/2002-05/msg00970.php) about \nthis not being needed once UNDO is in place, what is the current view on this?\n\nMatthew\n\n\n", "msg_date": "Sat, 29 Jun 2002 16:50:04 -0400", "msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": true, "msg_subject": "Vacuum Daemon" }, { "msg_contents": "\"Matthew T. O'Connor\" <matthew@zeut.net> writes:\n> As of 7.2 we have lazy vacuum. The next logical step is setting up vacuum to\n> run automatically in the background either as some type of daemon or as \n> something kicked off by the postmaster.\n\n> I am interested in working on this to do item, although I see it is assigned \n> to Tom right now. \n\nIt's sufficiently far down my to-do list that I'm happy to let someone\nelse do it ;-).\n\n> Second: There was some discussion \n> (http://archives.postgresql.org/pgsql-hackers/2002-05/msg00970.php) about \n> this not being needed once UNDO is in place, what is the current view on this?\n\nI do not think that is the case; and anyway we've pretty much rejected\nVadim's notion of going to an Oracle-style UNDO buffer. I don't foresee\nVACUUM going away anytime soon --- what we need is to make it less\nobtrusive. 
7.2 made some progress in that direction, but we need more.\n\nLaunching VACUUMs on some automatic schedule, preferably using feedback\nabout where space needs to be reclaimed, seems like a pretty\nstraightforward small-matter-of-programming. The thing that would\nreally be needed to make it unobtrusive is to find a way to run the\nvacuum processing at low priority, or at least when the system is not\nheavily loaded. I don't know a good way to do that. Nice'ing the\nvacuum process won't work because of priority-inversion problems.\nMaking it suspend itself when load gets high might do; but how to\ndetect that in a reasonably portable fashion?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 29 Jun 2002 20:14:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vacuum Daemon " }, { "msg_contents": "On Sat, 2002-06-29 at 20:14, Tom Lane wrote:\n> \"Matthew T. O'Connor\" <matthew@zeut.net> writes:\n\n> > Second: There was some discussion \n> > (http://archives.postgresql.org/pgsql-hackers/2002-05/msg00970.php) about \n> > this not being needed once UNDO is in place, what is the current view on this?\n> \n> I do not think that is the case; and anyway we've pretty much rejected\n> Vadim's notion of going to an Oracle-style UNDO buffer. I don't foresee\n> VACUUM going away anytime soon --- what we need is to make it less\n> obtrusive. 7.2 made some progress in that direction, but we need more.\n> \n\nCould someone point me to this discussion, or summarize what the problem\nwas? Was his proposal to keep tuple versions in the UNDO AM, or only\npointers to them?\n\nThe referred-to message seems to be about something else.\n\n;jrnield\n \n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n\n\n", "msg_date": "29 Jun 2002 21:09:51 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum Daemon" }, { "msg_contents": "\"J. R. 
Nield\" <jrnield@usol.com> writes:\n>> I do not think that is the case; and anyway we've pretty much rejected\n>> Vadim's notion of going to an Oracle-style UNDO buffer.\n\n> Could someone point me to this discussion, or summarize what the problem\n> was?\n\nI'm too lazy to dig through the archives at the moment, but the main\npoints were (a) a finite-size UNDO buffer chokes on large transactions\nand (b) the Oracle approach requires live transaction processing to\ndo the cleanup work that our approach can push off to hopefully-not-\ntime-critical vacuum processing.\n\nUNDO per se doesn't eliminate VACUUM anyhow; it only reclaims space\nfrom tuples written by aborted transactions. If you want to get rid\nof VACUUM then you need another way to get rid of the old versions of\nperfectly good committed tuples that are obsoleted by updates from\nlater transactions. That essentially means you need an overwriting\nstorage manager, which is a concept that doesn't mix well with MVCC.\n\nOracle found a solution to that conundrum, but it's really not obvious\nto me that their solution is better than ours. Also, they have\npatents that we'd probably run afoul of if we try to imitate their\napproach too closely.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 29 Jun 2002 21:55:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vacuum Daemon " }, { "msg_contents": "Tom Lane wrote:\n> \"J. R. 
Nield\" <jrnield@usol.com> writes:\n> >> I do not think that is the case; and anyway we've pretty much rejected\n> >> Vadim's notion of going to an Oracle-style UNDO buffer.\n> \n> > Could someone point me to this discussion, or summarize what the problem\n> > was?\n> \n> I'm too lazy to dig through the archives at the moment, but the main\n> points were (a) a finite-size UNDO buffer chokes on large transactions\n> and (b) the Oracle approach requires live transaction processing to\n> do the cleanup work that our approach can push off to hopefully-not-\n> time-critical vacuum processing.\n> \n> UNDO per se doesn't eliminate VACUUM anyhow; it only reclaims space\n> from tuples written by aborted transactions. If you want to get rid\n> of VACUUM then you need another way to get rid of the old versions of\n> perfectly good committed tuples that are obsoleted by updates from\n> later transactions. That essentially means you need an overwriting\n> storage manager, which is a concept that doesn't mix well with MVCC.\n> \n> Oracle found a solution to that conundrum, but it's really not obvious\n> to me that their solution is better than ours. Also, they have\n> patents that we'd probably run afoul of if we try to imitate their\n> approach too closely.\n\nDon't forget reclaiming space from transactions that delete tuples.\nUNDO doesn't help there either.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Sat, 29 Jun 2002 22:12:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vacuum Daemon" }, { "msg_contents": "On Sat, 2002-06-29 at 21:55, Tom Lane wrote:\n> \"J. R. 
Nield\" <jrnield@usol.com> writes:\n> >> I do not think that is the case; and anyway we've pretty much rejected\n> >> Vadim's notion of going to an Oracle-style UNDO buffer.\n> \n> > Could someone point me to this discussion, or summarize what the problem\n> > was?\n> \n> I'm too lazy to dig through the archives at the moment, but the main\n> points were (a) a finite-size UNDO buffer chokes on large transactions\n> \n\nYes this is a good point. Oracle was always lame with its \"ROLLBACK\nSEGMENTS\". SolidDB (SolidWorks? It's been a while...) used a btree-like\nstructure for this that was not of fixed size. Oracle supposedly moved\nto the same method in its 9i release, but I don't know the details.\n\nI could never figure out how they did this, until I realized that UNDO\ndoesn't need to be in the WAL log. You just use any access method you\nfeel like, and make sure the method is itself protected by REDO. Just\ninsert REDO entries to protect the insert into the UNDO AM, and REDO log\nwhen you delete. That makes it easy to have the recovery code be\nidempotent, to catch the case of a system crash during recovery.\n\n> and (b) the Oracle approach requires live transaction processing to\n> do the cleanup work that our approach can push off to hopefully-not-\n> time-critical vacuum processing.\n\nI'm not sure which way I'm leaning on this. On the one hand, it requires\nextra work to clean up while the system is live, in addition to writing\nthe undo records, though the cleanup is not necessarily by the same\ntransaction that committed the work (the cleanup needs to be deferred\nuntil it's out of an active snapshot anyway).\n\nOn the other hand, you can clean-up without a full table scan, because\nyou know which tuples need to be changed. This can be a big advantage on\ngigantic tables. 
Also, it lets you remove deleted tuples quickly, so the\nspace can be reused, and eliminates the xid wraparound problem.\n\nOf course, any kind of undo is worse for performance with bulk\ninserts/updates, so you either end up committing every few thousand\ninserts, or you use some special extension to disable undo logging for a\nbulk load (or if you really want to be able to roll it back, you live\nwith it :-)\n\nHow slow is it to vacuum a >1 TB database with postgres? Do we even have\nany users who could test this?\n\nAlso, I would never advocate that we do what I'm pretty sure Oracle\ndoes, and keep old values in the \"Rollback Segment\". Only (RelFileNode,\nItemDataPointer) addresses would need to be kept in the UNDO AM, if we\nwent this route.\n\n> \n> UNDO per se doesn't eliminate VACUUM anyhow; it only reclaims space\n> from tuples written by aborted transactions. If you want to get rid\n> of VACUUM then you need another way to get rid of the old versions of\n> perfectly good committed tuples that are obsoleted by updates from\n> later transactions. That essentially means you need an overwriting\n> storage manager, which is a concept that doesn't mix well with MVCC.\n\nWell, you can keep the UNDO records after commit to do a fast\nincremental vacuum as soon as the transaction that deleted the tuples\nbecomes older than the oldest snapshot. If this is always done whenever\nan XID becomes that old, then you never need to vacuum, and you never\nneed a full table scan.\n\nBecause postgres never overwrites (except at vacuum), I think it\nactually makes us a BETTER candidate for this to be implemented cleanly\nthan with an overwriting storage manager. We will never need to keep\ntuple values in UNDO!\n\n> \n> Oracle found a solution to that conundrum, but it's really not obvious\n> to me that their solution is better than ours.\n\nTheir approach was worse, because they had an overwriting storage\nmanager before they tried to implement it (I'm guessing). 
:-)\n\n> Also, they have\n> patents that we'd probably run afoul of if we try to imitate their\n> approach too closely.\n> \n\nGiven the current state of affairs here in the US, PostgreSQL probably\nviolates hundreds or even thousands of software patents. It probably\nviolates tens of patents that have been upheld in court. The only thing\nkeeping companies from shutting down postgres, linux, OpenOffice, and a\nhundred other projects is fear of adverse publicity, and the fact that\ndevelopment would move overseas and continue to be a thorn in their\nside. \n\nWe'll see how long this lasts, given the fear some vendors have of\ncertain maturing open-source/GPL projects, but I don't think PostgreSQL\nwill be first, since anyone can take this code and become an instant\nproprietary database vendor! (No, I'm not complaining. Please, nobody\nstart a license fight because of this)\n\n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n\n\n", "msg_date": "30 Jun 2002 00:01:51 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum Daemon" }, { "msg_contents": "On Saturday 29 June 2002 08:14 pm, Tom Lane wrote:\n> Launching VACUUMs on some automatic schedule, preferably using feedback\n> about where space needs to be reclaimed, seems like a pretty\n> straightforward small-matter-of-programming. The thing that would\n> really be needed to make it unobtrusive is to find a way to run the\n> vacuum processing at low priority, or at least when the system is not\n> heavily loaded. I don't know a good way to do that. Nice'ing the\n> vacuum process won't work because of priority-inversion problems.\n> Making it suspend itself when load gets high might do; but how to\n> detect that in a reasonably portable fashion?\n\nAre we sure we want it to be unobtrusive? If vacuum is performed only where \nand when it's needed, it might be better for overall throughput to have it \nrun even when the system is loaded. 
Such as a constantly updated table.\n\nAs for a portable way to identify system load (if this is what we want) I was \nthinking of looking at the load average (such as the one reported by the top \ncommand) but I don't know much about portability issues. \n\nSince there appears to be sufficient interest in some solution, I'll start \nworking on it. I would like to hear a quick description of what \nsmall-matter-of-programming means. Do you have specific ideas about how \nbest to get that feedback?\n\nMatthew\n\n\n", "msg_date": "Sun, 30 Jun 2002 01:29:44 -0400", "msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": true, "msg_subject": "Re: Vacuum Daemon" }, { "msg_contents": "Matthew T. O'Connor wrote:\n> On Saturday 29 June 2002 08:14 pm, Tom Lane wrote:\n> > Launching VACUUMs on some automatic schedule, preferably using feedback\n> > about where space needs to be reclaimed, seems like a pretty\n> > straightforward small-matter-of-programming. The thing that would\n> > really be needed to make it unobtrusive is to find a way to run the\n> > vacuum processing at low priority, or at least when the system is not\n> > heavily loaded. I don't know a good way to do that. Nice'ing the\n> > vacuum process won't work because of priority-inversion problems.\n> > Making it suspend itself when load gets high might do; but how to\n> > detect that in a reasonably portable fashion?\n> \n> Are we sure we want it to be unobtrusive? If vacuum is performed only where \n> and when it's needed, it might be better for overall throughput to have it \n> run even when the system is loaded. Such as a constantly updated table.\n> \n> As for a portable way to identify system load (if this is what we want) I was \n> thinking of looking at the load average (such as the one reported by the top \n> command) but I don't know much about portability issues. \n> \n> Since there appears to be sufficient interest in some solution, I'll start \n> working on it. 
I would like to hear a quick description of what \n> small-matter-of-programming means. Do you have specific ideas about how \n> best to get that feedback?\n\nAnother idea is that the statistics tables keep information on table\nactivity, so that could be used to determine what needs vacuuming.\n\nAs far as collecting info on which rows are expired, I think a table\nscan is pretty quick and the cleanest solution to finding them. Trying\nto track the exact tuples and when they aren't visible to anyone is just\na major pain, while with a table scan it is very easy.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Sun, 30 Jun 2002 10:51:21 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vacuum Daemon" } ]
[ { "msg_contents": "Machine translation with my minor edition is available from\nhttp://www.sai.msu.su/~megera/postgres/gist/tree/README.tree.english\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\n", "msg_date": "Sun, 30 Jun 2002 19:24:05 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "english doc for tree module" }, { "msg_contents": "\nVince, we should have this site on the \"interfacing to PostgreSQL\"\nsection of the docs. It is all about GIST:\n\n\thttp://www.sai.msu.su/~megera/postgres/gist/\n\n\n---------------------------------------------------------------------------\n\nOleg Bartunov wrote:\n> Machine translation with my minor edition is available from\n> http://www.sai.msu.su/~megera/postgres/gist/tree/README.tree.english\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 Jul 2002 21:01:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] english doc for tree module" } ]
[ { "msg_contents": "Hi All,\n\nI've been thinking about this DROP COLUMN business (sorry to start another\nspammy, flamey thread!). I'm taking ideas from lots of sources here.\n\nHow does this sound for a process?\n\n1.\nA new column is added to pg_attribute called 'attisdropped'. It, of course,\ndefaults to false.\n\n2.\nThe column expansion (*) code and the code that checks for valid column\nreferences everywhere in the codebase is changed to also check the\nattisdropped field. Does someone have a comprehensive list of places to be\nchanged?\n\n3.\nThe DROP COLUMN command does nothing but set the attisdropped of a column to\ntrue, and rename the column to something like DELETED_old_col_name. The\ncolumn renaming will help people using non-attisdropped aware admin programs\nsee what's what, plus it will allow people to create a new column with the\nsame name as the column just dropped.\n\nNow the dropped column will be invisible. As you update rows, etc. the\nspace will be reclaimed in the table as NULLs are put in where the old value\nused to be. Is this correct?\n\n4.\nA new command, something like \"ALTER TABLE tab RECLAIM;\" will be able to be\nrun on tables. It will basically go through the entire table and rewrite\nevery row as is, NULLifying all dropped columns in the table. This gives\nthe DBA the option of recovering his/her space if they want.\n\nNotes\n-----\na. What happens with TOASTed columns that are dropped?\nb. Would it be worth implementing an 'UNDROP' command...?\nc. Do we need an 'attisreclaimed' field in pg_attribute to indicate that a\nfield has been fully reclaimed, or do we just let people run it whenever they\nwant (even if it has no effect other than to waste time)?\nd. Are there any other comments?\n\nBasically, I would like to come up with a 'white paper' implementation that\nwe can all agree on. Then, I will try to code some parts myself, and\nsolicit help from others for other parts. 
Hopefully, together we can get a\nDROP COLUMN implementation. The most important step, however, is to agree\non an implementation spec.\n\nHopefully I can get the www person to set up a project page (like the\nproposed win32 project page) to coordinate things.\n\nComments?\n\nRegards,\n\nChris\n\n\n\n", "msg_date": "Mon, 1 Jul 2002 15:47:01 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "DROP COLUMN Proposal" }, { "msg_contents": "> 2.\n> The column expansion (*) code and the code that checks for valid column\n> references everywhere in the codebase is changed to also check the\n> attisdropped field. Does someone have a comprehensive list of\n> places to be\n> changed?\n\nActually - did Hiroshi(?)'s original HACK have this code - we can re-use\nthat.\n\nChris\n\n\n\n", "msg_date": "Mon, 1 Jul 2002 16:32:40 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: DROP COLUMN Proposal" }, { "msg_contents": "On Mon, 2002-07-01 at 09:47, Christopher Kings-Lynne wrote:\n> Hi All,\n> \n> I've been thinking about this DROP COLUMN business (sorry to start another\n> spammy, flamey thread!). I'm taking ideas from lots of sources here.\n> \n> How does this sound for a process?\n> \n> 1.\n> A new column is added to pg_attribute called 'attisdropped'. It, of course,\n> defaults to false.\n> \n> 2.\n> The column expansion (*) code and the code that checks for valid column\n> references everywhere in the codebase is changed to also check the\n> attisdropped field. Does someone have a comprehensive list of places to be\n> changed?\n\nIt seems at least easy to test/debug incrementally:\n\ni.e. put in the 'attisdropped' column with default 0 and _not_ the actual\nDROP COLUMN command. 
then test by manually setting and unsetting it\nuntil everything works, then switch on the command.\n\n> 3.\n> The DROP COLUMN command does nothing but set the attisdropped of a column to\n> true, \n\nThis will probably require a full lock on system tables to avoid nasty\nborder conditions when updating caches. But we probably have something\nlike it for drop table already.\n\n> and rename the column to something like DELETED_old_col_name.\n\nWith some number appended for the case when we want to drop several\ncolumns with the same name.\n\nThe name might be '-old_col_name' to save space ( not to overrun\nMAX_IDENTIFIER_LENGTH ) or even '-ld_col_name'\n\n> The\n> column renaming will help people using non-attisdropped aware admin programs\n> see what's what, plus it will allow people to create a new column with the\n> same name as the column just dropped.\n> \n> Now the dropped column will be invisible. As you update rows, etc. the\n> space will be reclaimed in the table as NULLs are put in where the old value\n> used to be. \n\nYou probably have to set DEFAULT for this column to NULL to achieve it.\nAnd dropping / modifying indexes and constraints that reference the\ndeleted column .\n\n> Is this correct?\n> \n> 4.\n> A new command, something like \"ALTER TABLE tab RECLAIM;\" will be able to be\n> run on tables. It will basically go through the entire table and rewrite\n> every row as is, NULLifying all dropped columns in the table. This gives\n> the DBA the option of recovering his/her space if they want.\n\nCould it not just be an option to \"VACUUM table \"?\n\n> Notes\n> -----\n> a. What happens with TOASTed columns that are dropped?\n\nWhat happens currently when rows with TOASTed cols are deleted/updated ?\n\n> b. Would it be worth implementing an 'UNDROP' command...?\n\nI don't think so. Better to resurrect some form of limited time travel\non system level, so that one can get back the data if it is really\nneeded.\n\n> c. 
Do we need an 'attisreclaimed' field in pg_attribute to indicate that a\n> field as been fully reclaimed, or do we just let people run it whenever they\n> want (even if it has no effect other than to waste time)?\n> d. Are there any other comments?\n> \n> Basically, I would like to come up with a 'white paper' implementation that\n> we can all agree on. Then, I will try to code some parts myself, and\n> solicit help from others for other parts. Hopefully, together we can get a\n> DROP COLUMN implementation. The most important step, however, is to agree\n> on an implementation spec.\n\nIronically, often the most important step in reaching agreement is\nshowing clean working code ;)\n\n> Hopefully I can get the www person to set up a project page (like the\n> proposed win32 project page) to coordinate things.\n\n---------------\nHannu\n\n\n\n\n", "msg_date": "01 Jul 2002 12:31:28 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN Proposal" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n>> The DROP COLUMN command does nothing but set the attisdropped of a column to\n>> true, \n\n> This will probably require a full lock on system tables to avoid nasty\n> border conditions when updating caches.\n\nAFAICS it's no different from any other ALTER TABLE command: exclusive\nlock on the table being modified is necessary and sufficient.\n\n>> Now the dropped column will be invisible. As you update rows, etc. the\n>> space will be reclaimed in the table as NULLs are put in where the old value\n>> used to be. \n\n> You probably have to set DEFAULT for this column to NULL to achieve it.\n\nRight, get rid of any default.\n\n> And dropping / modifying indexes and constraints that reference the\n> deleted column .\n\nThis part should fall out of Rod Taylor's pg_depend stuff pretty easily.\nWe still need to debate about the behavior, though. 
If for example there\nis a unique index on column B, do you need \"DROP B CASCADE\" to get rid\nof it, or is \"DROP B RESTRICT\" good enough? Does your answer change if\nthe unique index is on two columns (A,B)? I'm not real sure where the\nboundary is between attributes of the column (okay to drop as part of\nthe column) and independent objects that ought to be treated as\nrequiring CASCADE.\n\n>> A new command, something like \"ALTER TABLE tab RECLAIM;\" will be able to be\n>> run on tables.\n\n> Could it not just be an oprion to \"VACUUM table \"?\n\nI thought the same. It certainly doesn't belong with ALTER TABLE...\n\n>> a. What happens with TOASTed columns that are dropped?\n\n> What happens currently when rows with TOASTed cols are deleted/updated ?\n\nNo different from anything else, AFAICS.\n\nThe nice thing about this implementation approach is that most of the\nbackend need not be aware of deleted columns. There are a few places in\nthe parser (probably few enough to count on one hand) that will have to\nexplicitly check for and reject references to dropped columns, and\nyou're done. The rewriter, planner and executor are blissfully ignorant\nof the whole deal.\n\nYou might have some problems with code in psql, pg_dump, or other\nclients that examines the system tables; it'd have to be fixed to pay\nattention to attisdropped as well.\n\n>> c. Do we need an 'attisreclaimed' field in pg_attribute to indicate that a\n>> field as been fully reclaimed, or do we just let people run it whenever they\n>> want (even if it has no effect other than to waste time)?\n\nDon't think we need it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Jul 2002 09:40:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP COLUMN Proposal " }, { "msg_contents": "> This part should fall out of Rod Taylor's pg_depend stuff pretty easily.\n> We still need to debate about the behavior, though. 
If for example there\n> is a unique index on column B, do you need \"DROP B CASCADE\" to get rid\n> of it, or is \"DROP B RESTRICT\" good enough? Does your answer change if\n> the unique index is on two columns (A,B)? I'm not real sure where the\n> boundary is between attributes of the column (okay to drop as part of\n> the column) and independent objects that ought to be treated as\n> requiring CASCADE.\n\n>From SQL92:\n\n\"If RESTRICT is specified, then C shall not be referenced in\nthe <query expression> of any view descriptor or in the <search\ncondition> of any constraint descriptor other than a table con-\nstraint descriptor that contains references to no other column\nand that is included in the table descriptor of T.\"\n\nSo I guess that means that if the unique index is only on the dropped\ncolumn, then restrict mode will still be able to drop it...\n\nChris\n\n\n\n\n\n\n", "msg_date": "Mon, 1 Jul 2002 23:19:28 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: DROP COLUMN Proposal " }, { "msg_contents": "> The nice thing about this implementation approach is that most of the\n> backend need not be aware of deleted columns. There are a few places in\n> the parser (probably few enough to count on one hand) that will have to\n> explicitly check for and reject references to dropped columns, and\n> you're done. The rewriter, planner and executor are blissfully ignorant\n> of the whole deal.\n\nIf you can enumerate these places without much effort, it'd be appreciated!\n\nI found:\n\nexpandRTE() in parser/parse_relation.c\n\nWhat else?\n\nChris\n\n\n\n", "msg_date": "Tue, 2 Jul 2002 16:32:43 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: DROP COLUMN Proposal " } ]
[ { "msg_contents": "Hi Florian,\n\n> > The most recent patches were submitted by me, so I guess you\n> could call me\n> > the defacto \"maintainer\".\n>\n> Okay - glad someone answered me :)\n\nActually, I replied to you 5 minutes after you posted, but I think my emails\nwere being stalled somewhere...\n\n> I will - please give me a few days for an up to date documentation\n> concerning the changed and new features.\n>\n> And yes - I really appreciate your offer for code review!\n\nTo generate the diff, do this:\n\ncd contrib/fulltextindex\ncvs diff -c > ftidiff.txt\n\nThen email -hackers the ftidiff.txt.\n\n> > > The changes made include:\n> > >\n> > > + Changed the split up behaviour from checking via isalpha to\n> > > using a list of delimiters as isalpha is a pain used with\n> > > data containing german umlauts, etc. ATM this list contains:\n> > >\n> > > \" ,;.:-_#/*+~^�!?\\\"\\\\�$%&()[]{}=<>|0123456789\\n\\r\\t@�\"\n> >\n> > Good idea. Is there a locale-aware version of isalpha anywhere?\n>\n> If there is - I couldn't find it. I did find a lot of frustated\n> posts about\n> isalpha and locale-awareness although.\n\nYeah, I can't find anything in the man pages either. Maybe we can ask the\nlist. People?\n\n> > List: what should we do about the backward compatibility problem?\n>\n> I think the only reasonable way for keeping backward compatibiliy might be\n> to leave the old fti function alone and introduce a new one with\n> the changes\n> (e.g. ftia). Even another fti parameter which activates the new features\n> breaks the compatibility concerning the call. Activiation via DEFINE is\n> another option, but this requires messing around with the source code\n> (although very little) on the user side. Maybe a ./configure option is a\n> good way (but this is beyond my C and friends skills).\n\nI think that creating a new function, called ftia or ftix or something is\nthe best solution. 
I think I can handle doing that...\n\nChris\n\n\n\n", "msg_date": "Mon, 1 Jul 2002 15:50:13 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Changes in /contrib/fulltextindex" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Good idea. Is there a locale-aware version of isalpha anywhere?\n>> \n>> If there is - I couldn't find it. I did find a lot of frustated\n>> posts about isalpha and locale-awareness although.\n\n> Yeah, I can't find anything in the man pages either. Maybe we can ask the\n> list. People?\n\nHuh? isalpha() *is* locale-aware according to the ANSI C spec.\nFor instance, the attached test program finds 52 alpha characters\nin C locale and 114 in fr_FR locale under HPUX.\n\nI am not at all sure that this aspect of Florian's change is a good\nidea, as it appears to eliminate locale-awareness in favor of a hard\ncoded delimiter list.\n\n\t\t\tregards, tom lane\n\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <ctype.h>\n#include <locale.h>\n\nint main(int argc, char **argv)\n{\n int i;\n\n setlocale(LC_ALL, \"\");\n\n for (i = 0; i < 256; i++)\n if (isalpha(i))\n printf(\"%d\t%c\\n\", i, i);\n\n return 0;\n}\n\n\n", "msg_date": "Mon, 01 Jul 2002 09:28:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Changes in /contrib/fulltextindex " }, { "msg_contents": "Hi.\n\n> Huh? isalpha() *is* locale-aware according to the ANSI C spec.\n> For instance, the attached test program finds 52 alpha characters\n> in C locale and 114 in fr_FR locale under HPUX.\n>\n> I am not at all sure that this aspect of Florian's change is a good\n> idea, as it appears to eliminate locale-awareness in favor of a hard\n> coded delimiter list.\n\nJust tried your example - you're right of course! 
I will remove the hard\ncoded delimited list and replace it with the proper calls as shown in the\ncode you've sent.\n\nFlorian\n\n\n\n", "msg_date": "Mon, 1 Jul 2002 15:56:29 +0200", "msg_from": "\"Florian Helmberger\" <f.helmberger@uptime.at>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Changes in /contrib/fulltextindex " }, { "msg_contents": "\"Florian Helmberger\" <f.helmberger@uptime.at> writes:\n> Just tried your example - you're right of course! I will remove the hard\n> coded delimited list and replace it with the proper calls as shown in the\n> code you've sent.\n\nWell, that was a quick hack not clean code. Coding rules for stuff\ninside the backend are\n\t- don't do setlocale; it's already been done.\n\t- explicitly cast the argument of any ctype.h macro to\n\t (unsigned char).\n\nWithout the latter you have portability problems depending on whether\nchars are signed or unsigned.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Jul 2002 10:05:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Changes in /contrib/fulltextindex " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> I think that creating a new function, called ftia or ftix or something is\n> the best solution. I think I can handle doing that...\n\nWhy change the name? If it's got a different argument list then you\ncan just overload the same old name.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Jul 2002 10:11:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Changes in /contrib/fulltextindex " }, { "msg_contents": "> > I am not at all sure that this aspect of Florian's change is a good\n> > idea, as it appears to eliminate locale-awareness in favor of a hard\n> > coded delimiter list.\n>\n> Just tried your example - you're right of course! 
I will remove the hard\n> coded delimited list and replace it with the proper calls as shown in the\n> code you've sent.\n\nOK Florian, submit a diff with your changes and I'll give them a run.\n\nI forgot that we could just overload functions with different parameter\nlists! That sounds like a good idea.\n\nChris\n\n\n\n\n", "msg_date": "Mon, 1 Jul 2002 23:00:30 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Changes in /contrib/fulltextindex " }, { "msg_contents": "\nFlorian, I haven't seen this patch yet. Did you send it in?\n\n---------------------------------------------------------------------------\n\nFlorian Helmberger wrote:\n> Hi.\n> \n> > Huh? isalpha() *is* locale-aware according to the ANSI C spec.\n> > For instance, the attached test program finds 52 alpha characters\n> > in C locale and 114 in fr_FR locale under HPUX.\n> >\n> > I am not at all sure that this aspect of Florian's change is a good\n> > idea, as it appears to eliminate locale-awareness in favor of a hard\n> > coded delimiter list.\n> \n> Just tried your example - you're right of course! I will remove the hard\n> coded delimited list and replace it with the proper calls as shown in the\n> code you've sent.\n> \n> Florian\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Jul 2002 18:01:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Changes in /contrib/fulltextindex" }, { "msg_contents": "Yeah, I've got it Bruce - I still haven't had time to look into it and I\nreally don't know what to do about the backward compatibility issue. How do\nI set up 2 identically named C functions with different parameter lists?\n\nChris\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Friday, 12 July 2002 6:02 AM\n> To: Florian Helmberger\n> Cc: Tom Lane; Christopher Kings-Lynne; Hackers\n> Subject: Re: [HACKERS] [PATCHES] Changes in /contrib/fulltextindex\n>\n>\n>\n> Florian, I haven't seen this patch yet. Did you send it in?\n>\n> ------------------------------------------------------------------\n> ---------\n>\n> Florian Helmberger wrote:\n> > Hi.\n> >\n> > > Huh? isalpha() *is* locale-aware according to the ANSI C spec.\n> > > For instance, the attached test program finds 52 alpha characters\n> > > in C locale and 114 in fr_FR locale under HPUX.\n> > >\n> > > I am not at all sure that this aspect of Florian's change is a good\n> > > idea, as it appears to eliminate locale-awareness in favor of a hard\n> > > coded delimiter list.\n> >\n> > Just tried your example - you're right of course! I will remove the hard\n> > coded delimited list and replace it with the proper calls as\n> shown in the\n> > code you've sent.\n> >\n> > Florian\n> >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n> >\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n>\n\n", "msg_date": "Fri, 12 Jul 2002 09:34:19 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Changes in /contrib/fulltextindex" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> Yeah, I've got it Bruce - I still haven't had time to look into it and I\n> really don't know what to do about the backward compatibility issue. How do\n> I set up 2 identically named C functions with different parameter lists?\n\nOh, that is easy. When you CREATE FUNCTION, you just specify the\ndifferent params. However, if you are calling it _from_ C, then it is\nimpossible. Just break backward compatibility, I think was Tom's\nsuggestion, and I agree.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Jul 2002 23:34:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Changes in /contrib/fulltextindex" }, { "msg_contents": "> Christopher Kings-Lynne wrote:\n> > Yeah, I've got it Bruce - I still haven't had time to look into it and I\n> > really don't know what to do about the backward compatibility\n> issue. How do\n> > I set up 2 identically named C functions with different parameter lists?\n>\n> Oh, that is easy. When you CREATE FUNCTION, you just specify the\n> different params. However, if you are calling it _from_ C, then it is\n> impossible. Just break backward compatibility, I think was Tom's\n> suggestion, and I agree.\n\nI mean, can I code up 2 functions called \"fti\" and put them both in the\nfti.c and then have them both in the fti.so? 
Then when CREATE FUNCTION is\nrun it will link to the correct function in the fti.so depending on the\nparameter list?\n\nIt's easy for you guys to say \"break backward\", but you aren't using it ;)\n\nChris\n\n", "msg_date": "Fri, 12 Jul 2002 11:38:20 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Changes in /contrib/fulltextindex" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > Christopher Kings-Lynne wrote:\n> > > Yeah, I've got it Bruce - I still haven't had time to look into it and I\n> > > really don't know what to do about the backward compatibility\n> > issue. How do\n> > > I set up 2 identically named C functions with different parameter lists?\n> >\n> > Oh, that is easy. When you CREATE FUNCTION, you just specify the\n> > different params. However, if you are calling it _from_ C, then it is\n> > impossible. Just break backward compatibility, I think was Tom's\n> > suggestion, and I agree.\n> \n> I mean, can I code up 2 functions called \"fti\" and put them both in the\n> fti.c and then have them both in the fti.so? Then when CREATE FUNCTION is\n> run it will link to the correct function in the fti.so depending on the\n> parameter list?\n\nCall them different C names, but name them the same in CREATE FUNCTION\nfuncname. Just use a different symbol name here:\n\n CREATE [ OR REPLACE ] FUNCTION name ( [ argtype [, ...] ] )\n ^^^^ same here\n RETURNS rettype\n AS 'obj_file', 'link_symbol'\n ^^^^^^^^^^^^^ different here\n LANGUAGE langname\n [ WITH ( attribute [, ...] ) ]\n\n\nDoes that help?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Jul 2002 23:40:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Changes in /contrib/fulltextindex" }, { "msg_contents": "> Call them different C names, but name them the same in CREATE FUNCTION\n> funcname. Just use a different symbol name here:\n> \n> CREATE [ OR REPLACE ] FUNCTION name ( [ argtype [, ...] ] )\n> ^^^^ same here\n> RETURNS rettype\n> AS 'obj_file', 'link_symbol'\n> ^^^^^^^^^^^^^ different here\n> LANGUAGE langname\n> [ WITH ( attribute [, ...] ) ]\n> \n> \n> Does that help?\n\nYes, I get it now - I should be able to set it up quite nicely.\n\nChris\n\n", "msg_date": "Fri, 12 Jul 2002 11:41:35 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Changes in /contrib/fulltextindex" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > Call them different C names, but name them the same in CREATE FUNCTION\n> > funcname. Just use a different symbol name here:\n> > \n> > CREATE [ OR REPLACE ] FUNCTION name ( [ argtype [, ...] ] )\n> > ^^^^ same here\n> > RETURNS rettype\n> > AS 'obj_file', 'link_symbol'\n> > ^^^^^^^^^^^^^ different here\n> > LANGUAGE langname\n> > [ WITH ( attribute [, ...] ) ]\n> > \n> > \n> > Does that help?\n> \n> Yes, I get it now - I should be able to set it up quite nicely.\n\nYea, this function overloading is a nifty feature. No wonder C++ has\nit.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Jul 2002 23:43:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Changes in /contrib/fulltextindex" }, { "msg_contents": "Hi.\n\n> Florian, I haven't seen this patch yet. 
Did you send it in?\n\nYes, I sent it to Christopher for reviewing, as allready mentioned by\nhimself :)\nI still had not the time to update the docs though, hope to get this done\nnext week.\n\nFlorian\n\n", "msg_date": "Fri, 12 Jul 2002 10:38:50 +0200", "msg_from": "\"Florian Helmberger\" <f.helmberger@uptime.at>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Changes in /contrib/fulltextindex" }, { "msg_contents": "Florian Helmberger wrote:\n> Hi.\n> \n> > Florian, I haven't seen this patch yet. Did you send it in?\n> \n> Yes, I sent it to Christopher for reviewing, as allready mentioned by\n> himself :)\n> I still had not the time to update the docs though, hope to get this done\n> next week.\n\nYes, I had an email exchange with Christopher last night and he is\nworking on the backward compatibility issues with overloaded function\nparameters.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Jul 2002 10:26:11 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Changes in /contrib/fulltextindex" } ]
[ { "msg_contents": "We have been discussing heap tuple header changes for a while now.\nHere is my proposal for omitting the oid, when it is not needed:\n\nFirst let's eliminate t_oid from HeapTupleHeaderData.\n\nThen add the oid to the end of the structure, if and only if it is\nneeded. The tricky part here is that there is a variable length field\n(t_bits) and the oid has to be properly aligned.\n\nThis pseudo code snippet illustrates what I plan to do:\n\n\tlen = offsetof(HeapTupleHeaderData, t_bits); /* 23 */\n\tif (hasnulls) {\n\t\tlen += BITMAPLEN(NumberOfAttributes);\n\t}\n\tif (hasoid) {\n\t\tlen += sizeof(Oid);\n\t}\n\tlen = MAXALIGN(len);\n\thoff = len;\n\toidoff = hoff - sizeof(Oid);\n\n#define HeapTupleHeaderGetOid(hth) \\\n\t( *((Oid *)((char *)(hth) + (hth)->t_hoff - sizeof(Oid))) )\n\nAnd this is how the structure would look like:\n 1 2 3\n 0 4 0 0 34 78 2\nnow oooo<---------fix--------->.x___X___\n+oid <---------fix--------->.oooox___ MAXALIGN 4\n+oid <---------fix--------->.....ooooX___ MAXALIGN 8\n-oid <---------fix--------->.X___\n\n1:\nnow oooo<---------fix--------->bx___X___\n+oid <---------fix--------->boooox___ MAXALIGN 4\n+oid <---------fix--------->b....ooooX___ MAXALIGN 8\n-oid <---------fix--------->bX___\n\n2:\nnow oooo<---------fix--------->bb...X___\n+oid <---------fix--------->bb...ooooX___ MAXALIGN 4 und 8\n-oid <---------fix--------->bb...x___X___\n 3 4 \n6: 2 6 0\nnow oooo<---------fix--------->bbbbbb...x___X___\n+oid <---------fix--------->bbbbbb...oooox___ MAXALIGN 4\n+oid <---------fix--------->bbbbbb.......ooooX___ MAXALIGN 8\n-oid <---------fix--------->bbbbbb...X___\n\nwhere\n<---------fix---------> fixed sized part without oid, 23 bytes\noooo oid, 4 bytes\nb one bitmap byte\n. 
one padding byte\nx start of data area (= hoff) with 4-byte-alignment\nX start of data area (= hoff) with 8-byte-alignment\n\nBytes saved on architectures with 4/8 byte alignment:\n hoff bytes\nnatts bitmaplen hoff72 oidoff woo saved\n 0 28/32 24 24/24 4/8\n1-8 1 28/32 24 24/24 4/8\n9-40 2-5 32/32 28 28/32 4/0\n41-72 6-9 36/40 32 32/32 4/8\n\nAs a first step I've already posted a patch that eliminates direct\naccess to t_oid. The final patch will change not much more than the\ngetter and setter macros.\n\nProblems I have identified so far:\n\n. heap_formtuple needs a parameter bool withoid\n. Does heap_addheader *always* create a header with oid?\n. Have to check heap_xlog_xxxx routines\n. Occasionally a heap tuple header is copied by memmove.\n\nComments?\n\nServus\n Manfred\n\n\n", "msg_date": "Mon, 01 Jul 2002 12:40:35 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "HeapTupleHeader withoud oid" }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> . Does heap_addheader *always* create a header with oid?\n\nNo.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Jul 2002 10:20:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HeapTupleHeader withoud oid " }, { "msg_contents": "On Mon, 01 Jul 2002 12:40:35 +0200, I wrote:\n>Bytes saved on architectures with 4/8 byte alignment:\n> hoff bytes\n>natts bitmaplen hoff72 oidoff woo saved\n> 0 28/32 24 24/24 4/8\n>1-8 1 28/32 24 24/24 4/8\n>9-40 2-5 32/32 28 28/32 4/0\n>41-72 6-9 36/40 32 32/32 4/8\n\nIn this table oidoff contains wrong values, it is from my first\napproach, where I tried to put oid at the first INTALIGNed position\nafter t_bits. 
The table should be:\n\n bitmap hoff bytes\nnatts len hoff1 hoff2 oidoff woo saved\n 0 32 28/32 24/28 24 4/8\n1-8 1 32 28/32 24/28 24 4/8\n9-40 2-5 36/40 32 28 28/32 4/0\n41-72 6-9 40 36/40 32/36 32 4/8\n\nwhere hoff1 is the MAXALIGNed length of the tuple header with a v7.2\ncompatible tuple header format (with bitmaplen patch included);\nhoff2 is the header size after the Xmin/Cid/Xmax patch, which is still\nbeing discussed on -patches and -hackers;\nwith this proposal, if a table has oids, oidoff is the offset of the\noid and header size equals hoff2;\nhoff woo is the header size without oid;\nbytes saved is relative to hoff2.\n\nI apologize for the confusion.\n\nServus\n Manfred\n\n\n", "msg_date": "Mon, 01 Jul 2002 16:22:27 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: HeapTupleHeader without oid" }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> Here is my proposal for omitting the oid, when it is not needed:\n\nI do not think you can make this work unless \"has oids\" is added to\nTupleDescs. There are too many places where tuples are manipulated\nwith only a tupdesc for reference.\n\nIt might also be necessary to add a \"has oid\" bit to t_infomask,\nso that a tuple's OID can be fetched with *no* outside information,\nbut I'd prefer to avoid that if possible. I think adding a tupledesc\nparameter to heap_getsysattr might be enough to avoid it.\n\nI'd suggest reworking your \"Wrap access to Oid\" patch, which currently\nincreases instead of reducing the dependency on access to a Relation\nfor the tuple. 
Also, you could be a little more conservative about\nadding Asserts --- those are not free, at least not from a development\npoint of view, so I object to adding multiple redundant Asserts in\nhotspot routines.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Jul 2002 10:40:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HeapTupleHeader withoud oid " } ]
[ { "msg_contents": "George,\n\nI like your updated version and have put it on my site.\nAre you willing to contribute some examples and test data,\nso we (me, Teodor and you) could arrange separate page for this module.\nAs I wrote, we'll continue developing and you're welcome.\n\n\tOleg\nOn Sun, 30 Jun 2002, George Essig wrote:\n\n> Oleg,\n>\n> Attached is my latest version. When something was unclear to me, I took the liberty to substitute\n> what I thought it meant.\n>\n> George\n>\n> --- Oleg Bartunov <oleg@sai.msu.su> wrote:\n> > Thanks George,\n> >\n> > I've edit it a little bit, but still it's incomplete and looks wacky :)\n> > Could you correct english ?\n> >\n> > Hope it will helps.\n> >\n> > \tOleg\n> > On Fri, 28 Jun 2002, George Essig wrote:\n> >\n> > > I used AltaVista's Babel Fish site, http://babelfish.altavista.com/, to translate the README\n> > file\n> > > in tree.tar.gz available at http://www.sai.msu.su/~megera/postgres/gist/. The english version\n> > is\n> > > attached. The translation is not perfect. You might want to make some changes.\n> > >\n> > > George Essig\n> > >\n>\n>\n> __________________________________________________\n> Do You Yahoo!?\n> Yahoo! - Official partner of 2002 FIFA World Cup\n> http://fifaworldcup.yahoo.com\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\n", "msg_date": "Mon, 1 Jul 2002 18:52:11 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Re: Translated README.tree in tree.tar.gz" } ]