[
{
"msg_contents": "This came into the jdbc list\n\nApparently bigint is not really 8 bytes???\n\nI tested this out with psql\n\ntest=# create table testbigint (id serial, fp0 int8);\nNOTICE: CREATE TABLE will create implicit sequence 'testbigint_id_seq'\nfor SERIAL column 'testbigint.id'\nNOTICE: CREATE TABLE/UNIQUE will create implicit index\n'testbigint_id_key' for table 'testbigint'\nCREATE\ntest=# insert into testbigint (fp0) values (1);\nINSERT 333698 1\ntest=# update testbigint set fp0 = -9223372036854775808 where id = 1;\nERROR: int8 value out of range: \"-9223372036854775808\"\n\n\nDave\n\n-----Original Message-----\nFrom: pgsql-jdbc-owner@postgresql.org\n[mailto:pgsql-jdbc-owner@postgresql.org] On Behalf Of Dav Coleman\nSent: August 7, 2001 11:37 AM\nTo: pgsql-jdbc@postgresql.org\nSubject: [JDBC] BIGINT vs Java's long\n\n\nAccording to the Java Language Specification,\nhttp://java.sun.com/docs/books/jls/second_edition/html/typesValues.doc.h\ntml#9151\n\n\"For long, from -9223372036854775808 to 9223372036854775807, inclusive\"\n\n\nIndeed, I have java code which generates random longs and printlns\nthem, and I end up with values equal to -9223372036854775808.\n\nI had those printlns redirected to a .sql file which I ran against psql\nto update some bigint columns, but I got\n ERROR: int8 value out of range: \"-9223372036854775808\"\n\nApparently bigints don't like that value?\n\nWell confused, since 8 bytes should be 8 freaking bytes, I turned to\nJDBC.\n\nThat's when things got weird: first I tried declaring a long variable\nwith that value, and got a compiler error (integer too large) or\nsomething like that.\n\nSo I declared \"long myBigint = Long.MIN_VALUE\" and that compiled, but\nwhen I tried using that value in a Statement.execute() I got the exact\nsame error.\n\nAnyone know what's going on? 
Here's the test code, using\njdbc7.0-1.2.jar:\n\n\nimport java.sql.*;\n\npublic class testPGSQLbigint {\n\n public static void main( String[] args ) {\n try {\n Class.forName(\"org.postgresql.Driver\");\n } catch (java.lang.ClassNotFoundException e) {\n System.out.println( e );\n }\n\n Connection db=null;\n String url = \"jdbc:postgresql:abinitio2\";\n try {\n db = DriverManager.getConnection(url,\"dav\",\"\");\n } catch ( SQLException e ) {\n System.err.println( e );\n }\n\n // the following gives a compiler error\n //long bigint = -9223372036854775808;\n long bigint = Long.MIN_VALUE;\n String sql_ = \"update chembase set fp0 = \"+bigint+\" where id =\n27948;\";\n System.out.println(sql_);\n\n try {\n Statement st = db.createStatement();\n st.execute( sql_ );\n st.close();\n } catch ( SQLException e ) {\n System.err.println( e );\n }\n }\n\n\n}\n\n\n\noutput:\n$ java -classpath /opt/java/jars/jdbc7.0-1.2.jar:. \"testPGSQLbigint\"\n\nupdate chembase set fp0 = -9223372036854775808 where id = 27948;\njava.sql.SQLException: ERROR: int8 value out of range:\n\"-9223372036854775808\"\n\n\n\nnote this runs the same in linux and win2k (using Sun's SDK)\n\n\n\n-- \nDav Coleman\nhttp://www.danger-island.com/dav/\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html\n\n\n",
"msg_date": "Tue, 7 Aug 2001 13:19:51 -0400",
"msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>",
"msg_from_op": true,
"msg_subject": "FW: [JDBC] BIGINT vs Java's long"
},
{
"msg_contents": "On Tue, 7 Aug 2001, Dave Cramer wrote:\n\n> This came into the jdbc list\n> \n> Apparently bigint is not really 8 bytes???\n> \n> I test this out with psql\n> \n> test=# create table testbigint (id serial, fp0 int8);\n> NOTICE: CREATE TABLE will create implicit sequence 'testbigint_id_seq'\n> for SERIAL column 'testbigint.id'\n> NOTICE: CREATE TABLE/UNIQUE will create implicit index\n> 'testbigint_id_key' for table 'testbigint'\n> CREATE\n> test=# insert into testbigint (fp0) values (1);\n> INSERT 333698 1\n> test=# update testbigint set fp0 = -9223372036854775808 where id = 1;\n> ERROR: int8 value out of range: \"-9223372036854775808\"\n\nYes, it's failing on precisely the minimum value. It appears\nthat the code that does this sets the sign and then makes the number\nand applies the sign at the end which would be wrong in this\ncase (as it overflows on 9223372036854775808)\n\nI don't think my patch against recent sources would apply cleanly to \nolder ones, and I didn't run the regression against it, but it seemed\nto work, and is only a two line change in current source.",
"msg_date": "Tue, 7 Aug 2001 11:40:07 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: FW: [JDBC] BIGINT vs Java's long"
},
{
"msg_contents": "Dave Cramer writes:\n\n> Apparently bigint is not really 8 bytes???\n\nIt's sort of 7.999 bytes.\n\n> test=# update testbigint set fp0 = -9223372036854775808 where id = 1;\n> ERROR: int8 value out of range: \"-9223372036854775808\"\n\nThis is a bug in the int8 value parser. While it reads the string it\nalways accumulates the value as positive and then tags the sign on.\nSince +9223372036854775808 doesn't fit you get this error.\n\nISTM that this can be fixed by accumulating toward the negative end and\ntaking some special care around the boundaries, like this patch:\n\nIndex: int8.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/adt/int8.c,v\nretrieving revision 1.30\ndiff -u -r1.30 int8.c\n--- int8.c\t2001/06/07 00:09:29\t1.30\n+++ int8.c\t2001/08/07 19:26:35\n@@ -77,16 +77,21 @@\n \t\telog(ERROR, \"Bad int8 external representation \\\"%s\\\"\", str);\n \twhile (*ptr && isdigit((unsigned char) *ptr))\t\t/* process digits */\n \t{\n-\t\tint64\t\tnewtmp = tmp * 10 + (*ptr++ - '0');\n+\t\t/* We accumulate the value towards the negative end to allow\n+\t\t the minimum value to fit it. */\n+\t\tint64\t\tnewtmp = tmp * 10 - (*ptr++ - '0');\n\n-\t\tif ((newtmp / 10) != tmp)\t\t/* overflow? */\n+\t\t/* overflow? */\n+\t\tif ((newtmp / 10) != tmp\n+\t\t\t/* This number only fits with a negative sign. */\n+\t\t\t|| (newtmp == -9223372036854775808 && sign > 0))\n \t\t\telog(ERROR, \"int8 value out of range: \\\"%s\\\"\", str);\n \t\ttmp = newtmp;\n \t}\n \tif (*ptr)\t\t\t\t\t/* trailing junk? */\n \t\telog(ERROR, \"Bad int8 external representation \\\"%s\\\"\", str);\n\n-\tresult = (sign < 0) ? -tmp : tmp;\n+\tresult = (sign > 0) ? -tmp : tmp;\n\n \tPG_RETURN_INT64(result);\n }\n===end\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 7 Aug 2001 21:34:02 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: FW: [JDBC] BIGINT vs Java's long"
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> I don't think my patch against recent sources would apply cleanly to \n> older ones, and I didn't run the regression against it, but it seemed\n> to work, and is only a two line change in current source.\n\nThis patch needs more work. You are assuming that integer division on\nnegative numbers works the same everywhere, which it most definitely\ndoes not (the direction of truncation was unspecified until C99).\nThe overflow check will fail on platforms where negative results\ntruncate towards minus infinity. So we need a different way of checking\nfor overflow.\n\nRight off the bat I'm not coming up with an implementation that's both\nportable and able to accept INT64_MIN, but this has got to be a solved\nproblem. Look around, maybe in the GNU or BSD C libraries...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Aug 2001 15:44:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FW: [JDBC] BIGINT vs Java's long "
},
{
"msg_contents": "On Tue, 7 Aug 2001, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > I don't think my patch against recent sources would apply cleanly to \n> > older ones, and I didn't run the regression against it, but it seemed\n> > to work, and is only a two line change in current source.\n> \n> This patch needs more work. You are assuming that integer division on\n> negative numbers works the same everywhere, which it most definitely\n> does not (the direction of truncation was unspecified until C99).\n> The overflow check will fail on platforms where negative results\n> truncate towards minus infinity. So we need a different way of checking\n> for overflow.\n> \n> Right off the bat I'm not coming up with an implementation that's both\n> portable and able to accept INT64_MIN, but this has got to be a solved\n> problem. Look around, maybe in the GNU or BSD C libraries...\n\nActually, that wasn't a suggested patch for real inclusion (I should have\nmentioned that) but instead for the user in question to try. I'll look\nand get something complete for this. :)\n\n\n",
"msg_date": "Tue, 7 Aug 2001 13:10:20 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: FW: [JDBC] BIGINT vs Java's long "
}
] |
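Tom's objection above — that `(newtmp / 10) != tmp` relies on the truncation direction of division on negative values, which was implementation-defined before C99 — is conventionally answered the way `strtoll` implementations do it: accumulate the magnitude in an *unsigned* 64-bit variable and compare it against a sign-dependent limit, so no negative number is ever divided. A minimal sketch of that approach (`parse_int64` is a hypothetical helper name, not the fix actually committed):

```c
#include <ctype.h>
#include <stdint.h>

/*
 * Sketch of an int8 parser that accepts INT64_MIN portably.
 * The magnitude is accumulated in an unsigned 64-bit variable and
 * checked against a sign-dependent limit, so the overflow test never
 * divides a negative number.
 */
static int
parse_int64(const char *str, int64_t *result)
{
    const char *ptr = str;
    uint64_t    mag = 0;
    uint64_t    limit;
    int         sign = 1;

    while (isspace((unsigned char) *ptr))
        ptr++;
    if (*ptr == '-')
    {
        sign = -1;
        ptr++;
    }
    else if (*ptr == '+')
        ptr++;

    /* |INT64_MIN| is one more than INT64_MAX */
    limit = (sign < 0) ? (uint64_t) INT64_MAX + 1 : (uint64_t) INT64_MAX;

    if (!isdigit((unsigned char) *ptr))
        return -1;              /* bad external representation */

    while (isdigit((unsigned char) *ptr))
    {
        uint64_t    digit = (uint64_t) (*ptr++ - '0');

        if (mag > (limit - digit) / 10)
            return -1;          /* value out of range */
        mag = mag * 10 + digit;
    }
    if (*ptr)
        return -1;              /* trailing junk */

    if (sign < 0)
        *result = (mag == 0) ? 0 : -(int64_t) (mag - 1) - 1;
    else
        *result = (int64_t) mag;
    return 0;
}
```

The final `-(int64_t)(mag - 1) - 1` step produces INT64_MIN without ever negating a value that doesn't fit in int64_t, and all the overflow arithmetic stays in unsigned territory.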
[
{
"msg_contents": "\nHello all,\n\nDoes anyone know a way to get the machine that executes the query?\n\nThanks ...\n\n",
"msg_date": "7 Aug 2001 22:10:22 -0000",
"msg_from": "\"gabriel\" <gabriel@workingnetsp.com.br>",
"msg_from_op": true,
"msg_subject": "Trigger Function Question..."
}
] |
[
{
"msg_contents": "\n> There could be DELETE operations for the tuple\n> from other backends also and the TID may disappear.\n> Because FULL VACUUM couldn't run while the cursor\n> is open, it could neither move nor remove the tuple\n> but I'm not sure if the new VACUUM could remove\n> the deleted tuple and other backends could re-use\n> the space under such a situation.\n\nIf you also save the tuple transaction info (xmin ?) during the\nselect in addition to xtid, you could see whether the tupleslot was\nreused ?\n(This might need a function interface to make it reasonably portable to\nfuture \nversions)\nOf course the only thing you can do if you notice it has changed is bail\nout.\nBut that leaves the question to me on what should actually be done when\nthe tuple has changed underneath. \nI for one would not like the update to succeed if someone else modified\nit \ninbetween my fetch and my update.\n\nAndreas\n",
"msg_date": "Wed, 8 Aug 2001 14:46:10 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "RE: CURRENT OF cursor without OIDs"
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n\n> > There could be DELETE operations for the tuple\n> > from other backends also and the TID may disappear.\n> > Because FULL VACUUM couldn't run while the cursor\n> > is open, it could neither move nor remove the tuple\n> > but I'm not sure if the new VACUUM could remove\n> > the deleted tuple and other backends could re-use\n> > the space under such a situation.\n> \n> If you also save the tuple transaction info (xmin ?) during the\n> select in addition to xtid, you could see whether the tupleslot was\n> reused ?\n> (This might need a function interface to make it reasonably portable to\n> future \n> versions)\n> Of course the only thing you can do if you notice it has changed is bail\n> out.\n> But that leaves the question to me on what should actually be done when\n> the tuple has changed underneath. \n> I for one would not like the update to succeed if someone else modified\n> it \n> inbetween my fetch and my update.\n\nIf PL/pgSQL doesn't lock the table before doing the select, then I\nthink it has to mark the tuples for update when it does the select.\nUnfortunately, the portal code explicitly rejects FOR UPDATE\n(transformSelectStmt in parser/analyze.c).\n\nIan\n",
"msg_date": "08 Aug 2001 07:42:48 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: CURRENT OF cursor without OIDs"
},
{
"msg_contents": "Ian Lance Taylor <ian@airs.com> writes:\n> Unfortunately, the portal code explicitly rejects FOR UPDATE\n> (transformSelectStmt in parser/analyze.c).\n\nAFAIK, that error check is there specifically because we don't have\nUPDATE WHERE CURRENT. Try removing it and see what happens --- AFAIK,\nthings might \"just work\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Aug 2001 12:41:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CURRENT OF cursor without OIDs "
},
{
"msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > There could be DELETE operations for the tuple\n> > from other backends also and the TID may disappear.\n> > Because FULL VACUUM couldn't run while the cursor\n> > is open, it could neither move nor remove the tuple\n> > but I'm not sure if the new VACUUM could remove\n> > the deleted tuple and other backends could re-use\n> > the space under such a situation.\n> \n> If you also save the tuple transaction info (xmin ?) during the\n> select in addition to xtid, you could see whether the tupleslot was\n> reused ?\n\nI think the TID itself is available for the purpose as long as\nPostgreSQL uses a no-overwrite storage manager. If the tuple\nfor a saved TID isn't found, the tuple may have been updated or deleted.\nIf the tuple is found but the OID is different from the saved\none, the space may have been re-used. If we switch to an overwriting\nstorage manager, TIDs would no longer be transient and we would need\nanother item like xmin to detect the change of rows.\nI agree with you that detecting the change of rows is very\ncritical and xmin may be needed in the future.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 09 Aug 2001 08:54:04 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: CURRENT OF cursor without OIDs"
}
] |
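Andreas's suggestion — remember a version stamp (xmin) alongside the TID at FETCH time, and bail out at UPDATE time if it has changed — is classic optimistic concurrency control. A self-contained illustration of the mechanism, with an array slot standing in for a TID and a generation counter standing in for xmin (purely hypothetical names and structures, not PostgreSQL's actual code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-ins: a "slot" plays the role of a TID, and a
 * generation counter plays the role of xmin. */
typedef struct
{
    uint32_t    generation;     /* bumped whenever the slot is reused */
    int         value;
} Slot;

typedef struct
{
    uint32_t    slot_no;        /* which slot we fetched (the "TID") */
    uint32_t    generation;     /* version seen at fetch time ("xmin") */
} CursorPos;

/* Fetch: remember both the position and the version stamp. */
static CursorPos
cursor_fetch(const Slot *slots, uint32_t slot_no)
{
    CursorPos   pos = { slot_no, slots[slot_no].generation };

    return pos;
}

/* Update "WHERE CURRENT OF": succeed only if the slot still holds the
 * same version we fetched; otherwise bail out, as suggested above,
 * rather than clobber a concurrently rewritten row. */
static bool
cursor_update(Slot *slots, CursorPos pos, int new_value)
{
    if (slots[pos.slot_no].generation != pos.generation)
        return false;           /* tuple slot was reused underneath us */
    slots[pos.slot_no].value = new_value;
    return true;
}
```

The point of the sketch is only the check in `cursor_update`: position alone is not enough once storage can be reused, so the version stamp must travel with it.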
[
{
"msg_contents": "Jan Wieck <JanWieck@yahoo.com> writes:\n> For all the default operations, the system would treat the\n> datums still like regular attributes. That means, that an\n\n> INSERT ... SELECT ...\n\n> copying a BLOB from one table to another (and that's correct,\n> BLOB's should have copy semantics) would force the entire\n> BLOB data into memory ... and ... then ... after ... some\n> ... time ... run out of memory.\n\nThis does not seem expensive or difficult to solve. tuptoaster.c\nwill be handed a TOAST pointer as part of heap_insert, and it will\nknow that it has to duplicate the value. It seems an easy, localized\nchange to persuade it to do that copying chunk-at-a-time instead of\nsuck-it-all-in-then-spew-it-all-out.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Aug 2001 09:45:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Question about todo item "
},
{
"msg_contents": "I wrote:\n> Tom Lane wrote:\n> >\n> > Implementation is left as an exercise for the reader ;-).\n> >\n> > Offhand this seems like it would be doable for a column-value that\n> > was actually moved out-of-line by TOAST, since the open_toast_object\n> > function could see and return the TOAST pointer, and then the read/\n> > write operations just hack on rows in pg_largeobject. The hard part\n> > is how to provide equivalent functionality (transparent to the client\n> > of course) when the particular value you select has *not* been moved\n> > out-of-line. Ideas anyone?\n>\n> TOAST values aren't stored in pg_largeobject. And how do you\n> seek to a position in a compressed and then sliced object? We\n> need a way to force the object over a streaming interface\n> into uncompressed toast slices first. Let me think about it\n> for two days, Okay?\n>\n> The interface lacks imho a mode (r/w/rw/a) argument. Other\n> than that I'd like this part.\n\n The idea of making BLOB and CLOB simply toast forced special\n datatypes and add streaming access functions lacks one\n important requirement.\n\n For all the default operations, the system would treat the\n datums still like regular attributes. That means, that an\n\n INSERT ... SELECT ...\n\n copying a BLOB from one table to another (and that's correct,\n BLOB's should have copy semantics) would force the entire\n BLOB data into memory ... and ... then ... after ... some\n ... time ... run out of memory.\n\n We don't get far without a real new datatype and special\n support on the heap access level. We should for sure reuse\n the toast shadow table to store the data. But that's the only\n connection to toast here.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 8 Aug 2001 09:47:55 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Question about todo item"
}
] |
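Tom's "chunk-at-a-time instead of suck-it-all-in-then-spew-it-all-out" is a bounded-buffer copy loop: peak memory becomes the chunk size rather than the size of the BLOB. A generic sketch with stdio streams standing in for TOAST chunk reads and writes (a hypothetical helper, not tuptoaster.c's real interface):

```c
#include <stdio.h>

#define COPY_CHUNK_SIZE 8192    /* peak memory use, independent of datum size */

/* Copy an arbitrarily large value from src to dst through a fixed-size
 * buffer.  Returns the total number of bytes copied, or -1 on error. */
static long
copy_chunkwise(FILE *src, FILE *dst)
{
    char        buf[COPY_CHUNK_SIZE];
    size_t      nread;
    long        total = 0;

    while ((nread = fread(buf, 1, sizeof(buf), src)) > 0)
    {
        if (fwrite(buf, 1, nread, dst) != nread)
            return -1;
        total += (long) nread;
    }
    return ferror(src) ? -1 : total;
}
```

An INSERT ... SELECT over a multi-gigabyte BLOB then touches at most COPY_CHUNK_SIZE bytes of the value at a time instead of materializing the whole datum in memory.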
[
{
"msg_contents": "\nBriefly: what would be the plausibility/difficulty\nof using a distributed shared memory (DSM) library\nand a socket bridge implementation to recompile\npostgresql and deploy on a cluster of linux boxes\nusing PVFS/GFS, creating a single-storage clustered\ndatabase? Or better yet, has someone done it?\n\nHello all,\n\nI am interested in the possibility of constructing a\npostgresql cluster of machines. While I am not particularly\nwell informed, I believe this is not possible using\nthe MOSIX system due to the following implementation features\nof postgresql:\n\n- shared memory to implement locking/synchronization\n between the multiple backend processes\n\n- passing of the socket down to the forked backend process\n\nWhile perhaps the ongoing projects of Network RAM and\nmigratable sockets would somehow help, I suspect they\ndo not fully solve the problem.\n\nMy idea is to try and recompile/hack postgresql to use\none of the DSM/PVM libraries, with a socket proxy\nimplementation to allow backend processes to be\ndistributed on a cluster. Additionally, to use\na parallel filesystem, likely PVFS since it doesn't\nrequire specialized storage and network hardware,\nto distribute the data on the storage of each node.\n\nThere are likely many other issues which I am not\naware of, which I would like to be informed about\nif anybody knows them. Particularly, is there any\nreason that a clustered filesystem like PVFS would not\nprovide a necessary filesystem feature?\n\nI am interested in such a solution since I would like\nto create a rather large database (500GB+), on which\nI would like to use the ability to embed the R language\nin postgresql to perform statistical calculations on\nthe data (REmbeddedPostgres).\n\nThese calculations are performed on the backend, and will\nlikely be intensive. In my mind, such a solution would\nallow me to add new nodes to the system which provide\nboth additional storage and CPU. 
Initially the backends\ncould be crudely balanced by a round-robin system, or\nperhaps by MOSIX.\n\nI am not interested in using a replication solution since\nthis would require multiple copies of the data. Also, I\nam not that worried about a decrease in IO performance,\nsince the tradeoff is CPU and storage scalability at once.\n\nDue to my usage of R and philosophical beliefs, I am not\ninterested in using any commercial solution.\n\nMy basic question is: is my idea possible,\nand if so, how difficult do people think it would be to\nimplement?\n\nAs I have mentioned, I am not that well informed on the\ndetails of the postgresql implementation, therefore\nI would greatly appreciate any advice or opinions on this\nidea.\n\nThank you in advance,\n\nBruno\n",
"msg_date": "Wed, 8 Aug 2001 12:38:21 -0500",
"msg_from": "Bruno.White@ubsw.com",
"msg_from_op": true,
"msg_subject": "postgresql cluster?"
}
] |
[
{
"msg_contents": "I've discovered a bug in ALTER TABLE behaviour when it comes to\nrenaming a view.\n\nI'm not sure if renaming a view is supported but Postgres will let you\ndo it with ALTER TABLE aview RENAME TO aview2; SELECT operations still\nwork on the resulting view after this command but a dump or \\d aview2\nwill now print out :\n\noldplumbing=# \\d t\n View \"t\"\n Attribute | Type | Modifier\n------------+-------------------+----------\n ?column? | text |\n address | character varying |\n builder | character varying |\n subdiv | character varying |\n plan_# | character varying |\n sched_date | date |\n plan_id | integer |\nView definition: Not a view\n\nThis is obviously not correct.\n\n-- \nchalk slayer",
"msg_date": "Wed, 8 Aug 2001 13:48:55 -0500",
"msg_from": "Ashley Clark <aclark@ghoti.org>",
"msg_from_op": true,
"msg_subject": "Bug with ALTER TABLE"
},
{
"msg_contents": "* Ashley Clark in \"Bug with ALTER TABLE\" dated 2001/08/08 13:48 wrote:\n\n<snip>\n\nDuh, forgot the version #\n\n version\n-------------------------------------------------------------\n PostgreSQL 7.1 on i686-pc-linux-gnu, compiled by GCC 2.95.2\n\n-- \nchalk slayer",
"msg_date": "Wed, 8 Aug 2001 13:53:36 -0500",
"msg_from": "Ashley Clark <aclark@ghoti.org>",
"msg_from_op": true,
"msg_subject": "Re: Bug with ALTER TABLE"
},
{
"msg_contents": "I can confirm this is a bug. The line:\n\n> View definition: Not a view\n\nis the incorrect part.\n\n> I've discovered a bug in ALTER TABLE behaviour when it comes to\n> renaming a view.\n> \n> I'm not sure if renaming a view is supported but Postgres will let you\n> do it with ALTER TABLE aview RENAME TO aview2; SELECT operations still\n> work on the resulting view after this command but a dump or \\d aview2\n> will now print out :\n> \n> oldplumbing=# \\d t\n> View \"t\"\n> Attribute | Type | Modifier\n> ------------+-------------------+----------\n> ?column? | text |\n> address | character varying |\n> builder | character varying |\n> subdiv | character varying |\n> plan_# | character varying |\n> sched_date | date |\n> plan_id | integer |\n> View definition: Not a view\n> \n> This is obviously not correct.\n> \n> -- \n> chalk slayer\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 8 Aug 2001 15:21:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Bug with ALTER TABLE"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I an confirm this is a bug. The line:\n\nPresumably, ALTER RENAME is forgetting to rename the ON SELECT rule\nassociated with the view. The actual use of the rule is driven by\nOID-based lookups and isn't affected, but I bet that psql tries to\nlook up the rule by name.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Aug 2001 17:41:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] Bug with ALTER TABLE "
},
{
"msg_contents": "Ashley Clark <aclark@ghoti.org> writes:\n> I've discovered a bug in ALTER TABLE behaviour when it comes to\n> renaming a view.\n\nI've committed a fix for this.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Aug 2001 17:38:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug with ALTER TABLE "
}
] |
[
{
"msg_contents": "Hi,\n\nI'm about to announce a release of OpenFTS search engine.\n\nShort blurb:\n\n OpenFTS (Open Source Full Text Search engine)\n is an advanced PostgreSQL-based search engine\n that provides online indexing of data and relevance\n ranking for database searching. Close integration\n with database allows use of metadata to restrict\n search results.\n\nThis is actually what we use at fts.postgresql.org and\nseveral other sites. Perl version is available for download,\nTCL version (with AOL server support) will be available soon -\nactually it just needs to wrap a release.\n\nThe OpenFTS project web site - http://openfts.sourceforge.net/\n\nOpenFTS team:\n Oleg Bartunov - Project Manager\n Teodor Sigaev - Principal Developer\n Daniel Wickstrom - Developer of TCL Version\n Neophytos Demetriou - Documentation Writer\n\nCopyright:\n\n OpenFTS is Copyright 2000-2001 XWare and\n licensed under the GNU General Public License,\n version 2 (June 1991). This means you can use it\n and modify it in any way you want. If you choose to\n redistribute OpenFTS, you must do so under the\n terms of the GNU license.\n\n\tRegards,\n\t\tOleg\n\nPS.\n\nMarc,\n\nI didn't subscribe to announce list. I think it's worth to\nput link somewhere to OpenFTS web site\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\n\n",
"msg_date": "Thu, 9 Aug 2001 19:15:06 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "OpenFTS (Open Source Full Text Search engine) pre-announce"
},
{
"msg_contents": "\nGREAT! We need to get this information to people so they know it is\navailable. It is a major feature that few people know exists.\n\n\n> Hi,\n> \n> I'm about to announce a release of OpenFTS search engine.\n> \n> Short blurb:\n> \n> OpenFTS (Open Source Full Text Search engine)\n> is an advanced PostgreSQL-based search engine\n> that provides online indexing of data and relevance\n> ranking for database searching. Close integration\n> with database allows use of metadata to restrict\n> search results.\n> \n> This is actually what we use at fts.postgresql.org and\n> several other sites. Perl version is available for download,\n> TCL version (with AOL server support) will be available soon -\n> actually it just needs to wrap a release.\n> \n> The OpenFTS project web site - http://openfts.sourceforge.net/\n> \n> OpenFTS team:\n> Oleg Bartunov - Project Manager\n> Teodor Sigaev - Principal Developer\n> Daniel Wickstrom - Developer of TCL Version\n> Neophytos Demetriou - Documentation Writer\n> \n> Copyright:\n> \n> OpenFTS is Copyright 2000-2001 XWare and\n> licensed under the GNU General Public License,\n> version 2 (June 1991). This means you can use it\n> and modify it in any way you want. If you choose to\n> redistribute OpenFTS, you must do so under the\n> terms of the GNU license.\n> \n> \tRegards,\n> \t\tOleg\n> \n> PS.\n> \n> Marc,\n> \n> I didn't subscribe to announce list. 
I think it's worth to\n> put link somewhere to OpenFTS web site\n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 9 Aug 2001 12:34:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OpenFTS (Open Source Full Text Search engine) pre-announce"
},
{
"msg_contents": "Maybe it would be worth creating a \"PostgreSQL Extensions\" section on\ntechdocs.postgresql.org?\n\n+ Justin\n\n\nBruce Momjian wrote:\n> \n> GREAT! We need to get this information to people so they know it is\n> available. It is a major feature that few people know exists.\n> \n> > Hi,\n> >\n> > I'm about to announce a release of OpenFTS search engine.\n> >\n> > Short blurb:\n> >\n> > OpenFTS (Open Source Full Text Search engine)\n> > is an advanced PostgreSQL-based search engine\n> > that provides online indexing of data and relevance\n> > ranking for database searching. Close integration\n> > with database allows use of metadata to restrict\n> > search results.\n> >\n> > This is actually what we use at fts.postgresql.org and\n> > several other sites. Perl version is available for download,\n> > TCL version (with AOL server support) will be available soon -\n> > actually it just needs to wrap a release.\n> >\n> > The OpenFTS project web site - http://openfts.sourceforge.net/\n> >\n> > OpenFTS team:\n> > Oleg Bartunov - Project Manager\n> > Teodor Sigaev - Principal Developer\n> > Daniel Wickstrom - Developer of TCL Version\n> > Neophytos Demetriou - Documentation Writer\n> >\n> > Copyright:\n> >\n> > OpenFTS is Copyright 2000-2001 XWare and\n> > licensed under the GNU General Public License,\n> > version 2 (June 1991). This means you can use it\n> > and modify it in any way you want. If you choose to\n> > redistribute OpenFTS, you must do so under the\n> > terms of the GNU license.\n> >\n> > Regards,\n> > Oleg\n> >\n> > PS.\n> >\n> > Marc,\n> >\n> > I didn't subscribe to announce list. 
I think it's worth to\n> > put link somewhere to OpenFTS web site\n> >\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> >\n> >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Fri, 10 Aug 2001 11:57:47 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: OpenFTS (Open Source Full Text Search engine) pre-announce"
},
{
"msg_contents": "> Maybe it would be worth creating a \"PostgreSQL Extensions\" section on\n> techdocs.postgresql.org?\n> \n\nDon't we have one on the main site in the User's Lounge?\n\t\n\tPostgreSQL In Real World\n\t\n\t PostgreSQL enhancements \n\t Interfacing PostgreSQL with other software \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 9 Aug 2001 22:20:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OpenFTS (Open Source Full Text Search engine) pre-announce"
},
{
"msg_contents": "Cool.\n\nI forgot about that.\n\n:(\n\n+ Justin\n\n\nBruce Momjian wrote:\n> \n> > Maybe it would be worth creating a \"PostgreSQL Extensions\" section on\n> > techdocs.postgresql.org?\n> >\n> \n> Don't we have one on the main site in the User's Lounge?\n> \n> PostgreSQL In Real World\n> \n> PostgreSQL enhancements\n> Interfacing PostgreSQL with other software\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Fri, 10 Aug 2001 12:26:02 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: OpenFTS (Open Source Full Text Search engine) pre-announce"
},
{
"msg_contents": "Doh! Guess that makes our work on contrib/fulltextindex a waste of time,\nhuh?\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> Sent: Friday, 10 August 2001 12:34 AM\n> To: Oleg Bartunov\n> Cc: Pgsql Hackers; scrappy@hub.org\n> Subject: Re: [HACKERS] OpenFTS (Open Source Full Text Search engine)\n> pre-announce\n>\n>\n>\n> GREAT! We need to get this information to people so they know it is\n> available. It is a major feature that few people know exists.\n>\n>\n> > Hi,\n> >\n> > I'm about to announce a release of OpenFTS search engine.\n> >\n> > Short blurb:\n> >\n> > OpenFTS (Open Source Full Text Search engine)\n> > is an advanced PostgreSQL-based search engine\n> > that provides online indexing of data and relevance\n> > ranking for database searching. Close integration\n> > with database allows use of metadata to restrict\n> > search results.\n> >\n> > This is actually what we use at fts.postgresql.org and\n> > several other sites. Perl version is available for download,\n> > TCL version (with AOL server support) will be available soon -\n> > actually it just needs to wrap a release.\n> >\n> > The OpenFTS project web site - http://openfts.sourceforge.net/\n> >\n> > OpenFTS team:\n> > Oleg Bartunov - Project Manager\n> > Teodor Sigaev - Principal Developer\n> > Daniel Wickstrom - Developer of TCL Version\n> > Neophytos Demetriou - Documentation Writer\n> >\n> > Copyright:\n> >\n> > OpenFTS is Copyright 2000-2001 XWare and\n> > licensed under the GNU General Public License,\n> > version 2 (June 1991). This means you can use it\n> > and modify it in any way you want. If you choose to\n> > redistribute OpenFTS, you must do so under the\n> > terms of the GNU license.\n> >\n> > \tRegards,\n> > \t\tOleg\n> >\n> > PS.\n> >\n> > Marc,\n> >\n> > I didn't subscribe to announce list. I think it's worth to\n> > put link somewhere to OpenFTS web site\n> >\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> >\n> >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n",
"msg_date": "Fri, 10 Aug 2001 10:45:32 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: OpenFTS (Open Source Full Text Search engine) pre-announce"
},
{
"msg_contents": ">>>>> \"OB\" == Oleg Bartunov <oleg@sai.msu.su> writes:\n\n OB> Hi,\n OB> I'm about to announce a release of OpenFTS search engine.\n\n OB> Short blurb:\n\n OB> OpenFTS (Open Source Full Text Search engine)\n OB> is an advanced PostgreSQL-based search engine\n OB> that provides online indexing of data and relevance\n OB> ranking for database searching. Close integration\n OB> with database allows use of metadata to restrict\n OB> search results.\n\n OB> This is actually what we use at fts.postgresql.org and\n OB> several other sites. Perl version is available for download,\n OB> TCL version (with AOL server support) will be available soon -\n OB> actually it just needs to wrap a release.\n\n OB> The OpenFTS project web site - http://openfts.sourceforge.net/\n\nSorry, but I have received:\n\nNot Found\n\nThe requested URL / was not found on this server.\n\n\nApache/1.3.19 Server at openfts.sourceforge.net Port 80\n\n-- \nAnatoly K. Lasareff Email: tolik@aaanet.ru \n http://tolikus.hq.aaanet.ru:8080\n",
"msg_date": "10 Aug 2001 10:21:19 +0400",
"msg_from": "tolik@aaanet.ru (Anatoly K. Lasareff)",
"msg_from_op": false,
"msg_subject": "Re: OpenFTS (Open Source Full Text Search engine) pre-announce"
},
{
"msg_contents": "Hi Anatoly,\n\nJust tried it, it works for me.\n\nWould it possibly be your browser is broken?\n\nIf not, maybe you got to it during an update, etc. Does it work for you\nnow?\n\nRegards and best wishes,\n\nJustin Clift\n\n\n\"Anatoly K. Lasareff\" wrote:\n> \n> >>>>> \"OB\" == Oleg Bartunov <oleg@sai.msu.su> writes:\n> \n> OB> Hi,\n> OB> I'm about to announce a release of OpenFTS search engine.\n> \n> OB> Short blurb:\n> \n> OB> OpenFTS (Open Source Full Text Search engine)\n> OB> is an advanced PostgreSQL-based search engine\n> OB> that provides online indexing of data and relevance\n> OB> ranking for database searching. Close integration\n> OB> with database allows use of metadata to restrict\n> OB> search results.\n> \n> OB> This is actually what we use at fts.postgresql.org and\n> OB> several other sites. Perl version is available for download,\n> OB> TCL version (with AOL server support) will be available soon -\n> OB> actually it just needs to wrap a release.\n> \n> OB> The OpenFTS project web site - http://openfts.sourceforge.net/\n> \n> Sorry, but I have received:\n> \n> Not Found\n> \n> The requested URL / was not found on this server.\n> \n> Apache/1.3.19 Server at openfts.sourceforge.net Port 80\n> \n> --\n> Anatoly K. Lasareff Email: tolik@aaanet.ru\n> http://tolikus.hq.aaanet.ru:8080\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n",
"msg_date": "Fri, 10 Aug 2001 17:02:02 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: OpenFTS (Open Source Full Text Search engine) pre-announce"
},
{
"msg_contents": "Now all is ok, it was a temporary difficulty.\n\nAnatoly K. Lasareff wrote:\n>>>>>>\"OB\" == Oleg Bartunov <oleg@sai.msu.su> writes:\n>>>>>>\n> \n> OB> Hi,\n> OB> I'm about to announce a release of OpenFTS search engine.\n> \n> OB> Short blurb:\n> \n> OB> OpenFTS (Open Source Full Text Search engine)\n> OB> is an advanced PostgreSQL-based search engine\n> OB> that provides online indexing of data and relevance\n> OB> ranking for database searching. Close integration\n> OB> with database allows use of metadata to restrict\n> OB> search results.\n> \n> OB> This is actually what we use at fts.postgresql.org and\n> OB> several other sites. Perl version is available for download,\n> OB> TCL version (with AOL server support) will be available soon -\n> OB> actually it just needs to wrap a release.\n> \n> OB> The OpenFTS project web site - http://openfts.sourceforge.net/\n> \n> Sorry, but I have received:\n> \n> Not Found\n> \n> The requested URL / was not found on this server.\n> \n> \n> Apache/1.3.19 Server at openfts.sourceforge.net Port 80\n> \n> \n\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n",
"msg_date": "Fri, 10 Aug 2001 11:33:10 +0400",
"msg_from": "Teodor Sigaev <teodor@stack.net>",
"msg_from_op": false,
"msg_subject": "Re: OpenFTS (Open Source Full Text Search engine) pre-announce"
},
{
"msg_contents": "On Thu, 9 Aug 2001, Bruce Momjian wrote:\n\n> > Maybe it would be worth creating a \"PostgreSQL Extensions\" section on\n> > techdocs.postgresql.org?\n> >\n>\n> Don't we have one on the main site in the User's Lounge?\n>\n> \tPostgreSQL In Real World\n>\n> \t PostgreSQL enhancements\n> \t Interfacing PostgreSQL with other software\n\nYes and I plan on adding it in when I get back in town.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 10 Aug 2001 05:45:11 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: OpenFTS (Open Source Full Text Search engine) pre-announce"
},
{
"msg_contents": "On Fri, 10 Aug 2001, Vince Vielhaber wrote:\n\n> On Thu, 9 Aug 2001, Bruce Momjian wrote:\n>\n> > > Maybe it would be worth creating a \"PostgreSQL Extensions\" section on\n> > > techdocs.postgresql.org?\n> > >\n> >\n> > Don't we have one on the main site in the User's Lounge?\n> >\n> > \tPostgreSQL In Real World\n> >\n> > \t PostgreSQL enhancements\n> > \t Interfacing PostgreSQL with other software\n>\n> Yes and I plan on adding it in when I get back in town.\n\nCorrection, I had a few minutes so I added it to the Projects Relating\nto PostgreSQL page.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 10 Aug 2001 06:38:08 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: OpenFTS (Open Source Full Text Search engine) pre-announce"
},
{
"msg_contents": "\nI am not sure how they compare. I have gotten little information on the\nOpenFTS project.\n\n> Doh! Guess that makes our work on contrib/fulltextindex a waste of time,\n> huh?\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> > Sent: Friday, 10 August 2001 12:34 AM\n> > To: Oleg Bartunov\n> > Cc: Pgsql Hackers; scrappy@hub.org\n> > Subject: Re: [HACKERS] OpenFTS (Open Source Full Text Search engine)\n> > pre-announce\n> >\n> >\n> >\n> > GREAT! We need to get this information to people so they know it is\n> > available. It is a major feature that few people know exists.\n> >\n> >\n> > > Hi,\n> > >\n> > > I'm about to announce a release of OpenFTS search engine.\n> > >\n> > > Short blurb:\n> > >\n> > > OpenFTS (Open Source Full Text Search engine)\n> > > is an advanced PostgreSQL-based search engine\n> > > that provides online indexing of data and relevance\n> > > ranking for database searching. Close integration\n> > > with database allows use of metadata to restrict\n> > > search results.\n> > >\n> > > This is actually what we use at fts.postgresql.org and\n> > > several other sites. Perl version is available for download,\n> > > TCL version (with AOL server support) will be available soon -\n> > > actually it just needs to wrap a release.\n> > >\n> > > The OpenFTS project web site - http://openfts.sourceforge.net/\n> > >\n> > > OpenFTS team:\n> > > Oleg Bartunov - Project Manager\n> > > Teodor Sigaev - Principal Developer\n> > > Daniel Wickstrom - Developer of TCL Version\n> > > Neophytos Demetriou - Documentation Writer\n> > >\n> > > Copyright:\n> > >\n> > > OpenFTS is Copyright 2000-2001 XWare and\n> > > licensed under the GNU General Public License,\n> > > version 2 (June 1991). This means you can use it\n> > > and modify it in any way you want. If you choose to\n> > > redistribute OpenFTS, you must do so under the\n> > > terms of the GNU license.\n> > >\n> > > \tRegards,\n> > > \t\tOleg\n> > >\n> > > PS.\n> > >\n> > > Marc,\n> > >\n> > > I didn't subscribe to announce list. I think it's worth to\n> > > put link somewhere to OpenFTS web site\n> > >\n> > > _____________________________________________________________\n> > > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > > Sternberg Astronomical Institute, Moscow University (Russia)\n> > > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > > phone: +007(095)939-16-83, +007(095)939-23-83\n> > >\n> > >\n> > >\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 2: you can get off all lists at once with the unregister command\n> > > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> > >\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 3 Sep 2001 16:17:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OpenFTS (Open Source Full Text Search engine) pre-announce"
}
] |
[
{
"msg_contents": "Hi,\n\nwe've got some spare time (well, we hope so) and\nwe're about to document the GiST interface - a sort of guide with intro,\nexamples and programming notes. I think it's worth including\nin the documentation for the 7.2 release. What's the procedure for\ntaking part in the documentation project? What should be the style\nof such a guide? GiST is a very powerful feature of postgres and many people\njust don't know how to use it.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 9 Aug 2001 19:57:52 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "GiST docs"
},
{
"msg_contents": "Oleg Bartunov writes:\n\n> we got a spare time (well, hope so) and\n> we'are about to document GiST interface - sort of guide with intro,\n> examples and programming notices. I think it's worth to include it\n> into documentation for 7.2 release. What's a procedure for\n> taking part in documentation project ? What should be a style\n> of such guide ? GiST is very powerful feature of postgres and many people\n> just don't know how to use it.\n\nThere is already a GiST chapter in the programmer's guide but it looks\nlike you might as well scratch that and replace it.\n\nhttp://www.de.postgresql.org/devel-corner/docs/postgres/gist.html\n\nSubmitting documentation works just like code, but we tend to keep\ndiscussion on the pgsql-docs list.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 9 Aug 2001 20:08:12 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: GiST docs"
}
] |
[
{
"msg_contents": "Is it a good idea to allow creating tables with columns of type \"unknown\"?\nThis is generally a user error with an ambiguous CREATE TABLE AS and there\nisn't a whole lot you can do with such a table. Should it be an error?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 9 Aug 2001 21:15:02 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Creating tables with unknown type columns"
}
] |
[
{
"msg_contents": "We have realized that allowing per-column locale would be difficult with\nthe existing C library interface, because setlocale() is a pretty\ninefficient operation. But I think what we could allow, and what would be\nfairly useful, is the choice between the plain C locale and one \"real\"\nlocale of choice (as determined by initdb) on a column or datum basis.\n\nOne possible way to implement this is to set or clear a bit somewhere in\nthe header of each text (char, varchar) type datum, depending on what you\nwant. Basically, this bit is going to be part of the atttypmod. Then the\ncomparison operators would use strcoll or strcmp, depending on the choice,\nand similarly for other functions that are locale-aware.\n\nDoes anyone see a problem with this, aside from the fact that this breaks\nthe internal representation of the character types (which might have to\nhappen anyway if we ever want to do something in this direction)?\n\n(If this is an acceptable plan then we could tie this in with the proposed\nwork of making the LIKE optimization work. We wouldn't have to make up\nnew ugly-named operators, we'd just have to do a bit of plain old type\ncasting.)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 10 Aug 2001 00:38:24 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Vague idea for allowing per-column locale"
},
{
"msg_contents": "On Fri, 10 Aug 2001, Peter Eisentraut wrote:\n\n> We have realized that allowing per-column locale would be difficult with\n> the existing C library interface, because setlocale() is a pretty\n> inefficient operation. But I think what we could allow, and what would be\n> fairly useful, is the choice between the plain C locale and one \"real\"\n> locale of choice (as determined by initdb) on a column or datum basis.\n\nYes, the C library locale notion is somewhat broken, or at best limited,\nimho. It doesn't fit at all well with the needs of a server that can have\nclients in different locales, or even clients in the same place who have\ndifferent locale preferences. I guess it's a pre-ubiquitous-internet\nconcept. If you keep stretching this idea beyond the model that it\ncomfortably supports, life will become steadily more difficult, and it may\nbe better to give up on that model altogether.\n\nA different idea related to this is to treat different text\nrepresentations as different data types. In the case of different\nmulti-byte text representations, this definitely makes sense; in the case\nof just different locales for essentially the same character set it might\nnot be as obviously beneficial, but still merits some consideration, imho.\n\nFor converting, say, utf8 to euc-jp, it would be nice to be able to make\nuse of all the existing infrastructure that PostgreSQL has for type\nconversion and type identification. It'd be even nicer if you could make a\ntable that has, say, one column utf8 (or utf32 even), one column euc-jp\nand one column shift-jis, so that you could cache format conversions.\n\n> One possible way to implement this is to set or clear a bit somewhere in\n> the header of each text (char, varchar) type datum, depending on what you\n> want. Basically, this bit is going to be part of the atttypmod. 
Then the\n> comparison operators would use strcoll or strcmp, depending on the choice,\n> and similarly for other functions that are locale-aware.\n\nUnder my grand plan one would have to implement comparison operators for\neach data type (as well as all the other things one has to implement for a\ndata type); then it should Just Work, because postgres would know what\ncomparison to use for each column.\n\n> Does anyone see a problem with this, aside from the fact that this breaks\n> the internal representation of the character types (which might have to\n> happen anyway if we ever want to do something in this direction)?\n\n> (If this is an acceptable plan then we could tie this in with the proposed\n> work of making the LIKE optimization work. We wouldn't have to make up\n> new ugly-named operators, we'd just have to do a bit of plain old type\n> casting.)\n\nThe separate data types notion would work here also, since one could\ndeclare a column to be of plain vanilla ascii data type, with all\ncomparisons just a simple comparison of numerical values.\n\nBTW, how does postgres store multibyte text? As char * with a multibyte\nencoding? As 16 bit or 32 bit code points? I should of course just look at\nthe code and find out...:) I guess the former, from Peter's earlier\ncomments. It does seem to me that using an explicit 32 bit representation\n(or at least providing that as an option) would make life easier in many\nways.\n\nTim\n\n-- \n-----------------------------------------------\nTim Allen tim@proximity.com.au\nProximity Pty Ltd http://www.proximity.com.au/\n http://www4.tpg.com.au/users/rita_tim/\n\n",
"msg_date": "Fri, 10 Aug 2001 09:36:19 +1000 (EST)",
"msg_from": "Tim Allen <tim@proximity.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Vague idea for allowing per-column locale"
},
{
"msg_contents": "> (If this is an acceptable plan then we could tie this in with the proposed\n> work of making the LIKE optimization work. We wouldn't have to make up\n> new ugly-named operators, we'd just have to do a bit of plain old type\n> casting.)\n\nIf we are thinking about improvements at this level, why not go ahead\nand reopen the discussion of how to do SQL9x national character sets,\ncollations, etc etc. istm that these will offer a solution for which the\ncurrent issues are a (hopefully large) subset.\n\nWe could use the type system to support this (my current preference);\nothers have suggested that this might be too heavy to be usable and had\nalternate suggestions.\n\nIssues with SQL9x include:\n\no character set/collation syntax for string literals\n\no internal representation\n\no appropriate operators and functions for these sets/collations\n\no I/O conventions between client and server (may use the current\nscheme?)\n\no allowing these alternate character sets for table names (or wherever\nallowed by SQL9x). How to expose, for example, equality operators to\nallow internal PostgreSQL operation: is our current use of strcmp()\nenough?\n\nComments?\n\n - Thomas\n",
"msg_date": "Thu, 09 Aug 2001 23:50:42 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Vague idea for allowing per-column locale"
},
{
"msg_contents": "On Fri, 10 Aug 2001, Tim Allen wrote:\n\n> > We have realized that allowing per-column locale would be difficult with\n> > the existing C library interface, because setlocale() is a pretty\n> > inefficient operation. But I think what we could allow, and what would be\n\n> Yes, the C library locale notion is somewhat broken, or at best limited,\n> imho. It doesn't fit at all well with the needs of a server that can have\n> clients in different locales, or even clients in the same place who have\n> different locale preferences.\n\nThis may be of interest:\n\nhttp://www.cygnus.com/~drepper/tllocale.ps.bz2\n\nIt's not clear if glibc implements it yet, nor if people like\nthe interface, but it seems like a nice, easily-wrapped and\nstubbed option, which might come for free on at least one OS.\n\nMatthew.\n\n",
"msg_date": "Fri, 10 Aug 2001 12:49:48 +0100 (BST)",
"msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>",
"msg_from_op": false,
"msg_subject": "Re: Vague idea for allowing per-column locale"
},
{
"msg_contents": "Tim Allen writes:\n\n> For converting, say, utf8 to euc-jp, it would be nice to be able to make\n> use of all the existing infrastructure that PostgreSQL has for type\n> conversion and type identification.\n\nUnfortunately, that infrastructure is rather poorly suited for handling\narbitrary types.\n\n> It'd be even nicer if you could make a table that has, say, one column\n> utf8 (or utf32 even), one column euc-jp and one column shift-jis, so\n> that you could cache format conversions.\n\nThis might be a nice thing to show off but I'm not sure about the\npractical use. There's Unicode that you can use if you want to mix and\nmatch on the server, and the ability to convert the character set between\nclient and server so the right thing shows up on everyone's screen.\n\n> Under my grand plan one would have to implement comparison operators for\n> each data type (as well as all the other things one has to implement for a\n> data type);\n\nYou do realize that this would be hundreds, if not thousands, of things?\n\n> BTW, how does postgres store multibyte text? As char * with a multibyte\n> encoding?\n\nYes.\n\n> As 16 bit or 32 bit code points? I should of course just look at\n> the code and find out...:) I guess the former, from Peter's earlier\n> comments. It does seem to me that using an explicit 32 bit representation\n> (or at least providing that as an option) would make life easier in many\n> ways.\n\nI think the storage size penalty would be prohibitive.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 10 Aug 2001 17:24:13 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Vague idea for allowing per-column locale"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> o character set/collation syntax for string literals\n\nI think it's <character value expression> COLLATE <collation name>. It's\nanother one of these things that break our nicely organized system. :-(\n\n> o internal representation\n>\n> o appropriate operators and functions for these sets/collations\n\nHmm, there aren't a lot of things you can do with them, no?\n\n> o I/O conventions between client and server (may use the current\n> scheme?)\n\nAs long as it works I don't see a problem. I have said before that we\nshould allow using the iconv interface because it's more powerful.\n\n> o allowing these alternate character sets for table names (or wherever\n> allowed by SQL9x). How to expose, for example, equality operators to\n> allow internal PostgreSQL operation: is our current use of strcmp()\n> enough?\n\nNo. This could be tricky to do though without sacrificing the designed\nefficiency of the \"name\" type. For instance, how could we store the\nwhich-locale-is-it information?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 10 Aug 2001 17:35:39 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Vague idea for allowing per-column locale"
},
{
"msg_contents": "> > It'd be even nicer if you could make a table that has, say, one column\n> > utf8 (or utf32 even), one column euc-jp and one column shift-jis, so\n> > that you could cache format conversions.\n> \n> This might be a nice thing to show off but I'm not sure about the\n> practical use. There's Unicode that you can use if you want to mix and\n> match on the server, and the ability to convert the character set between\n> client and server so the right thing shows up on everyone's screen.\n\nStoring everything as Unicode is not a good idea, actually. First,\nUnicode tends to consume more storage space than other character\nsets. For example, UTF-8, one of the most commonly used encodings for\nUnicode, consumes 3 bytes for Japanese characters, while SJIS only\nconsumes 2 bytes. Second, a round trip conversion between Unicode and\nother character sets is not always possible. Third, the sorting\nissue. There is no convenient way to sort Unicode correctly.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 14 Aug 2001 10:02:41 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Vague idea for allowing per-column locale"
},
{
"msg_contents": "On Tue, 14 Aug 2001, Tatsuo Ishii wrote:\n\n> Storing everything as Unicode is not a good idea, actually. First,\n> Unicode tends to consume more storage space than other character\n> sets. For example, UTF-8, one of the most commonly used encoding for\n> Unicode consumes 3 bytes for Japanese characters, while SJIS only\n> consumes 2 bytes. Second, a round trip converison between Unicode and\n> other character sets is not always possible. Third, sorting\n> issue. There is no convenient way to sort Unicode correctly.\n\nUTF-16 can handle most Japanese characters in two bytes, afaict. Generally\nit seems that utf8 encodes European text more efficiently on average,\nwhereas utf16 is better for most Asian languages. I may be mistaken, but I\nwas under the impression that sorting of unicode characters was a solved\nproblem. The IBM ICU class library (which does have a C interface), for\nexample, claims to provide everything you need to sort unicode text in\nvarious locales, and uses utf16 internally:\n\nhttp://oss.software.ibm.com/developerworks/opensource/icu/project/index.html\n\nThe licence is, I gather, the X licence, which presumably is compatible\nenough with BSD; not that I would necessarily advocate building this into\npostgres at a fundamental level, but it demonstrates that it can be done.\n\nNote that I'm not speaking from experience here, I've just read the docs,\nand a book on unicode, never actually performed a Japanese-language (or\nany other non-English language) sort, so no need to take me too seriously\n:).\n\n> Tatsuo Ishii\n\nTim\n\n-- \n-----------------------------------------------\nTim Allen tim@proximity.com.au\nProximity Pty Ltd http://www.proximity.com.au/\n http://www4.tpg.com.au/users/rita_tim/\n\n",
"msg_date": "Tue, 14 Aug 2001 12:36:19 +1000 (EST)",
"msg_from": "Tim Allen <tim@proximity.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Vague idea for allowing per-column locale"
},
{
"msg_contents": "> > Storing everything as Unicode is not a good idea, actually. First,\n> > Unicode tends to consume more storage space than other character\n> > sets. For example, UTF-8, one of the most commonly used encoding for\n> > Unicode consumes 3 bytes for Japanese characters, while SJIS only\n> > consumes 2 bytes. Second, a round trip converison between Unicode and\n> > other character sets is not always possible. Third, sorting\n> > issue. There is no convenient way to sort Unicode correctly.\n> \n> UTF-16 can handle most Japanese characters in two bytes, afaict. Generally\n> it seems that utf8 encodes European text more efficiently on average,\n> whereas utf16 is better for most Asian languages.\n\nThe same can be said of UCS-2. Most multibyte characters could be\ntwo bytes within UCS-2. The problem with both UTF-16 and UCS-4 is that\ndata may contain NULL bytes.\n\n> I may be mistaken, but I\n> was under the impression that sorting of unicode characters was a solved\n> problem. The IBM ICU class library (which does have a C interface), for\n> example, claims to provide everything you need to sort unicode text in\n> various locales, and uses utf16 internally:\n\nInteresting. Thanks for the info. I will look into this.\n\nBTW, the \"round trip conversion problem\" still needs to be addressed.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 14 Aug 2001 14:01:30 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Vague idea for allowing per-column locale"
}
] |
[
{
"msg_contents": "It seems that sometimes uncommitted data (dirty data?) can be seen\nin a PL/pgSQL function.\n\nBelow is a sample script to reproduce the problem: If you execute\n\"SELECT myftest(1)\" concurrently, you will see the subselect in the\nSELECT INTO... will produce:\n\nERROR: More than one tuple returned by a subselect used as an expression.\n\nThis is odd, since the column i is a primary key, and should never have\nduplicate values.\n\nIf you comment out the SELECT INTO... statement, you could see a line\nsomething like:\n\nNOTICE: ctid (0,5) xmin 645188 xmax 645190 cmin 2 cmax 2\n\nThis is odd too, since xmax > 0 or cmax > 0 should never happen with\nvisible tuples, in my understanding.\n\nI see these in 7.0.3, 7.1.2 and current.\n--\nTatsuo Ishii\n\n----------------------------------------------------------------------\nDROP TABLE t1;\nCREATE TABLE t1 (i INT PRIMARY KEY);\nDROP FUNCTION myftest(INT);\nCREATE FUNCTION myftest(INT)\nRETURNS INT\nAS '\n DECLARE myid INT;\n DECLARE rec RECORD;\n key ALIAS FOR $1;\n BEGIN\n UPDATE t1 SET i = 1 WHERE i = 1;\n SELECT INTO tid,myid ctid,i FROM t1 WHERE i = (SELECT i FROM t1 WHERE i = 1);\n FOR rec IN SELECT ctid,xmin,xmax,cmin,cmax from t1 LOOP\n RAISE NOTICE ''ctid % xmin % xmax % cmin % cmax %'', rec.ctid,rec.xmin,rec.xmax,rec.cmin,rec.cmax;\n END LOOP;\n RETURN 0;\n END;\n '\n LANGUAGE 'plpgsql';\n",
"msg_date": "Fri, 10 Aug 2001 15:20:45 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "PL/pgSQL bug?"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> It seems that sometimes uncommitted data (dirty data?) can be seen\n> in a PL/pgSQL function.\n>\n> Below is a sample script to reproduce the problem: If you execute\n> \"SELECT myftest(1)\" concurrently, you will see the subselect in the\n> SELECT INTO... will produce:\n>\n> ERROR: More than one tuple returned by a subselect used as an expression.\n>\n> This is odd, since the column i is a primary key, and should never have\n> duplicate values.\n>\n> If you comment out the SELECT INTO... statement, you could see a line\n> something like:\n>\n> NOTICE: ctid (0,5) xmin 645188 xmax 645190 cmin 2 cmax 2\n>\n> This is odd too, since xmax > 0 or cmax > 0 should never happen with\n> visible tuples, in my understanding.\n>\n> I see these in 7.0.3, 7.1.2 and current.\n\n If that's the case, it must be a general problem with SPI\n that'll apply to any procedural language as well as any\n user-defined C function using SPI.\n\n When scans or functions are involved, PL/pgSQL uses SPI\n functionality to evaluate the expression.\n\n\nJan\n\n> --\n> Tatsuo Ishii\n>\n> ----------------------------------------------------------------------\n> DROP TABLE t1;\n> CREATE TABLE t1 (i INT PRIMARY KEY);\n> DROP FUNCTION myftest(INT);\n> CREATE FUNCTION myftest(INT)\n> RETURNS INT\n> AS '\n> DECLARE myid INT;\n> DECLARE rec RECORD;\n> key ALIAS FOR $1;\n> BEGIN\n> UPDATE t1 SET i = 1 WHERE i = 1;\n> SELECT INTO tid,myid ctid,i FROM t1 WHERE i = (SELECT i FROM t1 WHERE i = 1);\n> FOR rec IN SELECT ctid,xmin,xmax,cmin,cmax from t1 LOOP\n> RAISE NOTICE ''ctid % xmin % xmax % cmin % cmax %'', rec.ctid,rec.xmin,rec.xmax,rec.cmin,rec.cmax;\n> END LOOP;\n> RETURN 0;\n> END;\n> '\n> LANGUAGE 'plpgsql';\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Fri, 10 Aug 2001 08:38:24 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug?"
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> NOTICE: ctid (0,5) xmin 645188 xmax 645190 cmin 2 cmax 2\n> This is odd too, since xmax > 0 or cmax > 0 should never happen with\n> visible tuples, in my understanding.\n\nThat's what the docs presently say, but they're in error --- nonzero\nxmax could represent a not-yet-committed deleting xact (or one that\ndid commit, but not in your snapshot); or it could be from a deleting\nxact that rolled back. \n\nI get \n\nregression=# SELECT myftest(1);\nNOTICE: Error occurred while executing PL/pgSQL function myftest\nNOTICE: line 6 at SQL statement\nERROR: parser: parse error at or near \"ctid\"\nregression=#\n\nso there's something wrong with the function as posted.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Aug 2001 09:54:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug? "
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> If that's the case, it must be a general problem with SPI\n> that'll apply to any procedural language as well as user\n> defined C function using SPI.\n\nNot necessarily. It looks to me like someone is forgetting to do a\nCommandCounterIncrement() between plpgsql statements. Is this something\nthat plpgsql should do, or should SPI do it? Not clear.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Aug 2001 10:06:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug? "
},
{
"msg_contents": "I said:\n> Not necessarily. It looks to me like someone is forgetting to do a\n> CommandCounterIncrement() between plpgsql statements.\n\nIt's worse than that: someone is caching an out-of-date command counter\nvalue.\n\nLoad the attached variant of Tatsuo's script, and then do this:\n\nregression=# SELECT myftest(1);\nNOTICE: i 1 ctid (0,30) xmin 5687 xmax 0 cmin 2 cmax 0\nNOTICE: i 2 ctid (0,31) xmin 5687 xmax 0 cmin 4 cmax 0\n myftest\n---------\n 0\n(1 row)\n\nregression=# SELECT myftest(1);\nNOTICE: i 1 ctid (0,32) xmin 5688 xmax 0 cmin 1 cmax 0\n myftest\n---------\n 0\n(1 row)\n\nregression=#\n\nNeat eh? What happened to the i=2 line? If you start a fresh backend,\nthe first execution of the function works.\n\n\t\t\tregards, tom lane\n\n\nDROP TABLE t1;\nCREATE TABLE t1 (i INT PRIMARY KEY);\ninsert into t1 values(1);\n\nDROP FUNCTION myftest(INT);\nCREATE FUNCTION myftest(INT)\nRETURNS INT\nAS '\n DECLARE myid INT;\n DECLARE rec RECORD;\n key ALIAS FOR $1;\n BEGIN\n UPDATE t1 SET i = 1 WHERE i = 1;\n\tINSERT INTO t1 VALUES (2);\n FOR rec IN SELECT i,ctid,xmin,xmax,cmin,cmax from t1 LOOP\n RAISE NOTICE ''i % ctid % xmin % xmax % cmin % cmax %'', rec.i,rec.ctid,rec.xmin,rec.xmax,rec.cmin,rec.cmax;\n END LOOP;\n SELECT INTO myid i FROM t1 WHERE i = (SELECT i FROM t1 WHERE i = 1);\n\tDELETE FROM t1 WHERE i = 2;\n RETURN 0;\n END;\n '\n LANGUAGE 'plpgsql';\n",
"msg_date": "Fri, 10 Aug 2001 10:15:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug? "
},
{
"msg_contents": "Okay, I understand Tatsuo's original complaint, and I don't think it's\na bug exactly --- it's MVCC/Read Committed operating as designed. Using\nthe variant script I just posted and two *freshly started* backends, do:\n\nBackend 1:\n\nregression=# begin;\nBEGIN\nregression=# SELECT myftest(1);\nNOTICE: i 1 ctid (0,42) xmin 5701 xmax 0 cmin 3 cmax 0\nNOTICE: i 2 ctid (0,43) xmin 5701 xmax 0 cmin 5 cmax 0\n myftest\n---------\n 0\n(1 row)\n\nBackend 2:\n\nregression=# SELECT myftest(1);\n\n[ backend 2 hangs; now go back and commit backend 1 ]\n\nNOTICE: i 1 ctid (0,40) xmin 5696 xmax 5701 cmin 1 cmax 3\nNOTICE: i 1 ctid (0,44) xmin 5702 xmax 0 cmin 2 cmax 0\nNOTICE: i 2 ctid (0,45) xmin 5702 xmax 0 cmin 4 cmax 0\nNOTICE: Error occurred while executing PL/pgSQL function myftest\nNOTICE: line 10 at select into variables\nERROR: More than one tuple returned by a subselect used as an expression.\nregression=#\n\nThe second backend finds that it wants to update the same row backend 1\ndid, so it waits to see if 1 commits. After the commit, it decides it\ncan do the update. Now, what will we see later in that same\ntransaction? SELECT will consider the original row (ctid 40, here)\nto be still good --- it was deleted, sure enough, but by a transaction\nthat has not committed as far as the current transaction is concerned.\nAnd the row inserted earlier in our own transaction is good too. So\nyou see two rows with i=1. The only way to avoid this is to use\nSerializable mode, which would mean that backend 2 would've gotten an\nerror on its UPDATE.\n\nHowever, if you do the same experiment a second time in the same\nbackends, you get different results. This I think is a SPI bug:\nSPI is doing CommandCounterIncrements at bizarre times, and in\nparticular you get fewer CommandCounterIncrements while planning\nand executing a plpgsql function than you do while re-executing\nan already-planned one. Not sure yet exactly how it should be\nchanged.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Aug 2001 10:43:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug? "
},
{
"msg_contents": "I said:\n> SPI is doing CommandCounterIncrements at bizarre times, and in\n> particular you get fewer CommandCounterIncrements while planning\n> and executing a plpgsql function than you do while re-executing\n> an already-planned one.\n\ns/fewer/more/ ... guess I'm not fully awake yet ... but anyway,\nSPI's handling of CommandCounterIncrement is certainly broken.\nParticularly for cursors --- a CCI for every FETCH will not do,\nyou want the whole scan to be run with the same commandId.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Aug 2001 10:46:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug? "
},
{
"msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > NOTICE: ctid (0,5) xmin 645188 xmax 645190 cmin 2 cmax 2\n> > This is odd too, since xmax > 0 or cmax > 0 should never happen with\n> > visible tuples, in my understanding.\n> \n> That's what the docs presently say, but they're in error --- nonzero\n> xmax could represent a not-yet-committed deleting xact (or one that\n> did commit, but not in your snapshot); or it could be from a deleting\n> xact that rolled back. \n> \n> I get \n> \n> regression=# SELECT myftest(1);\n> NOTICE: Error occurred while executing PL/pgSQL function myftest\n> NOTICE: line 6 at SQL statement\n> ERROR: parser: parse error at or near \"ctid\"\n> regression=#\n> \n> so there's something wrong with the function as posted.\n> \n> \t\t\tregards, tom lane\n> \n\nSorry, please try the new one attached below.\n\nDROP TABLE t1;\nCREATE TABLE t1 (i INT PRIMARY KEY);\nINSERT INTO t1 VALUES(1);\nDROP FUNCTION myftest(INT);\nCREATE FUNCTION myftest(INT)\nRETURNS INT\nAS '\n DECLARE myid INT;\n DECLARE rec RECORD;\n key ALIAS FOR $1;\n BEGIN\n UPDATE t1 SET i = 1 WHERE i = 1;\n SELECT INTO myid i FROM t1 WHERE i = (SELECT i FROM t1 WHERE i = 1);\n FOR rec IN SELECT ctid,xmin,xmax,cmin,cmax from t1 LOOP\n RAISE NOTICE ''ctid % xmin % xmax % cmin % cmax %'', rec.ctid,rec.xmin,rec.xmax,rec.cmin,rec.cmax;\n END LOOP;\n RETURN 0;\n END;\n '\n LANGUAGE 'plpgsql';\n",
"msg_date": "Sat, 11 Aug 2001 09:17:26 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: PL/pgSQL bug? "
},
{
"msg_contents": "> Okay, I understand Tatsuo's original complaint, and I don't think it's\n> a bug exactly --- it's MVCC/Read Committed operating as designed. Using\n> the variant script I just posted and two *freshly started* backends, do:\n\nI don't think so. The problem seems to be specific to PL/pgSQL (or SPI?).\n\n> The second backend finds that it wants to update the same row backend 1\n> did, so it waits to see if 1 commits. After the commit, it decides it\n> can do the update. Now, what will we see later in that same\n> transaction? SELECT will consider the original row (ctid 40, here)\n> to be still good --- it was deleted, sure enough, but by a transaction\n> that has not committed as far as the current transaction is concerned.\n> And the row inserted earlier in our own transaction is good too. So\n> you see two rows with i=1. The only way to avoid this is to use\n> Serializable mode, which would mean that backend 2 would've gotten an\n> error on its UPDATE.\n\nIf your theory were right, I should see the same effect without using\nPL/pgSQL. But I see the following in a session using psql (original row's\nctid = (0,2)):\n\n[T1] begin;\n[T2] begin;\n[T1] update t1 set i = 1 where i = 1;\n[T2] update t1 set i = 1 where i = 1; <-- waiting for T1 to commit/abort\n[T1] end;\n[T2] select ctid, i from t1;\ntest=# select ctid,i from t1;\n ctid | i \n-------+---\n (0,4) | 1\n(1 row)\n\nSo I only see one row from the last select in T2?\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 11 Aug 2001 09:19:03 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: PL/pgSQL bug? "
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> If your theory were right, I should see the same effect without using\n> PL/pgSQL. But I see the following in a session using psql (original row's\n> ctid = (0,2)):\n\n> [T1] begin;\n> [T2] begin;\n> [T1] update t1 set i = 1 where i = 1;\n> [T2] update t1 set i = 1 where i = 1; <-- waiting for T1 to commit/abort\n> [T1] end;\n> [T2] select ctid, i from t1;\n> test=# select ctid,i from t1;\n> ctid | i \n> -------+---\n> (0,4) | 1\n> (1 row)\n\n> So I only see one row from the last select in T2?\n\nI believe the reason for this is that in Read Committed mode,\neach separate query from the client computes a new snapshot (see\nSetQuerySnapshot calls in postgres.c). So, when your\n\"select ctid, i from t1\" query executes, it computes a snapshot\nthat says T1 is committed, and then it doesn't see the row left\nover from T1. On the other hand, your plpgsql function operates\ninside a single client query and so it's using just one QuerySnapshot.\n\nOne way to make the results equivalent is to compute a new QuerySnapshot\nfor each SPI query. Quite aside from the cost of doing so, I do not\nthink it makes sense, considering that the previous QuerySnapshot must\nbe restored when we return from the function. Do we really want\nfunctions to see transaction status different from what's seen outside\nthe function call? I doubt it.\n\nThe other way to make the results the same is to omit the\nSetQuerySnapshot calls for successive client-issued queries in one\ntransaction. This could perhaps be defended on logical grounds,\nbut considering your complaint I'm not sure it would make people\nhappier.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Aug 2001 20:40:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug? "
},
{
"msg_contents": "> I believe the reason for this is that in Read Committed mode,\n> each separate query from the client computes a new snapshot (see\n> SetQuerySnapshot calls in postgres.c). So, when your\n> \"select ctid, i from t1\" query executes, it computes a snapshot\n> that says T1 is committed, and then it doesn't see the row left\n> over from T1. On the other hand, your plpgsql function operates\n> inside a single client query and so it's using just one QuerySnapshot.\n\nOh I see. So the \"problem\" is not specific to PL/pgSQL, but exists in\nall our procedural languages.\n\n> One way to make the results equivalent is to compute a new QuerySnapshot\n> for each SPI query. Quite aside from the cost of doing so, I do not\n> think it makes sense, considering that the previous QuerySnapshot must\n> be restored when we return from the function. Do we really want\n> functions to see transaction status different from what's seen outside\n> the function call? I doubt it.\n> \n> The other way to make the results the same is to omit the\n> SetQuerySnapshot calls for successive client-issued queries in one\n> transaction. This could perhaps be defended on logical grounds,\n> but considering your complaint I'm not sure it would make people\n> happier.\n\nOk, maybe another workaround would be adding a check for cmax in\nthe subselect:\n\n SELECT INTO myid i FROM t1 WHERE i = (SELECT i FROM t1 WHERE i = 1);\n\nto make sure that cmax > 0?\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 11 Aug 2001 11:19:39 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: PL/pgSQL bug? "
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Ok, maybe another workaround would be adding a check for cmax in\n> the subselect:\n\n> SELECT INTO myid i FROM t1 WHERE i = (SELECT i FROM t1 WHERE i = 1);\n\n> to make sure that cmax > 0?\n\nHuh? How would that help?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Aug 2001 11:51:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug? "
},
{
"msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > Ok, maybe another workaround would be adding a check for cmax in\n> > the subselect:\n> \n> > SELECT INTO myid i FROM t1 WHERE i = (SELECT i FROM t1 WHERE i = 1);\n> \n> > to make sure that cmax > 0?\n> \n> Huh? How would that help?\n\nAccording to the doc, tuples with cmax > 0 should not be visible to\nthe current transaction, no?\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 12 Aug 2001 10:48:05 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: PL/pgSQL bug? "
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> According to the doc, tuples with cmax > 0 should not be visible to\n> the current transaction, no?\n\nThe docs are wrong --- my mistake originally, and in fact I just fixed\nit in current sources. cmax != 0 only indicates that someone tried to\ndelete the tuple; not that the someone ever committed, much less that\ntheir commit should be visible to you under MVCC rules. (Also, I\nbelieve the command counter starts at 0, so this test would only catch\ndeletes that weren't the first command in their transaction, anyway.\nTesting xmax != 0 would avoid that issue, but not the fundamental\nproblem of commit status.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Aug 2001 22:27:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug? "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane\n> \n> I believe the reason for this is that in Read Committed mode,\n> each separate query from the client computes a new snapshot (see\n> SetQuerySnapshot calls in postgres.c). So, when your\n> \"select ctid, i from t1\" query executes, it computes a snapshot\n> that says T1 is committed, and then it doesn't see the row left\n> over from T1. On the other hand, your plpgsql function operates\n> inside a single client query and so it's using just one QuerySnapshot.\n> \n> One way to make the results equivalent is to compute a new QuerySnapshot\n> for each SPI query. Quite aside from the cost of doing so, I do not\n> think it makes sense, considering that the previous QuerySnapshot must\n> be restored when we return from the function. Do we really want\n> functions to see transaction status different from what's seen outside\n> the function call?\n\nYes I do.\n\n> I doubt it.\n> \n> The other way to make the results the same is to omit the\n> SetQuerySnapshot calls for successive client-issued queries in one\n> transaction.\n\nHow is that different from SERIALIZABLE mode?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Mon, 13 Aug 2001 00:08:48 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: PL/pgSQL bug? "
},
{
"msg_contents": "> > One way to make the results equivalent is to compute a new QuerySnapshot\n> > for each SPI query. Quite aside from the cost of doing so, I do not\n> > think it makes sense, considering that the previous QuerySnapshot must\n> > be restored when we return from the function. Do we really want\n> > functions to see transaction status different from what's seen outside\n> > the function call?\n> \n> Yes I do.\n\nMe too. Current behavior of procedural languages seems hard to\nunderstand for users. \n\nBTW, why must we restore the previous QuerySnapshot? We already break\nthe rule (if it's a rule). For example, COPY TO calls SetQuerySnapshot\n(see tcop/utility.c). So, below produces \"ERROR: More than one tuple\nreturned by a subselect used as an expression\":\n\nDROP TABLE t1;\nCREATE TABLE t1 (i INT PRIMARY KEY);\nDROP FUNCTION myftest(INT);\nCREATE FUNCTION myftest(INT)\nRETURNS INT\nAS '\n UPDATE t1 SET i = 1 WHERE i = 1;\n SELECT i FROM t1 WHERE i = (SELECT i FROM t1 WHERE i = 1);\n '\n LANGUAGE 'sql';\n\nwhile below does not throw an error:\n\nDROP TABLE t1;\nCREATE TABLE t1 (i INT PRIMARY KEY);\nDROP FUNCTION myftest(INT);\nCREATE FUNCTION myftest(INT)\nRETURNS INT\nAS '\n UPDATE t1 SET i = 1 WHERE i = 1;\n COPY t1 TO ''/tmp/t1.data'';\n SELECT i FROM t1 WHERE i = (SELECT i FROM t1 WHERE i = 1);\n '\n LANGUAGE 'sql';\n",
"msg_date": "Mon, 13 Aug 2001 10:07:34 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: PL/pgSQL bug? "
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> BTW, why must we restore the previous QuerySnapshot? We already break\n> the rule (if it's a rule).\n\nNo we don't. There are no SetQuerySnapshot calls occurring *within*\na query. An example of why that would be unacceptable: consider\n\n\tselect myfunc(f1) from table where f2 = 42;\n\nSuppose executing myfunc() causes an unrestored SetQuerySnapshot call.\nThen, if other transactions are busy changing f2 values, the set of\nrows that this query returns could depend on the order in which rows\nare visited --- since whether it thinks a row with f2 = 42 is visible\nmight depend on whether any previous rows had been matched (causing\nmyfunc() to be evaluated). For that matter, it could depend on the\nform of the query plan used --- in some plans, myfunc() might be called\nwhile the scan is in progress, in others not till afterward.\n\n> For example, COPY TO calls SetQuerySnapshot\n> (see tcop/utility.c).\n\nThat's just because postgres.c doesn't do it automatically for utility\nstatements. There's still just one SetQuerySnapshot per query.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Aug 2001 22:17:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug? "
},
{
"msg_contents": "> No we don't. There are no SetQuerySnapshot calls occurring *within*\n> a query. An example of why that would be unacceptable: consider\n> \n> \tselect myfunc(f1) from table where f2 = 42;\n> \n> Suppose executing myfunc() causes an unrestored SetQuerySnapshot call.\n> Then, if other transactions are busy changing f2 values, the set of\n> rows that this query returns could depend on the order in which rows\n> are visited --- since whether it thinks a row with f2 = 42 is visible\n> might depend on whether any previous rows had been matched (causing\n> myfunc() to be evaluated). For that matter, it could depend on the\n> form of the query plan used --- in some plans, myfunc() might be called\n> while the scan is in progress, in others not till afterward.\n\nIf so, FROM clause-less SELECT (select myfunc();) might be ok.\n\n> > For example, COPY TO calls SetQuerySnapshot\n> > (see tcop/utility.c).\n> \n> That's just because postgres.c doesn't do it automatically for utility\n> statements. There's still just one SetQuerySnapshot per query.\n\nI'm confused. In my example:\n\nCREATE FUNCTION myftest(INT)\nRETURNS INT\nAS '\n UPDATE t1 SET i = 1 WHERE i = 1;\n COPY t1 TO ''/tmp/t1.data'';\n SELECT i FROM t1 WHERE i = (SELECT i FROM t1 WHERE i = 1);\n '\n LANGUAGE 'sql';\n\nWhen COPY is invoked in the function, I thought SetQuerySnapshot is\ncalled.\n\nSo \"SELECT myftest(1);\" would call SetQuerySnapshot twice, no?\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 13 Aug 2001 11:46:25 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: PL/pgSQL bug? "
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> CREATE FUNCTION myftest(INT)\n> RETURNS INT\n> AS '\n> UPDATE t1 SET i = 1 WHERE i = 1;\n> COPY t1 TO ''/tmp/t1.data'';\n> SELECT i FROM t1 WHERE i = (SELECT i FROM t1 WHERE i = 1);\n> '\n> LANGUAGE 'sql';\n\n> When COPY is invoked in the function, I thought SetQuerySnapshot is\n> called.\n\nHmm, I think you are right. This means that calling SetQuerySnapshot\nin ProcessUtility is the *wrong* place to do it; or that there should\nbe additional logic to suppress the call in this context. IMHO, anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Aug 2001 23:04:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug? "
},
{
"msg_contents": " Tatsuo Ishii wrote:\n>\n> > > One way to make the results equivalent is to compute a new QuerySnapshot\n> > > for each SPI query. Quite aside from the cost of doing so, I do not\n> > > think it makes sense, considering that the previous QuerySnapshot must\n> > > be restored when we return from the function. Do we really want\n> > > functions to see transaction status different from what's seen outside\n> > > the function call?\n> >\n> > Yes I do.\n>\n> Me too. Current behavior of procedural languages seems hard to\n> understand for users.\n>\n\nYes, it's a significant point. I've pointed out the impropriety\nof using a unique snapshot throughout a\nfunction call when this kind of bug(?) was reported.\nWho could take care of it in writing PL/pgSQL?\n\n> BTW, why must we restore the previous QuerySnapshot?\n\nFor example, in a case such as\n select .., some_func(item1), .. from a_table;\nSELECT always uses the same snapshot for all its\ninternal fetch operations, so it seems reasonable\nfor each some_func() to be called in the same snapshot.\nIt's possible for a function to use a unique snapshot\nif there are only SELECT statements in the function,\nbut it's impossible if there are UPDATE/DELETE or\nSELECT .. FOR UPDATE statements etc. We should be\ncareful to handle such functions which have side\neffects. IMHO we shouldn't call such functions or\nshouldn't expect consistent results with the use\nof such functions. OTOH\n select some_func(..);\nis in reality a procedure call, not a function call.\nThere seems to be no necessity to restore the previous\nQuerySnapshot when calling procedures, and we could\ncall any function as a procedure.\n\nregards,\nHiroshi Inoue\n\n",
"msg_date": "Mon, 13 Aug 2001 17:01:58 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: PL/pgSQL bug? "
},
{
"msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> It's possible for a function to use a unique snapshot\n> if there are only SELECT statements in the function\n> but it's impossible if there are UPDATE/DELETE or\n> SELECT .. FOR UPDATE statements etc.\n\nYou are confusing snapshots (which determine visibility of the results\nof OTHER transactions) with command-counter incrementing (which\ndetermines visibility of the results of OUR OWN transaction). I agree\nthat plpgsql's handling of command-counter changes is broken, but it\ndoes not follow that sprinkling the code with SetQuerySnapshot is wise.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Aug 2001 09:38:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug? "
},
{
"msg_contents": "Jan Wieck <JanWieck@yahoo.com> writes:\n>> that plpgsql's handling of command-counter changes is broken, but it\n>> does not follow that sprinkling the code with SetQuerySnapshot is wise.\n\n> Why do you blame PL/pgSQL for that? I don't see a single\n> reference to the command counter from the PL/pgSQL sources.\n> All it does is using SPI. So does \"using SPI\" by itself count\n> as \"boken\"?\n\nSorry: SPI is broken, not plpgsql. Does that make you feel better?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Aug 2001 10:05:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug? "
},
{
"msg_contents": "Tom Lane wrote:\n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> > It's possible for a function to use a unique snapshot\n> > if there are only SELECT statements in the function\n> > but it's impossible if there are UPDATE/DELETE or\n> > SELECT .. FOR UPDATE statements etc.\n>\n> You are confusing snapshots (which determine visibility of the results\n> of OTHER transactions) with command-counter incrementing (which\n> determines visibility of the results of OUR OWN transaction). I agree\n> that plpgsql's handling of command-counter changes is broken, but it\n> does not follow that sprinkling the code with SetQuerySnapshot is wise.\n\n Why do you blame PL/pgSQL for that? I don't see a single\n reference to the command counter from the PL/pgSQL sources.\n All it does is using SPI. So does \"using SPI\" by itself count\n as \"broken\"?\n\n If so, uh-oh, referential integrity is using SPI ...\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 13 Aug 2001 10:11:23 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug?"
},
{
"msg_contents": "Tom Lane wrote:\n> Jan Wieck <JanWieck@yahoo.com> writes:\n> >> that plpgsql's handling of command-counter changes is broken, but it\n> >> does not follow that sprinkling the code with SetQuerySnapshot is wise.\n>\n> > Why do you blame PL/pgSQL for that? I don't see a single\n> > reference to the command counter from the PL/pgSQL sources.\n> > All it does is using SPI. So does \"using SPI\" by itself count\n> > as \"broken\"?\n>\n> Sorry: SPI is broken, not plpgsql. Does that make you feel better?\n\n Not that it \"makes my day\". But it makes me feel better,\n thanks.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 13 Aug 2001 11:16:28 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> > It's possible for a function to use a unique snapshot\n> > if there are only SELECT statements in the function\n> > but it's impossible if there are UPDATE/DELETE or\n> > SELECT .. FOR UPDATE statements etc.\n> \n> You are confusing\n\nNo.\n\n> snapshots (which determine visibility of the results\n> of OTHER transactions)\n\nYes.\n\n> with command-counter incrementing (which\n> determines visibility of the results of OUR OWN transaction).\n\nYes. \n\n> I agree\n> that plpgsql's handling of command-counter changes is broken,\n\nProbably yes, but\n\n> but it\n> does not follow that sprinkling the code with SetQuerySnapshot is wise.\n> \n\nShould both the command counter and snapshots be changed\nproperly? Please explain to me why/how we could do with\nno snapshot change in read committed mode.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 14 Aug 2001 01:56:02 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> > It's possible for a function to use a unique snapshot\n> > if there are only SELECT statements in the function\n> > but it's impossible if there are UPDATE/DELETE or\n> > SELECT .. FOR UPDATE statements etc.\n> \n> You are confusing snapshots (which determine visibility of the results\n> of OTHER transactions)\n\nPlease note that the meaning of snapshots for statements\nother than SELECT is different from that for SELECT.\nFor example,\n1) The result of SELECT .. FOR UPDATE may be different\n from that of SELECT for the same snapshot.\n2) Once a snapshot given, the result of SELECT is decisive\n but that of SELECT .. FOR UPDATE isn't.\n\nSELECT and SELECT .. FOR UPDATE are alike in appearance\nbut quite different in nature. There's no real snapshot\nfor SELECT .. FOR UPDATE in the current implementation.\nI suggested the implementation with the real snapshot\n(without the word snapshot though) once before 6.5.\nThe implementation seems hard and the possibility isn't\nconfirmed. Even though the implementation is possible,\nit requires the repeated computation of snapshot until\nthe consistency is satisfied, and so arbitrary snapshots\naren't allowed.\nThere's little meaning for SELECT statements and subsequent\nother statements like UPDATE/DELETE/SELECT .. FOR UPDATE to\nuse the same snapshot in read committed mode. \nThere's no consistency with the current handling of snapshots\ninside a function call.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 16 Aug 2001 07:14:56 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > NOTICE: ctid (0,5) xmin 645188 xmax 645190 cmin 2 cmax 2\n> > This is odd too, since xmax > 0 or cmax > 0 should never happen with\n> > visible tuples, in my understanding.\n> \n> That's what the docs presently say, but they're in error --- nonzero\n> xmax could represent a not-yet-committed deleting xact (or one that\n> did commit, but not in your snapshot); or it could be from a deleting\n> xact that rolled back.\n\nor it can come from referential integrity triggers:\n\nhannu=# create table parent(parid integer primary key);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index\n'parent_pkey' for table 'parent'\nCREATE\nhannu=# create table child(cldid integer references parent on update\ncascade);\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\ncheck(s)\nCREATE\nhannu=# insert into parent values(1);\nINSERT 20652 1\nhannu=# insert into child values(1);\nINSERT 20653 1\nhannu=# update parent set parid=2;\nUPDATE 1\nhannu=# select xmin,xmax,cmin,cmax,parid from parent;\n xmin | xmax | cmin | cmax | parid \n------+------+------+------+-------\n 731 | 731 | 0 | 4 | 2\n(1 row)\n\n\n\nNow I have a question: if xmax is not used in determining tuple\nvisibility \n(as I had assumed earlier) then what is ? How does postgres decide that\na \ntuple is deleted ?\n\n--------------------\nHannu\n",
"msg_date": "Tue, 21 Aug 2001 11:36:09 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug?"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Tom Lane wrote:\n>> That's what the docs presently say, but they're in error --- nonzero\n>> xmax could represent a not-yet-committed deleting xact (or one that\n>> did commit, but not in your snapshot); or it could be from a deleting\n>> xact that rolled back.\n\n> or it can come from referential integrity triggers:\n\nMmm, yeah, SELECT FOR UPDATE uses xmax to record the identity of a\ntransaction that has a row locked for update. In this case the xact\nhasn't actually deleted the old row yet (and may never do so), but xmax\nis set as though it has.\n\n> Now I have a question: if xmax is not used in determining tuple\n> visibility (as I had assumed earlier) then what is ?\n\nThere are additional status bits in each tuple (t_infomask) that\ndistinguish these various situations. The xmax field alone doesn't\ntell you much, since you can't interpret it without context.\n\nI'm not sure why we bother to make xmin/xmax/etc visible to\napplications. They're really of no value to an app AFAICS.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Aug 2001 09:42:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <hannu@tm.ee> writes:\n> > Tom Lane wrote:\n> >> That's what the docs presently say, but they're in error --- nonzero\n> >> xmax could represent a not-yet-committed deleting xact (or one that\n> >> did commit, but not in your snapshot); or it could be from a deleting\n> >> xact that rolled back.\n> \n> > or it can come from referential integrity triggers:\n> \n> Mmm, yeah, SELECT FOR UPDATE uses xmax to record the identity of a\n> transaction that has a row locked for update. In this case the xact\n> hasn't actually deleted the old row yet (and may never do so), but xmax\n> is set as though it has.\n> \n> > Now I have a question: if xmax is not used in determining tuple\n> > visibility (as I had assumed earlier) then what is ?\n> \n> There are additional status bits in each tuple (t_infomask) that\n> distinguish these various situations. The xmax field alone doesn't\n> tell you much, since you can't interpret it without context.\n\nAs I understood it, it should tell the trx id that invalidated this\ntuple, no ?\n\nIf you must write t_infomask in the tuple anyhow, then why not clean up\nxmax \non abort ?\n\n> I'm not sure why we bother to make xmin/xmax/etc visible to\n> applications. They're really of no value to an app AFAICS.\n> \n\nI guess they used to be of value at the time when time travel was\npossible \nand people did use xmax for documented purposes, i.e. recording tuple's\nlifetime \nand not for \"other\" stuff, especially without cleaning up after trx\nabort ;)\n\nI agree that they are losing their utility as we are moving away from\nthe \noriginal notion of transaction ids (and oids) as something permanent\nthat could \nbe used for time travel or system auditing and recommending people who\nneed such \nfeatures to reimplement those at application level, with triggers and\nexplicitly\ndefined fields.\n\n------------------\nHannu\n",
"msg_date": "Tue, 21 Aug 2001 16:20:14 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug?"
},
{
"msg_contents": "\nIs this something that still needs fixing?\n\n\n> > I believe the reason for this is that in Read Committed mode,\n> > each separate query from the client computes a new snapshot (see\n> > SetQuerySnapshot calls in postgres.c). So, when your\n> > \"select ctid, i from t1\" query executes, it computes a snapshot\n> > that says T1 is committed, and then it doesn't see the row left\n> > over from T1. On the other hand, your plpgsql function operates\n> > inside a single client query and so it's using just one QuerySnaphot.\n> \n> Oh I see. So the \"problem\" is not specific to PL/pgSQL, but exists in\n> all our procedual languages.\n> \n> > One way to make the results equivalent is to compute a new QuerySnapshot\n> > for each SPI query. Quite aside from the cost of doing so, I do not\n> > think it makes sense, considering that the previous QuerySnapshot must\n> > be restored when we return from the function. Do we really want\n> > functions to see transaction status different from what's seen outside\n> > the function call? I doubt it.\n> > \n> > The other way to make the results the same is to omit the\n> > SetQuerySnapshot calls for successive client-issued queries in one\n> > transaction. This could perhaps be defended on logical grounds,\n> > but considering your complaint I'm not sure it would make people\n> > happier.\n> \n> Ok, maybe another workaround might be adding a checking for cmax in\n> the subselect:\n> \n> SELECT INTO myid i FROM t1 WHERE i = (SELECT i FROM t1 WHERE i = 1);\n> \n> to make sure that cmax > 0?\n> --\n> Tatsuo Ishii\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 6 Sep 2001 21:46:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug?"
},
{
"msg_contents": "\nI am not sure if there is a TODO item here, but if there is, please let\nme know. Thanks.\n\n\n> > -----Original Message-----\n> > From: Tom Lane\n> > \n> > I believe the reason for this is that in Read Committed mode,\n> > each separate query from the client computes a new snapshot (see\n> > SetQuerySnapshot calls in postgres.c). So, when your\n> > \"select ctid, i from t1\" query executes, it computes a snapshot\n> > that says T1 is committed, and then it doesn't see the row left\n> > over from T1. On the other hand, your plpgsql function operates\n> > inside a single client query and so it's using just one QuerySnaphot.\n> > \n> > One way to make the results equivalent is to compute a new QuerySnapshot\n> > for each SPI query. Quite aside from the cost of doing so, I do not\n> > think it makes sense, considering that the previous QuerySnapshot must\n> > be restored when we return from the function. Do we really want\n> > functions to see transaction status different from what's seen outside\n> > the function call?\n> \n> Yes I do.\n> \n> > I doubt it.\n> > \n> > The other way to make the results the same is to omit the\n> > SetQuerySnapshot calls for successive client-issued queries in one\n> > transaction.\n> \n> What's different from SERIALIZABLE mode ?\n> \n> regards,\n> Hiroshi Inoue\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 Sep 2001 12:16:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug?"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n>\n> I am not sure if there is a TODO item here, but if there is, please let\n> me know. Thanks.\n\nThere seems to be no consensus on this item currently.\nIMHO both the command counters and the snapshots\nin a function should advance except the leading SELECT\nstatements.\nNote that SELECT .. FOR UPDATE statements aren't\nSELECT statements.\n\nregards,\nHiroshi Inoue\n\n>\n>\n> > > -----Original Message-----\n> > > From: Tom Lane\n> > >\n> > > I believe the reason for this is that in Read Committed mode,\n> > > each separate query from the client computes a new snapshot (see\n> > > SetQuerySnapshot calls in postgres.c). So, when your\n> > > \"select ctid, i from t1\" query executes, it computes a snapshot\n> > > that says T1 is committed, and then it doesn't see the row left\n> > > over from T1. On the other hand, your plpgsql function operates\n> > > inside a single client query and so it's using just one QuerySnaphot.\n> > >\n> > > One way to make the results equivalent is to compute a new\n> QuerySnapshot\n> > > for each SPI query. Quite aside from the cost of doing so, I do not\n> > > think it makes sense, considering that the previous QuerySnapshot must\n> > > be restored when we return from the function. Do we really want\n> > > functions to see transaction status different from what's seen outside\n> > > the function call?\n> >\n> > Yes I do.\n> >\n> > > I doubt it.\n> > >\n> > > The other way to make the results the same is to omit the\n> > > SetQuerySnapshot calls for successive client-issued queries in one\n> > > transaction.\n> >\n> > What's different from SERIALIZABLE mode ?\n> >\n> > regards,\n> > Hiroshi Inoue\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n",
"msg_date": "Sat, 8 Sep 2001 11:27:13 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL bug?"
}
] |
[
{
"msg_contents": "I would like to know if there is a way to compare the data of tables in\ndifferent databases. For example I have a table in db1 and exactly the\nsame table in db2. Is it possible to see if the contents of the two\ntables are exactly the same?\nFor example if the table(db1.customer) has a column surname and the\nfield has a value of \"Testing\" and in db2.customer the exact same row\nhas a value \"Testong\" is it possible to actually know that there is a\ndifference ? I don't actually have to know what the actual differences\nare, I must just know that there is a difference.\n\nThank you\n\nPhillip\n",
"msg_date": "Fri, 10 Aug 2001 10:59:22 +0200",
"msg_from": "Phillip F Jansen <pfj@ucs.co.za>",
"msg_from_op": true,
"msg_subject": "Comparing tables in different db's"
},
{
"msg_contents": "Phillip F Jansen wrote:\n >I would like to know if there is a way to compare the data of tables in\n >different databases. For example I have table in db1 and exactly the\n >same table in db2. Is it possible to see if the contents of the two\n >tables are exactly the same?\n >For example if the table(db1.customer) has a column surname and the\n >field has a value of \"Testing\" and in db2.customer the exact same row\n >has a value \"Testong\" is it possible to actually know that there is a\n >difference ? I don't actually have to know what the actual differences\n >are , I must just know that there is a difference.\n\nYou can't do it inside PostgreSQL. However, this shell script will do it:\n\n psql -d db1 -tc \"SELECT surname FROM customer WHERE id = 'xxx'\" >db1.out\n psql -d db2 -tc \"SELECT surname FROM customer WHERE id = 'xxx'\" >db2.out\n if ! diff db1.out db2.out >/dev/null\n then\n echo Databases differ\n fi\n rm db[12].out\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"If ye abide in me, and my words abide in you, ye shall\n ask what ye will, and it shall be done unto you.\" \n John 15:7 \n\n\n",
"msg_date": "Fri, 10 Aug 2001 11:25:22 +0100",
"msg_from": "\"Oliver Elphick\" <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Comparing tables in different db's "
},
{
"msg_contents": "\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\n> I would like to know if there is a way to compare the data of tables in\n> different databases. For example I have table in db1 and exactly the\n> same table in db2. Is it possible to see if the contents of the two\n> tables are exactly the same?\n\nI use pg_dump for my tests. Example\n\npg_dump -a -t table_name db1 > db1_dump.out\npg_dump -a -t table_name db2 > db2_dump.out\n\nThen you can use diff db1_dump.out db2_dump.out\n\nI hope this helps\n\nDarren\n\n> For example if the table(db1.customer) has a column surname and the\n> field has a value of \"Testing\" and in db2.customer the exact same row\n> has a value \"Testong\" is it possible to actually know that there is a\n> difference ? I don't actually have to know what the actual differences\n> are , I must just know that there is a difference.\n\n\n",
"msg_date": "Fri, 10 Aug 2001 13:39:56 GMT",
"msg_from": "Darren Johnson <djohnson@greatbridge.com>",
"msg_from_op": false,
"msg_subject": "Re: Comparing tables in different db's"
},
{
"msg_contents": "Darren Johnson wrote:\n\n> >>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n>\n> > I would like to know if there is a way to compare the data of tables in\n> > different databases. For example I have table in db1 and exactly the\n> > same table in db2. Is it possible to see if the contents of the two\n> > tables are exactly the same?\n>\n> I use pg_dump for my tests. Example\n>\n> pg_dump -a -t table_name db1 > db1_dump.out\n> pg_dump -a -t table_name db2 > db2_dump.out\n>\n> Then you can use diff db1_dump.out db2_dump.out\n\n(1) The output contains the OID and the owner, so I guess it won't work without stripping comments first?\n\n(2) It (still) doesn't work if you have datetime columns with more than two digits in the milliseconds field (see below).\n\nYeah, I guess this means that the usual backup strategy doesn't work either.... :-(\n\n\n --- Allan.\n\n\ntest=# create table test (a datetime);\nCREATE\ntest=# insert into test values ('2001-08-10 23:04:12.3456');\nINSERT 12760275 1\ntest=# insert into test values ('2001-08-10 23:04:12.345678');\nINSERT 12760276 1\ntest=# insert into test values ('2001-08-10 23:04:12.3456789');\nINSERT 12760277 1\ntest=# insert into test values ('2001-08-10 23:04:12.345678901234567890');\nINSERT 12760278 1\ntest=# select EXTRACT(MICROSECONDS FROM a) from test;\n date_part\n------------------\n 345599.999999999\n 345677.999999999\n 345679.000000001\n 345679.000000001\n(4 rows)\n\nbash-2.04$ pg_dump -a -t test test > /tmp/test.dmp\nbash-2.04$ cat /tmp/test.dmp\n--\n-- Selected TOC Entries:\n--\n--\n-- Data for TOC Entry ID 1 (OID 12760265)\n--\n-- Name: test Type: TABLE DATA Owner: allane\n--\n\n\n\\connect - postgres\n-- Disable triggers\nUPDATE \"pg_class\" SET \"reltriggers\" = 0 WHERE \"relname\" = 'test';\n\n\\connect - allane\nCOPY \"test\" FROM stdin;\n2001-08-10 23:04:12.35+01\n2001-08-10 23:04:12.35+01\n2001-08-10 23:04:12.35+01\n2001-08-10 23:04:12.35+01\n\\.\n\\connect - postgres\n-- Enable triggers\nUPDATE pg_class SET reltriggers = (SELECT count(*) FROM pg_trigger where pg_class.oid = tgrelid) WHERE relname = 'test';\n\n\n\n",
"msg_date": "Fri, 10 Aug 2001 23:14:46 +0100",
"msg_from": "Allan Engelhardt <allane@cybaea.com>",
"msg_from_op": false,
"msg_subject": "Re: Comparing tables in different db's"
},
{
"msg_contents": "Allan Engelhardt wrote:\n\n\n> (1) The output contains the OID and the owner, so I guess it won't work without stripping comments first?\n> \nI was using an older version of PostgreSQL which doesn't have\nthe comments, and it looks like I'll need to make the OID/owner\ncomments an option in pg_dump, once I get further along\nin the changes I am working on. In the meantime you can try\nsomething like..\npg_dump -a -t table_name db1|egrep -v \"\\(OID|Owner\" > db1_dump.out\nbut this is a hack to strip the offending comments, and\nwouldn't work in every situation.\n\n> (2) It (still) doesn't work if you have datetime columns with more than two digits in the milliseconds field (see below).\n> \nI'm not sure about this one, I need to do more investigation here. BTW what platform/OS\nare you using?\n\nDarren\n\n \n\n",
"msg_date": "Sat, 11 Aug 2001 22:32:59 -0400",
"msg_from": "Darren Johnson <djohnson@greatbridge.com>",
"msg_from_op": false,
"msg_subject": "Re: Comparing tables in different db's"
},
{
"msg_contents": "Darren Johnson wrote:\n\n> Allan Engelhardt wrote:\n>\n> > (2) It (still) doesn't work if you have datetime columns with more than two digits in the miliseconds field (see below).\n> >\n> I'm not sure about this one, I need to do more investigation here. BTW what platform/OS\n> are you using?\n\nPostgreSQL 7.1.2-4PGDG, Linux 2.4.7 on i686 SMP.\n\nAllan.\n\n",
"msg_date": "Sun, 12 Aug 2001 19:48:50 +0100",
"msg_from": "Allan Engelhardt <allane@cybaea.com>",
"msg_from_op": false,
"msg_subject": "Re: Comparing tables in different db's"
},
{
"msg_contents": "\nJustin Clift wrote:\n\n\n> If you finalise this into a decent procedure (and/or scripts), then\n> would you mind contributing them? I can place them on the\n> techdocs.postgresql.org website as a start.\n> \n\nNot at all, I plan to contribute any/all work I am\ninvolved with. This would be part of validating and\ntesting replication, but unfortunately it could be a\nwhile before we get to that stage in the current\nversion of the PostgreSQL. :(\n\nThanks,\n\nDarren\n\n",
"msg_date": "Sun, 12 Aug 2001 17:13:44 -0400",
"msg_from": "Darren Johnson <djohnson@greatbridge.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Comparing tables in different db's"
}
] |
[
{
"msg_contents": "Hi,\n\nwe present our plans to change the index AM tables following\nTom's idea (http://fts.postgresql.org/db/mw/msg.html?mid=1025731)\n\n\" ... pg_opclass should have, not just one row for each distinct opclass name,\nbut one row for each supported combination of index AM and opclass name.\"\n\nThis change would help to create indexes with keys and values\n(to be indexed) of different types. Read some discussion in thread\nhttp://fts.postgresql.org/db/mw/msg.html?mid=119796\n\nWe propose to do our changes in 2 stages for a smooth transition.\n\nI) Changes of index AM tables (pg_opclass,pg_amop)\n 1. pg_opclass:\n Add opcamid(oid) - index type identifier from pg_am\n Add opckeytype(oid) - type of key,\n if opckeytype == InvalidOid then opckeytype=opcdeftype\n (opcname,opcamid) should be unique\n Relation -> rd_att must be filled by using opckeytype, not\n opcdeftype as now!!\n Is it worth to have an index on (opcname,opcamid) ?\n\n 2. pg_amop:\n Add amopreqcheck(bool) - if TRUE, then results of check by index\n required to test with original values.\n At this stage we could determine index.islossy using amopreqcheck.\n We could avoid using index.islossy even at this stage but we need to know\n how to determine pg_amop.reqcheck in create_indexscan_plan\n (is it the right place to check ?)\n\n After first stage completed we'll have everything we need\n\nII) Removing unnecessary information - clearing system tables\n 1. remove pg_index.indislossy (see I.2) and indhaskeytype from\n pg_index\n\n 2. remove pg_amop.amopid and pg_amproc.amid (see I.1)\n Tom has suggested (http://fts.postgresql.org/db/mw/msg.html?mid=1025860)\n it might be retained for performance reasons. Tom, do you\n have a decision ?\n\nWe hope to get a first version of patch in a week.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n",
"msg_date": "Fri, 10 Aug 2001 18:36:59 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Proposal for changing of pg_opclass"
}
] |
[
{
"msg_contents": "\nIs there any value to KQSO anymore? It was for complex OR clauses. I\nthought Tom already fixed most of that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Aug 2001 13:18:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "KQSO parameter"
}
] |
[
{
"msg_contents": "Is there any value to KQSO parameter? It was for complex OR clauses. I\nthought Tom fixed most of that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Aug 2001 13:19:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "KSQO parameter"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Is there any value to KQSO parameter? It was for complex OR clauses. I\n> thought Tom fixed most of that.\n\nThe last set of tests that I did (~7.0) showed that it was still\nmarginally faster than the default approach. Not a \"must have\"\nlike it used to be for those queries, though.\n\nAs of 7.1 it is disabled anyway, because I didn't have time to update\nthe KSQO code for the new implementation of UNION and friends.\n\nWe should either update the code or remove it entirely. I still don't\nhave the time or interest to do #1, but I'm not quite ready to do #2\neither ... does anyone out there want to work on it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Aug 2001 14:02:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: KSQO parameter "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is there any value to KQSO parameter? It was for complex OR clauses. I\n> > thought Tom fixed most of that.\n> \n> The last set of tests that I did (~7.0) showed that it was still\n> marginally faster than the default approach. Not a \"must have\"\n> like it used to be for those queries, though.\n> \n> As of 7.1 it is disabled anyway, because I didn't have time to update\n> the KSQO code for the new implementation of UNION and friends.\n> \n> We should either update the code or remove it entirely. I still don't\n> have the time or interest to do #1, but I'm not quite ready to do #2\n> either ... does anyone out there want to work on it?\n\nThe following patch removes KSQO from GUC and the call to the function.\nIt also moves the main KSQO file into _deadcode. Applied.\n\nODBC folks, should I remove KSQO from the ODBC driver?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: doc/src/sgml/runtime.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/runtime.sgml,v\nretrieving revision 1.118\ndiff -c -r1.118 runtime.sgml\n*** doc/src/sgml/runtime.sgml\t15 Jun 2002 19:58:53 -0000\t1.118\n--- doc/src/sgml/runtime.sgml\t15 Jun 2002 23:27:27 -0000\n***************\n*** 760,793 ****\n </varlistentry>\n \n <varlistentry>\n- <term><varname>KSQO</varname> (<type>boolean</type>)</term>\n- <listitem>\n- <para>\n- The <firstterm>Key Set Query Optimizer</firstterm>\n- (<acronym>KSQO</acronym>) causes the query planner to convert\n- queries whose <literal>WHERE</> clause contains many OR'ed AND\n- clauses (such as <literal>WHERE (a=1 AND b=2) OR (a=2 AND b=3)\n- ...</literal>) into a union query. This method can be faster\n- than the default implementation, but it doesn't necessarily give\n- exactly the same results, since <literal>UNION</> implicitly\n- adds a <literal>SELECT DISTINCT</> clause to eliminate identical\n- output rows. <acronym>KSQO</acronym> is commonly used when\n- working with products like <productname>Microsoft\n- Access</productname>, which tend to generate queries of this\n- form.\n- </para>\n- \n- <para>\n- The <acronym>KSQO</acronym> algorithm used to be absolutely\n- essential for queries with many OR'ed AND clauses, but in\n- <productname>PostgreSQL</productname> 7.0 and later the standard\n- planner handles these queries fairly successfully; hence the\n- default is off.\n- </para>\n- </listitem>\n- </varlistentry>\n- \n- <varlistentry>\n <term><varname>RANDOM_PAGE_COST</varname> (<type>floating point</type>)</term>\n <listitem>\n <para>\n--- 760,765 ----\nIndex: src/backend/optimizer/plan/planner.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/optimizer/plan/planner.c,v\nretrieving revision 1.120\ndiff -c -r1.120 planner.c\n*** src/backend/optimizer/plan/planner.c\t13 Jun 2002 15:10:25 -0000\t1.120\n--- src/backend/optimizer/plan/planner.c\t15 Jun 2002 23:27:29 -0000\n***************\n*** 145,155 ****\n \tPlannerQueryLevel++;\n \tPlannerInitPlan = NIL;\n \n- #ifdef ENABLE_KEY_SET_QUERY\n- \t/* this should go away sometime soon */\n- \ttransformKeySetQuery(parse);\n- #endif\n- \n \t/*\n \t * Check to see if any subqueries in the rangetable can be merged into\n \t * this query.\n--- 145,150 ----\nIndex: src/backend/optimizer/prep/Makefile\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/optimizer/prep/Makefile,v\nretrieving revision 1.12\ndiff -c -r1.12 Makefile\n*** src/backend/optimizer/prep/Makefile\t31 Aug 2000 16:10:13 -0000\t1.12\n--- src/backend/optimizer/prep/Makefile\t15 Jun 2002 23:27:29 -0000\n***************\n*** 12,18 ****\n top_builddir = ../../../..\n include $(top_builddir)/src/Makefile.global\n \n! OBJS = prepqual.o preptlist.o prepunion.o prepkeyset.o\n \n all: SUBSYS.o\n \n--- 12,18 ----\n top_builddir = ../../../..\n include $(top_builddir)/src/Makefile.global\n \n! OBJS = prepqual.o preptlist.o prepunion.o\n \n all: SUBSYS.o\n \nIndex: src/backend/utils/misc/guc.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/utils/misc/guc.c,v\nretrieving revision 1.69\ndiff -c -r1.69 guc.c\n*** src/backend/utils/misc/guc.c\t17 May 2002 20:32:29 -0000\t1.69\n--- src/backend/utils/misc/guc.c\t15 Jun 2002 23:27:32 -0000\n***************\n*** 312,322 ****\n \t\t{ \"enable_hashjoin\", PGC_USERSET }, &enable_hashjoin,\n \t\ttrue, NULL, NULL\n \t},\n- \n- \t{\n- \t\t{ \"ksqo\", PGC_USERSET }, &_use_keyset_query_optimizer,\n- \t\tfalse, NULL, NULL\n- \t},\n \t{\n \t\t{ \"geqo\", PGC_USERSET }, &enable_geqo,\n \t\ttrue, NULL, NULL\n--- 312,317 ----\nIndex: src/backend/utils/misc/postgresql.conf.sample\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/utils/misc/postgresql.conf.sample,v\nretrieving revision 1.39\ndiff -c -r1.39 postgresql.conf.sample\n*** src/backend/utils/misc/postgresql.conf.sample\t15 Jun 2002 01:29:50 -0000\t1.39\n--- src/backend/utils/misc/postgresql.conf.sample\t15 Jun 2002 23:27:32 -0000\n***************\n*** 89,96 ****\n #enable_mergejoin = true\n #enable_hashjoin = true\n \n- #ksqo = false\n- \n #effective_cache_size = 1000\t# default in 8k pages\n #random_page_cost = 4\n #cpu_tuple_cost = 0.01\n--- 89,94 ----\nIndex: src/bin/psql/tab-complete.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/bin/psql/tab-complete.c,v\nretrieving revision 1.49\ndiff -c -r1.49 tab-complete.c\n*** src/bin/psql/tab-complete.c\t15 Jun 2002 19:43:47 -0000\t1.49\n--- src/bin/psql/tab-complete.c\t15 Jun 2002 23:27:34 -0000\n***************\n*** 226,232 ****\n \t\t\"enable_nestloop\",\n \t\t\"enable_mergejoin\",\n \t\t\"enable_hashjoin\",\n- \t\t\"ksqo\",\n \t\t\"geqo\",\n \t\t\"fsync\",\n \t\t\"server_min_messages\",\n--- 226,231 ----\n***************\n*** 695,701 ****\n \n \t\t\tCOMPLETE_WITH_LIST(my_list);\n \t\t}\n! \t\telse if (strcasecmp(prev2_wd, \"GEQO\") == 0 || strcasecmp(prev2_wd, \"KSQO\") == 0)\n \t\t{\n \t\t\tchar\t *my_list[] = {\"ON\", \"OFF\", \"DEFAULT\", NULL};\n \n--- 694,700 ----\n \n \t\t\tCOMPLETE_WITH_LIST(my_list);\n \t\t}\n! \t\telse if (strcasecmp(prev2_wd, \"GEQO\") == 0)\n \t\t{\n \t\t\tchar\t *my_list[] = {\"ON\", \"OFF\", \"DEFAULT\", NULL};\n \nIndex: src/include/optimizer/planmain.h\n===================================================================\nRCS file: /cvsroot/pgsql/src/include/optimizer/planmain.h,v\nretrieving revision 1.57\ndiff -c -r1.57 planmain.h\n*** src/include/optimizer/planmain.h\t18 May 2002 02:25:50 -0000\t1.57\n--- src/include/optimizer/planmain.h\t15 Jun 2002 23:27:34 -0000\n***************\n*** 62,72 ****\n \t\t\t\t\t\t\t Index acceptable_rel);\n extern void fix_opids(Node *node);\n \n- /*\n- * prep/prepkeyset.c\n- */\n- extern bool _use_keyset_query_optimizer;\n- \n- extern void transformKeySetQuery(Query *origNode);\n- \n #endif /* PLANMAIN_H */\n--- 62,65 ----",
"msg_date": "Sat, 15 Jun 2002 20:08:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] KSQO parameter"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> The following patch removes KSQO from GUC and the call to the function.\n> It also moves the main KSQO file into _deadcode. Applied.\n\n_deadcode is nowadays known as CVS history.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 17 Jun 2002 23:21:14 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: KSQO parameter"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > The following patch removes KSQO from GUC and the call to the function.\n> > It also moves the main KSQO file into _deadcode. Applied.\n> \n> _deadcode is nowadays known as CVS history.\n\nAgreed, but _deadcode directories still exist, so I put it there. \nPersonally, I would like to see all those files removed, but I was\noutvoted last time I asked.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jun 2002 17:21:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: KSQO parameter"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Peter Eisentraut wrote:\n>> _deadcode is nowadays known as CVS history.\n\n> Agreed, but _deadcode directories still exist, so I put it there. \n> Personally, I would like to see all those files removed, but I was\n> outvoted last time I asked.\n\nPerhaps we need a revote. I don't like _deadcode one bit ---\nhaving that stuff in the tree just produces a lot of false hits\nwhen I'm searching for things.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jun 2002 09:57:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: KSQO parameter "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Peter Eisentraut wrote:\n> >> _deadcode is nowadays known as CVS history.\n> \n> > Agreed, but _deadcode directories still exist, so I put it there. \n> > Personally, I would like to see all those files removed, but I was\n> > outvoted last time I asked.\n> \n> Perhaps we need a revote. I don't like _deadcode one bit ---\n> having that stuff in the tree just produces a lot of false hits\n> when I'm searching for things.\n\nYes, that is my problem too. I can't tell you how many times I have\nedited backend/optimizer/path/_deadcode/xfunc.c to keep it in sync with\nthe rest of my code changes.\n\nOK, we have three who don't like _deadcode directories and want them\nremoved from CVS current (they will still exist in CVS). Does anyone\nwant them? I think Marc wanted them initially.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jun 2002 12:55:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: KSQO parameter"
}
] |
[
{
"msg_contents": "Would some JDBC hacker develop a patch for the following issue? The\nchange is just barely large enough that I don't want to commit untested\ncode for it --- but not having a Java development environment at hand,\nI can't test the updated code.\n\nThe problem is in DatabaseMetaData.java (same code in both jdbc1 and\njdbc2, looks like). It does direct access to pg_description that isn't\nright anymore. In getTables, instead of\n\n\tjava.sql.ResultSet dr = connection.ExecSQL(\"select description from pg_description where objoid=\"+r.getInt(2));\n\nit should be\n\n\tjava.sql.ResultSet dr = connection.ExecSQL(\"select obj_description(\"+r.getInt(2)+\",'pg_class')\");\n\nIn getColumns, the change is a little more involved, because\npg_attribute doesn't have an OID column anymore. The initial query\ncan't fetch a.oid, but should fetch a.attrelid instead, and then the\npg_description query should become\n\n\tjava.sql.ResultSet dr = connection.ExecSQL(\"select col_description(\"+r.getInt(1)+\",\"+r.getInt(5)+\")\");\n\n(col_description takes the table OID and the column's attnum).\n\nThe reason this is more than a 3-line change is that it should be done\neither the old way or the new way depending on whether server version >=\n7.2 or not, for backwards-compatibility of the driver.\n\nIt's possible there are other similar changes needed that I missed in a\nquick lookover.\n\nSo, would some enterprising person fix the JDBC code to work with CVS\ntip, and submit a patch?\n\n\t\tthanks, tom lane\n",
"msg_date": "Fri, 10 Aug 2001 16:08:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "JDBC pg_description update needed for CVS tip"
},
{
"msg_contents": "On Fri, 10 Aug 2001 16:08:50 -0400, Tom Lane wrote:\n[direct access to pg_description that isn't right anymore]\n>So, would some enterprising person fix the JDBC code to work \n>with CVS tip, and submit a patch?\n\nI'm working on it Tom, but I may need a couple of days or so to\nget this done. Is that OK? This is because I still need to set up\na test environment with a running server built from current CVS.\nThat's fine, I wanted to do that anyway. I'm also in the middle of\na chess tournament and still need to work on my Queen's Gambit\nDeclined :-)\n\nBy the way, what does \"tip\" mean?\n\nRegards,\nRené Pijlman\n",
"msg_date": "Sun, 12 Aug 2001 10:49:36 +0200",
"msg_from": "Rene Pijlman <rpijlman@wanadoo.nl>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] JDBC pg_description update needed for CVS tip"
},
{
"msg_contents": "Rene Pijlman <rpijlman@wanadoo.nl> writes:\n> By the way, what does \"tip\" mean?\n\n\"CVS tip\" = \"latest file versions in CVS\". Think tip of a branch...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Aug 2001 12:31:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [JDBC] JDBC pg_description update needed for CVS tip "
},
{
"msg_contents": "On Fri, 10 Aug 2001 16:08:50 -0400, you wrote:\n>The problem is in DatabaseMetaData.java (same code in both jdbc1 and\n>jdbc2, looks like). It does direct access to pg_description that isn't\n>right anymore. In getTables, instead of\n>\n>\tjava.sql.ResultSet dr = connection.ExecSQL(\"select description from pg_description where objoid=\"+r.getInt(2));\n>\n>it should be\n>\n>\tjava.sql.ResultSet dr = connection.ExecSQL(\"select obj_description(\"+r.getInt(2)+\",'pg_class')\");\n\nDone that (columns too). When testing I noticed a difference\nbetween 7.1 and 7.2: when there is no comment on a table or\ncolumn, 7.1 returns the string \"no remarks\" in the REMARKS\ncolumn of the ResultSet from getTables()/getColumns(), whereas\n7.2 returns null.\n\nSo it appears that your new statement that uses\nobj_description() and col_description() returns one row with a\nnull when there is no comment, instead of 0 rows. Is this\nintentional?\n\nThe JDBC spec says: \"String object containing an explanatory\ncomment on the table/column, which may be null\". So actually,\nthis new behaviour is closer to the standard than the old\nbehaviour and I'm inclined to leave it this way. In fact, I\nmight as well remove the defaultRemarks code from\nDatabaseMetaData.java.\n\nThis might break existing code that doesn't follow the JDBC spec\nand isn't prepared to handle a null in the REMARKS column of\ngetTables()/getColumns().\n\nRegards,\nRené Pijlman\n",
"msg_date": "Mon, 13 Aug 2001 00:38:17 +0200",
"msg_from": "Rene Pijlman <rpijlman@wanadoo.nl>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] JDBC pg_description update needed for CVS tip"
},
{
"msg_contents": "Rene Pijlman <rpijlman@wanadoo.nl> writes:\n> So it appears that your new statement that uses\n> obj_description() and col_description() returns one row with a\n> null when there is no comment, instead of 0 rows. Is this\n> intentional?\n\nThat is how selecting a function result would work. If you don't\nlike the behavior then we can reconsider it --- but if it's per\nspec then I think we should be happy.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Aug 2001 22:06:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [JDBC] JDBC pg_description update needed for CVS tip "
},
{
"msg_contents": "Attached is the patch requested by Tom Lane (see below). It\nincludes two changes in the JDBC driver:\n\n1) When connected to a backend >= 7.2: use obj_description() and\ncol_description() instead of direct access to pg_description.\n\n2) In DatabaseMetaData.getTables()/getColumns()/getProcedures():\nwhen there is no comment on the object, return null in the\nREMARKS column of the ResultSet, instead of the default string\n\"no remarks\".\n\nChange 2 first appeared as a side-effect of change 1, but it is\nactually more compliant with the JDBC spec: \"String object\ncontaining an explanatory comment on the table/column/procedure,\nwhich may be null\". The default string \"no remarks\" was strictly\nspeaking incorrect, as it could not be distinguished from a real\nuser comment \"no remarks\". So I removed the default string\ncompletely.\n\nChange 2 might break existing code that doesn't follow the JDBC\nspec and isn't prepared to handle a null in the REMARKS column\nof getTables()/getColumns()/getProcedures.\n\nPatch tested with jdbc2 against both a 7.1 and a CVS tip\nbackend. I did not have a jdbc1 environment to build and test\nwith, but since the touched code is identical in jdbc1 and jdbc2\nI don't foresee any problems.\n\nRegards,\nRené Pijlman\n\nOn Fri, 10 Aug 2001 16:08:50 -0400, Tom Lane wrote:\n>Would some JDBC hacker develop a patch for the following issue? The\n>change is just barely large enough that I don't want to commit untested\n>code for it --- but not having a Java development environment at hand,\n>I can't test the updated code.\n>\n>The problem is in DatabaseMetaData.java (same code in both jdbc1 and\n>jdbc2, looks like). It does direct access to pg_description that isn't\n>right anymore. 
In getTables, instead of\n>\n>\tjava.sql.ResultSet dr = connection.ExecSQL(\"select description from pg_description where objoid=\"+r.getInt(2));\n>\n>it should be\n>\n>\tjava.sql.ResultSet dr = connection.ExecSQL(\"select obj_description(\"+r.getInt(2)+\",'pg_class')\");\n>\n>In getColumns, the change is a little more involved, because\n>pg_attribute doesn't have an OID column anymore. The initial query\n>can't fetch a.oid, but should fetch a.attrelid instead, and then the\n>pg_description query should become\n>\n>\tjava.sql.ResultSet dr = connection.ExecSQL(\"select col_description(\"+r.getInt(1)+\",\"+r.getInt(5)+\")\");\n>\n>(col_description takes the table OID and the column's attnum).\n>\n>The reason this is more than a 3-line change is that it should be done\n>either the old way or the new way depending on whether server version >=\n>7.2 or not, for backwards-compatibility of the driver.\n>\n>It's possible there are other similar changes needed that I missed in a\n>quick lookover.\n>\n>So, would some enterprising person fix the JDBC code to work with CVS\n>tip, and submit a patch?\n>\n>\t\tthanks, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n>subscribe-nomail command to majordomo@postgresql.org so that your\n>message can get through to the mailing list cleanly\n\n\nRegards,\nRené Pijlman",
"msg_date": "Mon, 13 Aug 2001 20:01:24 +0200",
"msg_from": "Rene Pijlman <rpijlman@wanadoo.nl>",
"msg_from_op": false,
"msg_subject": "Re: JDBC pg_description update needed for CVS tip"
},
{
"msg_contents": "If I'm not mistaken I haven't seen the usual confirmation (\"Your\npatch has been added to the PostgreSQL unapplied patches list\")\nfor this patch yet. \n\nIs there something wrong with the patch, or is it just waiting\nfor something or someone?\n\nOn Mon, 13 Aug 2001 20:01:24 +0200, I wrote:\n>Attached is the patch requested by Tom Lane (see below). It\n>includes two changes in the JDBC driver:\n>\n>1) When connected to a backend >= 7.2: use obj_description() and\n>col_description() instead of direct access to pg_description.\n>\n>2) In DatabaseMetaData.getTables()/getColumns()/getProcedures():\n>when there is no comment on the object, return null in the\n>REMARKS column of the ResultSet, instead of the default string\n>\"no remarks\".\n>\n>Change 2 first appeared as a side-effect of change 1, but it is\n>actually more compliant with the JDBC spec: \"String object\n>containing an explanatory comment on the table/column/procedure,\n>which may be null\". The default string \"no remarks\" was strictly\n>speaking incorrect, as it could not be distinguished from a real\n>user comment \"no remarks\". So I removed the default string\n>completely.\n>\n>Change 2 might break existing code that doesn't follow the JDBC\n>spec and isn't prepared to handle a null in the REMARKS column\n>of getTables()/getColumns()/getProcedures.\n>\n>Patch tested with jdbc2 against both a 7.1 and a CVS tip\n>backend. I did not have a jdbc1 environment to build and test\n>with, but since the touched code is identical in jdbc1 and jdbc2\n>I don't foresee any problems.\n>\n>Regards,\n>René Pijlman\n>\n>On Fri, 10 Aug 2001 16:08:50 -0400, Tom Lane wrote:\n>>Would some JDBC hacker develop a patch for the following issue? 
The\n>>change is just barely large enough that I don't want to commit untested\n>>code for it --- but not having a Java development environment at hand,\n>>I can't test the updated code.\n>>\n>>The problem is in DatabaseMetaData.java (same code in both jdbc1 and\n>>jdbc2, looks like). It does direct access to pg_description that isn't\n>>right anymore. In getTables, instead of\n>>\n>>\tjava.sql.ResultSet dr = connection.ExecSQL(\"select description from pg_description where objoid=\"+r.getInt(2));\n>>\n>>it should be\n>>\n>>\tjava.sql.ResultSet dr = connection.ExecSQL(\"select obj_description(\"+r.getInt(2)+\",'pg_class')\");\n>>\n>>In getColumns, the change is a little more involved, because\n>>pg_attribute doesn't have an OID column anymore. The initial query\n>>can't fetch a.oid, but should fetch a.attrelid instead, and then the\n>>pg_description query should become\n>>\n>>\tjava.sql.ResultSet dr = connection.ExecSQL(\"select col_description(\"+r.getInt(1)+\",\"+r.getInt(5)+\")\");\n>>\n>>(col_description takes the table OID and the column's attnum).\n>>\n>>The reason this is more than a 3-line change is that it should be done\n>>either the old way or the new way depending on whether server version >=\n>>7.2 or not, for backwards-compatibility of the driver.\n>>\n>>It's possible there are other similar changes needed that I missed in a\n>>quick lookover.\n>>\n>>So, would some enterprising person fix the JDBC code to work with CVS\n>>tip, and submit a patch?\n>>\n>>\t\tthanks, tom lane\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 3: if posting/reading through Usenet, please send an appropriate\n>>subscribe-nomail command to majordomo@postgresql.org so that your\n>>message can get through to the mailing list cleanly\n>\n>\n>Regards,\n>René Pijlman\n\n\nRegards,\nRené Pijlman\n",
"msg_date": "Thu, 16 Aug 2001 19:53:39 +0200",
"msg_from": "Rene Pijlman <rpijlman@wanadoo.nl>",
"msg_from_op": false,
"msg_subject": "Re: Re: [JDBC] JDBC pg_description update needed for CVS tip"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Attached is the patch requested by Tom Lane (see below). It\n> includes two changes in the JDBC driver:\n> \n> 1) When connected to a backend >= 7.2: use obj_description() and\n> col_description() instead of direct access to pg_description.\n> \n> 2) In DatabaseMetaData.getTables()/getColumns()/getProcedures():\n> when there is no comment on the object, return null in the\n> REMARKS column of the ResultSet, instead of the default string\n> \"no remarks\".\n> \n> Change 2 first appeared as a side-effect of change 1, but it is\n> actually more compliant with the JDBC spec: \"String object\n> containing an explanatory comment on the table/column/procedure,\n> which may be null\". The default string \"no remarks\" was strictly\n> speaking incorrect, as it could not be distinguished from a real\n> user comment \"no remarks\". So I removed the default string\n> completely.\n> \n> Change 2 might break existing code that doesn't follow the JDBC\n> spec and isn't prepared to handle a null in the REMARKS column\n> of getTables()/getColumns()/getProcedures.\n> \n> Patch tested with jdbc2 against both a 7.1 and a CVS tip\n> backend. I did not have a jdbc1 environment to build and test\n> with, but since the touched code is identical in jdbc1 and jdbc2\n> I don't foresee any problems.\n> \n> Regards,\n> Ren? Pijlman\n> \n> On Fri, 10 Aug 2001 16:08:50 -0400, Tom Lane wrote:\n> >Would some JDBC hacker develop a patch for the following issue? The\n> >change is just barely large enough that I don't want to commit untested\n> >code for it --- but not having a Java development environment at hand,\n> >I can't test the updated code.\n> >\n> >The problem is in DatabaseMetaData.java (same code in both jdbc1 and\n> >jdbc2, looks like). 
It does direct access to pg_description that isn't\n> >right anymore. In getTables, instead of\n> >\n> >\tjava.sql.ResultSet dr = connection.ExecSQL(\"select description from pg_description where objoid=\"+r.getInt(2));\n> >\n> >it should be\n> >\n> >\tjava.sql.ResultSet dr = connection.ExecSQL(\"select obj_description(\"+r.getInt(2)+\",'pg_class')\");\n> >\n> >In getColumns, the change is a little more involved, because\n> >pg_attribute doesn't have an OID column anymore. The initial query\n> >can't fetch a.oid, but should fetch a.attrelid instead, and then the\n> >pg_description query should become\n> >\n> >\tjava.sql.ResultSet dr = connection.ExecSQL(\"select col_description(\"+r.getInt(1)+\",\"+r.getInt(5)+\")\");\n> >\n> >(col_description takes the table OID and the column's attnum).\n> >\n> >The reason this is more than a 3-line change is that it should be done\n> >either the old way or the new way depending on whether server version >=\n> >7.2 or not, for backwards-compatibility of the driver.\n> >\n> >It's possible there are other similar changes needed that I missed in a\n> >quick lookover.\n> >\n> >So, would some enterprising person fix the JDBC code to work with CVS\n> >tip, and submit a patch?\n> >\n> >\t\tthanks, tom lane\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 3: if posting/reading through Usenet, please send an appropriate\n> >subscribe-nomail command to majordomo@postgresql.org so that your\n> >message can get through to the mailing list cleanly\n> \n> \n> Regards,\n> Ren? Pijlman\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 16 Aug 2001 17:45:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Re: JDBC pg_description update needed for CVS tip"
},
{
"msg_contents": "If it helps I reviewed that patch and it looks fine to me.\n\n--Barry\n\nRene Pijlman wrote:\n> If I'm not mistaken I haven't seen the usual confirmation (\"Your\n> patch has been added to the PostgreSQL unapplied patches list\")\n> for this patch yet. \n> \n> Is there something wrong with the patch, or is it just waiting\n> for something or someone?\n> \n> On Mon, 13 Aug 2001 20:01:24 +0200, I wrote:\n> \n>>Attached is the patch requested by Tom Lane (see below). It\n>>includes two changes in the JDBC driver:\n>>\n>>1) When connected to a backend >= 7.2: use obj_description() and\n>>col_description() instead of direct access to pg_description.\n>>\n>>2) In DatabaseMetaData.getTables()/getColumns()/getProcedures():\n>>when there is no comment on the object, return null in the\n>>REMARKS column of the ResultSet, instead of the default string\n>>\"no remarks\".\n>>\n>>Change 2 first appeared as a side-effect of change 1, but it is\n>>actually more compliant with the JDBC spec: \"String object\n>>containing an explanatory comment on the table/column/procedure,\n>>which may be null\". The default string \"no remarks\" was strictly\n>>speaking incorrect, as it could not be distinguished from a real\n>>user comment \"no remarks\". So I removed the default string\n>>completely.\n>>\n>>Change 2 might break existing code that doesn't follow the JDBC\n>>spec and isn't prepared to handle a null in the REMARKS column\n>>of getTables()/getColumns()/getProcedures.\n>>\n>>Patch tested with jdbc2 against both a 7.1 and a CVS tip\n>>backend. I did not have a jdbc1 environment to build and test\n>>with, but since the touched code is identical in jdbc1 and jdbc2\n>>I don't foresee any problems.\n>>\n>>Regards,\n>>René Pijlman\n>>\n>>On Fri, 10 Aug 2001 16:08:50 -0400, Tom Lane wrote:\n>>\n>>>Would some JDBC hacker develop a patch for the following issue? 
The\n>>>change is just barely large enough that I don't want to commit untested\n>>>code for it --- but not having a Java development environment at hand,\n>>>I can't test the updated code.\n>>>\n>>>The problem is in DatabaseMetaData.java (same code in both jdbc1 and\n>>>jdbc2, looks like). It does direct access to pg_description that isn't\n>>>right anymore. In getTables, instead of\n>>>\n>>>\tjava.sql.ResultSet dr = connection.ExecSQL(\"select description from pg_description where objoid=\"+r.getInt(2));\n>>>\n>>>it should be\n>>>\n>>>\tjava.sql.ResultSet dr = connection.ExecSQL(\"select obj_description(\"+r.getInt(2)+\",'pg_class')\");\n>>>\n>>>In getColumns, the change is a little more involved, because\n>>>pg_attribute doesn't have an OID column anymore. The initial query\n>>>can't fetch a.oid, but should fetch a.attrelid instead, and then the\n>>>pg_description query should become\n>>>\n>>>\tjava.sql.ResultSet dr = connection.ExecSQL(\"select col_description(\"+r.getInt(1)+\",\"+r.getInt(5)+\")\");\n>>>\n>>>(col_description takes the table OID and the column's attnum).\n>>>\n>>>The reason this is more than a 3-line change is that it should be done\n>>>either the old way or the new way depending on whether server version >=\n>>>7.2 or not, for backwards-compatibility of the driver.\n>>>\n>>>It's possible there are other similar changes needed that I missed in a\n>>>quick lookover.\n>>>\n>>>So, would some enterprising person fix the JDBC code to work with CVS\n>>>tip, and submit a patch?\n>>>\n>>>\t\tthanks, tom lane\n>>>\n>>>---------------------------(end of broadcast)---------------------------\n>>>TIP 3: if posting/reading through Usenet, please send an appropriate\n>>>subscribe-nomail command to majordomo@postgresql.org so that your\n>>>message can get through to the mailing list cleanly\n>>>\n>>\n>>Regards,\n>>René Pijlman\n>>\n> \n> \n> Regards,\n> René Pijlman\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: 
subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n\n\n",
"msg_date": "Thu, 16 Aug 2001 19:21:01 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: [JDBC] JDBC pg_description update needed for CVS tip"
},
{
"msg_contents": "\nApplied. Thanks.\n\n> Attached is the patch requested by Tom Lane (see below). It\n> includes two changes in the JDBC driver:\n> \n> 1) When connected to a backend >= 7.2: use obj_description() and\n> col_description() instead of direct access to pg_description.\n> \n> 2) In DatabaseMetaData.getTables()/getColumns()/getProcedures():\n> when there is no comment on the object, return null in the\n> REMARKS column of the ResultSet, instead of the default string\n> \"no remarks\".\n> \n> Change 2 first appeared as a side-effect of change 1, but it is\n> actually more compliant with the JDBC spec: \"String object\n> containing an explanatory comment on the table/column/procedure,\n> which may be null\". The default string \"no remarks\" was strictly\n> speaking incorrect, as it could not be distinguished from a real\n> user comment \"no remarks\". So I removed the default string\n> completely.\n> \n> Change 2 might break existing code that doesn't follow the JDBC\n> spec and isn't prepared to handle a null in the REMARKS column\n> of getTables()/getColumns()/getProcedures.\n> \n> Patch tested with jdbc2 against both a 7.1 and a CVS tip\n> backend. I did not have a jdbc1 environment to build and test\n> with, but since the touched code is identical in jdbc1 and jdbc2\n> I don't foresee any problems.\n> \n> Regards,\n> Ren? Pijlman\n> \n> On Fri, 10 Aug 2001 16:08:50 -0400, Tom Lane wrote:\n> >Would some JDBC hacker develop a patch for the following issue? The\n> >change is just barely large enough that I don't want to commit untested\n> >code for it --- but not having a Java development environment at hand,\n> >I can't test the updated code.\n> >\n> >The problem is in DatabaseMetaData.java (same code in both jdbc1 and\n> >jdbc2, looks like). It does direct access to pg_description that isn't\n> >right anymore. 
In getTables, instead of\n> >\n> >\tjava.sql.ResultSet dr = connection.ExecSQL(\"select description from pg_description where objoid=\"+r.getInt(2));\n> >\n> >it should be\n> >\n> >\tjava.sql.ResultSet dr = connection.ExecSQL(\"select obj_description(\"+r.getInt(2)+\",'pg_class')\");\n> >\n> >In getColumns, the change is a little more involved, because\n> >pg_attribute doesn't have an OID column anymore. The initial query\n> >can't fetch a.oid, but should fetch a.attrelid instead, and then the\n> >pg_description query should become\n> >\n> >\tjava.sql.ResultSet dr = connection.ExecSQL(\"select col_description(\"+r.getInt(1)+\",\"+r.getInt(5)+\")\");\n> >\n> >(col_description takes the table OID and the column's attnum).\n> >\n> >The reason this is more than a 3-line change is that it should be done\n> >either the old way or the new way depending on whether server version >=\n> >7.2 or not, for backwards-compatibility of the driver.\n> >\n> >It's possible there are other similar changes needed that I missed in a\n> >quick lookover.\n> >\n> >So, would some enterprising person fix the JDBC code to work with CVS\n> >tip, and submit a patch?\n> >\n> >\t\tthanks, tom lane\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 3: if posting/reading through Usenet, please send an appropriate\n> >subscribe-nomail command to majordomo@postgresql.org so that your\n> >message can get through to the mailing list cleanly\n> \n> \n> Regards,\n> Ren? Pijlman\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 17 Aug 2001 09:59:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Re: JDBC pg_description update needed for CVS tip"
},
{
"msg_contents": "\nThis patch was applied a few hours ago.\n\n> If it helps I reviewed that patch and it looks fine to me.\n> \n> --Barry\n> \n> Rene Pijlman wrote:\n> > I I'm not mistaken I haven't seen the usual confirmation (\"Your\n> > patch has been added to the PostgreSQL unapplied patches list\")\n> > for this patch yet. \n> > \n> > Is there something wrong with the patch, or is it just waiting\n> > for something or someone?\n> > \n> > On Mon, 13 Aug 2001 20:01:24 +0200, I wrote:\n> > \n> >>Attached is the patch requested by Tom Lane (see below). It\n> >>includes two changes in the JDBC driver:\n> >>\n> >>1) When connected to a backend >= 7.2: use obj_description() and\n> >>col_description() instead of direct access to pg_description.\n> >>\n> >>2) In DatabaseMetaData.getTables()/getColumns()/getProcedures():\n> >>when there is no comment on the object, return null in the\n> >>REMARKS column of the ResultSet, instead of the default string\n> >>\"no remarks\".\n> >>\n> >>Change 2 first appeared as a side-effect of change 1, but it is\n> >>actually more compliant with the JDBC spec: \"String object\n> >>containing an explanatory comment on the table/column/procedure,\n> >>which may be null\". The default string \"no remarks\" was strictly\n> >>speaking incorrect, as it could not be distinguished from a real\n> >>user comment \"no remarks\". So I removed the default string\n> >>completely.\n> >>\n> >>Change 2 might break existing code that doesn't follow the JDBC\n> >>spec and isn't prepared to handle a null in the REMARKS column\n> >>of getTables()/getColumns()/getProcedures.\n> >>\n> >>Patch tested with jdbc2 against both a 7.1 and a CVS tip\n> >>backend. I did not have a jdbc1 environment to build and test\n> >>with, but since the touched code is identical in jdbc1 and jdbc2\n> >>I don't foresee any problems.\n> >>\n> >>Regards,\n> >>Ren? 
Pijlman\n> >>\n> >>On Fri, 10 Aug 2001 16:08:50 -0400, Tom Lane wrote:\n> >>\n> >>>Would some JDBC hacker develop a patch for the following issue? The\n> >>>change is just barely large enough that I don't want to commit untested\n> >>>code for it --- but not having a Java development environment at hand,\n> >>>I can't test the updated code.\n> >>>\n> >>>The problem is in DatabaseMetaData.java (same code in both jdbc1 and\n> >>>jdbc2, looks like). It does direct access to pg_description that isn't\n> >>>right anymore. In getTables, instead of\n> >>>\n> >>>\tjava.sql.ResultSet dr = connection.ExecSQL(\"select description from pg_description where objoid=\"+r.getInt(2));\n> >>>\n> >>>it should be\n> >>>\n> >>>\tjava.sql.ResultSet dr = connection.ExecSQL(\"select obj_description(\"+r.getInt(2)+\",'pg_class')\");\n> >>>\n> >>>In getColumns, the change is a little more involved, because\n> >>>pg_attribute doesn't have an OID column anymore. The initial query\n> >>>can't fetch a.oid, but should fetch a.attrelid instead, and then the\n> >>>pg_description query should become\n> >>>\n> >>>\tjava.sql.ResultSet dr = connection.ExecSQL(\"select col_description(\"+r.getInt(1)+\",\"+r.getInt(5)+\")\");\n> >>>\n> >>>(col_description takes the table OID and the column's attnum).\n> >>>\n> >>>The reason this is more than a 3-line change is that it should be done\n> >>>either the old way or the new way depending on whether server version >=\n> >>>7.2 or not, for backwards-compatibility of the driver.\n> >>>\n> >>>It's possible there are other similar changes needed that I missed in a\n> >>>quick lookover.\n> >>>\n> >>>So, would some enterprising person fix the JDBC code to work with CVS\n> >>>tip, and submit a patch?\n> >>>\n> >>>\t\tthanks, tom lane\n> >>>\n> >>>---------------------------(end of broadcast)---------------------------\n> >>>TIP 3: if posting/reading through Usenet, please send an appropriate\n> >>>subscribe-nomail command to majordomo@postgresql.org so that your\n> 
>>>message can get through to the mailing list cleanly\n> >>>\n> >>\n> >>Regards,\n> >>René Pijlman\n> >>\n> > \n> > \n> > Regards,\n> > René Pijlman\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> > \n> > \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 17 Aug 2001 13:34:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [JDBC] JDBC pg_description update needed for CVS tip"
},
{
"msg_contents": "\nCan someone tackle this and supply a patch?\n\n\n> Would some JDBC hacker develop a patch for the following issue? The\n> change is just barely large enough that I don't want to commit untested\n> code for it --- but not having a Java development environment at hand,\n> I can't test the updated code.\n> \n> The problem is in DatabaseMetaData.java (same code in both jdbc1 and\n> jdbc2, looks like). It does direct access to pg_description that isn't\n> right anymore. In getTables, instead of\n> \n> \tjava.sql.ResultSet dr = connection.ExecSQL(\"select description from pg_description where objoid=\"+r.getInt(2));\n> \n> it should be\n> \n> \tjava.sql.ResultSet dr = connection.ExecSQL(\"select obj_description(\"+r.getInt(2)+\",'pg_class')\");\n> \n> In getColumns, the change is a little more involved, because\n> pg_attribute doesn't have an OID column anymore. The initial query\n> can't fetch a.oid, but should fetch a.attrelid instead, and then the\n> pg_description query should become\n> \n> \tjava.sql.ResultSet dr = connection.ExecSQL(\"select col_description(\"+r.getInt(1)+\",\"+r.getInt(5)+\")\");\n> \n> (col_description takes the table OID and the column's attnum).\n> \n> The reason this is more than a 3-line change is that it should be done\n> either the old way or the new way depending on whether server version >=\n> 7.2 or not, for backwards-compatibility of the driver.\n> \n> It's possible there are other similar changes needed that I missed in a\n> quick lookover.\n> \n> So, would some enterprising person fix the JDBC code to work with CVS\n> tip, and submit a patch?\n> \n> \t\tthanks, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 Sep 2001 00:30:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] JDBC pg_description update needed for CVS tip"
},
{
"msg_contents": "I believe this was done a while ago. (It looks like it was patched on \nAug 17 by a patch from Rene).\n\nthanks,\n--Barry\n\nBruce Momjian wrote:\n> Can someone tackles this and supply a patch?\n> \n> \n> \n>>Would some JDBC hacker develop a patch for the following issue? The\n>>change is just barely large enough that I don't want to commit untested\n>>code for it --- but not having a Java development environment at hand,\n>>I can't test the updated code.\n>>\n>>The problem is in DatabaseMetaData.java (same code in both jdbc1 and\n>>jdbc2, looks like). It does direct access to pg_description that isn't\n>>right anymore. In getTables, instead of\n>>\n>>\tjava.sql.ResultSet dr = connection.ExecSQL(\"select description from pg_description where objoid=\"+r.getInt(2));\n>>\n>>it should be\n>>\n>>\tjava.sql.ResultSet dr = connection.ExecSQL(\"select obj_description(\"+r.getInt(2)+\",'pg_class')\");\n>>\n>>In getColumns, the change is a little more involved, because\n>>pg_attribute doesn't have an OID column anymore. The initial query\n>>can't fetch a.oid, but should fetch a.attrelid instead, and then the\n>>pg_description query should become\n>>\n>>\tjava.sql.ResultSet dr = connection.ExecSQL(\"select col_description(\"+r.getInt(1)+\",\"+r.getInt(5)+\")\");\n>>\n>>(col_description takes the table OID and the column's attnum).\n>>\n>>The reason this is more than a 3-line change is that it should be done\n>>either the old way or the new way depending on whether server version >=\n>>7.2 or not, for backwards-compatibility of the driver.\n>>\n>>It's possible there are other similar changes needed that I missed in a\n>>quick lookover.\n>>\n>>So, would some enterprising person fix the JDBC code to work with CVS\n>>tip, and submit a patch?\n>>\n>>\t\tthanks, tom lane\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 4: Don't 'kill -9' the postmaster\n>>\n>>\n> \n\n\n",
"msg_date": "Thu, 06 Sep 2001 22:22:17 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] JDBC pg_description update needed for CVS tip"
},
{
"msg_contents": "Barry Lind <barry@xythos.com> writes:\n> I believe this was done a while ago. (It looks like it was patched on \n> Aug 17 by a patch from Rene).\n\nLooking again, getTables() seems to be fixed, but there is still an\nunpatched reference to pg_description in getColumns(), in both\njdbc1 and jdbc2.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 Sep 2001 01:34:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] JDBC pg_description update needed for CVS tip "
},
{
"msg_contents": "Interestingly it was fixed in the getColumns() method, until a patch \nthat was applied yesterday broke it again.\n\n--Barry\n\n\nTom Lane wrote:\n> Barry Lind <barry@xythos.com> writes:\n> \n>>I believe this was done a while ago. (It looks like it was patched on \n>>Aug 17 by a patch from Rene).\n>>\n> \n> Looking again, getTables() seems to be fixed, but there is still an\n> unpatched reference to pg_description in getColumns(), in both\n> jdbc1 and jdbc2.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n\n\n",
"msg_date": "Thu, 06 Sep 2001 23:39:53 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] JDBC pg_description update needed for CVS tip"
},
{
"msg_contents": "On Fri, 07 Sep 2001 01:34:46 -0400, you wrote:\n>Looking again, getTables() seems to be fixed, but there is still an\n>unpatched reference to pg_description in getColumns(), in both\n>jdbc1 and jdbc2.\n\nThere shouldn't be, I fixed that in the same patch. I'll have a\nlook at it this weekend.\n\nRegards,\nRené Pijlman <rene@lab.applinet.nl>\n",
"msg_date": "Fri, 07 Sep 2001 10:15:01 +0200",
"msg_from": "Rene Pijlman <rene@lab.applinet.nl>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] JDBC pg_description update needed for CVS tip "
},
{
"msg_contents": "On Thu, 06 Sep 2001 23:39:53 -0700, you wrote:\n>Interestingly it was fixed in the getColumns() method, until a patch \n>that was applied yesterday broke it again.\n\nAh, that's probably the getColumns() fix from my fellow\ncountryman that was based on an older version before my patch.\n\nLet me know if I have to re-merge my changes with a new patch.\nI'll have time for that this weekend, so it can be in 7.2 beta1.\n\nAlso, this calls for a regression test :-)\n\nRegards,\nRené Pijlman <rene@lab.applinet.nl>\n",
"msg_date": "Fri, 07 Sep 2001 10:53:02 +0200",
"msg_from": "Rene Pijlman <rene@lab.applinet.nl>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] JDBC pg_description update needed for CVS tip"
},
{
"msg_contents": "At 00:30 9/7/2001 -0400, Bruce Momjian wrote:\n>Can someone tackles this and supply a patch?\n\nThis has been addressed in the patch that was recently committed for JDBC's \nbroken getColumn() support. As I'm using outer joins and was unable to come \nup with SQL syntax that would correctly use an outer join with a function \nreturning a single row (col_description), I used the actual function \ndefinition for col_description for >= 7.2 servers. For details see my mail \nat http://fts.postgresql.org/db/mw/msg.html?mid=1032468\n\nOTOH, I haven't touched JDBC's getTable() code.\n\n> > Would some JDBC hacker develop a patch for the following issue? The\n> > change is just barely large enough that I don't want to commit untested\n> > code for it --- but not having a Java development environment at hand,\n> > I can't test the updated code.\n> >\n> > The problem is in DatabaseMetaData.java (same code in both jdbc1 and\n> > jdbc2, looks like). It does direct access to pg_description that isn't\n> > right anymore. In getTables, instead of\n> >\n> > java.sql.ResultSet dr = connection.ExecSQL(\"select description \n> from pg_description where objoid=\"+r.getInt(2));\n> >\n> > it should be\n> >\n> > java.sql.ResultSet dr = connection.ExecSQL(\"select \n> obj_description(\"+r.getInt(2)+\",'pg_class')\");\n> >\n> > In getColumns, the change is a little more involved, because\n> > pg_attribute doesn't have an OID column anymore. 
The initial query\n> > can't fetch a.oid, but should fetch a.attrelid instead, and then the\n> > pg_description query should become\n> >\n> > java.sql.ResultSet dr = connection.ExecSQL(\"select \n> col_description(\"+r.getInt(1)+\",\"+r.getInt(5)+\")\");\n> >\n> > (col_description takes the table OID and the column's attnum).\n> >\n> > The reason this is more than a 3-line change is that it should be done\n> > either the old way or the new way depending on whether server version >=\n> > 7.2 or not, for backwards-compatibility of the driver.\n> >\n> > It's possible there are other similar changes needed that I missed in a\n> > quick lookover.\n> >\n> > So, would some enterprising person fix the JDBC code to work with CVS\n> > tip, and submit a patch?\n> >\n> > thanks, tom lane\n>\n>--\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nCheers,\n\n\nJeroen\n\n",
"msg_date": "Fri, 07 Sep 2001 16:45:57 +0200",
"msg_from": "Jeroen van Vianen <jeroen.van.vianen@satama.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] JDBC pg_description update needed for CVS"
},
{
"msg_contents": "If you have the time this weekend to work on addressing this, that would \nbe great.\n\nthanks,\n--Barry\n\nRene Pijlman wrote:\n> On Thu, 06 Sep 2001 23:39:53 -0700, you wrote:\n> \n>>Interestingly it was fixed in the getColumns() method, until a patch \n>>that was applied yesterday broke it again.\n>>\n> \n> Ah, that's probably the getColumns() fix from my fellow\n> countryman that was based on an older version before my patch.\n> \n> Let me know if I have to re-merge my changes with a new patch.\n> I'll have time for that this weekend, so it can be in 7.2 beta1.\n> \n> Also, this calls for a regression test :-)\n> \n> Regards,\n> René Pijlman <rene@lab.applinet.nl>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n\n\n",
"msg_date": "Fri, 07 Sep 2001 09:26:17 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] JDBC pg_description update needed for CVS tip"
},
{
"msg_contents": "On Fri, 07 Sep 2001 01:34:46 -0400, Tom Lane wrote:\n>there is still an unpatched reference to pg_description in \n>getColumns(), in both jdbc1 and jdbc2.\n\nThis was introduced by Jeroen's patch (see\nhttp://fts.postgresql.org/db/mw/msg.html?mid=1032468). Attached\nis a patch that returns getColumns() to using \"select\nobj_description()\" instead of direct access to pg_description,\nas per the request by Tom. \n\nI've incorporated Jeroen's fix to left outer join with\npg_attrdef instead of inner join, so getColumns() also returns\ncolumns without a default value.\n\nI have, however, not included Jeroen's attempt to combine\nmultiple queries into one huge multi-join query for better\nperformance, because:\n1) I don't know how to do that using obj_description() instead\nof direct access to pg_description\n2) I don't think a performance improvement (if any) in this\nmethod is very important\n\nBecause of the outer join, getColumns() will only work with a\nbackend >= 7.1. Since the conditional coding for 7.1/7.2 and\njdbc1/jdbc2 is already giving me headaches I didn't pursue a\npre-7.1 solution.\n\nRegards,\nRené Pijlman <rene@lab.applinet.nl>",
"msg_date": "Sun, 09 Sep 2001 00:18:05 +0200",
"msg_from": "Rene Pijlman <rene@lab.applinet.nl>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] JDBC pg_description update needed for CVS tip "
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> On Fri, 07 Sep 2001 01:34:46 -0400, Tom Lane wrote:\n> >there is still an unpatched reference to pg_description in \n> >getColumns(), in both jdbc1 and jdbc2.\n> \n> This was introduced by Jeroen's patch (see\n> http://fts.postgresql.org/db/mw/msg.html?mid=1032468). Attached\n> is a patch that returns getColumns() to using \"select\n> obj_description()\" instead of direct access to pg_description,\n> as per the request by Tom. \n> \n> I've incorporated Jeroen's fix to left outer join with\n> pg_attrdef instead of inner join, so getColumns() also returns\n> columns without a default value.\n> \n> I have, however, not included Jeroen's attempt to combine\n> multiple queries into one huge multi-join query for better\n> performance, because:\n> 1) I don't know how to do that using obj_description() instead\n> of direct access to pg_description\n> 2) I don't think a performance improvement (if any) in this\n> method is very important\n> \n> Because of the outer join, getColumns() will only work with a\n> backend >= 7.1. Since the conditional coding for 7.1/7.2 and\n> jdbc1/jdbc2 is already giving me headaches I didn't pursue a\n> pre-7.1 solution.\n> \n> Regards,\n> René Pijlman <rene@lab.applinet.nl>\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 8 Sep 2001 21:15:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] JDBC pg_description update needed for CVS tip"
},
{
"msg_contents": "At 00:18 9/9/2001 +0200, Rene Pijlman wrote:\n>On Fri, 07 Sep 2001 01:34:46 -0400, Tom Lane wrote:\n> >there is still an unpatched reference to pg_description in\n> >getColumns(), in both jdbc1 and jdbc2.\n>\n>This was introduced by Jeroen's patch (see\n>http://fts.postgresql.org/db/mw/msg.html?mid=1032468). Attached\n>is a patch that returns getColumns() to using \"select\n>obj_description()\" instead of direct access to pg_description,\n>as per the request by Tom.\n>\n>I've incorporated Jeroen's fix to left outer join with\n>pg_attrdef instead of inner join, so getColumns() also returns\n>columns without a default value.\n>\n>I have, however, not included Jeroen's attempt to combine\n>multiple queries into one huge multi-join query for better\n>performance, because:\n>1) I don't know how to do that using obj_description() instead\n>of direct access to pg_description\n\nExactly. That's why I put a comment in my original mail \n(http://fts.postgresql.org/db/mw/msg.html?mid=1032468) about not being able \nto use the col_description in a (left) outer join and used the actual code \nof col_description instead. Is it possible to do:\n\nselect t1.*, f from t1 left outer join \nfunction_returning_a_single_row_or_null(parameters) f ?\n\nI think this should be possible, but I have no clue how/whether the grammar \nand/or executor should be changed to allow this. Or someone with more \nexperience with outer join SQL syntax might be able to help here.\n\n>2) I don't think a performance improvement (if any) in this\n>method is very important\n\nIt is of course a performance improvement if it uses only 1 SQL statement \nrather than N+1 with N being the number of columns reported. E.g. if you \nlist all columns of all tables in a big database, this would be a huge win. \nI noted that some of the JDBC MetaData functions in the Oracle JDBC driver \nwere really slow compared to PostgreSQL's (e.g. 
seconds slower).\n\n>Because of the outer join, getColumns() will only work with a\n>backend >= 7.1. Since the conditional coding for 7.1/7.2 and\n>jdbc1/jdbc2 is already giving me headaches I didn't pursue a\n>pre-7.1 solution.\n\nCheers,\n\n\nJeroen\n\n",
"msg_date": "Sun, 09 Sep 2001 14:48:41 +0200",
"msg_from": "Jeroen van Vianen <jeroen.van.vianen@satama.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] JDBC pg_description update needed for CVS"
},
{
"msg_contents": "On Sun, 09 Sep 2001 14:48:41 +0200, you wrote:\n>It is of course a performance improvement if it uses only 1 SQL statement \n>rather than N+1 with N being the number of columns reported. E.g. if you \n>list all columns of all tables in a big database, this would be a huge win. \n\nI think that can only be decided by measurement. \n\nWhat you're saying is:\n\n 1 * c1 < (N + 1) * c2\n\nbut that can only be decided if we know c1 and c2 (meaning: the\nexecution times of two different queries, including round trip\noverhead).\n\nThat doesn't mean I'm opposed to the change, on the contrary. As\na rule, I find a complex SQL statement more elegant than the\nsame 'algorithm' in procedural code. But in this case I wasn't\nsure how to construct it.\n\nRegards,\nRené Pijlman <rene@lab.applinet.nl>\n",
"msg_date": "Sun, 09 Sep 2001 14:58:40 +0200",
"msg_from": "Rene Pijlman <rene@lab.applinet.nl>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] JDBC pg_description update needed for CVS tip "
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n> On Fri, 07 Sep 2001 01:34:46 -0400, Tom Lane wrote:\n> >there is still an unpatched reference to pg_description in \n> >getColumns(), in both jdbc1 and jdbc2.\n> \n> This was introduced by Jeroen's patch (see\n> http://fts.postgresql.org/db/mw/msg.html?mid=1032468). Attached\n> is a patch that returns getColumns() to using \"select\n> obj_description()\" instead of direct access to pg_description,\n> as per the request by Tom. \n> \n> I've incorporated Jeroen's fix to left outer join with\n> pg_attrdef instead of inner join, so getColumns() also returns\n> columns without a default value.\n> \n> I have, however, not included Jeroen's attempt to combine\n> multiple queries into one huge multi-join query for better\n> performance, because:\n> 1) I don't know how to do that using obj_description() instead\n> of direct access to pg_description\n> 2) I don't think a performance improvement (if any) in this\n> method is very important\n> \n> Because of the outer join, getColumns() will only work with a\n> backend >= 7.1. Since the conditional coding for 7.1/7.2 and\n> jdbc1/jdbc2 is already giving me headaches I didn't pursue a\n> pre-7.1 solution.\n> \n> Regards,\n> René Pijlman <rene@lab.applinet.nl>\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 Sep 2001 10:55:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] JDBC pg_description update needed for CVS tip"
}
] |
[
{
"msg_contents": "Jan,\n\nI discovered yesterday that DECLARE BINARY CURSOR was broken in CVS tip:\nwhen you do a FETCH from a binary cursor, you get back ASCII data.\n\nThe problem seems to have been induced by your patch for SPI cursor\nsupport, specifically commands/command.c version 1.128: what had been\n\tif (dest == None)\n\t\toverride portal's destination with requested dest\nbecame\n\tif (dest != queryDesc->dest)\n\t\toverride portal's destination with requested dest\n\nNow a FETCH always passes the standard output destination, normally\nRemote, and so this breaks binary cursors which have dest =\nRemoteInternal and need to stick to that.\n\nI changed the code to look like this:\n\n /*\n * If the requested destination is not the same as the query's\n * original destination, make a temporary QueryDesc with the proper\n * destination. This supports MOVE, for example, which will pass in\n * dest = None.\n *\n * EXCEPTION: if the query's original dest is RemoteInternal (ie, it's\n * a binary cursor) and the request is Remote, we do NOT override the\n * original dest. This is necessary since a FETCH command will pass\n * dest = Remote, not knowing whether the cursor is binary or not.\n */\n if (dest != queryDesc->dest &&\n !(queryDesc->dest == RemoteInternal && dest == Remote))\n {\n\t... override\n\nThis makes binary cursors work again, and it didn't break the regression\ntests, but I thought you should take a look at it. I don't know what\naspect of SPI cursors led you to make the original change, so maybe this\nversion isn't right for SPI cursors.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Aug 2001 16:19:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Portal destination issue: binary vs normal cursors"
},
{
"msg_contents": "\nCan we get BINARY supported in ordinary SELECT too, while you are there?\nHard to argue why BINARY works only for cursors.\n\n\n> Jan,\n> \n> I discovered yesterday that DECLARE BINARY CURSOR was broken in CVS tip:\n> when you do a FETCH from a binary cursor, you get back ASCII data.\n> \n> The problem seems to have been induced by your patch for SPI cursor\n> support, specifically commands/command.c version 1.128: what had been\n> \tif (dest == None)\n> \t\toverride portal's destination with requested dest\n> became\n> \tif (dest != queryDesc->dest)\n> \t\toverride portal's destination with requested dest\n> \n> Now a FETCH always passes the standard output destination, normally\n> Remote, and so this breaks binary cursors which have dest =\n> RemoteInternal and need to stick to that.\n> \n> I changed the code to look like this:\n> \n> /*\n> * If the requested destination is not the same as the query's\n> * original destination, make a temporary QueryDesc with the proper\n> * destination. This supports MOVE, for example, which will pass in\n> * dest = None.\n> *\n> * EXCEPTION: if the query's original dest is RemoteInternal (ie, it's\n> * a binary cursor) and the request is Remote, we do NOT override the\n> * original dest. This is necessary since a FETCH command will pass\n> * dest = Remote, not knowing whether the cursor is binary or not.\n> */\n> if (dest != queryDesc->dest &&\n> !(queryDesc->dest == RemoteInternal && dest == Remote))\n> {\n> \t... override\n> \n> This makes binary cursors work again, and it didn't break the regression\n> tests, but I thought you should take a look at it. 
I don't know what\n> aspect of SPI cursors led you to make the original change, so maybe this\n> version isn't right for SPI cursors.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Aug 2001 16:47:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Portal destination issue: binary vs normal cursors"
},
{
"msg_contents": "Good day,\n\nIn the on-line documentation, it's mentioned that there is an available\ndate format of the form YYDDD, where YY is the year, and DDD is the day of\nthe year.\n\nThe example given is '99008', which does show up correctly:\n\nlx=# SELECT date('99008');\n date\n------------\n 1999-01-08\n(1 row)\n\nIn fact, it shows up correctly up until I try to specify past\njanuary, when above the 31st day of the year it fails.\n\nlx=# SELECT date('99031');\n date\n------------\n 1999-01-31\n(1 row)\n\nlx=# SELECT date('99032');\nERROR: Bad date external representation '99032'\n\nIs this format no longer maintained, or am I missing something? ;)\n\n\nRegards,\nJw.\n--\njlx@commandprompt.com\nby way of pgsql-hackers@commandprompt.com\n\n",
"msg_date": "Fri, 10 Aug 2001 14:08:54 -0700 (PDT)",
"msg_from": "<pgsql-hackers@commandprompt.com>",
"msg_from_op": false,
"msg_subject": "Maintained Date Formats"
},
{
"msg_contents": "Does this mean there should be a regression test for binary cursors then...?\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\n> Sent: Saturday, 11 August 2001 4:19 AM\n> To: Jan Wieck\n> Cc: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] Portal destination issue: binary vs normal cursors\n>\n>\n> Jan,\n>\n> I discovered yesterday that DECLARE BINARY CURSOR was broken in CVS tip:\n> when you do a FETCH from a binary cursor, you get back ASCII data.\n>\n> The problem seems to have been induced by your patch for SPI cursor\n> support, specifically commands/command.c version 1.128: what had been\n> \tif (dest == None)\n> \t\toverride portal's destination with requested dest\n> became\n> \tif (dest != queryDesc->dest)\n> \t\toverride portal's destination with requested dest\n>\n> Now a FETCH always passes the standard output destination, normally\n> Remote, and so this breaks binary cursors which have dest =\n> RemoteInternal and need to stick to that.\n>\n> I changed the code to look like this:\n>\n> /*\n> * If the requested destination is not the same as the query's\n> * original destination, make a temporary QueryDesc with the proper\n> * destination. This supports MOVE, for example, which will pass in\n> * dest = None.\n> *\n> * EXCEPTION: if the query's original dest is RemoteInternal (ie, it's\n> * a binary cursor) and the request is Remote, we do NOT override the\n> * original dest. This is necessary since a FETCH command will pass\n> * dest = Remote, not knowing whether the cursor is binary or not.\n> */\n> if (dest != queryDesc->dest &&\n> !(queryDesc->dest == RemoteInternal && dest == Remote))\n> {\n> \t... override\n>\n> This makes binary cursors work again, and it didn't break the regression\n> tests, but I thought you should take a look at it. 
I don't know what\n> aspect of SPI cursors led you to make the original change, so maybe this\n> version isn't right for SPI cursors.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n>\n\n",
"msg_date": "Mon, 13 Aug 2001 09:48:49 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: Portal destination issue: binary vs normal cursors"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Does this mean there should be a regression test for binary cursors then...?\n\nSure, if you can think of a portable way to do it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 12 Aug 2001 23:09:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Portal destination issue: binary vs normal cursors "
}
] |
[
{
"msg_contents": "Good day,\n\nThe on-line documentation lists that PostgreSQL supports only four\nimplicitly typed constants: string, bit string, integer and floating\npoint.\n\nDon't the boolean values true and false qualify as an implicitly typed\nconstants as well, as they don't need to be bound by single quotes? For\nexample:\n\nlx=# SELECT true AS bool_const_t, false AS bool_const_f;\n bool_const_t | bool_const_f\n--------------+--------------\n t | f\n(1 row)\n\nlx=#\n\nIs this documentation just out of date, or am I not quite grasping the\nspecific connotation of 'implicitly typed' constants? Thoughts? ;)\n\n\nRegards,\nJw.\n--\njlx@commandprompt.com\nby way of pgsql-hackers@commandprompt.com\n\n",
"msg_date": "Fri, 10 Aug 2001 14:12:15 -0700 (PDT)",
"msg_from": "<pgsql-hackers@commandprompt.com>",
"msg_from_op": true,
"msg_subject": "Boolean Implicitly Typed?"
}
] |
[
{
"msg_contents": "I got a patch on the rewriteHandler.c file to the multi-action rules, and it \nworked great on Solaris 8, but doesn't compile on Solaris 7 if I use the \noption --with-CXX.\n\nIt fails with this message on configure:\n\nchecking for class string in <string.h>... no\nconfigure: error: neither <string> nor <string.h> seem to define the C++ \nclass `string\\'\n\nand from the config.log I get this:\n\nconfigure:2347: checking whether we are using GNU C++\nconfigure:2356: c++ -E conftest.C\nconfigure:2375: checking whether c++ accepts -g\nconfigure:2423: checking how to run the C++ preprocessor\nconfigure:2441: c++ -E conftest.C >/dev/null 2>conftest.out\nconfigure:2476: checking for string\nconfigure:2486: c++ -E conftest.C >/dev/null 2>conftest.out\nIn file included from \n/usr/local/lib/gcc-lib/sparc-sun-solaris2.7/2.95.2/../../../../include/g++-3/alloc.h:18,\n from \n/usr/local/lib/gcc-lib/sparc-sun-solaris2.7/2.95.2/../../../../include/g++-3/std/bastring.h:39,\n from \n/usr/local/lib/gcc-lib/sparc-sun-solaris2.7/2.95.2/../../../../include/g++-3/string:6,\n from configure:2482:\n/usr/local/lib/gcc-lib/sparc-sun-solaris2.7/2.95.2/../../../../include/g++-3/stl_config.h:151: \n_G_config.h: No such file or directory\nIn file included from \n/usr/local/lib/gcc-lib/sparc-sun-solaris2.7/2.95.2/../../../../include/g++-3/streambuf.h:36,\n from \n/usr/local/lib/gcc-lib/sparc-sun-solaris2.7/2.95.2/../../../../include/g++-3/iostream.h:31,\n from \n/usr/local/lib/gcc-lib/sparc-sun-solaris2.7/2.95.2/../../../../include/g++-3/stl_alloc.h:45,\n from \n/usr/local/lib/gcc-lib/sparc-sun-solaris2.7/2.95.2/../../../../include/g++-3/alloc.h:21,\n from \n/usr/local/lib/gcc-lib/sparc-sun-solaris2.7/2.95.2/../../../../include/g++-3/std/bastring.h:39,\n from \n/usr/local/lib/gcc-lib/sparc-sun-solaris2.7/2.95.2/../../../../include/g++-3/string:6,\n from configure:2482:\n/usr/local/lib/gcc-lib/sparc-sun-solaris2.7/2.95.2/../../../../include/g++-3/libio.h:30: \n_G_config.h: No 
such file or directory\nconfigure: failed program was:\n#line 2481 \"configure\"\n#include \"confdefs.h\"\n#include <string>\nconfigure:2513: checking for class string in <string.h>\nconfigure:2528: c++ -c -O2 -g conftest.C 1>&5\nconfigure: In function `int main()':\nconfigure:2524: `string' undeclared (first use this function)\nconfigure:2524: (Each undeclared identifier is reported only once\nconfigure:2524: for each function it appears in.)\nconfigure:2524: parse error before `='\nconfigure: failed program was:\n#line 2518 \"configure\"\n#include \"confdefs.h\"\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\nint main() {\nstring foo = \"test\"\n; return 0; }\n\nAny thoughts?\n\n-- \nCualquiera administra un NT.\nEse es el problema, que cualquiera administre.\n-----------------------------------------------------------------\nMartin Marques | mmarques@unl.edu.ar\nProgramador, Administrador | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n",
"msg_date": "Fri, 10 Aug 2001 18:26:12 -0300",
"msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>",
"msg_from_op": true,
"msg_subject": "problem with patch"
},
{
"msg_contents": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar> writes:\n> I got a patch on the rewriteHandler.c file to the multi-action rules, and it \n> worked great on Solaris 8, but doesn't compile on Solaris 7 if I use the \n> option --with-CXX.\n\nEvidently Solaris 7's C++ support is broken. Don't use it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Aug 2001 17:58:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problem with patch "
}
] |
[
{
"msg_contents": "What's wrong with this query that used to work great:\n\nwebunl=> SELECT DISTINCT count(*),id_curso AS LINK,titulo AS \nTITULO_CARRERA,admin_view_facultades.nombre AS FACULTAD,\nwebunl-> duracion/12 AS DURACION FROM admin_view, admin_view_facultades\nwebunl-> WHERE admin_view_facultades.id_carrera=admin_view.id_curso\nwebunl-> GROUP BY id_curso,titulo,admin_view_facultades.nombre,duracion/12\nwebunl-> ORDER BY id_curso ASC LIMIT 20 OFFSET 0 ;\n count | link | titulo_carrera | facultad \n| duracion\n-------+------+-----------------------------+--------------------------------+----------\n 1 | 1 | Ingenieria en informatica | Facultad de Ingeniería \nQuímica | 4\n 1 | 2 | Licenciatura en Matematicas | Facultad de Ingeniería \nQuímica | 4\n(2 rows)\n \nwebunl=>\n\nAs you can see count(*) is 1, even seeing 2 rows selected. What's wrong?\nAll that changed is that I recompiled postgreSQL 7.1.2 with the patch from \nthe rewriteHandler.c file.\n\n-- \nPorqué usar una base de datos relacional cualquiera,\nsi podés usar PostgreSQL?\n-----------------------------------------------------------------\nMartín Marqués | mmarques@unl.edu.ar\nProgramador, Administrador, DBA | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n",
"msg_date": "Fri, 10 Aug 2001 20:58:54 -0300",
"msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>",
"msg_from_op": true,
"msg_subject": "Bug?"
},
{
"msg_contents": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar> writes:\n> What's wrong with this query that used to work great:\n\nImpossible to tell, with that amount of information.\n\n> As you can see count(*) is 1, even seeing 2 rows selected. What's wrong?\n\nSince it's a GROUP BY query, that's not necessarily wrong. The counts\nare the number of input rows in each group.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Aug 2001 20:53:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug? "
},
{
"msg_contents": "Looks like it works as it should. The count(*) function is returning the number of records\nin each group by condition, or '1' for each row. The count(*) does not return the total\nnumber of rows returned by the query.\n\nDwayne\n\nMartín Marqués wrote:\n\n> What's wrong with this query that used to work great:\n>\n> webunl=> SELECT DISTINCT count(*),id_curso AS LINK,titulo AS\n> TITULO_CARRERA,admin_view_facultades.nombre AS FACULTAD,\n> webunl-> duracion/12 AS DURACION FROM admin_view, admin_view_facultades\n> webunl-> WHERE admin_view_facultades.id_carrera=admin_view.id_curso\n> webunl-> GROUP BY id_curso,titulo,admin_view_facultades.nombre,duracion/12\n> webunl-> ORDER BY id_curso ASC LIMIT 20 OFFSET 0 ;\n> count | link | titulo_carrera | facultad\n> | duracion\n> -------+------+-----------------------------+--------------------------------+----------\n> 1 | 1 | Ingenieria en informatica | Facultad de Ingeniería\n> Química | 4\n> 1 | 2 | Licenciatura en Matematicas | Facultad de Ingeniería\n> Química | 4\n> (2 rows)\n>\n> webunl=>\n>\n> As you can see count(*) is 1, even seeing 2 rows selected. What's wrong?\n> All that changed is that I recompiled postgreSQL 7.1.2 with the patch from\n> the rewriteHandler.c file.\n>\n> --\n> Porqué usar una base de datos relacional cualquiera,\n> si podés usar PostgreSQL?\n> -----------------------------------------------------------------\n> Martín Marqués | mmarques@unl.edu.ar\n> Programador, Administrador, DBA | Centro de Telematica\n> Universidad Nacional\n> del Litoral\n> -----------------------------------------------------------------\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Fri, 10 Aug 2001 21:53:28 -0400",
"msg_from": "\"P. Dwayne Miller\" <dmiller@espgroup.net>",
"msg_from_op": false,
"msg_subject": "Re: Bug?"
}
] |
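Tom Lane's and Dwayne Miller's explanations above — that count(*) in a GROUP BY query counts the input rows of each group, not the total rows the query returns — can be demonstrated with an in-memory table. A minimal sketch using Python's sqlite3; the table and column names only mimic the ones in the thread:

```python
import sqlite3

# Tiny in-memory table mimicking the joined view from the thread:
# one row per curso, so every GROUP BY group contains exactly one row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cursos (id_curso INTEGER, titulo TEXT)")
conn.executemany("INSERT INTO cursos VALUES (?, ?)",
                 [(1, "Ingenieria en informatica"),
                  (2, "Licenciatura en Matematicas")])

# count(*) is evaluated per group, so each group of one row yields 1,
# even though the query as a whole returns two rows.
rows = conn.execute(
    "SELECT count(*), id_curso FROM cursos "
    "GROUP BY id_curso ORDER BY id_curso"
).fetchall()
print(rows)  # -> [(1, 1), (1, 2)]
```

To count the total number of result rows instead, the count(*) would have to be run without the GROUP BY (or over the grouped result as a subquery).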
[
{
"msg_contents": "I've been experimenting with using a different parser (one which is\nmore Oracle compatible). While the parser is not yet ready for prime\ntime, I thought I would send out the code used to select the parser to\ngauge its level of acceptability. If this patch isn't going to fly,\nthen my approach to an alternate parser isn't going to fly either.\n\nAfter this patch is applied, you can switch to a new parser by doing\nsomething like this:\n\ncreate function my_parser(opaque, opaque, opaque) returns opaque\n as '..../parser.so' language 'c';\nset parser = my_parser;\n\nAfter you do this, all subsequent input will be interpreted using the\nspecified parser. Note that you may want to leave yourself an escape\nhatch of some sort to set the parser back to Postgres standard.\n\nIf this patch is accepted, then some further work needs to be done to\nset the parser for SPI calls, so that it is possible for the user to\nchange the parser while still using ordinary PL/pgSQL.\n\nI would appreciate any feedback.\n\nIan\n\nIndex: src/backend/tcop/postgres.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/tcop/postgres.c,v\nretrieving revision 1.230\ndiff -u -r1.230 postgres.c\n--- src/backend/tcop/postgres.c\t2001/08/04 00:14:43\t1.230\n+++ src/backend/tcop/postgres.c\t2001/08/11 00:53:08\n@@ -58,9 +58,11 @@\n #include \"tcop/utility.h\"\n #include \"storage/proc.h\"\n #include \"utils/exc.h\"\n+#include \"utils/fcache.h\"\n #include \"utils/guc.h\"\n #include \"utils/memutils.h\"\n #include \"utils/ps_status.h\"\n+#include \"utils/syscache.h\"\n #ifdef MULTIBYTE\n #include \"mb/pg_wchar.h\"\n #endif\n@@ -118,6 +120,10 @@\n */\n int\t\t\tXfuncMode = 0;\n \n+char *parser_function_name = 0;\n+static bool update_parser_function_fcache = true;\n+static FunctionCachePtr parser_function_fcache = NULL;\n+\n /* ----------------------------------------------------------------\n *\t\tdecls for 
routines only used in this file\n * ----------------------------------------------------------------\n@@ -389,8 +395,23 @@\n \n \tif (Show_parser_stats)\n \t\tResetUsage();\n+\n+\tif (update_parser_function_fcache)\n+\t\tassign_parser(parser_function_name);\n+\n+\tif (parser_function_fcache == NULL)\n+\t\traw_parsetree_list = parser(query_string, typev, nargs);\n+\telse\n+\t{\n+\t\tDatum\tresult;\n+\n+\t\tparser_function_fcache->fcinfo.arg[0] = PointerGetDatum(query_string);\n+\t\tparser_function_fcache->fcinfo.arg[1] = PointerGetDatum(typev);\n+\t\tparser_function_fcache->fcinfo.arg[2] = Int32GetDatum(nargs);\n \n-\traw_parsetree_list = parser(query_string, typev, nargs);\n+\t\tresult = FunctionCallInvoke(&parser_function_fcache->fcinfo);\n+\t\traw_parsetree_list = (List *) DatumGetPointer(result);\n+\t}\n \n \tif (Show_parser_stats)\n \t{\n@@ -399,6 +420,82 @@\n \t}\n \n \treturn raw_parsetree_list;\n+}\n+\n+/*\n+ * Check that we can find a parser function. This is called when the\n+ * user assigns a value to the `parser' variable.\n+ */\n+bool\n+check_parser(const char *proposed)\n+{\n+\tHeapTuple\ttup;\n+\tOid\t\t\targtypes[FUNC_MAX_ARGS];\n+\n+\tif (proposed[0] == '\\0' || strcmp(proposed, \"postgres\") == 0)\n+\t\treturn true;\n+\n+\t/* We can't check this unless we have started running. */\n+\tif (! IsNormalProcessingMode())\n+\t\treturn true;\n+\n+\tmemset(argtypes, 0, sizeof argtypes);\n+\ttup = SearchSysCache(PROCNAME,\n+\t\t\t\t\t\t PointerGetDatum(proposed),\n+\t\t\t\t\t\t Int32GetDatum(3),\n+\t\t\t\t\t\t PointerGetDatum(argtypes),\n+\t\t\t\t\t\t 0);\n+\tif (! 
HeapTupleIsValid(tup))\n+\t\treturn false;\n+\tReleaseSysCache(tup);\n+\treturn true;\n+}\n+\n+/*\n+ * Assign a new parser function.\n+ */\n+void\n+assign_parser(const char *value)\n+{\n+\tFunctionCachePtr\tfcache;\n+\tHeapTuple\t\t\ttup;\n+\tOid\t\t\t\t\targtypes[FUNC_MAX_ARGS];\n+\tOid\t\t\t\t\toid;\n+\n+\t/* We can't update parser_function_fcache until we have started\n+\t * running.\n+\t */\n+\tif (! IsNormalProcessingMode())\n+\t{\n+\t\tupdate_parser_function_fcache = true;\n+\t\treturn;\n+\t}\n+\n+\tif (value[0] == '\\0' || strcmp(value, \"postgres\") == 0)\n+\t\tfcache = NULL;\n+\telse\n+\t{\n+\t\tmemset(argtypes, 0, sizeof argtypes);\n+\t\ttup = SearchSysCache(PROCNAME,\n+\t\t\t\t\t\t\t PointerGetDatum(value),\n+\t\t\t\t\t\t\t Int32GetDatum(3),\n+\t\t\t\t\t\t\t PointerGetDatum(argtypes),\n+\t\t\t\t\t\t\t 0);\n+\t\tif (! HeapTupleIsValid(tup))\n+\t\t\telog(ERROR, \"parser function %s does not exist\", value);\n+\n+\t\toid = tup->t_data->t_oid;\n+\n+\t\tReleaseSysCache(tup);\n+\n+\t\tfcache = init_fcache(oid, 3, TopMemoryContext);\n+\t}\n+\n+\tif (parser_function_fcache != NULL)\n+\t\tpfree(parser_function_fcache);\n+\tparser_function_fcache = fcache;\n+\n+\tupdate_parser_function_fcache = false;\n }\n \n /*\nIndex: src/backend/utils/misc/guc.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/misc/guc.c,v\nretrieving revision 1.45\ndiff -u -r1.45 guc.c\n--- src/backend/utils/misc/guc.c\t2001/07/05 15:19:40\t1.45\n+++ src/backend/utils/misc/guc.c\t2001/08/11 00:53:11\n@@ -376,6 +376,9 @@\n \t{\"krb_server_keyfile\", PGC_POSTMASTER, &pg_krb_server_keyfile,\n \tPG_KRB_SRVTAB, NULL, NULL},\n \n+\t{\"parser\", PGC_USERSET, &parser_function_name, \"\",\n+\t check_parser, assign_parser},\n+\n #ifdef ENABLE_SYSLOG\n \t{\"syslog_facility\", PGC_POSTMASTER, &Syslog_facility,\n \t\"LOCAL0\", check_facility, NULL},\nIndex: 
src/include/tcop/tcopprot.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/tcop/tcopprot.h,v\nretrieving revision 1.41\ndiff -u -r1.41 tcopprot.h\n--- src/include/tcop/tcopprot.h\t2001/06/08 21:16:48\t1.41\n+++ src/include/tcop/tcopprot.h\t2001/08/11 00:53:12\n@@ -31,6 +31,8 @@\n extern bool HostnameLookup;\n extern bool ShowPortNumber;\n \n+extern char *parser_function_name;\n+\n #ifndef BOOTSTRAP_INCLUDE\n \n extern List *pg_parse_and_rewrite(char *query_string,\n@@ -39,6 +41,8 @@\n extern void pg_exec_query_string(char *query_string,\n \t\t\t\t\t CommandDest dest,\n \t\t\t\t\t MemoryContext parse_context);\n+extern bool check_parser(const char *proposed);\n+extern void assign_parser(const char *value);\n \n #endif\t /* BOOTSTRAP_INCLUDE */\n \n",
"msg_date": "10 Aug 2001 18:05:30 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": true,
"msg_subject": "Select parser at runtime"
},
{
"msg_contents": "Ian Lance Taylor <ian@airs.com> writes:\n> I've been experimenting with using a different parser (one which is\n> more Oracle compatible).\n\nHmm. What we have here is a mechanism to swap out the entire\nbackend/parser/ subdirectory --- and nothing else. Somehow this seems\nlike the wrong granularity. parser/ is an awful lot of code to replace\nto make a few random tweaks that don't affect query semantics. Since\nyou aren't changing the querytree representation nor any of the rewrite/\nplan/execute pipeline, it's hard to see how you can do more with this\nthan very marginal syntax hacks. But if that's all you want to do,\nseems like replacing pieces of the parser/semantic analyzer is the right\nmechanism, not the whole durn thing.\n\n> Note that you may want to leave yourself an escape\n> hatch of some sort to set the parser back to Postgres standard.\n> If this patch is accepted, then some further work needs to be done to\n> set the parser for SPI calls, so that it is possible for the user to\n> change the parser while still using ordinary PL/pgSQL.\n\nI think both of these issues would need to be addressed before, not\nafter, considering the patch for acceptance. In particular, how do we\ncater for both plpgsql and a true \"PL/SQL\" language that uses the Oracle\nparser?\n\n> +\t{\n> +\t\tDatum\tresult;\n> +\n> +\t\tparser_function_fcache->fcinfo.arg[0] = PointerGetDatum(query_string);\n> +\t\tparser_function_fcache->fcinfo.arg[1] = PointerGetDatum(typev);\n> +\t\tparser_function_fcache->fcinfo.arg[2] = Int32GetDatum(nargs);\n \n> -\traw_parsetree_list = parser(query_string, typev, nargs);\n> +\t\tresult = FunctionCallInvoke(&parser_function_fcache->fcinfo);\n> +\t\traw_parsetree_list = (List *) DatumGetPointer(result);\n> +\t}\n\nUse FunctionCall3() to hide the cruft here.\n\n> +\t\tfcache = init_fcache(oid, 3, TopMemoryContext);\n\nThis is a tad odd. Why don't you just do fmgr_info and store an FmgrInfo\nstructure? 
You have no use for the rest of an executor fcache\nstructure.\n\nThe update_parser_function_fcache business bothers me, too. I see the\nproblem: it doesn't work to do this lookup when postgresql.conf is read\nin the postmaster. However, I really don't like the notion of disabling\ncheck_parser and allowing a possibly-bogus value to be assigned on\nfaith. (If the function name *is* bogus, your code can never recover;\nat the very least you ought to clear update_parser_function_fcache\nbefore failing.) Given the extent to which the parser is necessarily\ntied to the rest of the system, I'm not sure there's any value in\nallowing it to be implemented as a dynamic-link function. I'd be more\nthan half inclined to go with a lower-tech solution wherein you expect\nthe alternate parser to be permanently linked in and known to the\ncheck_parser and assign_parser subroutines. Then the state variable is\njust a C function pointer, and assign_parser looks something like\n\n#ifdef ORACLE_PARSER_SUPPORTED\n\tif (strcasecmp(value, \"Oracle\") == 0)\n\t\tparser_fn = oracle_parser;\n\telse\n#endif\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Aug 2001 11:50:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Select parser at runtime "
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Ian Lance Taylor <ian@airs.com> writes:\n> > I've been experimenting with using a different parser (one which is\n> > more Oracle compatible).\n> \n> Hmm. What we have here is a mechanism to swap out the entire\n> backend/parser/ subdirectory --- and nothing else. Somehow this seems\n> like the wrong granularity. parser/ is an awful lot of code to replace\n> to make a few random tweaks that don't affect query semantics. Since\n> you aren't changing the querytree representation nor any of the rewrite/\n> plan/execute pipeline, it's hard to see how you can do more with this\n> than very marginal syntax hacks. But if that's all you want to do,\n> seems like replacing pieces of the parser/semantic analyzer is the right\n> mechanism, not the whole durn thing.\n\nThis patch doesn't actually replace the entire backend/parser\nsubdirectory. It mainly only replaces scan.l and gram.y. This is\nbecause the code in postgres.c still passes the result of the replaced\nparser to pg_analyze_and_rewrite().\n\nYou are absolutely correct that at this point this can only do\nmarginal syntax hacks. But since my goal is to support Oracle syntax,\nmarginal syntax hacks are just what is needed. For example, it's hard\nto tweak the existing parser to support Oracle, because the set of\nreserved words is different, and because Oracle uses different names\nfor datatypes. Also, CREATE FUNCTION in Oracle doesn't use a literal\nstring, it actually switches to a different parser while parsing the\nfunction. 
It's not impossible to hack these into the Postgres parser;\njust hard, and a complex maintenance problem.\n\nIt's true that increased Oracle compatibility will require code\nchanges in other parts of the backend.\n\n> > Note that you may want to leave yourself an escape\n> > hatch of some sort to set the parser back to Postgres standard.\n> > If this patch is accepted, then some further work needs to be done to\n> > set the parser for SPI calls, so that it is possible for the user to\n> > change the parser while still using ordinary PL/pgSQL.\n> \n> I think both of these issues would need to be addressed before, not\n> after, considering the patch for acceptance. In particular, how do we\n> cater for both plpgsql and a true \"PL/SQL\" language that uses the Oracle\n> parser?\n\nI agree that these issues need to be addressed before the patch is\naccepted. I want to get a sense of whether I am on the right track at\nall.\n\nAs noted above, PL/SQL needs to be more closely tied to the Oracle\nparser than PL/pgSQL is to the Postgres parser.\n\nI have been thinking in terms of a parser stack. Code which uses SPI\ncould push the correct parser on top of the stack, and pop it off when\ndone. Perhaps SPI would always use the postgres parser unless\nexplicitly directed otherwise. The PL/SQL language interface would\ndirect otherwise.\n\nThanks for your other comments. I went with a dynamic link approach\nbecause it seemed minimally intrusive. Using a straight function\npointer is certainly easier.\n\nIan\n",
"msg_date": "11 Aug 2001 10:01:58 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": true,
"msg_subject": "Re: Select parser at runtime"
},
{
"msg_contents": "Ian Lance Taylor <ian@airs.com> writes:\n> This patch doesn't actually replace the entire backend/parser\n> subdirectory. It mainly only replaces scan.l and gram.y. This is\n> because the code in postgres.c still passes the result of the replaced\n> parser to pg_analyze_and_rewrite().\n\nOh, of course, how silly of me. I was thinking that that call did the\nanalyze step too, but you're correct that it does not. Okay, replacing\nlexer+syntaxer is a more reasonable chunk-size. (AFAIK there's no good\nway to replace just part of a yacc/bison grammar on the fly, so you\ncouldn't go to a finer grain anyway, could you?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Aug 2001 16:35:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Select parser at runtime "
},
{
"msg_contents": "Ian Lance Taylor writes:\n\n> I've been experimenting with using a different parser (one which is\n> more Oracle compatible).\n\nWhy aren't you trying to add the missing pieces to the existing parser?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 11 Aug 2001 22:59:49 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Select parser at runtime"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n\n> Ian Lance Taylor writes:\n> \n> > I've been experimenting with using a different parser (one which is\n> > more Oracle compatible).\n> \n> Why aren't you trying to add the missing pieces to the existing parser?\n\nBecause my goal is a parser which can accept Oracle code directly, and\nOracle does not use the same SQL syntax as Postgres. They are, of\ncourse, very similar, but it is not the case that all the differences\nare items missing from the Postgres parser. Some of them are items\nwhich Postgres does in a different, typically more standards-\nconforming, way.\n\nFor example: the datatypes have different names; the set of reserved\nwords is different; Oracle uses a weird syntax for outer joins.\n\nIan\n",
"msg_date": "11 Aug 2001 16:35:49 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Select parser at runtime"
},
{
"msg_contents": "Ian Lance Taylor <ian@airs.com> writes:\n> For example: the datatypes have different names; the set of reserved\n> words is different; Oracle uses a weird syntax for outer joins.\n\nIs it really possible to fix these things strictly in the parser\n(ie, without any semantic analysis)? For example, I don't quite see\nhow you're going to translate Oracle-style outer joins to SQL standard\nstyle without figuring out which fields belong to which relations.\nKeep in mind the cardinal rule for the parsing step: Thou Shalt Not\nDo Any Database Access (because the parser must work even in\ntransaction-aborted state, else how do we recognize ROLLBACK command?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Aug 2001 20:24:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Select parser at runtime "
},
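The outer-join problem Tom raises above is that Oracle's legacy notation marks the optional side inside the WHERE clause (e.g. `WHERE c.id = f.id_carrera(+)`), while the SQL-standard form a translating parser would have to emit attaches the join condition to the FROM clause. The standard form can be shown with a small runnable sketch using Python's sqlite3; the table names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE carrera  (id INTEGER, titulo TEXT);
    CREATE TABLE facultad (id_carrera INTEGER, nombre TEXT);
    INSERT INTO carrera  VALUES (1, 'Informatica'), (2, 'Matematicas');
    INSERT INTO facultad VALUES (1, 'Ingenieria Quimica');
""")

# SQL-standard outer join: rows from carrera with no matching facultad
# still appear, padded with NULL. Oracle's legacy spelling of the same
# query would be:  WHERE carrera.id = facultad.id_carrera(+)
rows = conn.execute("""
    SELECT c.titulo, f.nombre
    FROM carrera c LEFT OUTER JOIN facultad f ON c.id = f.id_carrera
    ORDER BY c.id
""").fetchall()
print(rows)  # -> [('Informatica', 'Ingenieria Quimica'), ('Matematicas', None)]
```

The translation is hard to do purely syntactically because, as Tom notes, the `(+)` markers must be gathered per relation out of an arbitrary WHERE expression before the FROM clause can be rewritten.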
{
"msg_contents": "Ian Lance Taylor writes:\n\n> Because my goal is a parser which can accept Oracle code directly, and\n> Oracle does not use the same SQL syntax as Postgres. They are, of\n> course, very similar, but it is not the case that all the differences\n> are items missing from the Postgres parser. Some of them are items\n> which Postgres does in a different, typically more standards-\n> conforming, way.\n\nI'm not sure whether I like the notion of having to maintain multiple\nparsers in the future. We have always been quite fair in accepting\nextensions and aliases for compatibility, so I don't see a problem there.\nThen again, we've implemented an SQL server, not an Oracle server. If you\nwant to convert your application there's this ora2pg thing.\n\n> For example: the datatypes have different names; the set of reserved\n> words is different;\n\nUnless you have implemented a different parsing algorithm or want to rip\nout features you're going to have a hard time changing the set of reserved\nwords.\n\n> Oracle uses a weird syntax for outer joins.\n\nWe had already rejected this idea. The earlier this disappears from the\nface of the earth the better.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 12 Aug 2001 02:26:51 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Select parser at runtime"
},
{
"msg_contents": "> > Oracle uses a weird syntax for outer joins.\n> \n> We had already rejected this idea. The earlier this disappears from the\n> face of the earth the better.\n\nSure, but until that happens, if we can easily give people portability,\nwhy not? My idea was to have a gram.y that rewrote _some_ of the\ncommands, and output another query string that could then be passed into\nthe existing gram.y. Kind of crazy, but it may limit the duplication\nof code in the two gram.y's.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 11 Aug 2001 21:58:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Select parser at runtime"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> Oracle uses a weird syntax for outer joins.\n\n> We had already rejected this idea. The earlier this disappears from the\n> face of the earth the better.\n\nNow now, what about our goal of Postgres world domination? Gonna be\ntough to get there unless we can assimilate Oracle users ;-)\n\nI do want to keep that brain-damaged syntax at arm's length, which\nsuggests a separate parser rather than merging it into our main\nparser. I'm not convinced a translation can be done at the grammar\nlevel with no semantic analysis --- but if Ian thinks he can do it,\nlet him try...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Aug 2001 22:47:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Select parser at runtime "
},
{
"msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> >> Oracle uses a weird syntax for outer joins.\n> \n> > We had already rejected this idea. The earlier this disappears from the\n> > face of the earth the better.\n> \n> Now now, what about our goal of Postgres world domination? Gonna be\n> tough to get there unless we can assimilate Oracle users ;-)\n\n\"Fortress PostgreSQL\", Lamar Owen said. We do have fortress feel,\ndon't we?\n\n\n> I do want to keep that brain-damaged syntax at arm's length, which\n> suggests a separate parser rather than merging it into our main\n> parser. I'm not convinced a translation can be done at the grammar\n> level with no semantic analysis --- but if Ian thinks he can do it,\n> let him try...\n\nMy guess is that he is going to need some changes in the /parser but\nthose will be minor and triggered by some Oracle flag.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Aug 2001 00:25:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Select parser at runtime"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Ian Lance Taylor <ian@airs.com> writes:\n> > For example: the datatypes have different names; the set of reserved\n> > words is different; Oracle uses a weird syntax for outer joins.\n> \n> Is it really possible to fix these things strictly in the parser\n> (ie, without any semantic analysis)? For example, I don't quite see\n> how you're going to translate Oracle-style outer joins to SQL standard\n> style without figuring out which fields belong to which relations.\n> Keep in mind the cardinal rule for the parsing step: Thou Shalt Not\n> Do Any Database Access (because the parser must work even in\n> transaction-aborted state, else how do we recognize ROLLBACK command?)\n\nI admit that I haven't sorted out the outer join thing yet. The\nothers are easy enough.\n\nIan\n",
"msg_date": "11 Aug 2001 23:27:28 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: [PATCHES] Select parser at runtime"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n\n> I'm not sure whether I like the notion of having to maintain multiple\n> parsers in the future. We have always been quite fair in accepting\n> extensions and aliases for compatibility, so I don't see a problem there.\n> Then again, we're implemented an SQL server, not an Oracle server. If you\n> want to convert your application there's this ora2pg thing.\n\nAn approach such as ora2pg solves a specific problem. It doesn't\nreally solve the general problem of people familiar with Oracle who\nwant to use Postgres. It doesn't at all solve my problem, which is to\nhave an application which can easily speak to either Oracle or\nPostgres.\n\n> > For example: the datatypes have different names; the set of reserved\n> > words is different;\n> \n> Unless you have implemented a different parsing algorithm or want to rip\n> out features you're going to have a hard time changing the set of reserved\n> words.\n\nThat's why I'm implementing a different parsing algorithm.\n\nIan\n",
"msg_date": "11 Aug 2001 23:31:39 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Select parser at runtime"
},
{
"msg_contents": "Tom Lane writes:\n\n> Now now, what about our goal of Postgres world domination? Gonna be\n> tough to get there unless we can assimilate Oracle users ;-)\n\nIn order to achieve world domination you don't want to offer\ncompatibility, otherwise your users could move back and forth easily.\nWhat you want is static conversion tools so people can move to your\nproduct but not back to others.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 12 Aug 2001 18:23:38 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Select parser at runtime "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n\n> Tom Lane writes:\n> \n> > Now now, what about our goal of Postgres world domination? Gonna be\n> > tough to get there unless we can assimilate Oracle users ;-)\n> \n> In order to achieve world domination you don't want to offer\n> compatibility, otherwise your users could move back and forth easily.\n> What you want is static conversion tools so people can move to your\n> product but not back to others.\n\nI disagree. To achieve world domination you should lower the barriers\nto adoption as much as possible, and then keep people with you due to\nthe superiority of your product. If the barriers to adoption are\nhigh, people won't take the risk, and won't discover the superiority.\n\nIncompatible syntax is a barrier to adoption because people fear the\ntime required to learn the new syntax, and they fear adopting Postgres\nand then discovering after three months of enhancements to their\nPostgres code that Postgres won't do the job and they have to switch\nback.\n\nIan\n",
"msg_date": "12 Aug 2001 12:19:32 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: [PATCHES] Select parser at runtime"
},
{
"msg_contents": "> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> \n> > Ian Lance Taylor <ian@airs.com> writes:\n> > > For example: the datatypes have different names; the set of reserved\n> > > words is different; Oracle uses a weird syntax for outer joins.\n> > \n> > Is it really possible to fix these things strictly in the parser\n> > (ie, without any semantic analysis)? For example, I don't quite see\n> > how you're going to translate Oracle-style outer joins to SQL standard\n> > style without figuring out which fields belong to which relations.\n> > Keep in mind the cardinal rule for the parsing step: Thou Shalt Not\n> > Do Any Database Access (because the parser must work even in\n> > transaction-aborted state, else how do we recognize ROLLBACK command?)\n> \n> I admit that I haven't sorted out the outer join thing yet. The\n> others are easy enough.\n> \n\nAnother idea is to put the Oracle stuff in gram.y, but use #ifdef or\nsomething to mark the Oracle parts, and run gram.y through yacc/bison\nwith the Oracle defines visible, and another time to create a second\nparse state machine without Oracle. I think Jan did that for something\nlike that once.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 12 Aug 2001 17:22:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Select parser at runtime"
},
{
"msg_contents": "Hi guys,\n\nNot sure if Peter was joking, but Ian's approach sounds much more\nuser-friendly.\n\nGetting Oracle users to convert to PostgreSQL then be \"stuck-with-it\"\nbecause they can't afford the migration elsewhere is not the right\napproach.\n\nPostgreSQL is a really good product, and the best way to emphasise it is\n\"here's PostgreSQL, people use it coz it *works better*\".\n\nAnd that's definitely achievable.\n\n:)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nIan Lance Taylor wrote:\n> \n> Peter Eisentraut <peter_e@gmx.net> writes:\n> \n> > Tom Lane writes:\n> >\n> > > Now now, what about our goal of Postgres world domination? Gonna be\n> > > tough to get there unless we can assimilate Oracle users ;-)\n> >\n> > In order to achieve world domination you don't want to offer\n> > compatibility, otherwise your users could move back and forth easily.\n> > What you want is static conversion tools so people can move to your\n> > product but not back to others.\n> \n> I disagree. To achieve world domination you should lower the barriers\n> to adoption as much as possible, and then keep people with you due to\n> the superiority of your product. If the barriers to adoption are\n> high, people won't take the risk, and won't discover the superiority.\n> \n> Incompatible syntax is a barrier to adoption because people fear the\n> time required to learn the new syntax, and they fear adopting Postgres\n> and then discovering after three months of enhancements to their\n> Postgres code that Postgres won't do the job and they have to switch\n> back.\n> \n> Ian\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Mon, 13 Aug 2001 08:17:17 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Select parser at runtime"
},
{
"msg_contents": "On Mon, 13 Aug 2001, Justin Clift wrote:\n\n> Hi guys,\n>\n> Not sure if Peter was joking, but Ian's approach sounds much more\n> user-friendly.\n>\n> Getting Oracle users to convert to PostgreSQL then be \"stuck-with-it\"\n> because they can't afford the migration elsewhere is not the right\n> approach.\n\nIf you think that people are going to flock to PostgreSQL from Oracle\nsimply because it's a drop in replacement, I want some of whatever it\nis you're drinking!\n\nAn Oracle compatibility mode wouldn't be a bad idea, but at what cost\nand at how much effort? What are you going to do with incompatible\nreserved words? Who do you expect to do it? How soon? I've seen\nalot of projects try to make themselves \"user-friendly\" only to suffer\nin the end from what they lost in the effort.\n\nPersonally I'd prefer a PostgreSQL that was as SQL92 and beyond as it\ncould possibly be rather than some of this and some of that.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Sun, 12 Aug 2001 22:21:55 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Select parser at runtime"
},
{
"msg_contents": "On Sun, Aug 12, 2001 at 10:21:55PM -0400, Vince Vielhaber wrote:\n> \n> If you think that people are going to flock to PostgreSQL from Oracle\n> simply because it's a drop in replacement, I want some of whatever it\n> is you're drinking!\n> \n> An Oracle compatibility mode wouldn't be a bad idea, but at what cost\n> and at how much effort? What are you going to do with incompatible\n> reserved words? Who do you expect to do it? How soon? I've seen\n> alot of projects try to make themselves \"user-friendly\" only to suffer\n> in the end from what they lost in the effort.\n> \n> Personally I'd prefer a PostgreSQL that was as SQL92 and beyond as it\n> could possibly be rather than some of this and some of that.\n\nCompatability modes have a good side and a bad side: they make it easy for\nDBAs to move existing installations to your product at little cost, then\nslowly upgrade to using native features. The OpenACS project would have\nloved to have this possibility. Many of our current Oracle compatability\nfeatures came about to ease such ports, right?\n\nHowever, they suffer from the problem of allowing continued operation in\n'compatability' mode: there is no incentive for DB using project to use\nthe native interfaces or capabilites, even if they are better. This\nis how OS/2's Windows compatabilty mode killed it in the long term:\nthe 'defacto standard' problem. Would the OpenACS project even exist,\nif a full Oracle mode had existed at the time?\n\nSo, I think you're right, in the long run it's best to conform to an open\nstandard than try to chase a commercial product.\n\nRoss\n\nP.S. Of course, as an open project, anyone can write anything they\nwant. If we end up with a 'better Oracle than Oracle', so be it.\n",
"msg_date": "Sun, 12 Aug 2001 22:13:46 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Select parser at runtime"
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n\n> An Oracle compatibility mode wouldn't be a bad idea, but at what cost\n> and at how much effort?\n\nThat is why I focused on the relatively minor changes to Postgres\nrequired to hook in an alternate parser. I personally would not\nexpect the mainline Postgres team to worry about Oracle support. But\nif an Oracle parser can be decoupled from the mainline of Postgres\nwork, then most of the cost will be paid by the people who care about\nit. (Not all of the cost, because some communication will be required\nwhen the parse tree nodes are changed.)\n\nAlong these lines, I don't think Bruce's suggestion of modifications\nto the Postgres gram.y is a good idea, because it causes the Oracle\nparser to add an ongoing cost to the Postgres parser.\n\nIan\n",
"msg_date": "12 Aug 2001 21:51:55 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: [PATCHES] Select parser at runtime"
},
{
"msg_contents": "Hi Vince,\n\nThe point I'll make is this :\n\nPeople who presently have installations on Oracle will be more inclined\nto test/trial PostgreSQL if they know the learning curve is much less\nthan say, migrating to DB2 would be (or some other database without\nspecific Oracle-transition compatibilities).\n\nSure, they might move their installations to\nPostgreSQL-with-an-Oracle-like-parser and then never convert them to\npure PostgreSQL. So? Does it matter? Probably not, they're still\nusing PostgreSQL. I'm pretty sure over time newer projects and\ninstallations would become more PostgreSQL oriented as the DBA's gained\nmore experience and understanding of PostgreSQL's strengths. i.e. \n\"Chalk up a win.\"\n\nVince Vielhaber wrote:\n> \n> On Mon, 13 Aug 2001, Justin Clift wrote:\n> \n> > Hi guys,\n> >\n> > Not sure if Peter was joking, but Ian's approach sounds much more\n> > user-friendly.\n> >\n> > Getting Oracle users to convert to PostgreSQL then be \"stuck-with-it\"\n> > because they can't afford the migration elsewhere is not the right\n> > approach.\n> \n> If you think that people are going to flock to PostgreSQL from Oracle\n> simply because it's a drop in replacement, I want some of whatever it\n> is you're drinking!\n\nIf PostgreSQL was truly a drop-in-replacement then cost and good\nreputation (especially over the coming years) would mean a lot of places\nwould use us instead of Oracle. Presently though, we're not a\ndrop-in-replacement.\n\n> An Oracle compatibility mode wouldn't be a bad idea, but at what cost\n> and at how much effort? What are you going to do with incompatible\n> reserved words? Who do you expect to do it? How soon? I've seen\n> alot of projects try to make themselves \"user-friendly\" only to suffer\n> in the end from what they lost in the effort.\n\nThe cost and effort is purely voluntary. :) i.e. 
$0-ish cost, and\nheaps of effort.\n\n> Personally I'd prefer a PostgreSQL that was as SQL92 and beyond as it\n> could possibly be rather than some of this and some of that.\n\nI don't see how having alternate parsers available, maintained and\nupdated by those interested in them, is a bad thing. Certainly don't\nsee how it detracts from the main effort.\n\n???\n\n> Vince.\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Mon, 13 Aug 2001 23:25:42 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Select parser at runtime"
},
{
"msg_contents": "Ian Lance Taylor wrote:\n> Along these lines, I don't think Bruce's suggestion of modifications\n> to the Postgres gram.y is a good idea, because it causes the Oracle\n> parser to add an ongoing cost to the Postgres parser.\n\n Bruce, Tom and I discussed these issues during our time in\n San Diego last month.\n\n If we want to have both parsers available at runtime we need\n to replace the YY (case-insensitive) prefix in the generated\n files per parser and call the right one from tcop. Now for\n some flex/bison combo's at least the prefix switches (to have\n something different than YY) don't work reliable. There will\n be some global YY-objects left, causing linkage problems.\n That's why PL/pgSQL's scanner/parser's C-code is run through\n sed(1).\n\n If Bruce's suggestion of having both parsers in one source\n with #ifdef, #else, #endif is better than separate sources\n depends mainly on how big the differences finally will be.\n Doesn't really bother me. Maybe we could start with a\n combined one and separate later if it turns out that they\n drift apart too much?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 13 Aug 2001 09:55:59 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Select parser at runtime"
},
{
"msg_contents": "Ian Lance Taylor <ian@airs.com> writes:\n> ... most of the cost will be paid by the people who care about\n> it. (Not all of the cost, because some communication will be required\n> when the parse tree nodes are changed.)\n\n> Along these lines, I don't think Bruce's suggestion of modifications\n> to the Postgres gram.y is a good idea, because it causes the Oracle\n> parser to add an ongoing cost to the Postgres parser.\n\nAnd managing grammar changes and parse-tree-node changes is not an\nongoing cost? I beg to differ. We do that a lot, and keeping multiple\ngrammar files in sync is not a pleasant prospect. (Look at ecpg ---\nit's a major pain to keep it in sync with the main parser, even though\nit only shares productions and not output code. Worse, I have zero\nconfidence that it actually *is* in sync.)\n\nIf the grammar changes are small and localized, I think Bruce's #ifdef\napproach might well be the way to go. However, we're speculating in\na vacuum here, not having seen the details of the changes needed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Aug 2001 09:57:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Select parser at runtime "
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> files per parser and call the right one from tcop. Now for\n> some flex/bison combo's at least the prefix switches (to have\n> something different than YY) don't work reliable. There will\n> be some global YY-objects left, causing linkage problems.\n> That's why PL/pgSQL's scanner/parser's C-code is run through\n> sed(1).\n\nThe only reason plpgsql's parser is still run through sed is that\nI haven't gotten around to changing it ;-). The main system has\ndepended on -P for awhile, and we've seen no reports of trouble.\n\n(This is not unrelated to the fact that we now ship pre-yacced and\npre-lexed .c files, no doubt. Only people who pull from CVS ever\nrebuild the files at all, and we tell them they must use up-to-date\nflex and bison. This policy seems to work a lot better than the old\nway of trying to work with whatever broken tools a particular platform\nmight have...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Aug 2001 10:23:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Select parser at runtime "
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n\n> If we want to have both parsers available at runtime we need\n> to replace the YY (case-insensitive) prefix in the generated\n> files per parser and call the right one from tcop. Now for\n> some flex/bison combo's at least the prefix switches (to have\n> something different than YY) don't work reliable. There will\n> be some global YY-objects left, causing linkage problems.\n> That's why PL/pgSQL's scanner/parser's C-code is run through\n> sed(1).\n\nThis is a solved problem. gdb, for example, links together four\ndifferent Yacc-based parsers, without even using bison's -p option.\n\nIan\n",
"msg_date": "13 Aug 2001 10:08:46 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: [PATCHES] Select parser at runtime"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Ian Lance Taylor <ian@airs.com> writes:\n> > ... most of the cost will be paid by the people who care about\n> > it. (Not all of the cost, because some communication will be required\n> > when the parse tree nodes are changed.)\n> \n> > Along these lines, I don't think Bruce's suggestion of modifications\n> > to the Postgres gram.y is a good idea, because it causes the Oracle\n> > parser to add an ongoing cost to the Postgres parser.\n> \n> And managing grammar changes and parse-tree-node changes is not an\n> ongoing cost? I beg to differ. We do that a lot, and keeping multiple\n> grammar files in sync is not a pleasant prospect. (Look at ecpg ---\n> it's a major pain to keep it in sync with the main parser, even though\n> it only shares productions and not output code. Worse, I have zero\n> confidence that it actually *is* in sync.)\n\nParse tree node changes are definitely an ongoing cost, as I mention\nin the first quoted paragraph. However, I would see this as a\ncommunication issue from the people maintaining the core parser to the\npeople maintaining the Oracle parser. Perhaps it will be possible to\nbetter formalize the parse tree node interface.\n\n(This approach is from my gcc experience. The various gcc parsers (C,\nC++, etc.) generate tree nodes. The structure of the tree nodes does\nchange from time to time, forcing all the other parsers to change.\nThis is generally driven by the needs of some parser. Different\ngroups of people maintain each parser.)\n\nI'm not sure what you mean by managing grammar changes, although\nperhaps I am reading too much into that. The Oracle grammar is set by\nOracle, and will not change even if the Postgres grammar changes.\n\n> If the grammar changes are small and localized, I think Bruce's #ifdef\n> approach might well be the way to go. 
However, we're speculating in\n> a vacuum here, not having seen the details of the changes needed.\n\nI've been trying to avoid the details, because that is just going to\nraise another discussion which I honestly believe is orthogonal to\nthis discussion. However, a quick sketch: proper handling of Oracle\nreserved words is very difficult in a Yacc-based parser, because the\nset of reserved words effectively changes on a contextual basis. The\nPostgres grammar uses various workarounds for this, but doesn't handle\neverything entirely correctly from the point of view of the Oracle\nlanguage. Therefore, my prototype Oracle grammar is actually a\nrecursive descent parser, not Yacc-based at all.\n\nIan\n",
"msg_date": "13 Aug 2001 10:33:54 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: [PATCHES] Select parser at runtime"
},
{
"msg_contents": "Ian Lance Taylor writes:\n\n> I'm not sure what you mean by managing grammar changes, although\n> perhaps I am reading too much into that. The Oracle grammar is set by\n> Oracle, and will not change even if the Postgres grammar changes.\n\nThings like VACUUM and ANALYZE, which you will have to keep unless you\nwant to implement an Oracle storage manager as well. ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 13 Aug 2001 19:49:38 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Select parser at runtime"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Things like VACUUM and ANALYZE, which you will have to keep unless you\n> want to implement an Oracle storage manager as well. ;-)\n\nEvidently Ian is just interested in a parser that could be used by\nOracle-compatible applications, which'd not be invoking such operations\nanyway. Maintenance scripts would have to use the regular PG parser.\nThat doesn't seem unreasonable.\n\nBased on his further explanation, it seems that tracking grammar changes\nwouldn't be an issue, but tracking parsetree changes definitely would\nbe. I'm also still concerned about whether this can be done within the\nparse step (no database access).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Aug 2001 13:58:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Select parser at runtime "
}
] |
[
{
"msg_contents": "Is there a reason why it's CREATE LANGUAGE 'string' and not CREATE\nLANGUAGE \"ident\"? I'd like to allow both to make it consistent with the\nother create commands. SQL also uses identifier syntax for language\nnames.\n\nAlso, what is LANCOMPILER for? If it's just a comment we ought to use\npg_description.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 11 Aug 2001 12:52:05 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "CREATE LANGUAGE"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Is there a reason why it's CREATE LANGUAGE 'string' and not CREATE\n> LANGUAGE \"ident\"? I'd like to allow both to make it consistent with the\n> other create commands. SQL also uses identifier syntax for language\n> names.\n\nSeems reasonable to me. Don't overlook the CREATE FUNCTION ... LANGUAGE\nclause, too.\n\n> Also, what is LANCOMPILER for? If it's just a comment we ought to use\n> pg_description.\n\nI think the original idea might have been to someday support\nauto-building of compiled functions. (Though a setup allowing one to\ninvoke make with suitable arguments would probably be far more useful\nthan a bare compiler name.) I'm not in a hurry to rip it out until we\nhave designed such a facility, anyway. There are a lot of vestigial\nfeatures in Postgres that we might someday resurrect/implement. I tend\nto view those things as TODO markers, not stuff to rip out to save a\nbyte or two.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Aug 2001 11:09:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CREATE LANGUAGE "
},
{
"msg_contents": "Tom Lane writes:\n\n> > Also, what is LANCOMPILER for? If it's just a comment we ought to use\n> > pg_description.\n>\n> I think the original idea might have been to someday support\n> auto-building of compiled functions.\n\nThat's what I though, too. My concern is mostly that this clause should\nbe optional, because right now it needs to be filled with in with useless\nstuff.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 11 Aug 2001 17:44:53 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: CREATE LANGUAGE "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> That's what I though, too. My concern is mostly that this clause should\n> be optional,\n\nOh. No objection to that --- just drop an empty string into the\npg_language column if it's omitted.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Aug 2001 11:55:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CREATE LANGUAGE "
},
{
"msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> > That's what I though, too. My concern is mostly that this clause should\n> > be optional,\n> \n> Oh. No objection to that --- just drop an empty string into the\n> pg_language column if it's omitted.\n\nAlso, should we remove documentation about the field. Confusing to\ndocument something that is meaningless.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 11 Aug 2001 12:55:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CREATE LANGUAGE"
},
{
"msg_contents": "Perhaps change the documentation instead so it reflects this is a\ndeprecated feature that one day may be resurrected?\n\nIMHO, totally removing reference to it in the documentation will confuse\npeople who have come across it before and wondered where it went.\n\n+ Justin\n\n\nBruce Momjian wrote:\n> \n> > Peter Eisentraut <peter_e@gmx.net> writes:\n> > > That's what I though, too. My concern is mostly that this clause should\n> > > be optional,\n> >\n> > Oh. No objection to that --- just drop an empty string into the\n> > pg_language column if it's omitted.\n> \n> Also, should we remove documentation about the field. Confusing to\n> document something that is meaningless.\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Sun, 12 Aug 2001 06:48:53 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: CREATE LANGUAGE"
}
] |
[
{
"msg_contents": "I've run into a problem with a fresh install from current CVS.\nIIRC the same problem did not occur with the 7.1.2 release, so\nthis may be a bug introduced in the current CVS tree.\n\nI've run configure _without_ --enable-multibyte. Then when I run\ninitdb with --encoding LATIN1 it fails with: \n\n:initdb:\n:/home/rene/scratch/PostgreSQL/installed/bin/pg_encoding: \n:No such file or directory\n:initdb: pg_encoding failed\n:\n:Perhaps you did not configure PostgreSQL for multibyte support\n:or the program was not successfully installed.\n\nIndeed when I configure with --enable-multibyte and rebuild,\ninitdb --encoding LATIN1 works fine.\n\nBut now the problem: LATIN1 is of course a singlebyte encoding.\nSo why is multibyte support needed to use it?\n\nIf this is the intended behaviour, I think the documentation of\n--enable-multibyte in the INSTALL file (and perhaps in other\nplaces) should be fixed. And if at all possible, it should be\nrenamed too.\n\nRegards,\nRen� Pijlman\n",
"msg_date": "Sun, 12 Aug 2001 12:05:23 +0200",
"msg_from": "Rene Pijlman <rpijlman@wanadoo.nl>",
"msg_from_op": true,
"msg_subject": "Initdb -E LATIN1 fails when no multibyte support compiled in (current\n\tCVS)"
},
{
"msg_contents": "Rene Pijlman writes:\n\n> But now the problem: LATIN1 is of course a singlebyte encoding.\n> So why is multibyte support needed to use it?\n\n--enable-multibyte really gives you two things:\n\n1. ability to handle multi-byte characters sets in string mashing\nfunctions\n\n2. character set conversion between client and server\n\nThese things are technically unrelated but the group of users that need\nthis seems to have coincided. If you want to propose splitting this up it\ncould be discussed. Maybe #2 could even be on by default.\n\n> If this is the intended behaviour, I think the documentation of\n> --enable-multibyte in the INSTALL file (and perhaps in other\n> places) should be fixed. And if at all possible, it should be\n> renamed too.\n\nI think the INSTALL file refers you to the Admin Guide for more\ninformation, where this is explained. Maybe that needs to be improved.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 12 Aug 2001 12:50:57 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Initdb -E LATIN1 fails when no multibyte support\n\tcompiled in (current CVS)"
},
{
"msg_contents": "On Sun, 12 Aug 2001 12:50:57 +0200 (CEST), you wrote:\n>1. ability to handle multi-byte characters sets in string mashing\n>functions\n>\n>2. character set conversion between client and server\n>\n>These things are technically unrelated but the group of users that need\n>this seems to have coincided. If you want to propose splitting this up it\n>could be discussed.\n\nI have no reason to.\n\n>> If this is the intended behaviour, I think the documentation of\n>> --enable-multibyte in the INSTALL file (and perhaps in other\n>> places) should be fixed. And if at all possible, it should be\n>> renamed too.\n>\n>I think the INSTALL file refers you to the Admin Guide for more\n>information, where this is explained. Maybe that needs to be improved.\n\nThe INSTALL file says:\n\"--enable-multibyte\nAllows the use of multibyte character encodings. This is\nprimarily for languages like Japanese, Korean, and Chinese. Read\nthe Administrator's Guide for details.\"\n\nThis is misguiding for users with singlebyte (e.g. European)\nencodings. Even if the Admin Guide explains it better, I think\nthe explanation in the INSTALL file should be corrected.\n\nRegards,\nRen� Pijlman\n",
"msg_date": "Sun, 12 Aug 2001 18:11:56 +0200",
"msg_from": "Rene Pijlman <rpijlman@wanadoo.nl>",
"msg_from_op": true,
"msg_subject": "Re: Initdb -E LATIN1 fails when no multibyte support compiled in\n\t(current CVS)"
}
] |
[
{
"msg_contents": "\nhello all\nHow can i create a user in a function called by a trigger \nwith the plpgsql language???\n\nthanks...\n",
"msg_date": "12 Aug 2001 15:27:55 -0000",
"msg_from": "\"gabriel\" <gabriel@workingnetsp.com.br>",
"msg_from_op": true,
"msg_subject": "CREATE USER in a TRIGGER"
}
] |
[
{
"msg_contents": "Forgot to CC.\n\nMagnus\n----- Original Message -----\nFrom: \"Magnus Naeslund(f)\" <mag@fbab.net>\nTo: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nSent: Sunday, August 12, 2001 1:19 PM\nSubject: Re: [HACKERS] Re: Re: WIN32 errno patch\n\n\n> From: \"Tom Lane\" <tgl@sss.pgh.pa.us>\n> > >> It does not even compile:\n> > > Same behaviour here.\n> >\n> > When someone sends me a Windoze implementation of the proposed\n> > SOCK_STRERROR() macro, I'll see about fixing it. Till then\n> > I can't do much.\n>\n> As I said, the FormatMessage function is probably what we want.\n> I did a non threadsafe (not a crash thing, just if SOCK_STRERROR is called\n> the exact same time by two threads one thread will probably get the other\n> thread's message).\n>\n> The code + the testcase is attached.\n>\n> > regards, tom lane\n>\n> Magnus Naeslund\n>\n>\n>\n\n",
"msg_date": "Sun, 12 Aug 2001 19:40:55 +0200",
"msg_from": "\"Magnus Naeslund\\(f\\)\" <mag@fbab.net>",
"msg_from_op": true,
"msg_subject": "Re: Re: Re: WIN32 errno patch "
}
] |
[
{
"msg_contents": "I made the following patch, and it works for MY platform.\n\nPeter,\n Can we do something similar for the distribution to set the\nRUNPATH for Pg.so? \n\n\nIndex: Makefile.PL\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/perl5/Makefile.PL,v\nretrieving revision 1.17\ndiff -c -r1.17 Makefile.PL\n*** Makefile.PL\t2001/03/06 22:07:09\t1.17\n--- Makefile.PL\t2001/08/13 04:12:28\n***************\n*** 64,66 ****\n--- 64,77 ----\n ];\n \n }\n+ sub MY::dynamic_lib {\n+ package MY;\n+ my $inherited= shift->SUPER::dynamic_lib(@_);\n+ if (! -d $ENV{POSTGRES_LIB} ) {\n+ my $cwd = `pwd`;\n+ chop $cwd;\n+ $ENV{POSTGRES_LIB} = \"$cwd/../libpq\";\n+ }\n+ $inherited=~ s@OTHERLDFLAGS =@OTHERLDFLAGS =-R$ENV{POSTGRES_LIB}@;\n+ $inherited;\n+ }\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 12 Aug 2001 23:14:00 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Makefile.PL for Pg.so"
},
{
"msg_contents": "Larry Rosenman writes:\n\n> I made the following patch, and it works for MY platform.\n>\n> Peter,\n> Can we do something similar for the distribution to set the\n> RUNPATH for Pg.so?\n\nThis is an interesting idea. I'd rather rip out MakeMaker completely, but\nthis might be a good start.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 15 Aug 2001 00:02:24 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Makefile.PL for Pg.so"
},
{
"msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [010814 16:58]:\n> Larry Rosenman writes:\n> \n> > I made the following patch, and it works for MY platform.\n> >\n> > Peter,\n> > Can we do something similar for the distribution to set the\n> > RUNPATH for Pg.so?\n> \n> This is an interesting idea. I'd rather rip out MakeMaker completely, but\n> this might be a good start.\nI'm not familiar enough with the config / autoconf stuff to do a\nportable patch. Can you help in this area?\n\nLER\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Tue, 14 Aug 2001 17:38:36 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Makefile.PL for Pg.so"
},
{
"msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [010825 18:14]:\n> Larry Rosenman writes:\n> \n> > Can we do something similar for the distribution to set the\n> > RUNPATH for Pg.so?\n> \n> AFAICT, Pg.so does get the runpath set correctly. Are you saying it\n> doesn't work on your system or do you want to get rid of the\n> recompilation during installation?\nIt doesn't work on this systsm . \n\n(you still have a login here). \n\nLER\n\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 25 Aug 2001 18:15:33 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Makefile.PL for Pg.so"
},
{
"msg_contents": "Larry Rosenman writes:\n\n> Can we do something similar for the distribution to set the\n> RUNPATH for Pg.so?\n\nAFAICT, Pg.so does get the runpath set correctly. Are you saying it\ndoesn't work on your system or do you want to get rid of the\nrecompilation during installation?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 26 Aug 2001 01:18:17 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Makefile.PL for Pg.so"
},
{
"msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [010825 18:14]:\n> Larry Rosenman writes:\n> \n> > Can we do something similar for the distribution to set the\n> > RUNPATH for Pg.so?\n> \n> AFAICT, Pg.so does get the runpath set correctly. Are you saying it\n> doesn't work on your system or do you want to get rid of the\n> recompilation during installation?\n> \nOh, BTW, LD_RUN_PATH is *NOT* used to set the runpath on SVR5. That's\nwhy I needed to pass -R to the build. \n\nLER\n\n> -- \n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 25 Aug 2001 19:18:55 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Makefile.PL for Pg.so"
},
{
"msg_contents": "On Saturday 25 August 2001 19:18, Peter Eisentraut wrote:\n> AFAICT, Pg.so does get the runpath set correctly. Are you saying it\n> doesn't work on your system or do you want to get rid of the\n> recompilation during installation?\n\nPg.so does not get the proper RPATH in a DESTDIR build environment.\n\nTrying to fix this for the RPMS -- the RPATH contains the buildroot instead \nof where the libs really are. Could cause security problems. Working on it, \nbut it's slow going for me at this busy time.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sun, 26 Aug 2001 01:05:09 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Makefile.PL for Pg.so"
},
{
"msg_contents": "Lamar Owen writes:\n\n> Pg.so does not get the proper RPATH in a DESTDIR build environment.\n>\n> Trying to fix this for the RPMS -- the RPATH contains the buildroot instead\n> of where the libs really are. Could cause security problems. Working on it,\n> but it's slow going for me at this busy time.\n\nAnother fun feature of the DESTDIR build environment is that the\nwritability test of the target directory will most likely fail because it\ndoesn't exist at all.\n\nI've been thinking how I'd like to fix this: We add an option to\nconfigure that says to *not* install the Perl module into the standard\nPerl tree, but instead somewhere under our own $prefix. That way people\nthat don't have root access can use this option and still install the\nwhole tree in one run. But then we'd remove that writability check and\npeople that have root access or failed to use that option will get a hard\nfailure. This would create a much more reliable and predictable build\nenvironment.\n\nComments? And a good name for such an option?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 26 Aug 2001 14:37:25 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Makefile.PL for Pg.so"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Another fun feature of the DESTDIR build environment is that the\n> writability test of the target directory will most likely fail because it\n> doesn't exist at all.\n\n> I've been thinking how I'd like to fix this: We add an option to\n> configure that says to *not* install the Perl module into the standard\n> Perl tree, but instead somewhere under our own $prefix. That way people\n> that don't have root access can use this option and still install the\n> whole tree in one run. But then we'd remove that writability check and\n> people that have root access or failed to use that option will get a hard\n> failure. This would create a much more reliable and predictable build\n> environment.\n\nWhy would we remove the writability check? Perhaps it needs to be\nextended to recognize the case of target-dir-doesn't-exist-but-can-be-\ncreated, but I don't see why a hard failure is better.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 26 Aug 2001 12:11:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Makefile.PL for Pg.so "
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010826 11:11]:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Another fun feature of the DESTDIR build environment is that the\n> > writability test of the target directory will most likely fail because it\n> > doesn't exist at all.\n> \n> > I've been thinking how I'd like to fix this: We add an option to\n> > configure that says to *not* install the Perl module into the standard\n> > Perl tree, but instead somewhere under our own $prefix. That way people\n> > that don't have root access can use this option and still install the\n> > whole tree in one run. But then we'd remove that writability check and\n> > people that have root access or failed to use that option will get a hard\n> > failure. This would create a much more reliable and predictable build\n> > environment.\n> \n> Why would we remove the writability check? Perhaps it needs to be\n> extended to recognize the case of target-dir-doesn't-exist-but-can-be-\n> created, but I don't see why a hard failure is better.\nI tend to agree with Tom here. The original problem I was seeing is\n*NOT* related to DESTDIR (I don't believe). \n\nCC=cc CXX=CC ./configure --prefix=/usr/local/pgsql --enable-syslog \\\n\t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n\t--with-includes=/usr/local/include --with-libs=/usr/local/lib \\\n\t--enable-debug \\\n\t--with-tcl --with-tclconfig=/usr/local/lib \\\n\t--with-tkconfig=/usr/local/lib --enable-locale --with-python\n\nThe above is my configure input. \n> \n> \t\t\tregards, tom lane\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 26 Aug 2001 12:48:11 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: Re: [PATCHES] Makefile.PL for Pg.so"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n\n> On Saturday 25 August 2001 19:18, Peter Eisentraut wrote:\n> > AFAICT, Pg.so does get the runpath set correctly. Are you saying it\n> > doesn't work on your system or do you want to get rid of the\n> > recompilation during installation?\n> \n> Pg.so does not get the proper RPATH in a DESTDIR build environment.\n\nRather: Perl decides it wants to specify a LD_RUN_PATH in its makefile. \nThis will automatically make use of -R. It's fixed in the RPMs\navailable at http://people.redhat.com/teg/pg/\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "26 Aug 2001 14:46:54 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Makefile.PL for Pg.so"
},
{
"msg_contents": "Trond Eivind Glomsr�d writes:\n\n> > Pg.so does not get the proper RPATH in a DESTDIR build environment.\n>\n> Rather: Perl decides it wants to specify a LD_RUN_PATH in its makefile.\n> This will automatically make use of -R. It's fixed in the RPMs\n> available at http://people.redhat.com/teg/pg/\n\nAFAICT, these RPMs still contain the same problem that the runpath will\npoint the the temporary installation directory, not the final location.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 26 Aug 2001 22:05:44 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Makefile.PL for Pg.so"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n\n> Trond Eivind Glomsr�d writes:\n> \n> > > Pg.so does not get the proper RPATH in a DESTDIR build environment.\n> >\n> > Rather: Perl decides it wants to specify a LD_RUN_PATH in its makefile.\n> > This will automatically make use of -R. It's fixed in the RPMs\n> > available at http://people.redhat.com/teg/pg/\n> \n> AFAICT, these RPMs still contain the same problem that the runpath will\n> point the the temporary installation directory, not the final location.\n\nDoh... these were just temporary rpms, I've put the final -2 rpms\nthere now (built later the same day).\n\nBasically, I fixed it by removing references to LD_RUN_PATH after perl\nhad generated it.\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "26 Aug 2001 16:09:45 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Makefile.PL for Pg.so"
},
{
"msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [010826 17:42]:\n> I've checked in a fix for the runpath, DESTDIR, and VPATH problems. The\n> first needs to be tested on a variety of platforms because I had to guess\n> the osname configuration values and the linker/compiler that Perl uses.\nI ass/u/me that you tested it on lerami? \n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 26 Aug 2001 17:43:38 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: Re: [PATCHES] Makefile.PL for Pg.so"
},
{
"msg_contents": "I've checked in a fix for the runpath, DESTDIR, and VPATH problems. The\nfirst needs to be tested on a variety of platforms because I had to guess\nthe osname configuration values and the linker/compiler that Perl uses.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 27 Aug 2001 00:46:47 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Makefile.PL for Pg.so"
},
{
"msg_contents": "\nCan someone comment on this?\n\n\n> I made the following patch, and it works for MY platform.\n> \n> Peter,\n> Can we do something similar for the distribution to set the\n> RUNPATH for Pg.so? \n> \n> \n> Index: Makefile.PL\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/perl5/Makefile.PL,v\n> retrieving revision 1.17\n> diff -c -r1.17 Makefile.PL\n> *** Makefile.PL\t2001/03/06 22:07:09\t1.17\n> --- Makefile.PL\t2001/08/13 04:12:28\n> ***************\n> *** 64,66 ****\n> --- 64,77 ----\n> ];\n> \n> }\n> + sub MY::dynamic_lib {\n> + package MY;\n> + my $inherited= shift->SUPER::dynamic_lib(@_);\n> + if (! -d $ENV{POSTGRES_LIB} ) {\n> + my $cwd = `pwd`;\n> + chop $cwd;\n> + $ENV{POSTGRES_LIB} = \"$cwd/../libpq\";\n> + }\n> + $inherited=~ s@OTHERLDFLAGS =@OTHERLDFLAGS =-R$ENV{POSTGRES_LIB}@;\n> + $inherited;\n> + }\n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 Sep 2001 14:25:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Makefile.PL for Pg.so"
},
{
"msg_contents": "I think peter_e dealt with this....\n\nLER\n\n* Bruce Momjian <pgman@candle.pha.pa.us> [010907 13:25]:\n> \n> Can someone comment on this?\n> \n> \n> > I made the following patch, and it works for MY platform.\n> > \n> > Peter,\n> > Can we do something similar for the distribution to set the\n> > RUNPATH for Pg.so? \n> > \n> > \n> > Index: Makefile.PL\n> > ===================================================================\n> > RCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/perl5/Makefile.PL,v\n> > retrieving revision 1.17\n> > diff -c -r1.17 Makefile.PL\n> > *** Makefile.PL\t2001/03/06 22:07:09\t1.17\n> > --- Makefile.PL\t2001/08/13 04:12:28\n> > ***************\n> > *** 64,66 ****\n> > --- 64,77 ----\n> > ];\n> > \n> > }\n> > + sub MY::dynamic_lib {\n> > + package MY;\n> > + my $inherited= shift->SUPER::dynamic_lib(@_);\n> > + if (! -d $ENV{POSTGRES_LIB} ) {\n> > + my $cwd = `pwd`;\n> > + chop $cwd;\n> > + $ENV{POSTGRES_LIB} = \"$cwd/../libpq\";\n> > + }\n> > + $inherited=~ s@OTHERLDFLAGS =@OTHERLDFLAGS =-R$ENV{POSTGRES_LIB}@;\n> > + $inherited;\n> > + }\n> > \n> > -- \n> > Larry Rosenman http://www.lerctr.org/~ler\n> > Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 7 Sep 2001 13:35:39 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: Makefile.PL for Pg.so"
},
{
"msg_contents": "\nYes, thanks. Cleaning out my mailbox, as usualy before a beta.\n\n\n\n> I think peter_e dealt with this....\n> \n> LER\n> \n> * Bruce Momjian <pgman@candle.pha.pa.us> [010907 13:25]:\n> > \n> > Can someone comment on this?\n> > \n> > \n> > > I made the following patch, and it works for MY platform.\n> > > \n> > > Peter,\n> > > Can we do something similar for the distribution to set the\n> > > RUNPATH for Pg.so? \n> > > \n> > > \n> > > Index: Makefile.PL\n> > > ===================================================================\n> > > RCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/perl5/Makefile.PL,v\n> > > retrieving revision 1.17\n> > > diff -c -r1.17 Makefile.PL\n> > > *** Makefile.PL\t2001/03/06 22:07:09\t1.17\n> > > --- Makefile.PL\t2001/08/13 04:12:28\n> > > ***************\n> > > *** 64,66 ****\n> > > --- 64,77 ----\n> > > ];\n> > > \n> > > }\n> > > + sub MY::dynamic_lib {\n> > > + package MY;\n> > > + my $inherited= shift->SUPER::dynamic_lib(@_);\n> > > + if (! -d $ENV{POSTGRES_LIB} ) {\n> > > + my $cwd = `pwd`;\n> > > + chop $cwd;\n> > > + $ENV{POSTGRES_LIB} = \"$cwd/../libpq\";\n> > > + }\n> > > + $inherited=~ s@OTHERLDFLAGS =@OTHERLDFLAGS =-R$ENV{POSTGRES_LIB}@;\n> > > + $inherited;\n> > > + }\n> > > \n> > > -- \n> > > Larry Rosenman http://www.lerctr.org/~ler\n> > > Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> > > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 2: you can get off all lists at once with the unregister command\n> > > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> > > \n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 Sep 2001 14:48:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Makefile.PL for Pg.so"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Can someone comment on this?\n\nIt's done.\n\n>\n>\n> > I made the following patch, and it works for MY platform.\n> >\n> > Peter,\n> > Can we do something similar for the distribution to set the\n> > RUNPATH for Pg.so?\n> >\n> >\n> > Index: Makefile.PL\n> > ===================================================================\n> > RCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/perl5/Makefile.PL,v\n> > retrieving revision 1.17\n> > diff -c -r1.17 Makefile.PL\n> > *** Makefile.PL\t2001/03/06 22:07:09\t1.17\n> > --- Makefile.PL\t2001/08/13 04:12:28\n> > ***************\n> > *** 64,66 ****\n> > --- 64,77 ----\n> > ];\n> >\n> > }\n> > + sub MY::dynamic_lib {\n> > + package MY;\n> > + my $inherited= shift->SUPER::dynamic_lib(@_);\n> > + if (! -d $ENV{POSTGRES_LIB} ) {\n> > + my $cwd = `pwd`;\n> > + chop $cwd;\n> > + $ENV{POSTGRES_LIB} = \"$cwd/../libpq\";\n> > + }\n> > + $inherited=~ s@OTHERLDFLAGS =@OTHERLDFLAGS =-R$ENV{POSTGRES_LIB}@;\n> > + $inherited;\n> > + }\n> >\n> > --\n> > Larry Rosenman http://www.lerctr.org/~ler\n> > Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n>\n>\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 7 Sep 2001 20:51:48 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Makefile.PL for Pg.so"
}
] |
[
{
"msg_contents": "Included is a example program appears in our docs (libpq.sgml). \nAs you can see, the very last part of the program:\n\n PQfinish(conn);\n\n return 0;\n\nnever execute. Should we remove them?\n--\nTatsuo Ishii\n\n\nmain()\n{\n char *pghost,\n *pgport,\n *pgoptions,\n *pgtty;\n char *dbName;\n int nFields;\n int i,\n j;\n\n PGconn *conn;\n PGresult *res;\n PGnotify *notify;\n\n /*\n * begin, by setting the parameters for a backend connection if the\n * parameters are null, then the system will try to use reasonable\n * defaults by looking up environment variables or, failing that,\n * using hardwired constants\n */\n pghost = NULL; /* host name of the backend server */\n pgport = NULL; /* port of the backend server */\n pgoptions = NULL; /* special options to start up the backend\n * server */\n pgtty = NULL; /* debugging tty for the backend server */\n dbName = getenv(\"USER\"); /* change this to the name of your test\n * database */\n\n /* make a connection to the database */\n conn = PQsetdb(pghost, pgport, pgoptions, pgtty, dbName);\n\n /*\n * check to see that the backend connection was successfully made\n */\n if (PQstatus(conn) == CONNECTION_BAD)\n {\n fprintf(stderr, \"Connection to database '%s' failed.\\n\", dbName);\n fprintf(stderr, \"%s\", PQerrorMessage(conn));\n exit_nicely(conn);\n }\n\n res = PQexec(conn, \"LISTEN TBL2\");\n if (!res || PQresultStatus(res) != PGRES_COMMAND_OK)\n {\n fprintf(stderr, \"LISTEN command failed\\n\");\n PQclear(res);\n exit_nicely(conn);\n }\n\n /*\n * should PQclear PGresult whenever it is no longer needed to avoid\n * memory leaks\n */\n PQclear(res);\n\n while (1)\n {\n\n /*\n * wait a little bit between checks; waiting with select()\n * would be more efficient.\n */\n sleep(1);\n /* collect any asynchronous backend messages */\n PQconsumeInput(conn);\n /* check for asynchronous notify messages */\n while ((notify = PQnotifies(conn)) != NULL)\n {\n fprintf(stderr,\n \"ASYNC NOTIFY of '%s' from backend 
pid '%d' received\\n\",\n notify->relname, notify->be_pid);\n free(notify);\n }\n }\n\n /* close the connection to the database and cleanup */\n PQfinish(conn);\n\n return 0;\n}\n",
"msg_date": "Mon, 13 Aug 2001 13:49:15 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "example program bug?"
},
{
"msg_contents": "On Mon, 13 Aug 2001, Tatsuo Ishii wrote:\n\n> Included is a example program appears in our docs (libpq.sgml). \n> As you can see, the very last part of the program:\n> \n> PQfinish(conn);\n> \n> return 0;\n> \n> never execute. Should we remove them?\n\nMost compilers should be able to detect this and not generate\nwarnings. Compilers will default the return type of main() to int so\nperhaps for the sake of form it should be left in there with a comment:\n\n\t/* we never get here */\n\nBy the same line of thinking, PQfinish(conn) may as well stay.\n\nThere are other parts of the code which need fixing though. The call to\nsleep() will generate a prototyping error on many systems, since it is\ndefined in unistd.h which is not included from the program in the 7.2\nversion of the docs (on the Web site). Similar problem with getenv(),\nwhich is defined in stdlib.h.\n\nGavin\n\n\n",
"msg_date": "Mon, 13 Aug 2001 18:49:50 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: example program bug?"
},
{
"msg_contents": ">> Included is a example program appears in our docs (libpq.sgml). \n\nIf you're going to do any work on that example, how about transforming\nit into something useful, ie, a more realistic example of using NOTIFY.\nIt should be using a select() to wait for input, not a sleep() loop.\n\nAs for the cleanup code, leave it in place, and make the sample program\nbreak out of its loop when a specific NOTIFY name is received.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Aug 2001 09:47:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: example program bug? "
}
] |
[
{
"msg_contents": "\nhello all \n\nI have a perl script that read a text file and \ninsert the data in a table...\n\nbut when this script is runing the usage of CPU by postmaster grows to\nbetween 79 and 90 percent...\n\nthere's something to do about it?\n\nthanks...\n",
"msg_date": "13 Aug 2001 14:27:05 -0000",
"msg_from": "\"gabriel\" <gabriel@workingnetsp.com.br>",
"msg_from_op": true,
"msg_subject": "Perl,Postmaster and CPU question??"
},
{
"msg_contents": "\"gabriel\" <gabriel@workingnetsp.com.br> writes:\n\n> hello all \n> \n> I have a perl script that read a text file and \n> insert the data in a table...\n> \n> but when this script is runing the usage of CPU by postmaster grows to\n> between 79 and 90 percent...\n\nAnd this is a problem because... ?\n\nHonestly, you're doing a lot of work, why wouldn't the CPU be busy?\n\n> there's something to do about it?\n\nAre you using COPY or a series of INSERTs? If the latter, are you\ndoing them all inside a transaction? INSERTs not wrapped in a\ntransaction will load the system a lot more due to transaction\noverhead and disk sync latency.\n\n-Doug\n-- \nFree Dmitry Sklyarov! \nhttp://www.freesklyarov.org/ \n\nWe will return to our regularly scheduled signature shortly.\n",
"msg_date": "13 Aug 2001 14:14:13 -0400",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Perl,Postmaster and CPU question??"
}
] |
[
{
"msg_contents": "Another bug report complaining of include file name conflicts came in\njust now. The only solution I can see is to rename config.h to\nsomething more project-specific. Should we do this, or keep ignoring\nthe problem?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Aug 2001 10:28:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Rename config.h to pg_config.h?"
},
{
"msg_contents": "> Another bug report complaining of include file name conflicts came in\n> just now. The only solution I can see is to rename config.h to\n> something more project-specific. Should we do this, or keep ignoring\n> the problem?\n\nI vote for ignore. Don't tons of projects have a config.h file?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Aug 2001 12:00:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rename config.h to pg_config.h?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I vote for ignore. Don't tons of projects have a config.h file?\n\nThat's exactly why there's a problem. We are presently part of the\nproblem, not part of the solution.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Aug 2001 12:26:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rename config.h to pg_config.h? "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I vote for ignore. Don't tons of projects have a config.h file?\n> \n> That's exactly why there's a problem. We are presently part of the\n> problem, not part of the solution.\n\nBut we only search our source and other includes. Who installs a\nconfig.h into publicly-readable /include directory? Oh, I see we do. \nYes, I guess we should change it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Aug 2001 12:27:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rename config.h to pg_config.h?"
},
{
"msg_contents": "On Monday 13 August 2001 23:26, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I vote for ignore. Don't tons of projects have a config.h file?\n>\n> That's exactly why there's a problem. We are presently part of the\n> problem, not part of the solution.\n\nThe solution usually to have a dir. Like\n\n#include <pgsql/config.h>\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: dyp@perchine.com\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n",
"msg_date": "Tue, 14 Aug 2001 00:28:14 +0700",
"msg_from": "Denis Perchine <dyp@perchine.com>",
"msg_from_op": false,
"msg_subject": "Re: Rename config.h to pg_config.h?"
},
{
"msg_contents": "Tom Lane writes:\n\n> Another bug report complaining of include file name conflicts came in\n> just now. The only solution I can see is to rename config.h to\n> something more project-specific. Should we do this, or keep ignoring\n> the problem?\n\nAnother problem in all of this is that even if you hide the config.h\nsufficiently, you'll end up including it anyway of course, and more likely\nthan not you will have a symbol clash with what the user wants to define\nin his config.h file.\n\nWhat I would envision is this:\n\n1. Make all headers that are installed by default (client side) not use\nconfig.h. This is mostly a problem with libpq++.h, but surely this\nproblem was solved somewhere before. (Worst case, we'll preprocess the\nfile ourselves.)\n\n2. Then we can install the above set of headers directly into $includedir\n(e.g., /usr/include), since they're relatively clearly named. This has\nbeen one of my pet peeves: right now we are forced to install in a\nsubdirectory of /usr[/local]/include because of this conflict, which\nrequires plain-old libpq programs to add an explicit -I compile flag,\nwhich is not nice.\n\n3. The \"internal\" headers are installed in a subdirectory of $includedir\nwhich is clearly named, such as .../include/postgresql-internal, possibly\nwith a version number in there. Users that want to build backend modules\nexplicitly need to ask for those things with an -I compile flag. At the\nsame time, those users would be forced to accept our config.h as gospel,\nwhich is probably a good thing. (If we're still hearing complaints we can\nponder renaming it again.)\n\nThis scheme would also give a clear separation between platform-dependent\nand platform-independent header sets, which would please \"power\ninstallers\" very much.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 13 Aug 2001 19:45:44 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Rename config.h to pg_config.h?"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Another problem in all of this is that even if you hide the config.h\n> sufficiently, you'll end up including it anyway of course, and more likely\n> than not you will have a symbol clash with what the user wants to define\n> in his config.h file.\n\nThis is true in theory, but in practice we've not seen very many\ncomplaints about it; perhaps that's because there's a fair amount of\nstandardization of Autoconf usage. (HAVE_FOO_H probably gets set the\nsame way by every package that might use it, for example.) We have\nseen the \"#include config.h gets the wrong thing\" problem complained\nof repeatedly, though.\n\n> 1. Make all headers that are installed by default (client side) not use\n> config.h. This is mostly a problem with libpq++.h, but surely this\n> problem was solved somewhere before. (Worst case, we'll preprocess the\n> file ourselves.)\n\nI think we would indeed have to preprocess the files ourselves, and it\nseems like a lot of work compared to the size of the problem.\n\n> 2. Then we can install the above set of headers directly into $includedir\n> (e.g., /usr/include), since they're relatively clearly named. This has\n> been one of my pet peeves: right now we are forced to install in a\n> subdirectory of /usr[/local]/include because of this conflict, which\n> requires plain-old libpq programs to add an explicit -I compile flag,\n> which is not nice.\n\nWouldn't renaming config.h be sufficient to accomplish that?\n\n> This scheme would also give a clear separation between platform-dependent\n> and platform-independent header sets, which would please \"power\n> installers\" very much.\n\nBut wouldn't the preprocessed client header files have to be regarded as\nplatform-dependent? I don't see how this restructuring improves matters\nin that regard.\n\nThe bottom line for me is that renaming config.h would accomplish quite\na bit for a very small amount of work. 
I'm willing to get it done,\nwhereas I'm not willing to take on the above-described project any time\nsoon. What are your prospects for time to get the more complete\nsolution in place?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Aug 2001 14:22:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rename config.h to pg_config.h? "
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Another bug report complaining of include file name conflicts came in\n> just now. The only solution I can see is to rename config.h to\n> something more project-specific. Should we do this, or keep ignoring\n> the problem?\n\nI would vote for renaming it. I've run into the problem of getting\nthe wrong config.h file. config.h is a fine name to use for a\nstandalone tool. It's not particularly good for a library, and\nPostgres does have a library component.\n\nFYI, in BFD (the library used for gdb and the GNU binutils) we jump\nthrough hoops to to generate a bfd.h file which is properly configured\nbut does not include a config.h file--see, e.g., BFD_ARCH_SIZE and\nBFD_HOST_64BIT_LONG in /usr/include/bfd.h on Linux.\n\nIan\n",
"msg_date": "13 Aug 2001 11:59:11 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: Rename config.h to pg_config.h?"
},
{
"msg_contents": "Tom Lane writes:\n\n> This is true in theory, but in practice we've not seen very many\n> complaints about it; perhaps that's because there's a fair amount of\n> standardization of Autoconf usage. (HAVE_FOO_H probably gets set the\n> same way by every package that might use it, for example.)\n\nAgreed in general. But consider things like USE_LOCALE.\n\n> > 2. Then we can install the above set of headers directly into $includedir\n> > (e.g., /usr/include), since they're relatively clearly named. This has\n> > been one of my pet peeves: right now we are forced to install in a\n> > subdirectory of /usr[/local]/include because of this conflict, which\n> > requires plain-old libpq programs to add an explicit -I compile flag,\n> > which is not nice.\n>\n> Wouldn't renaming config.h be sufficient to accomplish that?\n\nAt least os.h needs to be considered as well for that. Perhaps we could\nhave config.h not include os.h and instead let c.h do that (should still\nwork for libpq++). Or rename os.h as well.\n\nPutting the server side includes in the main path isn't ever going to\nhappen, I think, given the random set of names that give no indication\nwhich package they belong to. I will see if the idea of putting them in a\nseparate directory than the client side can be made to work smoothly.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 15 Aug 2001 00:01:26 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Rename config.h to pg_config.h? "
}
] |
[
{
"msg_contents": "I sent the list a message a little while ago about what I do with postgres.\nI thought, after all this discussion, that it might be important to send a\nfurther message to the list indicating why I chose not to use Oracle.\n\nI'm going to go out on a limb and say that I have one of the largest\npostgres databases in the world. At our site, we have quite a few oracle\ninstances and something like 27 schemas (and, I'm afraid to report, a few\nAccess as well). Our PG database dwarfs all the other databases combined.\n\nI chose postgres not because of any cost issue at all. The project that I am\nworking on here is accustomed to spending $25,000 in a given week for new\nhardware, we bring on consultants as we need them et cetera. To me, what\nmade postgres attractive was how simple it was to install and configure.\nFurthermore, I have had discussions with our oracle DBA's. When I mentioned\nI thought our number of rows could reach into the tens of millions, and the\neventual storage estimate is something on the order of several terabytes, he\ntold me I would need to hire on 2-3 fulltime DBA's if I wanted to use\nOracle.\n\nMy feeling on whether postgres is a \"drop-in\" replacement for Oracle is\nsimilarly simple. Any sufficiently competent programmer can make\ndatabase-independant code. Up to this point, I have been working hard to\nmake sure that, should higher management decide to use Oracle, I can use\npg_dump and actually just move on over to Oracle. I would have chosen a\nsimilar approach if I had started with Oracle. Perhaps it is my background\nas a Perl programmer (it is the DataBase Independant driver, afterall), or\nperhaps it is my fear of change. But if you guys are concerned that one\ndatabase is not a drop-in replacement for another, you are paying your\nprogrammers too much. 
I think perhaps it is the \"Oracle Mentality\" (or dare\nI say, Microsoft Mentality) to not worry about portability and compatibility\nthat breeds this kind of programming and production. It is very much\nagainst, however, the opensource mentality.\n\nDont beat yourself up, guys, over making postgres a drop-in replacement for\nOracle. The people that would benefit from actually \"dropping in\" postgres\ninto an Oracle install will have already eased the burden on themselves by\nbeing responsible in their database construction and programming. I haven't\neven been able to convince our Oracle guys that Postgres is actually a \"real\ndatabase\" (its free?!! how can it be free?!). They would never, (ever!)\nconsider dropping in postgres. If you're intent on taking \"customers\" from\nOracle, catch them where youll be able to convert them -- before Oracle is\neven installed.\n\nalex\n",
"msg_date": "Mon, 13 Aug 2001 10:54:34 -0400",
"msg_from": "Alex Avriette <a_avriette@acs.org>",
"msg_from_op": true,
"msg_subject": "drop-in-ability (was: RE: Re: [PATCHES] Select parser a\n\tt runtime )"
},
{
"msg_contents": "Alex Avriette <a_avriette@acs.org> writes:\n\n> Dont beat yourself up, guys, over making postgres a drop-in replacement for\n> Oracle. The people that would benefit from actually \"dropping in\" postgres\n> into an Oracle install will have already eased the burden on themselves by\n> being responsible in their database construction and programming. I haven't\n> even been able to convince our Oracle guys that Postgres is actually a \"real\n> database\" (its free?!! how can it be free?!). They would never, (ever!)\n> consider dropping in postgres. If you're intent on taking \"customers\" from\n> Oracle, catch them where youll be able to convert them -- before Oracle is\n> even installed.\n\nIn principle, I agree with you. However, my company (Zembu) has a\nbusiness need for better Oracle compatibility, based on real customer\nneed.\n\nZembu doesn't don't have a need to make Postgres a drop-in replacement\nfor Oracle; that is a nearly impossible task in any case, because\nOracle DBAs don't know how to manage Postgres, and most don't want to\nlearn. Zembu does have a need to permit applications to speak to both\nOracle and Postgres.\n\nJust to be clear, I am not on the Postgres development team. I'm\ntalking about work which is important for Zembu, and which I believe\ncan be quite helpful to Postgres in general. I'm trying to structure\nthe work so that even if accepted it would not take too many cycles\naway from core Postgres development.\n\nIan\n",
"msg_date": "13 Aug 2001 10:26:41 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: drop-in-ability (was: RE: Re: [PATCHES] Select parser a t runtime\n\t)"
}
] |
[
{
"msg_contents": "This is an attempt to flesh out the ideas of my earlier proposal\n(http://fts.postgresql.org/db/mw/msg.html?mid=67786) into a complete\ndescription of how things should work. It's still largely the same\nproposal, but I have adopted a couple of the ideas from the followup\ndiscussion --- notably Vadim's suggestion that pg_log should be divided\ninto segments.\n\nI still think that expanding transaction IDs (XIDs) to 8 bytes is no help.\nAside from portability and performance issues, allowing pg_log to grow\nwithout bound just isn't gonna do. So, the name of the game is to recycle\nXIDs in an appropriate fashion. The intent of this design is to allow XID\nrecycling in a true 24x7 environment (ie, postmaster uptime must be able\nto exceed 4G transactions --- no \"restart\" events are required).\n\nThis plan still depends on periodic VACUUMs, which was a point some people\ndidn't like the last time around. However, given that we now have a\nlightweight VACUUM that's meant to be run frequently, I don't think this\nis a significant objection anymore.\n\nHere's the plan:\n\n1. XIDs will be divided into two kinds, \"permanent\" and \"normal\". There\nwill be just two permanent XIDs: BootstrapXID (= 1) and FrozenXID (= 2).\n(Actually we could get along with only one permanent XID, but it seems\nuseful for debugging to distinguish the bootstrap XID from transactions\nfrozen later.) All XIDs >= 3 are \"normal\". The XID generator starts at\n3, and wraps around to 3 when it overflows its 32-bit limit.\n\n2. Permanent XIDs are always considered committed and are older than all\nnormal XIDs. Two normal XIDs are compared using modulo-2^31 arithmetic,\nie, x < y if ((int32) (y - x)) > 0. This will work as long as no normal\nXID survives in the database for more than 2^31 (2 billion) transactions;\nif it did, it would suddenly appear to be \"in the future\" and thus be\nconsidered uncommitted. 
To allow a tuple to live for more than 2 billion\ntransactions, we must replace its xmin with a permanent XID sometime\nbefore its initial normal XID expires. FrozenXID is used for this\npurpose.\n\n3. VACUUM will have the responsibility of replacing old normal XIDs with\nFrozenXID. It will do this whenever a committed-good tuple has xmin less\nthan a cutoff XID. (There is no need to replace xmax, since if xmax is\ncommitted good then the tuple itself will be removed.) The cutoff XID\ncould be anything less than XmaxRecent (the oldest XID that might be\nconsidered still running by any current transaction). I believe that by\ndefault it ought to be pretty old, say 1 billion transactions in the past.\nThis avoids expending I/O to update tuples that are unlikely to live long;\nfurthermore, keeping real XIDs around for some period of time is useful\nfor debugging.\n\n4. To make this work, VACUUM must be run on every table at least once\nevery billion transactions. To help keep track of this maintenance\nrequirement, we'll add two columns to pg_database. Upon successful\ncompletion of a database-wide (all tables) VACUUM, VACUUM will update the\ncurrent database's row in pg_database with the cutoff XID and XmaxRecent\nXID that it used. Inspection of pg_database will then show which\ndatabases are in need of re-vacuuming. The use of the XmaxRecent entry\nwill be explained below.\n\n5. There should be a VACUUM option (\"VACUUM FREEZE\", unless someone can\ncome up with a better name) that causes the cutoff XID to be exactly\nXmaxRecent, not far in the past. Running VACUUM FREEZE in an otherwise\nidle database guarantees that every surviving tuple is frozen. I foresee\ntwo uses for this:\n\tA. Doing VACUUM FREEZE at completion of initdb ensures that\n\t template1 and template0 will have no unfrozen tuples.\n\t This is particularly critical for template0, since ordinarily\n\t one couldn't connect to it to vacuum it.\n\tB. 
VACUUM FREEZE would be useful for pg_upgrade, since pg_log\n\t is no longer critical data after a FREEZE.\n\n6. The oldest XmaxRecent value shown in pg_database tells us how far back\npg_log is interesting; we know that all tuples with XIDs older than that\nare marked committed in the database, so we won't be probing pg_log to\nverify their commit status anymore. Therefore, we can discard pg_log\nentries older than that. To make this feasible, we should change pg_log\nto a segmented representation, with segments of say 256KB (1 million\ntransaction entries). A segment can be discarded once oldest-XmaxRecent\nadvances past it. At completion of a database-wide VACUUM, in addition\nto updating our own pg_database row we'll scan the other rows to determine\nthe oldest XmaxRecent, and then remove no-longer-needed pg_log segments.\n\n7. A small difficulty with the above is that if template0 is never\nvacuumed after initdb, its XmaxRecent entry would always be the oldest and\nwould keep us from discarding any of pg_log. A brute force answer is to\nignore template0 while calculating oldest-XmaxRecent, but perhaps someone\ncan see a cleaner way.\n\n8. Currently, pg_log is accessed through the buffer manager as if it were\nan ordinary relation. It seems difficult to continue this approach if we\nare going to allow removal of segments before the current segment (md.c\nwill not be happy with that). Instead, I plan to build a special access\nmechanism for pg_log that will buffer a few pages of pg_log in shared\nmemory. I think much of this can be lifted verbatim from the WAL access\ncode.\n\n9. WAL redo for pg_log updates will be handled like this: (A) Whenever a\ntransaction is assigned the first XID in a new pg_log page's worth of\nXIDs, we will allocate and zero that page of pg_log, and enter a record\ninto the WAL that reports having done so. (We must do this while holding\nthe XidGenLock lock, which is annoying but it only happens once every 32K\ntransactions. 
Note that no actual I/O need happen, we are just zeroing a\nbuffer in shared memory and emitting a WAL record.) Now, before any\ntransaction commit can modify that page of pg_log, we are guaranteed that\nthe zero-the-page WAL entry will be flushed to disk. On crash and\nrestart, we re-zero the page when we see the zeroing WAL entry, and then\nreapply the transaction commit and abort operations shown later in WAL.\nAFAICS we do not need to maintain page LSN or SUI information for pg_log\npages if we do it this way (Vadim, do you agree?). NOTE: unless I'm\nmissing something, 7.1's WAL code fails to guard against loss of pg_log\npages at all, so this should provide an improvement in reliability.\n\n10. It'd be practical to keep a checksum on pg_log pages with this\nimplementation (the checksum would be updated just before writing out a\npg_log page, and checked on read). Not sure if this is worth doing,\nbut given the critical nature of pg_log it might be a good idea.\n\n\nThings to do later\n------------------\n\nIt's looking more and more like an automatic VACUUM scheduler would be a\ngood idea --- aside from the normal use of VACUUM for space reclamation,\nthe scheduler could be relied on to dispatch VACUUM to databases whose\ncutoff XID was getting too far back. I don't think I have time to work on\nthis for 7.2, but it should be a project for 7.3.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Aug 2001 12:25:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Surviving transaction-ID wraparound, take 2"
},
{
"msg_contents": "> 3. VACUUM will have the responsibility of replacing old normal XIDs with\n> FrozenXID. It will do this whenever a committed-good tuple has xmin less\n> than a cutoff XID. (There is no need to replace xmax, since if xmax is\n> committed good then the tuple itself will be removed.) The cutoff XID\n> could be anything less than XmaxRecent (the oldest XID that might be\n> considered still running by any current transaction). I believe that by\n> default it ought to be pretty old, say 1 billion transactions in the past.\n> This avoids expending I/O to update tuples that are unlikely to live long;\n> furthermore, keeping real XIDs around for some period of time is useful\n> for debugging.\n> \n> 4. To make this work, VACUUM must be run on every table at least once\n> every billion transactions. To help keep track of this maintenance\n> requirement, we'll add two columns to pg_database. Upon successful\n> completion of a database-wide (all tables) VACUUM, VACUUM will update the\n> current database's row in pg_database with the cutoff XID and XmaxRecent\n> XID that it used. Inspection of pg_database will then show which\n> databases are in need of re-vacuuming. The use of the XmaxRecent entry\n> will be explained below.\n\nI like the 1 billion in the past idea, and recording it in pg_database\nso we can quickly know how far back we can go to recycle xids. Nice.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Aug 2001 13:21:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Surviving transaction-ID wraparound, take 2"
},
{
"msg_contents": "On Tuesday 14 August 2001 02:25, you wrote:\n\n> I still think that expanding transaction IDs (XIDs) to 8 bytes is no help.\n> Aside from portability and performance issues, allowing pg_log to grow\n> without bound just isn't gonna do. So, the name of the game is to recycle\n\nBut what about all of us who need to establish a true long term audit trail? \nFor us, still the most elegant solution would be a quasi unlimited supply of \nunique row identifiers. 64 bit would be a huge help (and will be ubiquitous \nin a few years time anyway). \n\nEverything else will have far greater performance impact for us. While it \nmight not suit everybody, I can't see why it should be a problem to offer \nthis as an *option*\n\nThere are other applications where we need database wide unique row \nidentifiers; in our project for example we allow row level encryption on \nnon-indexed attributes across alll tables. How would you implement such a \nthing without having unique row identifiers across your whole database? You \nwould need to reference both to row AND table, and that must certainly be \nmore expensive in terms of performance.\n\nHorst\n",
"msg_date": "Tue, 14 Aug 2001 11:03:30 +1000",
"msg_from": "Horst Herb <hherb@malleenet.net.au>",
"msg_from_op": false,
"msg_subject": "Re: Surviving transaction-ID wraparound, take 2"
},
{
"msg_contents": "Horst Herb <hherb@malleenet.net.au> writes:\n> On Tuesday 14 August 2001 02:25, you wrote:\n>> I still think that expanding transaction IDs (XIDs) to 8 bytes is no help.\n\n> But what about all of us who need to establish a true long term audit trail? \n> For us, still the most elegant solution would be a quasi unlimited supply of \n> unique row identifiers. 64 bit would be a huge help (and will be ubiquitous \n> in a few years time anyway). \n\nUh, that has nothing to do with transaction identifiers ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Aug 2001 21:35:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Surviving transaction-ID wraparound, take 2 "
},
{
"msg_contents": "Tom Lane wrote:\n> Horst Herb <hherb@malleenet.net.au> writes:\n> > On Tuesday 14 August 2001 02:25, you wrote:\n> >> I still think that expanding transaction IDs (XIDs) to 8 bytes is no help.\n>\n> > But what about all of us who need to establish a true long term audit trail?\n> > For us, still the most elegant solution would be a quasi unlimited supply of\n> > unique row identifiers. 64 bit would be a huge help (and will be ubiquitous\n> > in a few years time anyway).\n>\n> Uh, that has nothing to do with transaction identifiers ...\n\n And he who needs that kind of long term row identifiers would\n be better off with 8-byte sequences anyway - IMNSVHO.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Tue, 14 Aug 2001 08:40:21 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Surviving transaction-ID wraparound, take 2"
},
{
"msg_contents": "Jan Wieck <JanWieck@yahoo.com> writes:\n> And he who needs that kind of long term row identifiers would\n> be better off with 8-byte sequences anyway - IMNSVHO.\n\nIndeed. I've been looking at switching sequences to be int8, and I see\njust one little problem, which is to uphold the promise that they'll\nstill work as if int4 on a machine that has no int64 C datatype.\nThe difficulty is that sequence.h has\n\ntypedef struct FormData_pg_sequence\n{\n\tNameData\tsequence_name;\n\tint32\t\tlast_value;\n\tint32\t\tincrement_by;\n\tint32\t\tmax_value;\n\tint32\t\tmin_value;\n\tint32\t\tcache_value;\n\tint32\t\tlog_cnt;\n\tchar\t\tis_cycled;\n\tchar\t\tis_called;\n} FormData_pg_sequence;\n\nIf I just change \"int32\" to \"int64\" here, all is well on machines where\nsizeof(int64) is 8. But if there's no 64-bit C datatype, int64 is\ntypedef'd as \"long int\", so sizeof(int64) is only 4. Result: the struct\ndeclaration won't agree with the heaptuple layout --- since the tuple\nroutines will believe that the datatype of these columns has size 8.\n\nWhat I need is a way to pad the struct declaration so that it leaves\n8 bytes per int64 column, no matter what. I thought of\n\ntypedef struct FormData_pg_sequence\n{\n\tNameData\tsequence_name;\n\tint64\t\tlast_value;\n#ifdef INT64_IS_BUSTED\n\tint32\t\tpad1;\n#endif\n\tint64\t\tincrement_by;\n#ifdef INT64_IS_BUSTED\n\tint32\t\tpad2;\n#endif\n\tint64\t\tmax_value;\n#ifdef INT64_IS_BUSTED\n\tint32\t\tpad3;\n#endif\n\tint64\t\tmin_value;\n#ifdef INT64_IS_BUSTED\n\tint32\t\tpad4;\n#endif\n\tint64\t\tcache_value;\n#ifdef INT64_IS_BUSTED\n\tint32\t\tpad5;\n#endif\n\tint64\t\tlog_cnt;\n#ifdef INT64_IS_BUSTED\n\tint32\t\tpad6;\n#endif\n\tchar\t\tis_cycled;\n\tchar\t\tis_called;\n} FormData_pg_sequence;\n\nThis would work, I think, but my goodness it's an ugly solution.\nHas any hacker got a better one?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Aug 2001 10:09:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "int8 sequences --- small implementation problem"
},
{
"msg_contents": "----- Original Message ----- \nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nSent: Tuesday, August 14, 2001 10:09 AM\n\n\n> typedef struct FormData_pg_sequence\n> {\n> NameData sequence_name;\n> int64 last_value;\n> #ifdef INT64_IS_BUSTED\n> int32 pad1;\n[snip]\n> } FormData_pg_sequence;\n> \n> This would work, I think, but my goodness it's an ugly solution.\n\nIs anything wrong with just having two int32 per value for this case?\n\ntypedef struct FormData_pg_sequence\n{\n int32 last_value;\n int32 pad1;\n...\n} FormData_pg_sequence;\n\nS.\n\n\n",
"msg_date": "Tue, 14 Aug 2001 11:18:05 -0400",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": false,
"msg_subject": "Re: int8 sequences --- small implementation problem"
},
{
"msg_contents": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca> writes:\n>> This would work, I think, but my goodness it's an ugly solution.\n\n> Is anything wrong with just having two int32 per value for this case?\n\nWell, we do want it to be int64 on machines where int64 is properly\ndefined. Or are you suggesting\n\n#ifdef INT64_IS_BUSTED\n\tint32 last_value;\n\tint32 pad1;\n#else\n\tint64 last_value;\n#endif\n\nThat does seem marginally more robust, now that you mention it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Aug 2001 11:28:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: int8 sequences --- small implementation problem "
},
{
"msg_contents": "> typedef struct FormData_pg_sequence\n> {\n> NameData sequence_name;\n> int32 last_value;\n> int32 increment_by;\n> int32 max_value;\n> int32 min_value;\n> int32 cache_value;\n> int32 log_cnt;\n> char is_cycled;\n> char is_called;\n> } FormData_pg_sequence;\n>\n> If I just change \"int32\" to \"int64\" here, all is well on machines where\n> sizeof(int64) is 8. But if there's no 64-bit C datatype, int64 is\n> typedef'd as \"long int\", so sizeof(int64) is only 4. Result: the struct\n> declaration won't agree with the heaptuple layout --- since the tuple\n> routines will believe that the datatype of these columns has size 8.\n>\n> What I need is a way to pad the struct declaration so that it leaves\n> 8 bytes per int64 column, no matter what. I thought of\n>\n\nWhat if you defined int64 as a union made up of one \"long int\" member and\none 8 byte char member, and then always refer to the \"long int\"?\n\n-- Joe\n\n\n",
"msg_date": "Tue, 14 Aug 2001 08:36:43 -0700",
"msg_from": "\"Joe Conway\" <joseph.conway@home.com>",
"msg_from_op": false,
"msg_subject": "Re: int8 sequences --- small implementation problem"
},
{
"msg_contents": "\n----- Original Message ----- \nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nSent: Tuesday, August 14, 2001 11:28 AM\n\n\n> \"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca> writes:\n> >> This would work, I think, but my goodness it's an ugly solution.\n> \n> > Is anything wrong with just having two int32 per value for this case?\n> \n> Well, we do want it to be int64 on machines where int64 is properly\n> defined. Or are you suggesting\n> \n> #ifdef INT64_IS_BUSTED\n> int32 last_value;\n> int32 pad1;\n> #else\n> int64 last_value;\n> #endif\n> \n> That does seem marginally more robust, now that you mention it...\n\nYes, this version is more robust, but you till have to cope with all\nthose #ifdef INT64_IS_BUSTED #else #endif. I guess if you want explicitly\nint64 type in here for those platforms that do support it, then there is no\nother way maybe. What I was thinking (for this particular struct only!) is just jave padded\nint32's for every value, which will always be correct and no marginal problems.\nAnd the accessor functions using the struct just employ int64 whatever it means.\n\nS.\n\n\n\n\n",
"msg_date": "Tue, 14 Aug 2001 12:10:59 -0400",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": false,
"msg_subject": "Re: int8 sequences --- small implementation problem "
},
{
"msg_contents": "Tom Lane wrote:\n[clip]\n> \n> This would work, I think, but my goodness it's an ugly solution.\n> Has any hacker got a better one?\n> \n> regards, tom lane\n\nHow about:\n\n#ifdef INT64_IS_BUSTED\n#define int64aligned(name) int32 name##_; int64 name\n#else\n#define int64aligned(name) int64 name\n#endif\n\ntypedef struct FormData_pg_sequence\n{\n NameData sequence_name;\n int64aligned(last_value);\n int64aligned(increment_by);\n int64aligned(max_value);\n int64aligned(min_value);\n int64aligned(cache_value);\n int64aligned(log_cnt);\n char is_cycled;\n char is_called;\n} FormData_pg_sequence;\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n",
"msg_date": "Tue, 14 Aug 2001 13:12:26 -0400",
"msg_from": "Neil Padgett <npadgett@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: int8 sequences --- small implementation problem"
},
{
"msg_contents": "\"Joe Conway\" <joseph.conway@home.com> writes:\n>> What I need is a way to pad the struct declaration so that it leaves\n>> 8 bytes per int64 column, no matter what. I thought of\n\n> What if you defined int64 as a union made up of one \"long int\" member and\n> one 8 byte char member, and then always refer to the \"long int\"?\n\nWell, that'd remove the notational ugliness from the struct definition,\nat the cost of adding it to the code that uses the struct. I think I'd\nprefer to uglify the struct and keep the code simple. But it's a good\nthought.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Aug 2001 13:47:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: int8 sequences --- small implementation problem "
},
{
"msg_contents": "On Tue, 14 Aug 2001, Tom Lane wrote:\n\n> Jan Wieck <JanWieck@yahoo.com> writes:\n> > And he who needs that kind of long term row identifiers would\n> > be better off with 8-byte sequences anyway - IMNSVHO.\n> \n> What I need is a way to pad the struct declaration so that it leaves\n> 8 bytes per int64 column, no matter what. I thought of\n> \n> This would work, I think, but my goodness it's an ugly solution.\n> Has any hacker got a better one?\n\nThe only thing I could think of is using a struct to hide the\npadding details instead of directly using int64, but then you'd have to\nadd a '.value' or something to the references. I'm not sure that's really\nany cleaner.\n\n",
"msg_date": "Tue, 14 Aug 2001 11:17:33 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: int8 sequences --- small implementation problem"
},
{
"msg_contents": "Stephan Szabo wrote:\n> On Tue, 14 Aug 2001, Tom Lane wrote:\n>\n> > Jan Wieck <JanWieck@yahoo.com> writes:\n> > > And he who needs that kind of long term row identifiers would\n> > > be better off with 8-byte sequences anyway - IMNSVHO.\n> >\n> > What I need is a way to pad the struct declaration so that it leaves\n> > 8 bytes per int64 column, no matter what. I thought of\n> >\n> > This would work, I think, but my goodness it's an ugly solution.\n> > Has any hacker got a better one?\n>\n> The only thing I could think of is using a struct to hide the\n> padding details instead of directly using int64, but then you'd have to\n> add a '.value' or something to the references. I'm not sure that's really\n> any cleaner.\n\n What I'm asking myself all the time is \"which platforms do we\n support that doesn't have 8-byte integers?\". Could someone\n enlighten me please?\n\n And what does int8 do on these platforms?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Tue, 14 Aug 2001 15:53:18 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: int8 sequences --- small implementation problem"
},
{
"msg_contents": "Jan Wieck <JanWieck@yahoo.com> writes:\n> What I'm asking myself all the time is \"which platforms do we\n> support that doesn't have 8-byte integers?\". Could someone\n> enlighten me please?\n\nRelease a version that doesn't work without 8-byte ints, and I'm sure\nwe'll find out soon enough ;-). QNX and MIPS SysVR4 are documented\nnot to have int8 support in our \"supported platforms\" list, but we've\nnot heard from anyone still using 'em for awhile.\n\nBasically, my feeling about it is that it's not ANSI C, and we shouldn't\nyet be *requiring* C99 support to build Postgres.\n\n> And what does int8 do on these platforms?\n\nActs like int4, except for taking up 8 bytes anyway (because pg_type\nsays so, not because sizeof() says so). See c.h.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Aug 2001 15:54:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: int8 sequences --- small implementation problem "
}
] |
[
{
"msg_contents": "There are currently a bunch of items on the Todo relating to SQL\nINSERT statements, one of which is allowing INSERTs of the following\nform:\n\nINSERT INTO tab [(c1, c2, ...)] VALUES (x1, y1, ...), (x2, y2, ...), ...\n\nI had written a quick 'n dirty patch that accomplished this by\nsplitting an INSERT of multiple tuples into multiple individual\nINSERT statements. However, this approach was not the right way to\ngo, and so I would like to propose the following as a plan for\ngetting implementing the feature:\n\nComing out of the parser (modified to handle the new syntax), an\nInsertStmt variable will hold a list of lists of ResTargets. These\nlists of ResTargets will be transformed one by one into lists of\nTargetEntry's. The transformed lists would be kept in the Query node\nthat represents the INSERT statement in a new structure member.\n\nWith the Result node, PostgreSQL currently has support in the executor\nfor retrieving a single tuple that is not stored in any base\nrelation. When doing an INSERT of a single tuple via VALUES, the\ncurrent scheme suffices. However, for inserting multiple tuples via\nINSERT, the use of the Result node doesn't work, AFAICS. While it\nshould be possible to modify the Result node to handle multiple\ntuples, I would rather not use Result as Result is also used for\nconstant qualifications. It seems cleaner to have a seperate node\nstructure that deals with tuples that have no real base relation;\nthere would be no need to deal with extra cruft in Result-related\nfunctions.\n\nI would like to add a new executor node called Values (suggestions for a\nbetter name are welcome -- there is already a Value node) that would\nreplace the use of Result for insert statements. A tuple would just be a\nList of TargetEntry's (like what Result currently does). The Values node\nwould just keep a List of those and when asked for a tuple, would return\nthe next element on the List. It would return NULL when done. 
The planner\nwould need to know when to create one -- that can be done in query_planner()\nas a call to a make_values() function if query->valuesList is not null.\n\nAlso, at some point, it would be nice to put together a new statement\nnode type that represents the SQL <query expression>. This node would\nbe used everywhere that a <query expression> could be used, hiding\nthe complexity of having to deal with either a SELECT statement or a\nVALUES clause. For example, the parser rule for INSERT statements\nwould be simplified as well as the transformInsertStmt function. I\nhaven't thought about this much though. I would like to get multiple\ninsertion working first before looking at simplification.\n\nComments?\n\nLiam\n\n-- \nLiam Stewart :: Red Hat Canada, Ltd. :: liams@redhat.com\n",
"msg_date": "Mon, 13 Aug 2001 16:03:04 -0400",
"msg_from": "Liam Stewart <liams@redhat.com>",
"msg_from_op": true,
"msg_subject": "RFC: Inserting multiple values via INSERT ... VALUES ..."
},
{
"msg_contents": "Liam Stewart <liams@redhat.com> writes:\n> While it\n> should be possible to modify the Result node to handle multiple\n> tuples, I would rather not use Result as Result is also used for\n> constant qualifications.\n\nIf you don't want to extend Result, I'd suggest generating a plan that\nis an Append of Result nodes. There's no need to add a new node type.\n\nPersonally I wouldn't have a problem with allowing Result to handle\na list of targetlists, though. It's already capable of returning\nmultiple rows (when there's a function-returning-set in the targetlist)\nso I don't find multiple targetlists to be that much of an uglification.\n\n> Also, at some point, it would be nice to put together a new statement\n> node type that represents the SQL <query expression>. This node would\n> be used everywhere that a <query expression> could be used, hiding\n> the complexity of having to deal with either a SELECT statement or a\n> VALUES clause. For example, the parser rule for INSERT statements\n> would be simplified as well as the transformInsertStmt function.\n\nThe present handling of INSERT/SELECT is really really ugly. The whole\nquerytree structure desperately needs to be rethought, actually. You\ncan find some theorizing about what to do in the mail list archives,\nbut we're still far from having a detailed plan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Aug 2001 13:39:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RFC: Inserting multiple values via INSERT ... VALUES ... "
}
] |
[
{
"msg_contents": "I am tempted to replace all attempts to build text data on your own (with\nVARDATA, VARHDRSZ, etc.) with proper calls to textin/textout in all places\nthat can't claim to be truly internal (especially all contrib). The\nbackground is that the internal format might change sometime when more\nlocale features are implemented (see recent idea). Is this a good idea?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 14 Aug 2001 00:30:30 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Using textin/textout vs. scribbling around"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I am tempted to replace all attempts to build text data on your own (with\n> VARDATA, VARHDRSZ, etc.) with proper calls to textin/textout in all places\n> that can't claim to be truly internal (especially all contrib).\n\nOther than contrib, what places do you have in mind?\n\n> The background is that the internal format might change sometime when\n> more locale features are implemented (see recent idea). Is this a\n> good idea?\n\nNot sure. The I/O interface --- specifically the dependency on\nnull-terminated strings --- isn't necessarily graven on stone tablets\neither. It could end up that this change adds more future work rather\nthan preventing it.\n\nI'd be inclined to wait until we actually do change the internal\nrepresentation of text, if we do so, before expending this effort.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Aug 2001 19:07:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Using textin/textout vs. scribbling around "
},
{
"msg_contents": " I am tempted to replace all attempts to build text data on your own (with\n VARDATA, VARHDRSZ, etc.) with proper calls to textin/textout in all places\n that can't claim to be truly internal (especially all contrib). The\n background is that the internal format might change sometime when more\n locale features are implemented (see recent idea). Is this a good idea?\n\nI would be in favor of this for the following reasons. One of the\ngreat advantages of PostgreSQL is that adding new types is relatively\nstraightforward. In many cases new types could be coded largely in\nterms of preexisting types. An example I am facing currently involves\nthe problem of constructing basically a TEXT type with rather unusual\nparsing and output semantics. If a reasonably well-encapsulated\nversion of the TEXT type existed--including header files with the\nright granularity and with the right functions provided in an\ninstalled library--the natural means of providing the new type we need\nwould be to simply define a type with different *in/*out functions\nimplemented over an instance of the TEXT internal data\nrepresentation.\n\nPeter's suggestion appears to be a natural step towards the goal of\nbeing able to provide a defined interface that could be used for\nextensions. The concern that the _external_ format might change seems\ncounter to the effort of providing a stable platform for extending\nPostgreSQL. If there is a serious possibility that this might occur,\nand because of that we cannot provide any external interface to the\npredefined types, then the well-known advantages of composing software\nmodules from well-defined and well-tested components will be largely\nlost for anyone wishing to rapidly extend the system.\n\nCheers,\nBrook\n",
"msg_date": "Wed, 15 Aug 2001 09:54:53 -0600 (MDT)",
"msg_from": "Brook Milligan <brook@biology.nmsu.edu>",
"msg_from_op": false,
"msg_subject": "Re: Using textin/textout vs. scribbling around"
},
{
"msg_contents": "> Peter's suggestion appears to be a natural step towards the goal of\n> being able to provide a defined interface that could be used for\n> extensions. The concern that the _external_ format might change seems\n> counter to the effort of providing a stable platform for extending\n> PostgreSQL. If there is a serious possibility that this might occur,\n> and because of that we cannot provide any external interface to the\n> predefined types, then the well-known advantages of composing software\n> modules from well-defined and well-tested components will be largely\n> lost for anyone wishing to rapidly extend the system.\n\nMost of the extension API's are written by commercial companies with\nclosed-source code, so they have to have an API to interface to their\nprograms. Also, the API's are often unsuccessful because they are\neither unnecessarily complex or can't adapt to new features. We ship\nthe code and I think an API could actually hurt us in the long run.\n\nFor example, when we added TOAST, it changed how we had to do a few\nthings. No one could have anticipated it, and with a few changes all\nthe plugins worked just like native code. I don't think an API would\nhave helped, or if it did, it would have necessitated overhead not\npresent in the native types.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Aug 2001 13:11:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Using textin/textout vs. scribbling around"
}
] |
[
{
"msg_contents": "I am thinking about embarking on changing the typedef of OID to unsigned long\nlong.\n\nMy plan is to make it conditional at configure time, i.e.\n\n#ifdef OID_ULONGLONG\ntypedef unsigned long long Oid;\n#define OID_MAX ULLONG_MAX\n#else\ntypedef unsigned int Oid;\n#define OID_MAX UINT_MAX\n#endif\n\nAside from adding %llu to all the %u everywhere an OID is used in a printf, and\nany other warnings, are there any other things I should be specially concerned\nabout?\n\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Mon, 13 Aug 2001 22:02:42 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "OID unsigned long long"
},
{
"msg_contents": "* mlw <markw@mohawksoft.com> [010813 21:06]:\n> I am thinking about embarking on changing the typedef of OID to unsigned long\n> long.\n> \n> My plan is to make it conditional at configure time, i.e.\n> \n> #ifdef OID_ULONGLONG\n> typedef unsigned long long Oid;\n> #define OID_MAX ULLONG_MAX\n> #else\n> typedef unsigned int Oid;\n> #define OID_MAX UINT_MAX\n> #endif\n> \n> Aside from adding %llu to all the %u everywhere an OID is used in a printf, and\n> any other warnings, are there any other things I should be specially concerned\n> about?\n> \nThe wire protocol.......\n\nLER\n\n> \n> -- \n> 5-4-3-2-1 Thunderbirds are GO!\n> ------------------------\n> http://www.mohawksoft.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 13 Aug 2001 21:26:24 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: OID unsigned long long"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> Aside from adding %llu to all the %u everywhere an OID is used in a\n> printf, and any other warnings, are there any other things I should be\n> specially concerned about?\n\nFE/BE protocol, a/k/a client/server interoperability. Flagging a\ndatabase so that a backend with the wrong OID size won't try to run in\nit. Alignment --- on machines where long long has to be 8-byte aligned,\nTOAST references as presently constituted will crash, because varlena\ndatatypes in general are only 4-byte aligned. There are more, but that\nwill do for starters.\n\nBTW, I think #ifdef would be a totally unworkable way to attack the\nformat-string problem. The code clutter of #ifdef'ing everyplace that\npresently uses %u would be a nightmare; the impact on\ninternationalization files would be worse. And don't forget that %llu\nwould be the right thing on only some machines; others like %qu, and\nDEC Alphas think %lu is just fine. The only workable answer I can see\nis for the individual messages to use some special code, maybe \"%O\" for\nOid. The problem is then (a) translating this to the right\nplatform-dependent thing, and (b) persuading gcc to somehow type-check\nthe elog calls anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Aug 2001 22:37:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID unsigned long long "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > Aside from adding %llu to all the %u everywhere an OID is used in a\n> > printf, and any other warnings, are there any other things I should be\n> > specially concerned about?\n> \n> FE/BE protocol, a/k/a client/server interoperability. Flagging a\n> database so that a backend with the wrong OID size won't try to run in\n> it. Alignment --- on machines where long long has to be 8-byte aligned,\n> TOAST references as presently constituted will crash, because varlena\n> datatypes in general are only 4-byte aligned. There are more, but that\n> will do for starters.\n\nI will have to look at that, thanks.\n\n> \n> BTW, I think #ifdef would be a totally unworkable way to attack the\n> format-string problem. The code clutter of #ifdef'ing everyplace that\n> presently uses %u would be a nightmare; the impact on\n> internationalization files would be worse. And don't forget that %llu\n> would be the right thing on only some machines; others like %qu, and\n> DEC Alphas think %lu is just fine.\n\nWhat do you think of making two entries in the various printf strings, and\nusing macros to split up an OID, as:\n\nprintf(\"OID: %u:%u\", HIGHOID(oid), LOWOID(oid))\n\nThat may satisfy your concern for #ifdef's everywhere, and it could mean I\ncould submit my patches back without breaking any code, so PostgreSQL could be\ncloser to a 64 bit OID.\n\n\n> The only workable answer I can see\n> is for the individual messages to use some special code, maybe \"%O\" for\n> Oid. The problem is then (a) translating this to the right\n> platform-dependent thing, and (b) persuading gcc to somehow type-check\n> the elog calls anyway.\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Tue, 14 Aug 2001 07:39:57 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: OID unsigned long long"
},
{
"msg_contents": "mlw writes:\n\n> I am thinking about embarking on changing the typedef of OID to unsigned long\n> long.\n\n> Aside from adding %llu to all the %u everywhere an OID is used in a printf, and\n> any other warnings, are there any other things I should be specially concerned\n> about?\n\nYou can start with my patch at\n\nhttp://www.ca.postgresql.org/~petere/oid8.html\n\nSee the comments on that page and the other responses. It ain't pretty.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 14 Aug 2001 17:00:44 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: OID unsigned long long"
},
{
"msg_contents": "Tom Lane wrote:\n\n>[...]\n>\n>BTW, I think #ifdef would be a totally unworkable way to attack the\n>format-string problem. The code clutter of #ifdef'ing everyplace that\n>presently uses %u would be a nightmare; the impact on\n>internationalization files would be worse. And don't forget that %llu\n>would be the right thing on only some machines; others like %qu, and\n>DEC Alphas think %lu is just fine. The only workable answer I can see\n>is for the individual messages to use some special code, maybe \"%O\" for\n>Oid. The problem is then (a) translating this to the right\n>platform-dependent thing, and (b) persuading gcc to somehow type-check\n>the elog calls anyway.\n>\n\nYou can ask gcc to typecheck format strings for printf type functions \nwith something like the following:\n\nextern int\nmy_printf (void *my_object, const char *my_format, ...)\n __attribute__ ((format (printf, 2, 3)));\n\n\nRef: http://www.delorie.com/gnu/docs/gcc/gcc_77.html\n\nDavid\n\n\n",
"msg_date": "Wed, 15 Aug 2001 22:34:31 -0400",
"msg_from": "David Ford <david@erisksecurity.com>",
"msg_from_op": false,
"msg_subject": "Re: OID unsigned long long"
}
] |
[
{
"msg_contents": "Is there a bug listing for PostgreSQL somewhere?\n\n",
"msg_date": "Mon, 13 Aug 2001 22:13:43 -0400",
"msg_from": "Dwayne Miller <dmiller@espgroup.net>",
"msg_from_op": true,
"msg_subject": "Bug List"
}
] |
[
{
"msg_contents": "[Forward to pgsql-announce if appropriate. --Dave]\nAlright, I have gone ahead and posted what I have to date regarding the port\nof Bugzilla to PostgreSQL that I have been working on. Give it a try and let\nme know what I can improve on and what you think. It is dependent on version\n7.1.2 of PostgreSQL which is the latest version available from www.postgresql.org.\nThere is a README.postgresql file that gives a little info on how to get it \ngoing and some changes that had to be made to get it to work so far. One note,\nbuglist.cgi is still not fully operational but will do some simple queries. I am\nworking on getting that working in the next few days. I will also make a release\navailable in the next few weeks that has some of the look and feel changes \nincorporated, similar to the redhat version of Bugzilla for any interested.\n\nftp://people.redhat.com/dkl/pgzilla-latest.tar.gz\n\nThanks\n\n-- \n-------------------------------\nDavid Lawrence <dkl@redhat.com>\n Red Hat Quality Assurance\n-------------------------------\nwww.redhat.com ftp.redhat.com",
"msg_date": "Tue, 14 Aug 2001 08:41:48 -0500",
"msg_from": "Ddkilzer <ddkilzer@lubricants-oil.com>",
"msg_from_op": true,
"msg_subject": "Fwd: PostgreSQL Bugzilla"
}
] |
[
{
"msg_contents": "I have a custom datatype (the PostGIS geometry type), which I have\nindexed using a GiST index.\n\nThe problem is, it's difficult to get PostgreSQL to actually use the GiST\nindex. The only way I can get it to be used is by 'set enable_seqscan =\noff', which seems a bit cheezy. What am I missing? Do I have to make\nsome sort of amcostestimate() function or something?\n\n\nthanks,\ndave\n",
"msg_date": "Tue, 14 Aug 2001 10:18:17 -0700",
"msg_from": "Dave Blasby <dblasby@refractions.net>",
"msg_from_op": true,
"msg_subject": "Forcing GiST index to be used"
},
{
"msg_contents": "Dave Blasby <dblasby@refractions.net> writes:\n> I have a custom datatype (the PostGIS geometry type), which I have\n> indexed using a GiST index.\n> The problem is, its difficult to get PostgreSQL to actually use the GiST\n> index. The only way I can get it to be used is by 'set enable_seqscan =\n> off', which seems a bit cheezy. What am I missing? Do I have to make\n> some sort of amcostestimate() function or something?\n\nWhat sort of selectivity estimator (oprrest entry) do you have attached\nto the indexable operator? If there's no estimator, the default\nselectivity is something like 0.5 --- way too high to cause an index to\nbe used.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Aug 2001 16:01:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Forcing GiST index to be used "
}
] |
[
{
"msg_contents": "Hello all!!!\n\n\n I need to retrieve all the users from a group, for my Delphi\nInterface for PostgreSQL.\n The problem is, I can't accomplish a query that does it, since the\nusers belonging to a group are all stored in an array field\n(pg_group.groulist).\n Does anyone have a solution for this ? Using contrib array functions\nis certainly not a good idea, because it would require ALL users of the\ninterface to compile and install it.\n I still don't understand anyway why there is not a regular catalog\nrelating users and groups, like:\n\n CREATE TABLE pg_groupusers(grosysid integer, usesysid integer);\n\n That makes much more sense for me, unless at least the contrib array\nfunctions get implemented as builtins, so that we can test user groupship.\n Can that be added to the TODO list ?...\n\nBest Regards,\nSteve Howe\n\n\n",
"msg_date": "Tue, 14 Aug 2001 17:25:43 -0300",
"msg_from": "\"Steve Howe\" <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Retriving users from group ?..."
}
] |
[
{
"msg_contents": "Anyone interested in a project that would allow a person to parallelize\npostgres\nin like a beowulf or some other cluster? does one already exist?\nThis could certainly be a huge benefit for the linux world and postgres, as\nmost clusters are formed to either do AI, or more commonly, databasing.\n\njust a thought,\n\nRick Richardson\n\n\n",
"msg_date": "Tue, 14 Aug 2001 21:19:14 GMT",
"msg_from": "\"Rick Richardson\" <rickandcharlene@home.com>",
"msg_from_op": true,
"msg_subject": "Beowulf support"
}
] |
[
{
"msg_contents": "\nThe Register has an interesting interview with the vp of Microsoft's SQL\nServer team:\n\nhttp://www.theregister.co.uk/content/53/21003.html\n\nNear the end he gets specifically asked about \"Red Hat Database\" as a\ncompetitive threat, and he responds that he doesn't think anyone can match\ntheir \"investment\" of \"800 professionals\" to work on SQL Server.\n\nNow I'm sure he didn't mean it to sound this way, but what I conclude from\nthat is that you fellows are all an order of magnitude or two more\nproductive than anyone at Microsoft :-).\n\nTim\n\n-- \n-----------------------------------------------\nTim Allen tim@proximity.com.au\nProximity Pty Ltd http://www.proximity.com.au/\n http://www4.tpg.com.au/users/rita_tim/\n\n",
"msg_date": "Wed, 15 Aug 2001 10:49:37 +1000 (EST)",
"msg_from": "Tim Allen <tim@proximity.com.au>",
"msg_from_op": true,
"msg_subject": "MS interview"
},
{
"msg_contents": "What is OLAP and why is it so good? (According to MS)\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tim Allen\n> Sent: Wednesday, 15 August 2001 8:50 AM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] MS interview\n>\n>\n>\n> The Register has an interesting interview with the vp of Microsoft's SQL\n> Server team:\n>\n> http://www.theregister.co.uk/content/53/21003.html\n>\n> Near the end he gets specifically asked about \"Red Hat Database\" as a\n> competitive threat, and he responds that he doesn't think anyone can match\n> their \"investment\" of \"800 professionals\" to work on SQL Server.\n>\n> Now I'm sure he didn't mean it to sound this way, but what I conclude from\n> that is that you fellows are all an order of magnitude or two more\n> productive than anyone at Microsoft :-).\n>\n> Tim\n>\n> --\n> -----------------------------------------------\n> Tim Allen tim@proximity.com.au\n> Proximity Pty Ltd http://www.proximity.com.au/\n> http://www4.tpg.com.au/users/rita_tim/\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Wed, 15 Aug 2001 09:23:08 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: MS interview"
},
{
"msg_contents": "I'm sure that \"800 professionals\" equates to something like 4 \ndevelopers, 1 tester (part-time), 2 documentation specialist, and 792 \nmarketing, sales, administration, legal staff and others required to \njustify its cost, and 1 CEO who has his fingers into everything at MS.\n\nTim Allen wrote:\n\n>The Register has an interesting interview with the vp of Microsoft's SQL\n>Server team:\n>\n>http://www.theregister.co.uk/content/53/21003.html\n>\n>Near the end he gets specifically asked about \"Red Hat Database\" as a\n>competitive threat, and he responds that he doesn't think anyone can match\n>their \"investment\" of \"800 professionals\" to work on SQL Server.\n>\n>Now I'm sure he didn't mean it to sound this way, but what I conclude from\n>that is that you fellows are all an order of magnitude or two more\n>productive than anyone at Microsoft :-).\n>\n>Tim\n>\n\n\n",
"msg_date": "Tue, 14 Aug 2001 21:55:02 -0400",
"msg_from": "Dwayne Miller <dmiller@espgroup.net>",
"msg_from_op": false,
"msg_subject": "Re: MS interview"
},
{
"msg_contents": " OLAP Council White Paper\n\n Introduction\n\n The purpose of the paper that follows is to define On-Line\n Analytical Processing (OLAP), who uses it and why, and to review\n the key features required for OLAP software as referenced in the\n OLAP Council benchmark specification.\n\nhttp://www.olapcouncil.org/research/whtpapco.htm\n\nAnd\n\n Data Warehousing and OLAP\n\n A Research-Oriented Bibliography (in progress)\n\n Alberto Mendelzon\n University of Toronto\n\n\nhttp://www.cs.toronto.edu/~mendel/dwbib.html\n\nSeems like a fairly large amount of talk about stuff which should be taken\ncare of internally by corporations who have such interests.\n\nGavin\n\nOn Wed, 15 Aug 2001, Christopher Kings-Lynne wrote:\n\n> What is OLAP and why is it so good? (According to MS)\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tim Allen\n> > Sent: Wednesday, 15 August 2001 8:50 AM\n> > To: pgsql-hackers@postgresql.org\n> > Subject: [HACKERS] MS interview\n> >\n> >\n> >\n> > The Register has an interesting interview with the vp of Microsoft's SQL\n> > Server team:\n> >\n> > http://www.theregister.co.uk/content/53/21003.html\n> >\n> > Near the end he gets specifically asked about \"Red Hat Database\" as a\n> > competitive threat, and he responds that he doesn't think anyone can match\n> > their \"investment\" of \"800 professionals\" to work on SQL Server.\n> >\n> > Now I'm sure he didn't mean it to sound this way, but what I conclude from\n> > that is that you fellows are all an order of magnitude or two more\n> > productive than anyone at Microsoft :-).\n> >\n> > Tim\n> >\n> > --\n> > -----------------------------------------------\n> > Tim Allen tim@proximity.com.au\n> > Proximity Pty Ltd http://www.proximity.com.au/\n> > http://www4.tpg.com.au/users/rita_tim/\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n",
"msg_date": "Wed, 15 Aug 2001 11:58:12 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "RE: MS interview"
},
{
"msg_contents": "Tim Allen <tim@proximity.com.au> writes:\n> Near the end he gets specifically asked about \"Red Hat Database\" as a\n> competitive threat, and he responds that he doesn't think anyone can match\n> their \"investment\" of \"800 professionals\" to work on SQL Server.\n\nROTFL ...\n\nThe longer that Oracle, MS, et al don't believe we're a threat, the\nbetter. But I wonder how they *really* see us. This article was too\nobviously a pile of marketing BS to be taken seriously by anyone.\nWhat's their real internal perception of us, do you think?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Aug 2001 00:09:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: MS interview "
},
{
"msg_contents": "> The longer that Oracle, MS, et al don't believe we're a threat, the\n> better. But I wonder how they *really* see us. This article was\n> too obviously a pile of marketing BS to be taken seriously by\n> anyone.\n\nNot necessarily - business guys are incredibly naive when it comes to\ntechnology options.\n\nI had to fight tooth and nail to use PostgreSQL/Linux on a recent project.\nThe business didn't care about feature comparisons, they cared about two\nthings:\n\n1) Putting Oracle+Solaris logos on our technology page\n2) Support\n\nI got it through by arguing about the cost difference and the fact that\nRedHat is on board (they knew who RedHat was from Business Review Weekly\n*sigh*).\n\nI forwarded that article to them, and their response to the quote of\n\n...Open source systems \"are a great way for our future customers to learn\nabout relational databases,\" says Bob Shimp, Oracle's senior director of\ndatabase marketing\n\nwas \"that makes sense, after all Oracle has many more features than\nPostgreSQL\".\n\nSo, I guess the point I am trying to make is that image is everything - 800\npeople working on MS SQL Server is much more impressive to a business guy\nthan a couple of dozen people all over the world. Remember, these are the\npeople that still believe that all programmers are alike and can just be\nswapped around on projects without any impact.\n\nHopefully, RedHat's involvement will boost the mindshare and image of\nPostgreSQL and I don't have to keep doing Oracle admin :)\n\n\nMark Pritchard\nSenior Technical Architect\nTangent Systems Australia\n--------------------------------------------------\nemail mark@tangent.net.au\nph +61 3 9809 1311\nfax +61 3 9809 1322\nmob 0411 402 034\n--------------------------------------------------\nThe central task of a natural science is to make the wonderful commonplace:\nto show that complexity, correctly viewed, is only a mask for simplicity; to\nfind pattern hidden in apparent chaos. -- Herb Simon\n\n",
"msg_date": "Wed, 15 Aug 2001 15:07:48 +1000",
"msg_from": "\"Mark Pritchard\" <mark@tangent.net.au>",
"msg_from_op": false,
"msg_subject": "RE: MS interview "
},
{
"msg_contents": "> Hopefully, RedHat's involvement will boost the mindshare and image of\n> PostgreSQL and I don't have to keep doing Oracle admin :)\n\nWe had four articles in one day today. That shows some major momentum.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 15 Aug 2001 01:54:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: MS interview"
},
{
"msg_contents": "On Wed, 15 Aug 2001, Mark Pritchard wrote:\n\n> > The longer that Oracle, MS, et al don't believe we're a threat, the\n> > better. But I wonder how they *really* see us. This article was\n> > too obviously a pile of marketing BS to be taken seriously by\n> > anyone.\n> \n> Not necessarily - business guys are incredibly naive when it comes to\n> technology options.\n\nSome of the companies I've worked with have been seriously over committed\nto vendors - one had a 2 processor license for a trivial internal\napplication which, had their app been designed correctly, should have\nneeded only a flat file data storage system. Yet all of these companies\nhave been extremely concerned about moving off of expensive RDBMS\nsoftware citing 'support' and 'safe-guards' ('if it breaks, we'll\nsue!'). But none of this is ever actually worth the cost.\n\nThe problem is more complicated, however.\n\nMany of the Oracle DBAs who I've worked with or am friends with will curse\nOracle, for example, to the end but defend it to the death if someone else\nstarts criticising it. Oracle, IBM, Sybase and the like take people\nearning pretty average money doing pretty average IT work and start them\nout to big bucks (just like MS, Cisco, etc). These databases are their\nfinancial livelihood and when they push product, they get paid well.\n\nTo earn this kind of money with Postgres or any open source software\nrequires skill, insight, enthusiasm and commitment. So, PostgreSQL does not \nimmediately affect Oracle, IBM DB2, Sybase etc. It affects certified DBAs\nand developers working with these products. \n\nBy the same token, many of the programmers currently working on\nthe development of these RDBMSs have probably taken a good look at\nPostgres. But this would not have been any kind of policy and therefore,\nin the scheme of things, the quality of Postgres wouldn't have\ninfiltrated the decision makers.\n\nAs such, the big vendors will only really take notice of Postgres once\ntheir certified professionals start to push less proprietary product. Who\nknows what will happen if this takes place - maybe the same thing as is\nhappening with Linux and IBM and HP. That is, they'll stop ignoring it and\ntake it on as an 'induction' or 'entry level' system, packaging some\nuseless crud with it but all the time intending to sell, in the long run,\nmore expensive licenses.\n\nGavin\n\n",
"msg_date": "Wed, 15 Aug 2001 16:39:08 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "RE: MS interview "
},
{
"msg_contents": "Gavin Sherry wrote:\n > Seems like a fairly large amount of talk about stuff which should be \ntaken\n > care of internally by corporations who have such interests.\n\nNot entirely. As a freelancer, I've used OLAP (front-end only, ie pivot\ntables in Excel) to help me produce invoices from my timesheet data.\nIt's *very* useful. I found out, almost by accident, which client I've\nspent the most time working for, and which client has the largest ratio\nof unpaid to paid hours :-(\n\nAFAIK, OLAP backends essentially provide a cache of denormalised data\nthat provide fast access (no need to re-run complex queries) to large\ndata sets, and a set of aggregate functions to analyse the data.\n\nThere's also a language called MDX that goes with it, but I haven't\nworked with that.\n\nbye\nJohn\n\n\n",
"msg_date": "Wed, 15 Aug 2001 10:39:24 +0200",
"msg_from": "John Anderson <panic-postgres@semiosix.com>",
"msg_from_op": false,
"msg_subject": "Re: MS interview"
},
{
"msg_contents": "\ntim@proximity.com.au (Tim Allen) writes:\n\n: [...] Near the end he gets specifically asked about \"Red Hat\n: Database\" as a competitive threat, and he responds that he doesn't\n: think anyone can match their \"investment\" of \"800 professionals\" to\n: work on SQL Server. [...]\n\nIt would be naive to dismiss Microsoft's (or Oracle's, or IBM's)\ndatabase teams. They have many very smart developers/researchers\nworking full time on these systems. The investment is quite real.\n\n\ntgl wrote:\n\n: The longer that Oracle, MS, et al don't believe we're a threat, the\n: better. But I wonder how they *really* see us. [...]\n\nGood question.\n\n\n- FChE\n",
"msg_date": "15 Aug 2001 12:13:48 -0400",
"msg_from": "fche@redhat.com (Frank Ch. Eigler)",
"msg_from_op": false,
"msg_subject": "Re: MS interview"
},
{
"msg_contents": "On Wed, Aug 15, 2001 at 10:39:24AM +0200, John Anderson allegedly wrote:\n> AFAIK, OLAP backends essentially provide a cache of denormalised data\n> that provide fast access (no need to re-run complex queries) to large\n> data sets, and a set of aggregate functions to analyse the data.\n> \n> There's also a language called MDX that goes with it, but I haven't\n> worked with that.\n\nMDX is the query language used for querying cubes of data in an multi\ndimensional database. MDX is usually automatically generated, as it is\nfar more complex than SQL.\n\nCheers,\n\nMathijs\n-- \nBeauty and music seduce us first...\nLater, ashamed of our own sensuality, we insist on meaning.\n -- Clive Barker\n",
"msg_date": "Thu, 16 Aug 2001 11:24:14 +0200",
"msg_from": "Mathijs Brands <mathijs@ilse.nl>",
"msg_from_op": false,
"msg_subject": "Re: MS interview"
},
{
"msg_contents": "> > >\n> > > Near the end he gets specifically asked about \"Red Hat Database\" as a\n> > > competitive threat, and he responds that he doesn't think anyone can match\n> > > their \"investment\" of \"800 professionals\" to work on SQL Server.\n> > >\n> > > Now I'm sure he didn't mean it to sound this way, but what I conclude from\n> > > that is that you fellows are all an order of magnitude or two more\n> > > productive than anyone at Microsoft :-).\n\nThere is a basic reality to IT purchasing, Microsoft, Oracle, DB2, and to a\nlesser extent informix, and Sybase have an amount of \"clout\" that PostgreSQL\ndoes not. This clout isn't based on functionality so much as a big company that\nyou can \"sue.\" Nobody can sue these companies, of course, because the license\nagreements indicate that you can not. It is also based on support, \"who will\nsupport you when you have trouble?\" This is a quaint notion, but Oracle support\nis very expensive.\n\nThe war for PostgreSQL, IMHO, is the same war that Linux fought and won over\nthe last couple years, perception. Three years ago, it would have been risky\nfor an IT guy to suggest, openly, that the infrastructure rely upon Linux.\nToday, while it isn't a forgone conclusion, you can raise that point in a\nmeeting and not be ridiculed. People would consider it.\n\nI use PostgreSQL all the time, I think it is a great system, and you guys do\ngreat work. I am currently using Postgres for data analysis and as the\npresentation system for a text search and music ID engine. However, I would\nhesitate to move it to replace an Oracle or a DB2 because if Oracle or DB2\nfail, everyone gets to blame the vendor, if PostgreSQL fails, everyone gets to\nblame me.\n\nIMHO, if The PostgreSQL team is serious about moving PostgreSQL out of the\nniche tool market and into the general SQL market place along side of Oracle,\nDB2, and MSSQL there is a lot of work to be done. 
\n\nThink about a website, where you have session management. 10,000 users online\nat one time each doing something that affects their account once a minute. That\nis about 166 updates a second on a session table. How often would you need to\nrun vacuum for these operations to remain efficient? I submit that PostgreSQL\nwill never be able to perform well in this environment as long as updates\naffect performance prior to a vacuum.\n\nOf late the 32 bit OID issue. If you have an OID wrap around, you have some\nprobability that two records in a table could have the same OID. The\nprobability, of course, is based on the number of tables and the distribution\nof activity on the tables, but it is likely to happen. Is this a problem?\n\nThen there is data security. Oracle is very good here. One can restore from\ntheir last backup, and using the REDO logs, bring the database to the point\njust before the crash. When I've had to answer this in meetings, I have to\nshrug and concede that point. (I have actually seen this work and it is cool.)\n\nThen there is the laundry list of functionality, queries across databases,\nfunctions like cube and rollup, etc. \n\nBelieve me, I'm not knocking PostgreSQL, but if I am to recommend PostgreSQL in\nplace of an Oracle or a DB2 or an MSSQL, it needs these things, even if they\nare never used, I have to convince people that PostgreSQL is \"safe\" to deploy.\n",
"msg_date": "Thu, 16 Aug 2001 10:56:07 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: MS interview"
}
] |
[
{
"msg_contents": "\n Transaction id is incremented even in sql queries like\n\"select\" which does not change the state of database, is it not\nunnecesary?.\n\n\n\n\n_________________________________________________________________\nGet your FREE download of MSN Explorer at http://explorer.msn.com/intl.asp\n\n",
"msg_date": "Wed, 15 Aug 2001 05:56:22 +0000",
"msg_from": "\"maruthi maruthi\" <maruthi49@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Transaction id inrement"
},
{
"msg_contents": "\"maruthi maruthi\" <maruthi49@hotmail.com> writes:\n> Transaction id is incremented even in sql queries like\n> \"select\" which does not change the state of database, is it not\n> unnecesary?.\n\nNo, it's not unnecessary. Every DB operation has to have a transaction\nID; what's more, we have to assign one long before we have any idea\nwhether the transaction will prove to be read-only.\n\nIt's at least theoretically possible that we could recycle the\ntransaction ID of a completed transaction that's proven to be read-only,\nbut the bookkeeping involved would be far more trouble than it's worth.\nNot least because it would break MVCC assumptions about transactions\nstarting in sequence number order.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Aug 2001 11:22:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Transaction id inrement "
}
] |
[
{
"msg_contents": " Transaction id is incremented even in sql queries like\n\"select\" which does not change the state of database, is it not\nunnecesary?.\n\n_________________________________________________________________\nGet your FREE download of MSN Explorer at http://explorer.msn.com/intl.asp\n\n",
"msg_date": "Wed, 15 Aug 2001 06:54:48 +0000",
"msg_from": "\"maruthi maruthi\" <maruthi49@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Transaction id increment"
}
] |
[
{
"msg_contents": "I have added new function called \"convert\" similar to SQL99's convert.\nConvert converts encoding according to parameters. For example, if you\nhave a table named \"unicode\" in an Unicode database,\n\nSELECT convert(text_field, 'LATIN1') FROM unicode;\n\nwill return text in ISO-8859-1 representation. This is not very\nimpressive because PGCLIENTENCODING could do same thing. However\nconsider a table:\n\nCREATE TABLE mixed (\n french TEXT,\t-- ISO-8859-1\n czech TEXT\t-- ISO-8859-2\n);\n\nThose columns hold texts in different language. Since we do not have\nNCHAR yet, we cannot extract data from \"mixed\" very well (note that we\ncould store french and czech data by INSERT french and then UPDATE\nczech, so on. Even we could store them at once actually since they are\n\"binary compatible.\") However, using convert, (and if you have an\nUnicode aware termminal), you could view them:\n\nSELECT convert(french, 'LATIN1', 'UNICODE'),\n convert(czech, 'LATIN2', 'UNICODE') FROM unicode;\n\nConvert is especially usefull if you want to sort Unicode data\naccording to a specific locale. For example:\n\nSELECT * FROM unicode order by convert(text_field, 'LATIN1');\n\nshould return data in proper sort order if you enable European locale.\nNote that to make above possible, you will need to turn on all of\n--enable-multibyte, --enable-locale, --enable-unicode-conversion\noptions.\n\nSee Table 4-7 in the string function section in the docs for more\ndetails.\n\nTestings are welcome. I only understand Japanese and English!\n--\nTatsuo Ishii\n\n",
"msg_date": "Wed, 15 Aug 2001 16:09:02 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "convert function"
},
{
"msg_contents": "> I have added new function called \"convert\" similar to SQL99's convert.\n> Convert converts encoding according to parameters. For example, if you\n> have a table named \"unicode\" in an Unicode database,\n\nForgot to mention that anyone who wants to try the new function should\ndo initdb. I did not increment the version id so that people who are\nnot interested in the function were not forced to do an initdb.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 15 Aug 2001 18:29:30 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: convert function"
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I have added new function called \"convert\" similar to SQL99's convert.\n> Convert converts encoding according to parameters. For example, if you\n> have a table named \"unicode\" in an Unicode database,\n\n> SELECT convert(text_field, 'LATIN1') FROM unicode;\n\n> will return text in ISO-8859-1 representation.\n\nI don't understand how this works. If you have a multibyte-enabled\nbackend, won't backend libpq try to convert all outgoing text to\nwhatever PGCLIENTENCODING says? How can it know that one particular\ncolumn of a result set is not in the regular encoding of this database,\nbut something else? Seems like libpq is going to mess up the results\nby applying an inappropriate multibyte conversion.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Aug 2001 09:58:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: convert function "
},
{
"msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > I have added new function called \"convert\" similar to SQL99's convert.\n> > Convert converts encoding according to parameters. For example, if you\n> > have a table named \"unicode\" in an Unicode database,\n> \n> > SELECT convert(text_field, 'LATIN1') FROM unicode;\n> \n> > will return text in ISO-8859-1 representation.\n> \n> I don't understand how this works. If you have a multibyte-enabled\n> backend, won't backend libpq try to convert all outgoing text to\n> whatever PGCLIENTENCODING says? How can it know that one particular\n> column of a result set is not in the regular encoding of this database,\n> but something else? Seems like libpq is going to mess up the results\n> by applying an inappropriate multibyte conversion.\n\nIf the encodings of frontend and backend are same, no conversion would\nbe applied by libpq.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 15 Aug 2001 23:31:54 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: convert function "
}
] |
[
{
"msg_contents": "Hi,\n\nwe finished first stage of our proposal for changing of index AM tables\n(see for reference http://fts.postgresql.org/db/mw/msg.html?mid=1029290)\n\nI attached 3 files:\n\n1. patch_72_systbl.gz - patch to current CVS\n2. btree_gist.tar.gz - contrib/btree_gist module -\n implementation of Btree using GiST with\n support of int4 and timestamp types.\n3. test.tar.gz - test suite for brave (not for applying !)\n\nRegression tests and our tests passed fine\nPatch is for today CVS, please apply them asap to avoid possible\nconflicts.\n\nNow we're going to 2nd stage of our proposal.\nWe plan to remove pg_index.indislossy (now we have pg_amop.amopreqcheck)\nand pg_index.indhaskeytype (it's just don't used, all functionality\n is in pg_opclass.opckeytype now)\n\nquestion:\n\nDo we need to normalize pg_amop and pg_amproc tables ?\nTom was concerned (http://fts.postgresql.org/db/mw/msg.html?mid=1025860)\nabout possible performance degradation.\n\nWe think it's possible to leave tables as is. Of course it'd require\nsome attention when updating these tables.\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83",
"msg_date": "Wed, 15 Aug 2001 15:07:20 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Patches (current CVS) for changes if index AM tables"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Hi,\n> \n> we finished first stage of our proposal for changing of index AM tables\n> (see for reference http://fts.postgresql.org/db/mw/msg.html?mid=1029290)\n> \n> I attached 3 files:\n> \n> 1. patch_72_systbl.gz - patch to current CVS\n> 2. btree_gist.tar.gz - contrib/btree_gist module -\n> implementation of Btree using GiST with\n> support of int4 and timestamp types.\n> 3. test.tar.gz - test suite for brave (not for applying !)\n> \n> Regression tests and our tests passed fine\n> Patch is for today CVS, please apply them asap to avoid possible\n> conflicts.\n> \n> Now we're going to 2nd stage of our proposal.\n> We plan to remove pg_index.indislossy (now we have pg_amop.amopreqcheck)\n> and pg_index.indhaskeytype (it's just don't used, all functionality\n> is in pg_opclass.opckeytype now)\n> \n> question:\n> \n> Do we need to normalize pg_amop and pg_amproc tables ?\n> Tom was concerned (http://fts.postgresql.org/db/mw/msg.html?mid=1025860)\n> about possible performance degradation.\n> \n> We think it's possible to leave tables as is. Of course it'd require\n> some attention when updating these tables.\n> \n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\nContent-Description: \n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 16 Aug 2001 12:39:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patches (current CVS) for changes if index AM tables"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I will try to apply it within the next 48 hours.\n\nI will take responsibility for reviewing and applying this patch. In a\nquick lookover, I saw a few adjustments I wanted to make...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Aug 2001 13:04:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patches (current CVS) for changes if index AM tables "
},
{
"msg_contents": "\nGot it.\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I will try to apply it within the next 48 hours.\n> \n> I will take responsibility for reviewing and applying this patch. In a\n> quick lookover, I saw a few adjustments I wanted to make...\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 16 Aug 2001 13:04:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patches (current CVS) for changes if index AM tables"
},
{
"msg_contents": "On Thu, 16 Aug 2001, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I will try to apply it within the next 48 hours.\n>\n> I will take responsibility for reviewing and applying this patch. In a\n> quick lookover, I saw a few adjustments I wanted to make...\n\nthanks, please know we prepared a patch for contrib/btree_gist. It fixes\nmemory leak and improve performance. Should I sent it right now or\nwait until you apply previous patches.\n\n>\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 16 Aug 2001 20:36:56 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: Patches (current CVS) for changes if index AM tables"
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> thanks, please know we prepared a patch for contrib/btree_gist. It fixes\n> memory leak and improve performance. Should I sent it right now or\n> wait until you apply previous patches.\n\nI haven't touched that contrib code yet at all. Why don't you just\nre-wrap that tarball with the updates and resubmit it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Aug 2001 13:47:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patches (current CVS) for changes if index AM tables "
},
{
"msg_contents": "On Thu, 16 Aug 2001, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > thanks, please know we prepared a patch for contrib/btree_gist. It fixes\n> > memory leak and improve performance. Should I sent it right now or\n> > wait until you apply previous patches.\n>\n> I haven't touched that contrib code yet at all. Why don't you just\n> re-wrap that tarball with the updates and resubmit it?\n\nok. I attached btree_gist.tar.gz which should be used instead of\nolder one.\n\n\n>\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83",
"msg_date": "Thu, 16 Aug 2001 20:58:51 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: Patches (current CVS) for changes if index AM tables"
}
] |
[
{
"msg_contents": "Is there any intention to implement ub-trees?\nFor more information visit the page of the\nMistral project. There is information of how to\nintegrate ub-trees in a db-kernel or extend the\nb-tree implementation. ub-trees enhances very\nmuch multi conditioned selects. The link is\n\nhttp://mistral.in.tum.de/index.html\n\nThanks for any information!\n\nCiao!\nMatthias\n\n",
"msg_date": "Wed, 15 Aug 2001 14:18:35 +0200",
"msg_from": "\"Wald, Matthias\" <Matthias.Wald@atm-computer.de>",
"msg_from_op": true,
"msg_subject": "UB-Trees"
},
{
"msg_contents": "Probably it would be possible to implement ub-tree\nusing GiST interface.\n\n\tOleg\nOn Wed, 15 Aug 2001, Wald, Matthias wrote:\n\n> Is there any intention to implement ub-trees?\n> For more information visit the page of the\n> Mistral project. There is information of how to\n> integrate ub-trees in a db-kernel or extend the\n> b-tree implementation. ub-trees enhances very\n> much multi conditioned selects. The link is\n>\n> http://mistral.in.tum.de/index.html\n>\n> Thanks for any information!\n>\n> Ciao!\n> Matthias\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 15 Aug 2001 15:54:35 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: UB-Trees"
}
] |
[
{
"msg_contents": "\n Hi,\n\n before some time I was discuss with Tatsuo and Thomas about support\nfor synonyms of encoding names (for example allows to use \n\"ISO-8859-1\" as the encoding name) and use binary search for searching\nin encoding names. \n I mean that we can during this change a little clean up encoding \nstuff too. Now PG use for same operations with encoding names different\nroutines on FE and BE. IMHO it's a little strange. Well, here is a\npossible solution:\n\n - use 'enum' instead current #define for encoding identificators\n (in pg_wchar.h).\n \n - create separate table only with encoding names for conversion from\n encoding name (char) to encoding numerical identificator,\n and searching routines based on binary search (from Knut -- see \n datetime.c).\n \n All these will *shared* between FE and BE.\n\n\n - For BE create table that handle conversion functions (like current\n pg_conv_tbl[]). All items in this table will available by access to \n array, like 'pg_conv_tbl[ LATIN1 ]', instead current search via for()\n cycle.\n\n May be also define all tables as 'static' and work with it by some \nroutines only. PG has like robust code :-)\n\n Comments, better ideas?\n\n\t\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Wed, 15 Aug 2001 14:45:26 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "encoding names"
},
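[Editor's note: a minimal C sketch of the shared name-to-identifier table with binary search that Karel proposes above. The enum values, the table contents, and the `encname_to_id` function name are all illustrative assumptions, not PostgreSQL's actual pg_wchar.h definitions.]

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative encoding identifiers -- not the real pg_wchar.h values. */
typedef enum { ENC_SQL_ASCII, ENC_LATIN1, ENC_LATIN2, ENC_UNICODE } pg_enc;

typedef struct
{
    const char *name;           /* lower-case encoding name or synonym */
    pg_enc      enc;            /* numeric identifier */
} PGencname;

/* One table shared between frontend and backend; synonyms simply get
 * extra rows.  It must stay sorted by name so bsearch() works. */
static const PGencname pg_encnames[] = {
    { "iso88591", ENC_LATIN1 },
    { "iso88592", ENC_LATIN2 },
    { "latin1",   ENC_LATIN1 },
    { "latin2",   ENC_LATIN2 },
    { "sqlascii", ENC_SQL_ASCII },
    { "unicode",  ENC_UNICODE },
};

static int
encname_cmp(const void *key, const void *elem)
{
    return strcmp((const char *) key, ((const PGencname *) elem)->name);
}

/* Returns the numeric identifier for a name, or -1 if it is unknown. */
int
encname_to_id(const char *name)
{
    const PGencname *found;

    found = bsearch(name, pg_encnames,
                    sizeof(pg_encnames) / sizeof(pg_encnames[0]),
                    sizeof(PGencname), encname_cmp);
    return found ? (int) found->enc : -1;
}
```

Because both `"latin1"` and `"iso88591"` map to the same identifier, the conversion routines never need to know about synonyms; only this lookup table does.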
{
"msg_contents": "Thank you for your suggestions. I'm not totally against your\nsuggestions (for example, I'm not against the idea that changing all\ncurrent encoding names to more \"standard\" ones for 7.2 if it's your\nconcern). However, I think we should focus on more fundamental issues\nthan those trivial ones. Recently Thomas gave an idea how to deal with\nthe internationalization (I18N) of PostgreSQL: create character set\netc. I don't think I18N PostgreSQL will happen in 7.2, but we should\ntackle them in the near future in my opinion.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 15 Aug 2001 23:28:35 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "On Wed, Aug 15, 2001 at 11:28:35PM +0900, Tatsuo Ishii wrote:\n> Thank you for your suggestions. I'm not totally against your\n> suggestions (for example, I'm not against the idea that changing all\n> current encoding names to more \"standard\" ones for 7.2 if it's your\n> concern). However, I think we should focus on more fundamental issues\n> than those trivial ones. Recently Thomas gave an idea how to deal with\n> the internationalization (I18N) of PostgreSQL: create character set\n> etc. I don't think I18N PostgreSQL will happen in 7.2, but we should\n> tackle them in the near future in my opinion.\n\n I have now some time for implement this my suggestion. Or is better\nlet down this for 7.2? You are right that it's trivial :-)\n\n Note: My motivate for this is that I have some multi-language DB\n with Web interface and for current version of PG I must maintain\n separate table for transformation \"standard\" names to PG encoding\n names and vice-versa:-) \n\n Well, I try send some patch.\n\n\t\t\t\t\tKarel\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Wed, 15 Aug 2001 16:41:08 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "Karel Zak writes:\n\n> before some time I was discuss with Tatsuo and Thomas about support\n> for synonyms of encoding names (for example allows to use\n> \"ISO-8859-1\" as the encoding name) and use binary search for searching\n> in encoding names.\n\nFunny, I was thinking the same thing last night...\n\nA couple of other things I was thinking about in the encoding area:\n\nIf you want to have codeset synonyms, you should also implement the\nnormalization of codeset names, defined as such:\n\n 1. Remove all characters beside numbers and letters.\n\n 2. Fold letters to lowercase.\n\n 3. If the same only contains digits prepend the string `\"iso\"'.\n[quote glibc]\n\nThis allows ISO_8859-1 and iso88591 to be treated the same.\n\nHere's a good resource of official character set names and aliases:\n\nhttp://www.iana.org/assignments/character-sets\n\nAlso, we ought to have support for the ISO_8859-15 character set, or\npeople will spread the word that PostgreSQL is not ready for the Euro.\n\nThen I figured, if the client is configured with locale, it should\nautomatically determine the client's encoding. Not sure if this is\nportably possible, but it would be very nice to have.\n\nFinally, as I've mentioned before I'd like to try out the iconv interface.\nMight become an option in 7.2 even.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 15 Aug 2001 17:16:42 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
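[Editor's note: the three glibc-style normalization rules quoted above reduce to a short routine. This is a sketch under stated assumptions: the function name and buffer size are invented, and it is not the actual glibc implementation.]

```c
#include <ctype.h>
#include <stdio.h>

/* Normalize a codeset name per the rules quoted above: keep only
 * letters and digits, fold to lowercase, and prepend "iso" when the
 * result consists of digits only. */
void
normalize_codeset(const char *name, char *out, size_t outlen)
{
    char        buf[64];
    size_t      n = 0;
    int         only_digits = 1;
    const char *p;

    for (p = name; *p && n < sizeof(buf) - 1; p++)
    {
        if (!isalnum((unsigned char) *p))
            continue;           /* rule 1: drop '-', '_', etc. */
        if (!isdigit((unsigned char) *p))
            only_digits = 0;
        buf[n++] = (char) tolower((unsigned char) *p);  /* rule 2 */
    }
    buf[n] = '\0';

    if (only_digits && n > 0)   /* rule 3 */
        snprintf(out, outlen, "iso%s", buf);
    else
        snprintf(out, outlen, "%s", buf);
}
```

Under these rules, "ISO_8859-1", "iso88591", and even the bare "8859-1" all normalize to "iso88591", so one table entry covers all the common spellings.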
{
"msg_contents": "Tatsuo Ishii writes:\n\n> However, I think we should focus on more fundamental issues\n> than those trivial ones. Recently Thomas gave an idea how to deal with\n> the internationalization (I18N) of PostgreSQL: create character set\n> etc.\n\nI haven't actually seen any real implementation proposal yet. We all know\n(I suppose) the requirements of this project and the interface is mostly\nspecified by SQL. But what I haven't seen yet is how this will work\ninternally. If we encode the charset into the header of the text datum\nthen each and every function will have to be concerned that its output\nvalue has the right character set. If we use the type system and create a\nnew text type for each character set then we'll probably have to implement\nN^X (where N is the number of character sets, and X is not known yet but\n>1) functions, operators, casts, etc. (not even thinking about\nuser-pluggable character sets) and we'll really uglify all the psql \\d and\npg_dump work. It's not at all clear. What I'm thinking these days is\nthat we'd need something completely new and unprecedented -- a separate\ncharset mix-and-match subsystem, similar to the type system, but\ndifferent. Not a pretty outlook, of course.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 15 Aug 2001 17:32:40 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "On Wed, Aug 15, 2001 at 05:16:42PM +0200, Peter Eisentraut wrote:\n> Karel Zak writes:\n> \n> > before some time I was discuss with Tatsuo and Thomas about support\n> > for synonyms of encoding names (for example allows to use\n> > \"ISO-8859-1\" as the encoding name) and use binary search for searching\n> > in encoding names.\n> \n> Funny, I was thinking the same thing last night...\n\n :-)\n\n> A couple of other things I was thinking about in the encoding area:\n> \n> If you want to have codeset synonyms, you should also implement the\n> normalization of codeset names, defined as such:\n> \n> 1. Remove all characters beside numbers and letters.\n> \n> 2. Fold letters to lowercase.\n> \n> 3. If the same only contains digits prepend the string `\"iso\"'.\n> [quote glibc]\n> \n> This allows ISO_8859-1 and iso88591 to be treated the same.\n\n My idea is (was:-) create table with all \"standard\" synonyms and \nsearch in this table case insensitive. Something like:\n\nPGencname pg_encnames[] =\n{\n\t{ \"ISO-8859-1\", LATIN1 },\n\t{ \"LATIN1\", LATIN1 }\n};\n\n But your idea with encoding name \"clearing\" (remove irrelavant chars)\nis more cool. \n\n> Here's a good resource of official character set names and aliases:\n> \n> http://www.iana.org/assignments/character-sets\n\n Thanks.\n\n> Also, we ought to have support for the ISO_8859-15 character set, or\n> people will spread the word that PostgreSQL is not ready for the Euro.\n\nIt require prepare some conversion functions and tables (UTF). Tatsuo?\n \n> Finally, as I've mentioned before I'd like to try out the iconv interface.\n\n Do you want integrate iconv stuff to current PG multibyte routines or as\nsome extension (functions?) only?\n\n BTW, is on psql some \\command that print list of all supported \n encodings? 
\n\n Maybe allows use something like: SELECT pg_encoding_names();\n\n\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Wed, 15 Aug 2001 17:37:31 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "> Finally, as I've mentioned before I'd like to try out the iconv interface.\n> Might become an option in 7.2 even.\n\nI'm curious how do you handle conversion between multibyte strings\nand wide characters using iconv. This is necessary to implement\nmultibyte aware like, regex, char_length etc. functions. I think at\nleast you need to have a way to determine the letter boundaries in a\nmultibyte string.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 16 Aug 2001 10:10:40 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "> I have now some time for implement this my suggestion. Or is better\n> let down this for 7.2? You are right that it's trivial :-)\n\nI think you should target for 7.2.\n\n> Note: My motivate for this is that I have some multi-language DB\n> with Web interface and for current version of PG I must maintain\n> separate table for transformation \"standard\" names to PG encoding\n> names and vice-versa:-) \n> \n> Well, I try send some patch.\n\nThanks. BTW, I'm working on for dynamically loading the Unicode\nconversion functions to descrease the runtime memory requirement. The\nreason why I want to do this is:\n\no they are huge (--enable-unicode-conversion will increase ~1MB in the\n load module size)\n\no nobody will use all of them at once. For example most Japanese users\n are only interested in EUC/SJIS maps.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 16 Aug 2001 15:39:28 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "On Thu, Aug 16, 2001 at 03:39:28PM +0900, Tatsuo Ishii wrote:\n>\n> o they are huge (--enable-unicode-conversion will increase ~1MB in the\n> load module size)\n> \n> o nobody will use all of them at once. For example most Japanese users\n> are only interested in EUC/SJIS maps.\n> --\n\n Good idea.\n\n I have a question, the PostgreSQL encoding name \"KOI8\" is KOI8-R or \nKOI8-U or both? I need it for correct alias setting.\n\n\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Thu, 16 Aug 2001 12:26:04 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "On Thu, Aug 16, 2001 at 03:39:28PM +0900, Tatsuo Ishii wrote:\n> > I have now some time for implement this my suggestion. Or is better\n> > let down this for 7.2? You are right that it's trivial :-)\n> \n> I think you should target for 7.2.\n> \n> > Note: My motivate for this is that I have some multi-language DB\n> > with Web interface and for current version of PG I must maintain\n> > separate table for transformation \"standard\" names to PG encoding\n> > names and vice-versa:-) \n> > \n> > Well, I try send some patch.\n> \n> Thanks. BTW, I'm working on for dynamically loading the Unicode\n\n Sorry Tatsuo, but I have again question :-)\n\n Why is here hole between encoding numbers?\n\n#define KOI8 16 /* KOI8-R/U */\n#define WIN 17 /* windows-1251 */\n#define ALT 18 /* Alternativny Variant */\n\n 19..31 ?\n\n#define SJIS 32 /* Shift JIS */\n#define BIG5 33 /* Big5 */\n#define WIN1250 34 /* windows-1250 */\n\n It's trouble create arrays with encoding stuff, like:\n\n pg_encoding_conv_tbl[ ALT ]->to_mic\n pg_encoding_conv_tbl[ SJIS ]->to_mic\n\n Has this hole between 19..31 some effect? \n\n\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Thu, 16 Aug 2001 15:13:26 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "> I have a question, the PostgreSQL encoding name \"KOI8\" is KOI8-R or \n> KOI8-U or both? I need it for correct alias setting.\n\nI think it's KOI8-R. Oleg, am I correct?\n\nP.S.\nI use Makefile.shlib to create each shared object. This way is, I\nthink, handy and portable. However, I need to make lots of subdirs for\neach encoding conversion function. Any suggestions?\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 16 Aug 2001 22:22:48 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "On Thu, Aug 16, 2001 at 10:22:48PM +0900, Tatsuo Ishii wrote:\n> > I have a question, the PostgreSQL encoding name \"KOI8\" is KOI8-R or \n> > KOI8-U or both? I need it for correct alias setting.\n> \n> I think it's KOI8-R. Oleg, am I correct?\n> \n> P.S.\n> I use Makefile.shlib to create each shared object. This way is, I\n> think, handy and portble. However, I need to make lots of subdirs for\n> each encoding conversion function. Any suggestions?\n\nPlease make a separate directory for the encoding translation table \nprograms too. The current mb/ is a mix of standard files and files\nwith main()....\n\n\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Thu, 16 Aug 2001 15:31:41 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Thanks. BTW, I'm working on for dynamically loading the Unicode\n> conversion functions to descrease the runtime memory requirement. The\n> reason why I want to do this is:\n> o they are huge (--enable-unicode-conversion will increase ~1MB in the\n> load module size)\n> o nobody will use all of them at once. For example most Japanese users\n> are only interested in EUC/SJIS maps.\n\nBut is it really important? All Unixen that I know of handle process\ntext segments on a page-by-page basis; pages that aren't actually being\ntouched won't get swapped in. Thus, the unused maps will just sit on\ndisk, whether they are part of the main executable or a separate file.\nI doubt there's any real performance gain to be had by making the maps\ndynamically loadable.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Aug 2001 09:43:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: encoding names "
},
{
"msg_contents": "On Thu, 16 Aug 2001, Tatsuo Ishii wrote:\n\n> > I have a question, the PostgreSQL encoding name \"KOI8\" is KOI8-R or\n> > KOI8-U or both? I need it for correct alias setting.\n>\n> I think it's KOI8-R. Oleg, am I correct?\n\nYES\n\n>\n> P.S.\n> I use Makefile.shlib to create each shared object. This way is, I\n> think, handy and portble. However, I need to make lots of subdirs for\n> each encoding conversion function. Any suggestions?\n> --\n> Tatsuo Ishii\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 16 Aug 2001 19:59:36 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "Tatsuo Ishii writes:\n\n> I use Makefile.shlib to create each shared object. This way is, I\n> think, handy and portble. However, I need to make lots of subdirs for\n> each encoding conversion function. Any suggestions?\n\nGiven Tom Lane's comment, I think that this would be a wasted effort.\nShared objects are normally used for extensibility at runtime, not core\nmemory savings. (This would most likely take more memory in fact, given\nthat the code is larger and you need all the shared object handling\ninfrastructure.)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 17 Aug 2001 19:56:40 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "> But is it really important? All Unixen that I know of handle process\n> text segments on a page-by-page basis; pages that aren't actually being\n> touched won't get swapped in. Thus, the unused maps will just sit on\n> disk, whether they are part of the main executable or a separate file.\n> I doubt there's any real performance gain to be had by making the maps\n> dynamically loadable.\n\nI did some testing on my Linux box (kernel 2.2) and confirmed that no\nperformance degradation was found with unicode-conversion-enabled\npostgres. This proves that your theory is correct at least on Linux.\n\nOk, I will make the unicode conversion functionality the default if\n--enable-multibyte is on.\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 18 Aug 2001 15:46:45 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: encoding names "
}
] |
[
{
"msg_contents": "I was just looking at adding SERIAL4 and SERIAL8 pseudo-types to the\nsystem, per our previous discussion, and I started to wonder why SERIAL\nis treated as a keyword by the grammar. Wouldn't it be better to remove\nthat keyword and allow the grammar to parse \"serial\" as a type name?\nThen analyze.c would just do a strcmp on the type name to see if it's\n\"serial\" (or one of the new variants) before trying to look it up in\npg_type.\n\nThe only downside I can see to this is that it's currently possible to\nuse a user-defined type named \"serial\", if you are determined enough:\n\ncreate table foo (\n\tf1 serial,\t\t-- it's a serial column\n\tf2 \"serial\",\t\t-- user-defined type named \"serial\"\n\nThis would stop working with my proposed approach. But it seems pretty\nugly and fragile anyway; is it likely that anyone's doing this?\n\nA variant idea is for analyze.c to check for \"serial\" etc. only if\nlookup of the type name fails. This would mean that one could actually\noverride the system definitions of \"serial\" etc. by creating\nuser-defined types with these names. I'm quite unconvinced that that\nwould be a good idea, but if anyone wants to argue for it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Aug 2001 10:19:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Why is SERIAL a keyword?"
},
{
"msg_contents": "Tom Lane wrote:\n\n> The only downside I can see to this is that it's currently possible to\n> use a user-defined type named \"serial\", if you are determined enough:\n> \n> create table foo (\n> f1 serial, -- it's a serial column\n> f2 \"serial\", -- user-defined type named \"serial\"\n\nI would call it a bug actually...\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Thu, 16 Aug 2001 09:18:25 +0300",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is SERIAL a keyword?"
}
] |
[
{
"msg_contents": " I'm not sure if it's according to or violating the standard.\n But most other databases allow a '$' inside of identifiers.\n Well, most of them recommend not to use it, but hey guys,\n what's a recommendation for a programmer?\n\n In order to lower porting issues, I think it'd be nice to add\n that to PostgreSQL as well. It's two more characters in\n scan.l and doesn't break the regression test.\n\n Objections?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 15 Aug 2001 12:56:39 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Dollar in identifiers"
},
{
"msg_contents": "> I'm not sure if it's according to or violating the standard.\n> But most other databases allow a '$' inside of identifiers.\n> Well, most of them recommend not to use it, but hey guy's,\n> what's a recommendation for a programmer?\n> \n> In order to lower porting issues, I think it'd be nice to add\n> that to PostgreSQL as well. It's two more characters in\n> scan.l and doesn't break the regression test.\n> \n> Objections?\n\nYes. We would move from standard C identifiers to $ identifiers. We\nhave had zero requests for this so I see no need to add it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 15 Aug 2001 14:26:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "> > > In order to lower porting issues, I think it'd be nice to add\n> > > that to PostgreSQL as well. It's two more characters in\n> > > scan.l and doesn't break the regression test.\n> > >\n> > > Objections?\n> >\n> > Yes. We would move from standard C identifiers to $ identifiers. We\n> > have had zero requests for this so I see no need to add it.\n> \n> Standard C? I was talking about *allowing* the dollar\n> character in table-, column-, function-names!\n> \n> And not all requests show up directly on the mailing lists\n> any more. We'll see those (compatibility) requeses from\n> Toronto as well pretty soon I guess.\n> \n> The thing is that the dollar isn't mentioned in the\n> definition of the <SQL terminal character> (chapter 5.1 of\n> SQL3) at all. But all DB vendors seem to treat it at least as\n> <SQL language identifier part>.\n> \n> Could you live with it when we don't allow a name to start\n> with a dollar, but allow the dollar inside or at the end of\n> the name?\n\nWe do currently use $1 for params, so allowing dollar in the middle\nseems better. However, I need to see multiple people who need it before\nI would say OK. If we go adding things because _one_ person wants it,\nwe will end up with a mess. Someone is working on an\nOracle-compatibility parser. It would be OK in there.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 15 Aug 2001 15:21:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "Bruce Momjian wrote:\n> > I'm not sure if it's according to or violating the standard.\n> > But most other databases allow a '$' inside of identifiers.\n> > Well, most of them recommend not to use it, but hey guy's,\n> > what's a recommendation for a programmer?\n> >\n> > In order to lower porting issues, I think it'd be nice to add\n> > that to PostgreSQL as well. It's two more characters in\n> > scan.l and doesn't break the regression test.\n> >\n> > Objections?\n>\n> Yes. We would move from standard C identifiers to $ identifiers. We\n> have had zero requests for this so I see no need to add it.\n\n Standard C? I was talking about *allowing* the dollar\n character in table-, column-, function-names!\n\n And not all requests show up directly on the mailing lists\n any more. We'll see those (compatibility) requests from\n Toronto as well pretty soon I guess.\n\n The thing is that the dollar isn't mentioned in the\n definition of the <SQL terminal character> (chapter 5.1 of\n SQL3) at all. But all DB vendors seem to treat it at least as\n <SQL language identifier part>.\n\n Could you live with it when we don't allow a name to start\n with a dollar, but allow the dollar inside or at the end of\n the name?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 15 Aug 2001 15:21:58 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > We do currently use $1 for params, so allowing dollar in the middle\n> > seems better. However, I need to see multiple people who need it before\n> > I would say OK. If we go adding things because _one_ person wants it,\n> > we will end up with a mess. Someone is working on an\n> > Oracle-compatibility parser. It would be OK in there.\n> \n> Exactly that was my first response in the meeting yesterday.\n> Put it into the Oracle-compatibility parser when we have it.\n> The question is \"will we for sure have that parser and\n> when?\".\n> \n> But let's see. Is there anybody else out there who would like\n> this feature? Ian?\n\nYes, if you can get other votes for the feature, it has a good chance. \nSeems pretty small. In fact, you could probably enable it with a\n#define that could be safer.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 15 Aug 2001 15:37:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "Bruce Momjian wrote:\n> We do currently use $1 for params, so allowing dollar in the middle\n> seems better. However, I need to see multiple people who need it before\n> I would say OK. If we go adding things because _one_ person wants it,\n> we will end up with a mess. Someone is working on an\n> Oracle-compatibility parser. It would be OK in there.\n\n Exactly that was my first response in the meeting yesterday.\n Put it into the Oracle-compatibility parser when we have it.\n The question is \"will we for sure have that parser and\n when?\".\n\n But let's see. Is there anybody else out there who would like\n this feature? Ian?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 15 Aug 2001 15:45:39 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "Jan Wieck writes:\n\n> Could you live with it when we don't allow a name to start\n> with a dollar, but allow the dollar inside or at the end of\n> the name?\n\nAt the end would also be a problem because of parsing conflicts with\noperators. (E.g., foo$<$bar) I don't really like this idea; we don't\nhave to follow all the nonsense of other people.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 15 Aug 2001 22:08:25 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n\n> Bruce Momjian wrote:\n> > We do currently use $1 for params, so allowing dollar in the middle\n> > seems better. However, I need to see multiple people who need it before\n> > I would say OK. If we go adding things because _one_ person wants it,\n> > we will end up with a mess. Someone is working on an\n> > Oracle-compatibility parser. It would be OK in there.\n> \n> Exactly that was my first response in the meeting yesterday.\n> Put it into the Oracle-compatibility parser when we have it.\n> The question is \"will we for sure have that parser and\n> when?\".\n> \n> But let's see. Is there anybody else out there who would like\n> this feature? Ian?\n\nThe $ issue isn't one I've run into in practice. The schemas which\nI've seen don't use it. (Of course, that just means that tomorrow\nI'll see one which does use it.)\n\nAs you probably know, the Oracle rules on names are:\n\n* 1 to 30 characters long\n* must start with an alphabetic character from database character set\n* may contain only alphanumeric characters from database character\n set, or underscore ('_'), dollar sign ('$'), or pound sign ('#')\n\nEven I would not argue that the pound sign should be permitted in\nPostgres identifiers. I do think that permitting dollar signs would\ndo no harm and might help some people.\n\nAs far as the comparison with C identifiers goes, I'll note that many\nC compilers permit using a dollar sign in an identifier. For example,\ngcc does:\n http://gcc.gnu.org/onlinedocs/gcc-3.0/gcc_5.html#SEC96\n\nIan\n",
"msg_date": "15 Aug 2001 13:32:02 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "> Peter Eisentraut wrote:\n> > Jan Wieck writes:\n> >\n> > > Could you live with it when we don't allow a name to start\n> > > with a dollar, but allow the dollar inside or at the end of\n> > > the name?\n> >\n> > At the end would also be a problem because of parsing conflicts with\n> > operators. (E.g., foo$<$bar) I don't really like this idea; we don't\n> > have to follow all the nonsense of other people.\n> \n> I allways found it bad coding style not to separate things\n> with whitespaces. But that's just my opinion.\n\nA bigger issue is how we handle future uses of $. Because $ is such a\nnatural Unix use for environment variables and positional parameters, we\nmay find a use for it some day.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 15 Aug 2001 16:58:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Jan Wieck writes:\n>\n> > Could you live with it when we don't allow a name to start\n> > with a dollar, but allow the dollar inside or at the end of\n> > the name?\n>\n> At the end would also be a problem because of parsing conflicts with\n> operators. (E.g., foo$<$bar) I don't really like this idea; we don't\n> have to follow all the nonsense of other people.\n\n I always found it bad coding style not to separate things\n with whitespaces. But that's just my opinion.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 15 Aug 2001 17:05:22 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> Could you live with it when we don't allow a name to start\n> with a dollar, but allow the dollar inside or at the end of\n> the name?\n\nWe had *better* not allow an identifier to start with $ --- or have\nyou forgotten about parameters?\n\nI tend to agree with Bruce on this; we have not seen any requests for\nthis so far, and I don't much like the idea of decreasing our compliance\nwith the standard without strong reason.\n\nI'm also concerned about changing the behavior of the lexer for\nparameter identifiers adjacent to keywords. \"select$1from foo\"\nmight be horrible coding style, but who's to promise that there\nare no applications out there that emit things like that?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Aug 2001 17:56:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers "
},
{
"msg_contents": "Tom Lane wrote:\n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > Could you live with it when we don't allow a name to start\n> > with a dollar, but allow the dollar inside or at the end of\n> > the name?\n>\n> We had *better* not allow an identifier to start with $ --- or have\n> you forgotten about parameters?\n\n Interestingly enough, allowing it did not break anything in\n the regression test. And even PL/pgSQL functions are able to\n deal with these objects out of the box.\n\n> I tend to agree with Bruce on this; we have not seen any requests for\n> this so far, and I don't much like the idea of decreasing our compliance\n> with the standard without strong reason.\n>\n> I'm also concerned about changing the behavior of the lexer for\n> parameter identifiers adjacent to keywords. \"select$1from foo\"\n> might be horrible coding style, but who's to promise that there\n> are no applications out there that emit things like that?\n\n Does *that* work currently? Which application could possibly\n emit such a statement? Parameters can only occur in server\n side queries. So someone must do that crap over SPI.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 15 Aug 2001 18:27:24 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "Hi,\n\nA dollar in an identifier currently works; you just have to double-quote the\nidentifier.\n\ncreate table \"foo$\" (\n \"foo$\" int4\n);\n\nselect * from \"foo$\";\nselect \"foo$\" from \"foo$\";\n\nworks just fine. Or\n\ncreate table \"$foo\" (\n \"$foo\" int4\n);\n\nselect * from \"$foo\";\nselect \"$foo\" from \"$foo\";\n\nalso works.\n\nPerhaps there may be some problems with pl/pgsql; not tested...\n\n\n\n",
"msg_date": "Thu, 16 Aug 2001 10:20:46 +0200",
"msg_from": "Gilles DAROLD <gilles@darold.net>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "> Hi,\n> \n> Dollar in identifier is currently working, you just have to doublequote the\n> identifier.\n> \n> create table \"foo$\" (\n> \"foo$\" int4\n> );\n\n\nYes, my guess is that they don't want to double-quote.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 16 Aug 2001 09:31:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "> I'm not sure if it's according to or violating the standard.\n> But most other databases allow a '$' inside of identifiers.\n...\n> Objections?\n\nNot an objection really, but...\n\nAre dollar signs currently allowed in operators? I'd hate to reduce the\nallowed number of characters for operators.\n\n - Thomas\n",
"msg_date": "Thu, 16 Aug 2001 15:02:03 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> Are dollar signs currently allowed in operators? I'd hate to reduce the\n> allowed number of characters for operators.\n\nThey are, therefore identifiers couldn't start or end with a dollar in any\ncase. However, a single \"$\" operator cannot exist, so foo$bar wouldn't be\nambiguous, but the tendency to confuse it with an operator syntax would\nreduce the readability of code greatly, IMO. However, a $$ operator can\nexist, so a construct like foo$$bar would parse completely differently,\nleaving you with a big mess.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 16 Aug 2001 18:01:12 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Are dollar signs currently allowed in operators?\n\nNot at present. If they were, you'd have a problem telling whether\n\"$12\" is a parameter identifier or a prefix \"$\" operator applied to an\ninteger constant.\n\nHowever, this is no argument for allowing them into identifiers, since\ndoing so will equally create lexing ambiguity where there was none\nbefore.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Aug 2001 13:10:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Thomas Lockhart writes:\n>> Are dollar signs currently allowed in operators? I'd hate to reduce the\n>> allowed number of characters for operators.\n\n> They are, therefore identifiers couldn't start or end with a dollar in any\n> case. However, single \"$\" operator cannot exist, so foo$bar wouldn't be\n> ambiguous, but the tendency to confuse it with an operator syntax would\n> reduce the readability of code greatly, IMO. However, a $$ operator can\n> exist, so a construct like foo$$bar would parse completely differently,\n> leaving you with a big mess.\n\nWups. Peter is more nearly correct than my previous message was.\nI was misled by\n\nregression=# select 1 $ 2;\nERROR: parser: parse error at or near \"$\"\n\nBut he's correct that \"$\" is part of the set of allowed operator\ncharacters:\n\nregression=# select 1 $$ 2;\nERROR: Unable to identify an operator '$$' for types 'integer' and 'integer'\n You will have to retype this query using an explicit cast\n\nThe reason a single $ does not work the same is that scan.l returns it\nas itself (because it's in the \"self\" set), not as an Op. And gram.y\nhas no productions that involve '$' as a terminal symbol.\n\nI am inclined to think that we should remove $ from the \"self\" list in\nscan.l, which'd allow a single $ to be lexed as an Op. (This'd not\nbreak parameters, since $n will still be preferentially lexed as a\n\"param\", being longer than the token that could be formed using Op.)\n\nIn any case, this is sufficient reason why we cannot allow $ to be\nallowed in identifiers: it will break any extant applications that use $\nin user-defined operators.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Aug 2001 13:28:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers "
},
{
"msg_contents": "Jan Wieck <JanWieck@yahoo.com> writes:\n> I would've expected you, Tom, to suggest removing it from the\n> operators as well :-)\n\nWell, adding a non-standard extension is one thing. Doing it by\nremoving a different non-standard extension brings up backwards\ncompatibility issues. In this case, I'm not thrilled about the\npotential for breaking applications that have relied on our\nlong-standing, *documented* behavior.\n\nhttp://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/sql-syntax.html#SQL-SYNTAX-OPERATORS\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Aug 2001 15:12:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers "
},
{
"msg_contents": "Tom Lane wrote:\n> In any case, this is sufficient reason why we cannot allow $ to be\n> allowed in identifiers: it will break any extant applications that use $\n> in user-defined operators.\n\n Then again we're no better than the other DB's. The standard\n excludes $ from any character class. Oracle and others\n violate the standard by allowing them in identifiers while we\n alone violate it by allowing them in operators. Well, at\n least we're different!\n\n I would've expected you, Tom, to suggest removing it from the\n operators as well :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 16 Aug 2001 15:12:58 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "I've been thinking some more about this dollar-sign business. There\nare a couple of points that haven't been made yet. If you'll allow\nme to recap:\n\nIt seems like there are two reasonable paths we could take:\n\n1. Keep $ as an operator character. If we go this way, I think we\nshould allow a single $ as an operator name too (by removing $ from\nthe set of \"self\" characters in scan.l, so that it lexes as an Op).\n\n2. Make $ an identifier character. Remove it from the set of allowed\noperator characters, and instead allow it as second-or-later character\nin identifiers. (It cannot be allowed as first character, else it's\ntotally ambiguous whether $12 is meant to be a parameter or identifier.)\n\nOption 2 improves Oracle compatibility, at the price of breaking\nbackwards compatibility for applications that presently use $ as part\nof multi-character operator names. (But does anyone know of any?)\n\nAn important thing to think about here is the effects on lexing of\nparameter symbols ($digits). Option 1 does not complicate parameter\nlexing; $digits will still be read as a parameter since it's a longer\ntoken than could be formed by taking the $ as an Op. However, this\noption doesn't make things any better either: in particular, we still\nhave the lexing ambiguity of multicharacter operator vs. parameter.\n\"x+$12\" will be read as x +$ 12, though more likely x + $12 was meant.\n\nWith $-as-identifier, it'd no longer be possible for adjacent operators\nand parameters to be confused. Instead we have a new ambiguity with\nadjacent parameters and identifiers/keywords. Presently \"select$1from\"\nis read as SELECT param FROM, but with $-as-identifier it'd be read as\na single identifier. But the interesting point is that this'd make\nparameters work a lot more like identifiers. People don't expect to\nbe able to write identifiers adjacent to other identifiers with no\nwhitespace. They do expect to be able to write them adjacent to\noperators.\n\nIn fact, with $-as-identifier we'd have this useful property: given a\nlexically-recognizable identifier, substitution of a parameter token\nfor the identifier does not require insertion of any whitespace to\nkeep the parameter lexically recognizable. Some of you will recall\nplpgsql bugs associated with the fact that the current lexer behavior\ndoes not have this property. (The other direction doesn't work 100%,\nfor example: \"select $1from\" is lexable, \"select foofrom\" isn't. But\nthat direction is much less interesting in practice.)\n\nIn short, $-as-identifier makes the lexer behavior noticeably cleaner\nthan it is now.\n\nI started out firmly in the \"keep $ an operator character\" camp. But\nafter thinking this through I'm sitting on the fence: both options seem\nabout equally attractive to me.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Aug 2001 11:00:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers "
},
{
"msg_contents": "> In fact, with $-as-identifier we'd have this useful property: given a\n> lexically-recognizable identifier, substitution of a parameter token\n> for the identifier does not require insertion of any whitespace to\n> keep the parameter lexically recognizable. Some of you will recall\n> plpgsql bugs associated with the fact that the current lexer behavior\n> does not have this property. (The other direction doesn't work 100%,\n> for example: \"select $1from\" is lexable, \"select foofrom\" isn't. But\n> that direction is much less interesting in practice.)\n> \n> In short, $-as-identifier makes the lexer behavior noticeably cleaner\n> than it is now.\n> \n> I started out firmly in the \"keep $ an operator character\" camp. But\n> after thinking this through I'm sitting on the fence: both options seem\n> about equally attractive to me.\n\nSure, if you want to remove it from operators, that is fine, but adding\nit to identifiers seems weird seeing as only one person wants it and it\nisn't standard.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 17 Aug 2001 11:25:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Sure, if you want to remove it from operators, that is fine, but adding\n> it to identifiers seems weird seeing as only one person wants it and it\n> isn't standard.\n\n?? I don't see any value in not using $ for *either* purpose. That\nbreaks backwards compatibility for hardly any gain at all.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Aug 2001 11:31:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Sure, if you want to remove it from operators, that is fine, but adding\n> > it to identifiers seems weird seeing as only one person wants it and it\n> > isn't standard.\n> \n> ?? I don't see any value in not using $ for *either* purpose. That\n> breaks backwards compatibility for hardly any gain at all.\n\nOK, if you think it should be kept for backward compatibility, then go\nahead and keep it, but I see little value in adding it to identifiers\nunless it is part of an Oracle-compatibility module or at least an\nOracle-compatibility #define.\n\nHow many user-defined $ operators do you think are out there? I doubt\nvery many. I would be surprised to find even one.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 17 Aug 2001 11:34:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "Tom Lane writes:\n\n> Option 2 improves Oracle compatibility, at the price of breaking\n> backwards compatibility for applications that presently use $ as part\n> of multi-character operator names. (But does anyone know of any?)\n\nHmm, postgresql-7.2devel_petere_privatebranch... :-(\n\nWell, the Euro is already allowed in identifiers, so I rest.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 17 Aug 2001 17:49:18 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers "
},
{
"msg_contents": "> Tom Lane writes:\n> \n> > Option 2 improves Oracle compatibility, at the price of breaking\n> > backwards compatibility for applications that presently use $ as part\n> > of multi-character operator names. (But does anyone know of any?)\n> \n> Hmm, postgresql-7.2devel_petere_privatebranch... :-(\n> \n> Well, the Euro is already allowed in identifiers, so I rest.\n\nOK, I will give my idea on this and people can make a decision. \n\nWe have pulled over Oracle specific stuff when they added features to\nour existing code. In this case, we are addinng dollarsign to the\nidentifiers just for compability. No one is saying, \"Gee, I like to see\ndollarsigns in my tablenames, and I can't do that.\" They want it only\nfor compatibility. \n\nNow, I know we have accepted compatibility stuff when it was easy to do.\nIt is just that changing the identifiers we accept seems pretty major to\nme. However, if others think it isn't a big deal, I can go along with\nit.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 17 Aug 2001 13:46:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "Quite awhile back, we had a discussion about removing \"$\" from the set\nof allowed characters in operator names, and instead allowing it as a\nnon-first character in identifiers. (It'd have to be non-first to avoid\nambiguity with parameter symbols \"$nnn\".) See, eg,\nhttp://archives.postgresql.org/pgsql-hackers/2001-08/msg00629.php\n\nThat discussion petered out without any definite consensus being\nreached, but I think it's time to reconsider the idea. We're getting\nflak about \"x<$n\" being parsed as \"x <$ n\" rather than \"x < $n\" (see\ncurrent thread in pgsql-sql). While this has always been a hazard for\nSQL and plpgsql function writers, it is now also a hazard in direct\nSQL, if you use PREPAREd queries. So I think the importance of avoiding\nsuch problems has moved up a notch as of 7.3.\n\nSo, I'd like to put that proposal back on the table. Comments anyone?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Jan 2003 12:09:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers "
},
{
"msg_contents": "\nI agree. I think $ is too special to be mixed in with operators. It is\njust used too frequently for variables.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Quite awhile back, we had a discussion about removing \"$\" from the set\n> of allowed characters in operator names, and instead allowing it as a\n> non-first character in identifiers. (It'd have to be non-first to avoid\n> ambiguity with parameter symbols \"$nnn\".) See, eg,\n> http://archives.postgresql.org/pgsql-hackers/2001-08/msg00629.php\n> \n> That discussion petered out without any definite consensus being\n> reached, but I think it's time to reconsider the idea. We're getting\n> flak about \"x<$n\" being parsed as \"x <$ n\" rather than \"x < $n\" (see\n> current thread in pgsql-sql). While this has always been a hazard for\n> SQL and plpgsql function writers, it is now also a hazard in direct\n> SQL, if you use PREPAREd queries. So I think the importance of avoiding\n> such problems has moved up a notch as of 7.3.\n> \n> So, I'd like to put that proposal back on the table. Comments anyone?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 9 Jan 2003 13:14:37 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> Quite awhile back, we had a discussion about removing \"$\" from the set\n>> of allowed characters in operator names, and instead allowing it as a\n>> non-first character in identifiers.\n\n> I agree with the first one, but does it have to imply the second?\n\nIt does not have to, but then we'd only be using \"$\" for parameters,\nwhich seems like we're not getting our money's worth out of the\ncharacter (pun intended). Also, the original point of that discussion\nwas that Oracle allows \"$\" in identifiers, and people wanted to port\nOracle code without having to rename their stuff.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Jan 2003 16:48:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers "
},
{
"msg_contents": "Tom Lane writes:\n\n> Quite awhile back, we had a discussion about removing \"$\" from the set\n> of allowed characters in operator names, and instead allowing it as a\n> non-first character in identifiers.\n\nI agree with the first one, but does it have to imply the second?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 9 Jan 2003 22:51:01 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers "
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > Quite awhile back, we had a discussion about removing \"$\" from the set\n> > of allowed characters in operator names, and instead allowing it as a\n> > non-first character in identifiers.\n> \n> I agree with the first one, but does it have to imply the second?\n\nI believe he wanted the second because Oracle supports it, and some\nOracle apps use that feature. I think in the old days, before\nunderscore, Oracle used $ for space (double yuck).\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 9 Jan 2003 16:55:02 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Peter Eisentraut wrote:\n> > Tom Lane writes:\n> >\n> > > Quite awhile back, we had a discussion about removing \"$\" from the set\n> > > of allowed characters in operator names, and instead allowing it as a\n> > > non-first character in identifiers.\n> >\n> > I agree with the first one, but does it have to imply the second?\n> \n> I believe he wanted the second because Oracle supports it, and some\n> Oracle apps use that feature. I think in the old days, before\n> underscore, Oracle used $ for space (double yuck).\n\nDollar is not allowed as per SQL spec. And I think Oracle discouraged\npeople from using it, but used it in their own stuff. Good way to avoid\nany possible conflicts and I would've liked our version of it to be pg$\ninstead of pg_ ... I think that's a bit too much to ask for, is it?\n\nThe problem is, discouraged or not, if there's a slot people will stick\nsomething into ... meaning if it accepts a dollar, to hell with vendor\nrecommendations!\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Thu, 09 Jan 2003 17:12:44 -0500",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> The problem is, discouraged or not, if there's a slot people will stick\n> something into ... meaning if it accepts a dollar, to hell with vendor\n> recommendations!\n\nI'm confused; are you voting against allowing dollar in identifiers?\nI thought it was you that proposed allowing it in the first place ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Jan 2003 17:27:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > The problem is, discouraged or not, if there's a slot people will stick\n> > something into ... meaning if it accepts a dollar, to hell with vendor\n> > recommendations!\n> \n> I'm confused; are you voting against allowing dollar in identifiers?\n> I thought it was you that proposed allowing it in the first place ...\n\nYou are, I don't and I was, indeed.\n\nRemove Dollar from the operators and allow it as non-first identifier\nchar. Please :-)\n\n\nJan\n\n\n> \n> regards, tom lane\n\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Thu, 09 Jan 2003 17:49:55 -0500",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Dollar in identifiers"
},
{
"msg_contents": "Change it - but just put it in the release notes :)\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\n> Sent: Friday, 10 January 2003 1:10 AM\n> To: Jan Wieck; Peter Eisentraut\n> Cc: PostgreSQL HACKERS\n> Subject: Re: [HACKERS] Dollar in identifiers \n> \n> \n> Quite awhile back, we had a discussion about removing \"$\" from the set\n> of allowed characters in operator names, and instead allowing it as a\n> non-first character in identifiers. (It'd have to be non-first to avoid\n> ambiguity with parameter symbols \"$nnn\".) See, eg,\n> http://archives.postgresql.org/pgsql-hackers/2001-08/msg00629.php\n> \n> That discussion petered out without any definite consensus being\n> reached, but I think it's time to reconsider the idea. We're getting\n> flak about \"x<$n\" being parsed as \"x <$ n\" rather than \"x < $n\" (see\n> current thread in pgsql-sql). While this has always been a hazard for\n> SQL and plpgsql function writers, it is now also a hazard in direct\n> SQL, if you use PREPAREd queries. So I think the importance of avoiding\n> such problems has moved up a notch as of 7.3.\n> \n> So, I'd like to put that proposal back on the table. Comments anyone?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n",
"msg_date": "Fri, 10 Jan 2003 09:36:41 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Dollar in identifiers "
}
]
[
{
"msg_contents": "\nhellow all\n\nhow can I store Binary values in a table of postgresql database?\n\nthanks...\n",
"msg_date": "15 Aug 2001 19:20:35 -0000",
"msg_from": "\"gabriel\" <gabriel@workingnetsp.com.br>",
"msg_from_op": true,
"msg_subject": "Binary Field??"
}
]
[
{
"msg_contents": "We're trying to migrate from Oracle to Postgres and I've been having\nproblems converting the procedural language stuff. I've looked at the\nweb documentation and my functions/triggers seem like they should\nwork. What am I doing wrong? Any help you could give me would be\ngreatly appreciated. I know I must be missing something, but I can't\nfigure out what it is.\n\n\nRunning this query:\n\ninsert into EXTRANET_SECTION (ID, section_name, parent, extranetname)\nvalues (255,' Main',0, 'test');\n\n\nGives me this error:\n\nfmgr_info: function 19464: cache lookup failed\n\n\n\nThese are the triggers/functions and the table they access:\n\n\ndrop function increment_section();\n\ncreate function increment_section()\nreturns opaque\nas 'BEGIN\n DECLARE\n x integer;\n BEGIN\n SELECT COUNT(*) INTO x\n FROM EXTRANET_ids\n WHERE extranetname = :NEW.extranetname;\n IF x = 0\n then insert into EXTRANET_ids (extranetname, EXTRANET_section_id,\nEXTRANET_docs_id) values (:NEW.extranetname, 0, 0);\n END IF;\n update EXTRANET_ids\n set EXTRANET_section_id = EXTRANET_section_id +1\n WHERE extranetname = :NEW.extranetname;\n select EXTRANET_section_id INTO :NEW.ID from EXTRANET_ids where\nextranetname = :NEW.extranetname;\n return NEW;\nEND;'\nlanguage 'plpgsql';\n\n\nDrop trigger ins_EXTRANET_section on EXTRANET_section;\n\nCREATE TRIGGER ins_EXTRANET_section\n BEFORE INSERT ON EXTRANET_section\n FOR EACH ROW\n execute procedure increment_section();\n\n\n\nTABLES THIS TRIGGER ACCESSES:\n\n\ncreate table EXTRANET_ids\n(extranetname varchar(40) NOT NULL primary key,\n EXTRANET_section_id int NOT NULL,\n EXTRANET_docs_id int NOT NULL);\n\n\n\n\nThanks for your help,\nJoseph\n",
"msg_date": "15 Aug 2001 14:51:04 -0700",
"msg_from": "joseph.castille@wcom.com (Joseph Castille)",
"msg_from_op": true,
"msg_subject": "Problems Converting Triggers From Oracle PLSQL to PLPGSQL"
}
]
[
{
"msg_contents": "I have what may be a half-baked idea for allowing nextval and friends to\nwork with a true sequence-name parameter, rather than a string\nequivalent.\n\nSuppose that we invent a new datatype \"regclass\", similar to regproc:\nit's actually an OID, but it has the additional implication that it is\nthe OID of a pg_class row, and the I/O operations for the type try to\naccept or print a class name not just a numeric OID.\n\nNext, hack the parser to understand that when a function has an argument\ndeclared as type regclass and is invoked with the syntax relname.func or\nfunc(relname), what is wanted is for the OID of the relation to be\npassed as a constant argument; the relation is NOT inserted into the\nquery's rangetable.\n\nThen, it's a simple matter to write a variant of nextval that identifies\nits target sequence by OID rather than name. The function will still be\nresponsible for ensuring that what it's pointed at is indeed a sequence,\nsince the parser won't enforce that.\n\nI haven't yet studied the parser to see how much of a hack this would\nbe, but it seems doable. The facility might be of use for other\nfunctions besides the sequence ones, too.\n\nThoughts?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Aug 2001 20:36:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Rough idea for supporting \"sequencename.nextval\" syntax"
},
{
"msg_contents": "I said:\n> Suppose that we invent a new datatype \"regclass\", similar to regproc:\n> it's actually an OID, but it has the additional implication that it is\n> the OID of a pg_class row, and the I/O operations for the type try to\n> accept or print a class name not just a numeric OID.\n\n> Next, hack the parser to understand that when a function has an argument\n> declared as type regclass and is invoked with the syntax relname.func or\n> func(relname), what is wanted is for the OID of the relation to be\n> passed as a constant argument; the relation is NOT inserted into the\n> query's rangetable.\n\n> Then, it's a simple matter to write a variant of nextval that identifies\n> its target sequence by OID rather than name.\n\nActually, there'd be no need to have two versions of nextval().\nConsider what happens when you write:\n\n\tselect nextval('foo');\n\n'foo' is an unknown-type literal, so if the only available function\nnextval is one that takes \"regclass\", guess what happens: 'foo' is fed\nto the input conversion routine for regclass. Given the above proposal,\nthe result would be the OID for sequence foo, and away we go.\n\nInterestingly, this'd result in an automatic upgrade path for nextval\ncalls: an expression like nextval('foo') would be parsed into the same\nexpression tree as nextval(foo), and with appropriate smarts in\nruleutils.c, it'd get listed that way in your next pg_dump.\n\nThere might be some value in continuing to accept \"text\" input for\nnextval, for example to support\n\tselect nextval('tabname' || 'seqname' || '_seq');\nwhich seems like a plausible thing for someone to do. My inclination\nwould be to handle this by defining a text-to-regclass conversion\nfunction, and still have just one nextval().\n\nThis is starting to seem less like a kluge and more like a real\nfeature...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Aug 2001 22:36:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rough idea for supporting \"sequencename.nextval\" syntax "
},
{
"msg_contents": "On Thu, Aug 16, 2001 at 10:36:49PM -0400, Tom Lane wrote:\n> I said:\n> > Suppose that we invent a new datatype \"regclass\", similar to regproc:\n> > it's actually an OID, but it has the additional implication that it is\n> > the OID of a pg_class row, and the I/O operations for the type try to\n> > accept or print a class name not just a numeric OID.\n\nTom, would it make sense to use this new type in the system tables where\npg_class oids currently are used, such as pg_attribute.attrelid ? \n\nThen, one could do:\n\nselect attname from pg_attributes where attrelid = 'mytablename';\n\nIf the appropriate type conversions where in place. (I just tried this\nwith pg_aggregate, looking for aggregates that use a particular operator,\nand failed, since text(<some regproc>) yields the oid, rather than\nthe name.)\n\nThis would essentially special case the join of two system tables. Hmm,\nsounds like a step down the trail to not needing user visible oids for\nsystem tables, even.\n \n> This is starting to seem less like a kluge and more like a real\n> feature...\n\nRoss\n",
"msg_date": "Fri, 17 Aug 2001 11:08:32 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Re: Rough idea for supporting \"sequencename.nextval\" syntax"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n> Tom, would it make sense to use this new type in the system tables where\n> pg_class oids currently are used, such as pg_attribute.attrelid ? \n\nProbably. We already use regproc where pg_proc OIDs are used --- not\ncompletely consistently IIRC, but it'd be good to be more consistent.\n\n> Then, one could do:\n> select attname from pg_attributes where attrelid = 'mytablename';\n\n> If the appropriate type conversions where in place. (I just tried this\n> with pg_aggregate, looking for aggregates that use a particular operator,\n> and failed, since text(<some regproc>) yields the oid, rather than\n> the name.)\n\nGood thought. At the moment an explicit cast is needed for regproc,\nand probably the same would be true of regclass unless we did some\nfurther hacking:\n\nregression=# select * from pg_aggregate where aggfinalfn = 'interval_avg';\nERROR: oidin: error in \"interval_avg\": can't parse \"interval_avg\"\nregression=# select * from pg_aggregate where aggfinalfn = 'interval_avg'::regproc;\n aggname | aggowner | aggtransfn | aggfinalfn | aggbasetype | aggtranstype | aggfinaltype | agginitval\n---------+----------+----------------+--------------+-------------+--------------+--------------+---------------------\n avg | 256 | interval_accum | interval_avg | 1186 | 1187 | 1186 | {0 second,0 second}\n(1 row)\n\nI think the reason the literal is resolved as OID not regproc is that we\nare using the OID equality operator here (relying on binary equivalence\nof OID and regproc). I don't much want to invent a whole set of regproc\nand regclass operators to avoid that. Perhaps the unknown-type\nresolution rules could be fine-tuned somehow to resolve as the type of\nthe other operand, rather than the declared input type of the operator,\nin cases like this. (Thomas, any thoughts about that?)\n\nLooking at this, I can't help wondering about \"regtype\" too ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Aug 2001 12:39:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Re: Rough idea for supporting \"sequencename.nextval\" syntax "
}
]
[
{
"msg_contents": "Hi all,\n\nJust wondering if anyone knows of or has tested for PostgreSQL buffer\nexploits over the various interfaces (JDBC, ODBC, psql, etc) or directly\nthrough socket connections?\n\nWorking on a sensitive application at the moment, and I've realised I've\nnever seen anyone mention testing PostgreSQL in this regard yet.\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Thu, 16 Aug 2001 19:27:18 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "PostgreSQL buffer exploits"
},
{
"msg_contents": "> Hi all,\n> \n> Just wondering if anyone knows of or has tested for PostgreSQL buffer\n> exploits over the various interfaces (JDBC, ODBC, psql, etc) or directly\n> through socket connections?\n> \n> Working on a sensitive application at the moment, and I've realised I've\n> never seen anyone mention testing PostgreSQL in this regard yet.\n\nI never heard of any tests, nor any security failures either.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 16 Aug 2001 09:33:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL buffer exploits"
},
{
"msg_contents": "Thanks Bruce,\n\nThe lack of tests is more worrying than the lack of reported failures I\nreckon. :-( I'll check through the BugTRAQ archives later on.\n\nOn a good note however, the Open Source Database Benchmarking project\n(osdb.sourceforge.net) has finally gotten around to getting it's code\nworking with PostgreSQL 7.1.x and I'm setting up a place on the techdocs\nsite to store any results which people want to report after running it.\n\nIt'll be good to start creating a publicly available database of what\nhardware and settings gives what levels of performance with PostgreSQL. \nI'll do an [ANNOUNCE] when it's all up and ready.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nBruce Momjian wrote:\n> \n> > Hi all,\n> >\n> > Just wondering if anyone knows of or has tested for PostgreSQL buffer\n> > exploits over the various interfaces (JDBC, ODBC, psql, etc) or directly\n> > through socket connections?\n> >\n> > Working on a sensitive application at the moment, and I've realised I've\n> > never seen anyone mention testing PostgreSQL in this regard yet.\n> \n> I never heard of any tests, nor any security failures either.\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Fri, 17 Aug 2001 02:33:46 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL buffer exploits"
}
]
[
{
"msg_contents": "After more than 3 months of hard testing I found a small bug in\nPLPGSQL. (It works _too_ fine due to your excellent work... :-) Thanks!)\n\nConsider this function:\n\nCREATE FUNCTION testfunc () RETURNS int4 AS '\ndeclare\n ret int4;\nbegin\n ret := column1 FROM table WHERE column2 LIKE ''%anything%''\n\tORDER BY column3 LIMIT 1;\n return ret;\nend;\n' LANGUAGE 'PLPGSQL';\n\nUnfortunately I'm getting\n\ntestdb=# select testfunc();\nERROR: query \"SELECT column1 FROM table WHERE column2 LIKE '%anything%'\nORDER BY column3 LIMIT 1\" returned more than one column\n\nIn psql there is no such problem. My PostgreSQL version is \"PostgreSQL\n7.1.1 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\" patched with\nfour small patches (in fact I almost have a 7.1.2).\n\nMy workaround for the test function is:\n\nCREATE FUNCTION testfunc () RETURNS int4 AS '\ndeclare\n ret int4;\nbegin\n SELECT column1 into ret FROM table WHERE column2 LIKE ''%anything%''\n\tORDER BY column3 LIMIT 1;\n return ret;\nend;\n' LANGUAGE 'PLPGSQL';\n\nIs this bug a reported one?\n\nRegards,\nZoltan\n\n-- \n Kov\\'acs, Zolt\\'an\n kovacsz@pc10.radnoti-szeged.sulinet.hu\n http://www.math.u-szeged.hu/~kovzol\n ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz\n\n",
"msg_date": "Thu, 16 Aug 2001 19:47:26 +0200 (CEST)",
"msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>",
"msg_from_op": true,
"msg_subject": "PLPGSQL bug in implicit SELECT"
},
{
"msg_contents": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> Unfortunately I'm getting\n> testdb=# select testfunc();\n> ERROR: query \"SELECT column1 FROM table WHERE column2 LIKE '%anything%'\n> ORDER BY column3 LIMIT 1\" returned more than one column\n\nThis appears fixed in current sources. I believe the relevant bugfix is:\n\n2001-05-27 16:48 tgl\n\n\t* src/: backend/executor/execJunk.c, backend/executor/execMain.c,\n\tinclude/executor/executor.h, include/nodes/execnodes.h: When using\n\ta junkfilter, the output tuple should NOT be stored back into the\n\tsame tuple slot that the raw tuple came from, because that slot has\n\tthe wrong tuple descriptor. Store it into its own slot with the\n\tcorrect descriptor, instead. This repairs problems with SPI\n\tfunctions seeing inappropriate tuple descriptors --- for example,\n\tplpgsql code failing to cope with SELECT FOR UPDATE.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Aug 2001 15:55:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PLPGSQL bug in implicit SELECT "
}
]
[
{
"msg_contents": "I got this cd from GreatBridge that has 7.0.3 on it, so I tried to do a\nconfigure on my AIX 4.2.1.0.06 machine. I have egcs and gnu make installed\nfrom the bull archives.\n\nconfigure says that xlc is not installed, and gcc(egcs) cant create\nexecutables.\n\nI wrote a simple \"hello\" program and it compiled fine.\n\nIdeas?\n\nWalter L. Preuninger II\n\n\n",
"msg_date": "Thu, 16 Aug 2001 13:42:29 -0500",
"msg_from": "\"Walter L. Preuninger II\" <tazmc2k@hotmail.com>",
"msg_from_op": true,
"msg_subject": "7.0.3 on AIX 4.2.1.0.06"
}
]
[
{
"msg_contents": "Look at this from the BSD/OS crypt() manual page:\n\n The crypt function performs password encryption. It is derived from the\n NBS Data Encryption Standard. Additional code has been added to deter\n key search attempts. The first argument to crypt is a NUL-terminated\n string (normally a password typed by a user). The second is a character\n array, 9 bytes in length, consisting of an underscore (``_'') followed by\n 4 bytes of iteration count and 4 bytes of salt. Both the iteration count\n and the salt are encoded with 6 bits per character, least significant\n bits first. The values 0 to 63 are encoded by the characters ``./0-9A-\n Za-z'', respectively.\n\n...\n\n For compatibility with historical versions of crypt(3), the setting may\n consist of 2 bytes of salt, encoded as above, in which case an iteration\n count of 25 is used, fewer perturbations of DES are available, at most 8\n characters of key are used, and the returned value is a NUL-terminated\n string 13 bytes in length.\n\nIt seems to say that the salt passed to crypt should be null-terminated, but\nwe call crypt from libpq as:\n\n\tcrypt_pwd = crypt(password, conn->salt);\n\nand conn.salt is char[2]. Isn't this a problem?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 16 Aug 2001 22:10:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "crypt and null termination"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> It seems to say that the salt passed to crypt should be null-terminated,\n\nHmm. The HPUX man page for crypt() just says that\n\tsalt is a two-character string chosen from the set [a-zA-Z0-9./]\nwhich I think is the traditional spec. Looks like BSD has adopted some\nlocal extensions.\n\nNote that the BSD page specifies that the extended salt format starts\nwith '_', which is not one of the allowed characters in the traditional\nformat. I bet they check that before trying to fetch more than 2 bytes.\nThe second paragraph you quote doesn't say anything about null\ntermination.\n\nStill, it wouldn't be a bad idea to add a null byte ... couldn't hurt.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Aug 2001 22:51:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: crypt and null termination "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > It seems to say that the salt passed to crypt should be null-terminated,\n> \n> Hmm. The HPUX man page for crypt() just says that\n> \tsalt is a two-character string chosen from the set [a-zA-Z0-9./]\n> which I think is the traditional spec. Looks like BSD has adopted some\n> local extensions.\n> \n> Note that the BSD page specifies that the extended salt format starts\n> with '_', which is not one of the allowed characters in the traditional\n> format. I bet they check that before trying to fetch more than 2 bytes.\n> The second paragraph you quote doesn't say anything about null\n> termination.\n> \n> Still, it wouldn't be a bad idea to add a null byte ... couldn't hurt.\n\nThat was my feeling, and it may explain some of our crypt() platform\nfailures!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 16 Aug 2001 22:59:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: crypt and null termination"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> Look at this from the BSD/OS crypt() manual page:\n> \n> The crypt function performs password encryption. It is derived from the\n> NBS Data Encryption Standard. Additional code has been added to deter\n> key search attempts. The first argument to crypt is a NUL-terminated\n> string (normally a password typed by a user). The second is a character\n> array, 9 bytes in length, consisting of an underscore (``_'') followed by\n> 4 bytes of iteration count and 4 bytes of salt. Both the iteration count\n> and the salt are encoded with 6 bits per character, least significant\n> bits first. The values 0 to 63 are encoded by the characters ``./0-9A-\n> Za-z'', respectively.\n> \n> ...\n> \n> For compatibility with historical versions of crypt(3), the setting may\n> consist of 2 bytes of salt, encoded as above, in which case an iteration\n> count of 25 is used, fewer perturbations of DES are available, at most 8\n> characters of key are used, and the returned value is a NUL-terminated\n> string 13 bytes in length.\n> \n> It seems to say that the salt passed to crypt should be null-terminated, but\n> we call crypt from libpq as:\n> \n> \tcrypt_pwd = crypt(password, conn->salt);\n> \n> and conn.salt is char[2]. Isn't this a problem?\n\nI don't think it is. Note that it refers to the salt as a \"character\narray\", not a string. Also, since '_' isn't in the allowed encoding\nset, it can tell the difference between a 9-byte salt and a 2-byte\nsalt without a terminating NUL.\n\n-Doug\n-- \nFree Dmitry Sklyarov! \nhttp://www.freesklyarov.org/ \n\nWe will return to our regularly scheduled signature shortly.\n",
"msg_date": "16 Aug 2001 23:06:21 -0400",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: crypt and null termination"
},
{
"msg_contents": "> > and conn.salt is char[2]. Isn't this a problem?\n> \n> I don't think it is. Note that it refers to the salt as a \"character\n> array\", not a string. Also, since '_' isn't in the allowed encoding\n> set, it can tell the difference between a 9-byte salt and a 2-byte\n> salt without a terminating NUL.\n\nI didn't pick up that array item.\n\nAnyway, the patch is small so I will apply it. There is no telling what\nOS's expect a character string there.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/libpq/crypt.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/libpq/crypt.c,v\nretrieving revision 1.35\ndiff -c -r1.35 crypt.c\n*** src/backend/libpq/crypt.c\t2001/08/17 02:59:19\t1.35\n--- src/backend/libpq/crypt.c\t2001/08/17 03:07:19\n***************\n*** 295,302 ****\n \tswitch (port->auth_method)\n \t{\n \t\tcase uaCrypt:\n! \t\t\tcrypt_pwd = crypt(passwd, port->cryptSalt);\n \t\t\tbreak;\n \t\tcase uaMD5:\n \t\t\tcrypt_pwd = palloc(MD5_PASSWD_LEN+1);\n \t\t\tif (isMD5(passwd))\n--- 295,306 ----\n \tswitch (port->auth_method)\n \t{\n \t\tcase uaCrypt:\n! \t\t{\n! \t\t\tchar salt[3];\n! \t\t\tStrNCpy(salt, port->cryptSalt,3);\n! 
\t\t\tcrypt_pwd = crypt(passwd, salt);\n \t\t\tbreak;\n+ \t\t}\n \t\tcase uaMD5:\n \t\t\tcrypt_pwd = palloc(MD5_PASSWD_LEN+1);\n \t\t\tif (isMD5(passwd))\nIndex: src/interfaces/libpq/fe-auth.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/fe-auth.c,v\nretrieving revision 1.51\ndiff -c -r1.51 fe-auth.c\n*** src/interfaces/libpq/fe-auth.c\t2001/08/17 02:59:19\t1.51\n--- src/interfaces/libpq/fe-auth.c\t2001/08/17 03:07:27\n***************\n*** 443,450 ****\n \tswitch (areq)\n \t{\n \t\tcase AUTH_REQ_CRYPT:\n! \t\t\tcrypt_pwd = crypt(password, conn->cryptSalt);\n \t\t\tbreak;\n \t\tcase AUTH_REQ_MD5:\n \t\t\t{\n \t\t\t\tchar *crypt_pwd2;\n--- 443,455 ----\n \tswitch (areq)\n \t{\n \t\tcase AUTH_REQ_CRYPT:\n! \t\t{\n! \t\t\tchar salt[3];\n! \n! \t\t\tStrNCpy(salt, conn->cryptSalt,3);\n! \t\t\tcrypt_pwd = crypt(password, salt);\n \t\t\tbreak;\n+ \t\t}\n \t\tcase AUTH_REQ_MD5:\n \t\t\t{\n \t\t\t\tchar *crypt_pwd2;",
"msg_date": "Thu, 16 Aug 2001 23:09:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: crypt and null termination"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> > > and conn.salt is char[2]. Isn't this a problem?\n> > \n> > I don't think it is. Note that it refers to the salt as a \"character\n> > array\", not a string. Also, since '_' isn't in the allowed encoding\n> > set, it can tell the difference between a 9-byte salt and a 2-byte\n> > salt without a terminating NUL.\n> \n> I didn't pick up that array item.\n> \n> Anyway, the patch is small so I will apply it. There is no telling what\n> OS's expect a character string there.\n\nCertainly won't hurt. I just looked at the docs for glibc on Linux,\nand it has its own semi-weird extension format for MD5-based hashing,\nbut doesn't seem to require null termination--it uses an initial '$',\nwhich again isn't part of the encoding set, as a discriminator rather\nthan '_', and will treat either another '$' or a NUL as the terminator \nfor the extended salt.\n\n-Doug\n-- \nFree Dmitry Sklyarov! \nhttp://www.freesklyarov.org/ \n\nWe will return to our regularly scheduled signature shortly.\n",
"msg_date": "16 Aug 2001 23:17:16 -0400",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: crypt and null termination"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Anyway, the patch is small so I will apply it. There is no telling what\n> OS's expect a character string there.\n\nThere's a pretty good telling: Nobody ever reported a problem related to\nthis.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 17 Aug 2001 21:44:48 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: crypt and null termination"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > Anyway, the patch is small so I will apply it. There is no telling what\n> > OS's expect a character string there.\n> \n> There's a pretty good telling: Nobody ever reported a problem related to\n> this.\n\nWe have had crypts that didn't work across platforms.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 17 Aug 2001 18:49:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: crypt and null termination"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> > Bruce Momjian writes:\n> >\n> > > Anyway, the patch is small so I will apply it. There is no telling what\n> > > OS's expect a character string there.\n> >\n> > There's a pretty good telling: Nobody ever reported a problem related to\n> > this.\n>\n> We have had crypts that didn't work across platforms.\n\nThat's because they used different encoding algorithms. A missing null\nterminator has different effects, across platforms or not.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 18 Aug 2001 01:08:29 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: crypt and null termination"
}
] |
[
{
"msg_contents": "Hi, all\n I've found a problem in pl/pgsql: the variable declared can't be the\n\nsame name of table's column name, here is a example:\n-----------------------------------8<----------------\n\ndrop table userdata;\ncreate table userdata (\n userid text,\n txnid text,\n passwd text,\n sdate timestamp,\n edate timestamp,\n amt numeric(12,2),\n localtime timestamp\n);\ndrop table logdata;\ncreate table logdata (\n userdata text\n);\n---------------------8<------------------\nif I create a function & trigger like these:\n\n-------------8<--------------------------\ndrop function parse_userdata();\ncreate function parse_userdata() returns opaque as'\n DECLARE\n user_id text;\n txn_id text;\n pswd text;\n ttt numeric;\n amt numeric(12,2); --userdata.amt%TYPE; -- I can not use\nnumeric(12,2)\n startdate timestamp;\n crtime timestamp;\n BEGIN\n if length(new.userdata) < 33 then\n raise exception ''userdata''''s length error'';\n return new;\n else\n raise NOTICE ''it''''s a normal txn.'';\n txn_id := substr(new.userdata, 14+19+1, 2);\n raise notice ''txn_id is: %'', txn_id;\n end if;\n if txn_id = ''00'' then\n\n raise notice ''it''''s login txn'';\n user_id := substr(new.userdata, 14+1, 19);\n pswd := substr(new.userdata, 14+19+1+2, 6);\n INSERT INTO userdata\n (userid, txnid, passwd, localtime)\n VALUES\n (user_id, txn_id, pswd,crtime);\n\n else if txn_id =''01'' then\n raise NOTICE ''it''''s a fix all in one inq\ntxn.'';\n end if;\n end if;\n return new;\n END; 'LANGUAGE 'plpgsql';\ndrop trigger log2userdata on logdata;\ncreate trigger log2userdata after insert on logdata for each row\nexecute procedure parse_userdata();\n-----------8<------------------\n\nthe creation went smoothly, but when I do a:\n\n-------------8<--------------------------------------------\ninsert into logdata(userdata)\nvalues('20000000000000456351010000019989700111111');\n-------------8<--------------------------------------------\n\nit reports:\nlaser_db=# insert into 
logdata(userdata)\nvalues('20000000000000456351010000019989700111111');\nNOTICE: plpgsql: ERROR during compile of parse_userdata near line 6\nERROR: parse error at or near \"(\"\n\nbut if I change the definition to:\n-----------------------8<------------------------------------\ndrop function parse_userdata();\ncreate function parse_userdata() returns opaque as'\n DECLARE\n user_id text;\n txn_id text;\n pswd text;\n ttt numeric;\n amt userdata.amt%TYPE; -- I can not use numeric(12,2)\n startdate timestamp;\n crtime timestamp;\n BEGIN\n if length(new.userdata) < 33 then\n raise exception ''userdata''''s length error'';\n return new;\n else\n raise NOTICE ''it''''s a normal txn.'';\n txn_id := substr(new.userdata, 14+19+1, 2);\n raise notice ''txn_id is: %'', txn_id;\n end if;\n\n if txn_id = ''00'' then\n\n raise notice ''it''''s login txn'';\n user_id := substr(new.userdata, 14+1, 19);\n pswd := substr(new.userdata, 14+19+1+2, 6);\n INSERT INTO userdata\n (userid, txnid, passwd, localtime)\n VALUES\n (user_id, txn_id, pswd,crtime);\n\n else if txn_id =''01'' then\n raise NOTICE ''it''''s a fix all in one inq\ntxn.'';\n end if;\n end if;\n return new;\n END; 'LANGUAGE 'plpgsql';\ndrop trigger log2userdata on logdata;\ncreate trigger log2userdata after insert on logdata for each row\nexecute procedure parse_userdata();\n-----------------------8<------------------------------------\n\nthen it' ok, and still another problem, if I declare the vairable pswd\nto passwd\n(same with userdata's column `paswd' name) then I'll get the error:\n\nlaser_db=# insert into logdata(userdata)\nvalues('20000000000000456351010000019989700111111');\nNOTICE: it's a normal txn.\nNOTICE: txn_id is: 00\nNOTICE: it's login txn\nERROR: parser: parse error at or near \"$1\"\n\nI don't konw if it's reported, but I can't found any where in docs\nmentioning these.\nso I think at lease we should make it clear in docs, or, am I doing\nsomething wrong?\n\n regards laser\n\n",
"msg_date": "Fri, 17 Aug 2001 11:33:12 +0800",
"msg_from": "He Weiping <laser@zhengmai.com.cn>",
"msg_from_op": true,
"msg_subject": "plpgsql's variable name can't be the same with table column?"
}
] |
[
{
"msg_contents": "Hi,\n\n attached is patch with:\n\n- new encoding names stuff with better performance (binary search\n instead of for() and prevent some needless searching)\n\n- possible is use synonyms for encoding (an example ISO-8859-1, \n Latin1, l1)\n\n- implemented is Peter's idea about \"encoding names clearing\" \n (other chars than [A-Za-z0-9] are irrelevant -- 'ISO-8859-1' is \n same as 'iso8859_1' or iso-8-8-5-9-1 :-) \n\n- share routines for this between FE and BE (never more define \n encoding names separate in FE and BE)\n\n- add prefix PG_ to encoding identificator macros, something like 'ALT' \n is pretty dirty in source code, rather use PG_ALT.\n\n (Note: patch add new file mb/encname.c and remove mb/common.c)\n\n\t\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz",
"msg_date": "Fri, 17 Aug 2001 15:50:18 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "encoding names"
},
{
"msg_contents": "Karel Zak writes:\n\n> - possible is use synonyms for encoding (an example ISO-8859-1,\n> Latin1, l1)\n\nOn the choice of synonyms: Do we really want to add that many synonyms\nthat are not the standard name? I think the following are not necessary:\n\ncyrillic, cp819, ibm819, isoir100x, l1-4\n\nISO 8859 is a pretty well-know term these days.\n\nKOI8 needs to be aliased as koi8r. Unicode is not a valid encoding name,\nactually. Do you know what encoding is stands for and could you add that\nas an alias?\n\nOn the code:\n\n#ifdef WIN32\n #include \"win32.h\"\n#else\n #include <unistd.h>\n#endif\n\nneeds to be written as\n\n#ifdef WIN32\n# include \"win32.h\"\n#else\n# include <unistd.h>\n#endif\n\nfor portability.\n\nFor extra credit: A patch to configure and the documentation.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 17 Aug 2001 18:11:00 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "This patch will break the JDBC driver. The jdbc driver relies on the \nvalue returned by getdatabaseencoding() to determine the server encoding \nso that it can convert to unicode. This patch changes the return values \nfor getdatabaseencoding() such that the driver will no longer work. For \nexample \"LATIN1\" which used to be returned will now come back as \n\"iso88591\". This change in behaviour impacts the JDBC driver and any \nother application that is depending on the output of the \ngetdatabaseencoding() function.\n\nI would recommend that getdatabaseencoding() return the old names for \nbackword compatibility and then deprecate this function to be removed in \nthe future. Then create a new function that returns the new encoding \nnames that can be used going forward.\n\nthanks,\n--Barry\n\nKarel Zak wrote:\n> Hi,\n> \n> attached is patch with:\n> \n> - new encoding names stuff with better performance (binary search\n> intead for() and prevent some needless searching)\n> \n> - possible is use synonyms for encoding (an example ISO-8859-1, \n> Latin1, l1)\n> \n> - implemented is Peter's idea about \"encoding names clearing\" \n> (other chars than [A-Za-z0-9] are irrelevan -- 'ISO-8859-1' is \n> same as 'iso8859_1' or iso-8-8-5-9-1 :-) \n> \n> - share routines for this between FE and BE (never more define \n> encoding names separate in FE and BE)\n> \n> - add prefix PG_ to encoding identificator macros, something like 'ALT' \n> is pretty dirty in source code, rather use PG_ALT.\n> \n> (Note: patch add new file mb/encname.c and remove mb/common.c)\n> \n> \t\t\t\tKarel\n> \n> \n> \n> ------------------------------------------------------------------------\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> Part 1.1\n> \n> Content-Type:\n> \n> text/plain\n> \n> \n> 
------------------------------------------------------------------------\n> mb-08172001.patch.gz\n> \n> Content-Type:\n> \n> application/x-gzip\n> Content-Encoding:\n> \n> base64\n> \n> \n> ------------------------------------------------------------------------\n> Part 1.3\n> \n> Content-Type:\n> \n> text/plain\n> Content-Encoding:\n> \n> binary\n> \n> \n\n\n",
"msg_date": "Fri, 17 Aug 2001 10:37:18 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "Barry Lind writes:\n\n> This patch will break the JDBC driver. The jdbc driver relies on the\n> value returned by getdatabaseencoding() to determine the server encoding\n> so that it can convert to unicode.\n\nThen the driver needs to be changed to accept the new encoding names as\nwell. (Or couldn't we convert it to Unicode in the server?)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 18 Aug 2001 01:16:43 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "\n----- Original Message ----- \nFrom: Peter Eisentraut <peter_e@gmx.net>\nSent: Friday, August 17, 2001 12:11 PM\n\n\n> Karel Zak writes:\n> \n> > - possible is use synonyms for encoding (an example ISO-8859-1,\n> > Latin1, l1)\n> \n> On the choice of synonyms: Do we really want to add that many synonyms\n> that are not the standard name? I think the following are not necessary:\n> \n> cyrillic, cp819, ibm819, isoir100x, l1-4\n\nI'm not sure about others, but 'cyrillic' is a quite ambigous alias,\nbecause it can denote many slavic languages: Russian, Ukranian,\nBulgarian, Serbian are few examples, so I believe it should be excluded\nfrom the list of synomyms.\n\n> KOI8 needs to be aliased as koi8r.\n\n... and Karel can you change these so they are consistent with\nothers:\n\n> KOI8_to_utf(unsigned char *iso, unsigned char *utf, int len)\n> {\n> local_to_utf(iso, utf, LUmapKOI8, sizeof(LUmapKOI8) / sizeof(pg_local_to_utf), PG_KOI8, len);\n> }\n\nto\n\nkoi8r_to_utf(unsigned char *iso, unsigned char *utf, int len)\n^^^^^\n{\n local_to_utf(iso, utf, LUmapKOI8R, sizeof(LUmapKOI8R) / sizeof(pg_local_to_utf), PG_KOI8R, len);\n} ^^^^^ ^^^^^ ^^^^^\n\n> WIN_to_utf(unsigned char *iso, unsigned char *utf, int len)\n> {\n> local_to_utf(iso, utf, LUmapWIN, sizeof(LUmapWIN) / sizeof(pg_local_to_utf), PG_WIN1251, len);\n> }\n\nto\n\nwin1251_to_utf(unsigned char *iso, unsigned char *utf, int len)\n^^^^^^^\n{\n local_to_utf(iso, utf, LUmapWIN1251, sizeof(LUmapWIN1251) / sizeof(pg_local_to_utf), PG_WIN1251, len);\n ^^^^^^^ ^^^^^^^ ^^^^^^^\n}\n\nS.\n\n\n",
"msg_date": "Fri, 17 Aug 2001 23:02:06 -0400",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "> Barry Lind writes:\n> \n> > This patch will break the JDBC driver. The jdbc driver relies on the\n> > value returned by getdatabaseencoding() to determine the server encoding\n> > so that it can convert to unicode.\n> \n> Then the driver needs to be changed to accept the new encoding names as\n> well. (Or couldn't we convert it to Unicode in the server?)\n\nThis will break the backward compatibility. I agree with Barry's opinion.\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 18 Aug 2001 15:46:36 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "Tatsuo Ishii writes:\n\n> > Barry Lind writes:\n> >\n> > > This patch will break the JDBC driver. The jdbc driver relies on the\n> > > value returned by getdatabaseencoding() to determine the server encoding\n> > > so that it can convert to unicode.\n> >\n> > Then the driver needs to be changed to accept the new encoding names as\n> > well. (Or couldn't we convert it to Unicode in the server?)\n>\n> This will break the backward compatibility.\n\nHow so?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 18 Aug 2001 12:48:45 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "> > > > This patch will break the JDBC driver. The jdbc driver relies on the\n> > > > value returned by getdatabaseencoding() to determine the server encoding\n> > > > so that it can convert to unicode.\n> > >\n> > > Then the driver needs to be changed to accept the new encoding names as\n> > > well. (Or couldn't we convert it to Unicode in the server?)\n> >\n> > This will break the backward compatibility.\n> \n> How so?\n\nApparently 7.1 JDBC driver does not understand the value 7.2\ngetdatabaseencoding() returns.\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 18 Aug 2001 20:14:25 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "Tatsuo Ishii writes:\n\n> Apparently 7.1 JDBC driver does not understand the value 7.2\n> getdatabaseencoding() returns.\n\nThen the server needs to look at the protocol number to decide what to\nsend back. But we need to be able to move forward with the encoding names\nsooner or later anyway.\n\nHowever, the 7.1 JDBC driver is going to be incompatible with a 7.2 server\nin a number of other areas as well, so I'm not completely sure whether\nit'd be worth the effort.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 18 Aug 2001 16:34:39 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "> Then the server needs to look at the protocol number to decide what to\n> send back. But we need to be able to move forward with the encoding names\n> sooner or later anyway.\n\nI'm not sure if we are going to raise the FE/BE protocol number for\n7.2.\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 19 Aug 2001 08:30:34 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "> > Then the server needs to look at the protocol number to decide what to\n> > send back. But we need to be able to move forward with the encoding names\n> > sooner or later anyway.\n> \n> I'm not sure if we are going to raise the FE/BE protocol number for\n> 7.2.\n\nWe are not, as far as I know. I have made my changes without doing\nthat.\n\nHowever, this brings up the issue of how a backend will fail if the\nclient provides a newer protocol version. I think we should get it to\nsend back its current protocol version and see if the client responds\nwith a protocol version we can accept. I know we don't need it now, but\nwhen we do need to up the protocol version number, we are stuck because\nof the older releases that can't cope with this.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 18 Aug 2001 21:40:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "> > I'm not sure if we are going to raise the FE/BE protocol number for\n> > 7.2.\n> \n> We are not, as far as I know. I have made my changes without doing\n> that.\n\nOk. I think we should keep getdatabaseencoding() behaving as in 7.1, and\nadd a new function which returns official encoding names.\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 19 Aug 2001 11:02:49 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "> Hi,\n> \n> attached is patch with:\n> \n> - new encoding names stuff with better performance (binary search\n> intead for() and prevent some needless searching)\n> \n> - possible is use synonyms for encoding (an example ISO-8859-1, \n> Latin1, l1)\n> \n> - implemented is Peter's idea about \"encoding names clearing\" \n> (other chars than [A-Za-z0-9] are irrelevan -- 'ISO-8859-1' is \n> same as 'iso8859_1' or iso-8-8-5-9-1 :-) \n> \n> - share routines for this between FE and BE (never more define \n> encoding names separate in FE and BE)\n> \n> - add prefix PG_ to encoding identificator macros, something like 'ALT' \n> is pretty dirty in source code, rather use PG_ALT.\n> \n> (Note: patch add new file mb/encname.c and remove mb/common.c)\n> \n> \t\t\t\tKarel\n\nThanks for the patches, but...\n\n1) There is a compiler error if --enable-unicode-conversion is not\n enabled\n\n2) The patches break createdb. createdb should raise an error if\n client-only encodings such as SJIS etc. is specified.\n\n3) I don't like following ugliness. Why not changing all of SQL_ASCII\n occurrences in the sources.\n\n /*\n * A lot of PG stuff use 'SQL_ASCII' without prefix (dirty...)\n */\n #define SQL_ASCII\tPG_SQL_ASCII\n\n4) Encoding \"official\" names are inconsistent. Here are my suggested\n changes (referring http://www.iana.org/assignments/character-sets,\n according to Peter's suggestiuon):\n\n ALT -> IBM866\n KOI8 -> KOI8_R\n UNICODE -> UTF_8 (Peter's suggestion)\n \n Also, I'm wondering why windows-1251, not windows_1251? or\n ISO_8859_1, not ISO-8859-1? 
there seems a confusion about the\n usage of \"_\" and \"-\".\n\npg_enc2name pg_enc2name_tbl[] =\n{\n\t{ \"SQL_ASCII\",\tPG_SQL_ASCII },\n\t{ \"EUC_JP\",\tPG_EUC_JP },\n\t{ \"EUC_CN\",\tPG_EUC_CN },\n\t{ \"EUC_KR\",\tPG_EUC_KR },\n\t{ \"EUC_TW\",\tPG_EUC_TW },\n\t{ \"UNICODE\",\tPG_UNICODE },\n\t{ \"MULE_INTERNAL\",PG_MULE_INTERNAL },\n\t{ \"ISO_8859_1\",\tPG_LATIN1 },\n\t{ \"ISO_8859_2\",\tPG_LATIN2 },\n\t{ \"ISO_8859_3\",\tPG_LATIN3 },\n\t{ \"ISO_8859_4\",\tPG_LATIN4 },\n\t{ \"ISO_8859_5\",\tPG_LATIN5 },\n\t{ \"KOI8\",\tPG_KOI8 },\n\t{ \"window-1251\",PG_WIN1251 },\n\t{ \"ALT\",\tPG_ALT },\n\t{ \"Shift_JIS\",\tPG_SJIS },\n\t{ \"Big5\",\tPG_BIG5 },\n\t{ \"window-1250\",PG_WIN1251 }\n};\n\n",
"msg_date": "Sun, 19 Aug 2001 11:02:57 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] encoding names"
},
{
"msg_contents": "----- Original Message ----- \nFrom: Tatsuo Ishii <t-ishii@sra.co.jp>\nSent: Saturday, August 18, 2001 10:02 PM\n\n\n> ALT -> IBM866\n\nJust a quick comment: ALT is not necessarily IBM866.\nIt can be any US-ASCII or 26-character-alphabet Latin set, for example\nIBM819 or ISO8859-1. Is actually quite different from IBM866 in its\ntrue meaning, and they shouldn't be aliased together. ALT is used for example,\nwhen none of KOI8-R, Windows-1251, or IBM866 are available to a Russian-speaking\nperson to read/write any text, messages and stuff, we use simple English letters \nto write words in Russian so that pronunciation sort of holds the same. It's\nsomething like russian_latin (as an equivalent to greek_latin in the\nhttp://www.iana.org/assignments/character-sets spec), and the writing this\nway reminds Polish or Serbian-Latin a bit.\n\nSerguei\n\n\n",
"msg_date": "Sun, 19 Aug 2001 00:37:36 -0400",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] encoding names"
},
{
"msg_contents": "On Fri, Aug 17, 2001 at 06:11:00PM +0200, Peter Eisentraut wrote:\n> Karel Zak writes:\n> \n> > - possible is use synonyms for encoding (an example ISO-8859-1,\n> > Latin1, l1)\n> \n> On the choice of synonyms: Do we really want to add that many synonyms\n> that are not the standard name? I think the following are not necessary:\n> \n> cyrillic, cp819, ibm819, isoir100x, l1-4\n\n IMHO is not problem if PG will understand to more aliases, or is here some\nrelevant problem with it? :-)\n\n> ISO 8859 is a pretty well-know term these days.\n> \n> KOI8 needs to be aliased as koi8r. Unicode is not a valid encoding name,\n\n Agree.\n\n> actually. Do you know what encoding is stands for and could you add that\n> as an alias?\n> \n> On the code:\n> \n> #ifdef WIN32\n> #include \"win32.h\"\n> #else\n> #include <unistd.h>\n> #endif\n> \n> needs to be written as\n> \n> #ifdef WIN32\n> # include \"win32.h\"\n> #else\n> # include <unistd.h>\n> #endif\n> \n> for portability.\n\n OK, but sounds curious (how compiler has problem with it?)\n\n> For extra credit: A patch to configure and the documentation.\n\n :-) needs time... but yes, I add it to next patch version.\n \n\n Thanks for suggestions.\n\n\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Sun, 19 Aug 2001 14:46:58 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "On Fri, Aug 17, 2001 at 10:37:18AM -0700, Barry Lind wrote:\n> This patch will break the JDBC driver. The jdbc driver relies on the \n> value returned by getdatabaseencoding() to determine the server encoding \n> so that it can convert to unicode. This patch changes the return values \n> for getdatabaseencoding() such that the driver will no longer work. For \n> example \"LATIN1\" which used to be returned will now come back as \n> \"iso88591\". This change in behaviour impacts the JDBC driver and any \n> other application that is depending on the output of the \n> getdatabaseencoding() function.\n\n Hmm.. but I agree with Peter that correct solution is rewrite it to\nstandard names.\n \n> I would recommend that getdatabaseencoding() return the old names for \n> backword compatibility and then deprecate this function to be removed in \n ^^^^^^^^^^^^^^^^^^^^^\n We can finish as great Microsoft systems... nice face but terrible old stuff\nin kernel.\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Sun, 19 Aug 2001 14:58:44 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "On Sun, Aug 19, 2001 at 11:02:49AM +0900, Tatsuo Ishii wrote:\n> > > I'm not sure if we are going to raise the FE/BE protocol number for\n> > > 7.2.\n> > \n> > We are not, as far as I know. I have made my changes without doing\n> > that.\n> \n> Ok. I think we should keep getdatabaseencoding() behaving as in 7.1, and\n> add a new function which returns official encoding names.\n\n OK, is there a suggestion for the name of this function?\n\n\t\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Sun, 19 Aug 2001 15:03:03 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "On Sun, Aug 19, 2001 at 11:02:57AM +0900, Tatsuo Ishii wrote:\n\n> 4) Encoding \"official\" names are inconsistent. Here are my suggested\n> changes (referring http://www.iana.org/assignments/character-sets,\n> according to Peter's suggestion):\n> \n> ALT -> IBM866\n> KOI8 -> KOI8_R\n> UNICODE -> UTF_8 (Peter's suggestion)\n> \n\n Right.\n\n But we will still need the aliases UNICODE, ALT, KOI8 for backward compatibility.\n\n Thanks, I'll try to fix them all.\n\t\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Sun, 19 Aug 2001 15:11:56 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] encoding names"
},
{
"msg_contents": "> > ALT -> IBM866\n> \n> Just a quick comment: ALT is not necessarily IBM866.\n> It can be any US-ASCII or 26-character-alphabet Latin set, for example\n> IBM819 or ISO8859-1. It is actually quite different from IBM866 in its\n> true meaning, and they shouldn't be aliased together. ALT is used, for example,\n> when none of KOI8-R, Windows-1251, or IBM866 are available to a Russian-speaking\n> person to read/write any text, messages and stuff; we use simple English letters \n> to write words in Russian so that the pronunciation sort of holds the same. It's\n> something like russian_latin (as an equivalent to greek_latin in the\n> http://www.iana.org/assignments/character-sets spec), and writing this\n> way is a bit reminiscent of Polish or Serbian-Latin.\n\nOk. Let's leave ALT as it is.\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 19 Aug 2001 22:39:05 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] encoding names"
},
{
"msg_contents": "> > 4) Encoding \"official\" names are inconsistent. Here are my suggested\n> > changes (referring http://www.iana.org/assignments/character-sets,\n> > according to Peter's suggestion):\n> > \n> > ALT -> IBM866\n> > KOI8 -> KOI8_R\n> > UNICODE -> UTF_8 (Peter's suggestion)\n> > \n> \n> Right.\n> \n> But we will still need the aliases UNICODE, ALT, KOI8 for backward compatibility.\n\nSure. \n\n> Thanks, I'll try to fix them all.\n\nThanks! But it seems we will leave ALT as it is (Serguei's suggestion).\n--\nTatsuo Ishii\n\n",
"msg_date": "Sun, 19 Aug 2001 22:46:34 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] encoding names"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> However, this brings up the issue of how a backend will fail if the\n> client provides a newer protocol version. I think we should get it to\n> send back its current protocol version and see if the client responds\n> with a protocol version we can accept.\n\nWhy? A client that wants to do this can retry with a lower version\nnumber upon seeing the \"unsupported protocol version\" failure. There's\nno need to change the postmaster code --- indeed, doing so would negate\nthe main value of such a feature, namely being able to talk to *old*\npostmasters.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 Aug 2001 11:13:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: encoding names "
},
{
"msg_contents": "\n I found some other things:\n\n- why is the database encoding for a new DB checked by the 'createdb' script and not\n by the CREATE DATABASE statement? (I mean client-only encodings, like BIG5.)\n\n Bug?\n\n\n- ODBC -- there is some multibyte stuff here too. Why doesn't the ODBC code use\n pg_wchar.h, where everything is defined? In odbc/multibyte.h all the\n encoding identifiers are defined again. \n\n IMHO we can use the same solution for ODBC as for libpq and compile it\n with the encname.c file too.\n\n\n\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Mon, 20 Aug 2001 15:36:04 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "encoding: ODBC, createdb"
},
{
"msg_contents": "Karel Zak wrote:\n> \n> I found some other things:\n> \n> - ODBC -- here is some multibyte stuff too. Why ODBC code don't use\n> pg_wchar.h where is all defined? In odbc/multibyte.h is again defined\n> all encoding identificators.\n> \n> IMHO we can use for ODBC same solution as for libpq and compile it\n> with encname.c file too.\n\nODBC under Windows needs no source/header files in PostgreSQL\nother than in src/interfaces/odbc. It's not preferable for the\npsqlodbc driver to be sensitive to other PostgreSQL changes,\nbecause the driver has to be able to talk to multiple versions of\nthe PostgreSQL server. In fact the current driver could talk to\nany server whose version >= 6.2 (according to a person).\nAs for pg_wchar.h, I'm not sure whether it could be an exception\nand whether we could expect the maintainer to take care of ODBC.\nIf I were he, I would hate it.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 21 Aug 2001 10:00:21 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: encoding: ODBC, createdb"
},
{
"msg_contents": "> I found some other things:\n> \n> - why database encoding for new DB check 'createdb' script and not\n> CREATE DATABASE statement? (means client only encodings, like BIG5)?\n> \n> Bug?\n\nOh, that must be a bug. Do you want to take care of it by yourself?\n\n> - ODBC -- here is some multibyte stuff too. Why ODBC code don't use\n> pg_wchar.h where is all defined? In odbc/multibyte.h is again defined\n> all encoding identificators. \n> \n> IMHO we can use for ODBC same solution as for libpq and compile it\n> with encname.c file too.\n\nDon't know about ODBC. Hiroshi?\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 21 Aug 2001 10:00:50 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: encoding: ODBC, createdb"
},
{
"msg_contents": "On Tue, Aug 21, 2001 at 10:00:50AM +0900, Tatsuo Ishii wrote:\n> > I found some other things:\n> > \n> > - why database encoding for new DB check 'createdb' script and not\n> > CREATE DATABASE statement? (means client only encodings, like BIG5)?\n> > \n> > Bug?\n> \n> Oh, that must be a bug. Do you want to take care of it by yourself?\n\n I'll check and fix it. The 'createdb' script needn't check anything; it all \nmust be in the backend.\n\n\tKarel \n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Tue, 21 Aug 2001 09:28:39 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: encoding: ODBC, createdb"
},
{
"msg_contents": "On Tue, Aug 21, 2001 at 10:00:21AM +0900, Hiroshi Inoue wrote:\n> Karel Zak wrote:\n> > \n> > I found some other things:\n> > \n> > - ODBC -- here is some multibyte stuff too. Why ODBC code don't use\n> > pg_wchar.h where is all defined? In odbc/multibyte.h is again defined\n> > all encoding identificators.\n> > \n> > IMHO we can use for ODBC same solution as for libpq and compile it\n> > with encname.c file too.\n> \n> ODBC under Windows needs no source/header files in PostgreSQL\n> other than in src/interfaces/odbc. It's not preferable for \n> psqlodbc driver to be sensitive about other PostgreSQL changes\n> because the driver has to be able to talk multiple versions of\n> PostgreSQL servers. In fact the current driver could talk to \n> any server whose version >= 6.2(according to a person).\n> As for pg_wchar.h I'm not sure if it could be an exception\n> and we could expect for the maintainer to take care of ODBC.\n> If I were he, I would hate it.\n\n In odbc/multibyte.h there is\n\nif (strstr(str, \"%27SJIS%27\") || strstr(str, \"'SJIS'\") || \n strstr(str, \"'sjis'\"))\n\n ...and the same line for BIG5 \n\n I just added the new names 'Shift_JIS' and 'Big5' here. \n\n\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Tue, 21 Aug 2001 09:37:44 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: encoding: ODBC, createdb"
},
{
"msg_contents": "> On Tue, Aug 21, 2001 at 10:00:50AM +0900, Tatsuo Ishii wrote:\n> > > I found some other things:\n> > > \n> > > - why database encoding for new DB check 'createdb' script and not\n> > > CREATE DATABASE statement? (means client only encodings, like BIG5)?\n> > > \n> > > Bug?\n> > \n> > Oh, that must be a bug. Do you want to take care of it by yourself?\n> \n> I'll check and fix it. The 'createdb' script needn't check anything; it all \n> must be in the backend.\n\nAgreed.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 21 Aug 2001 17:00:31 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: encoding: ODBC, createdb"
},
{
"msg_contents": "> On Sun, Aug 19, 2001 at 11:02:49AM +0900, Tatsuo Ishii wrote:\n> > > > I'm not sure if we are going to raise the FE/BE protocol number for\n> > > > 7.2.\n> > > \n> > > We are not, as far as I know. I have made my changes without doing\n> > > that.\n> > \n> > Ok. I think we should keep getdatabaseencoding() behaving as in 7.1, and\n> > add a new function which returns official encoding names.\n> \n> OK, is there a suggestion for the name of this function?\n\nThe new function returns \"canonical database encoding names\". So \n\n\"get_canonical_database_encoding\" or a shorter name looks appropriate\nto me.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 22 Aug 2001 17:09:50 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "On Wed, Aug 22, 2001 at 05:09:50PM +0900, Tatsuo Ishii wrote:\n> \n> The new function returns \"canonical database encoding names\". So \n> \n> \"get_canonical_database_encoding\" or shorter name looks appropriate\n> to me.\n\n Oops, I overlooked this mail in my inbox. Hmm... I used getdbencoding(),\nbut we can change it later (before the 7.2 release, of course). It's a\ncosmetic change.\n\n\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Wed, 22 Aug 2001 15:52:11 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "> On Wed, Aug 22, 2001 at 05:09:50PM +0900, Tatsuo Ishii wrote:\n> > \n> > The new function returns \"canonical database encoding names\". So \n> > \n> > \"get_canonical_database_encoding\" or shorter name looks appropriate\n> > to me.\n> \n> Oops, I overlook this mail in my inbox. Hmm .. I use getdbencoding(),\n> but we can change it later (before 7.2 release of course). It's\n> cosmetic change.\n\nI don't think you need to change the function name \"getdbencoding\". \n\"get_canonical_database_encoding\" is too long anyway.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 22 Aug 2001 23:39:55 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "Tatsuo Ishii writes:\n\n> I don't think you need to change the function name \"getdbencoding\".\n> \"get_canonical_database_encoding\" is too long anyway.\n\nBut getdbencoding isn't semantically different from the old\ngetdatabaseencoding. \"encoding\" isn't the right term anyway, methinks, it\nshould be \"character set\". So maybe database_character_set()? (No \"get\"\nplease.)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n\n",
"msg_date": "Wed, 22 Aug 2001 18:20:50 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "> Tatsuo Ishii writes:\n> \n> > I don't think you need to change the function name \"getdbencoding\".\n> > \"get_canonical_database_encoding\" is too long anyway.\n> \n> But getdbencoding isn't semantically different from the old\n> getdatabaseencoding. \"encoding\" isn't the right term anyway, methinks, it\n> should be \"character set\". So maybe database_character_set()? (No \"get\"\n> please.)\n\nI'm not a native English speaker, so please feel free to choose a more\nappropriate name.\n\nBTW, what's wrong with \"encoding\"? I don't think that, for example, EUC-JP\nor utf-8 are character set names.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 23 Aug 2001 09:55:06 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: encoding names"
},
{
"msg_contents": "Tatsuo Ishii writes:\n\n> > But getdbencoding isn't semantically different from the old\n> > getdatabaseencoding. \"encoding\" isn't the right term anyway, methinks, it\n> > should be \"character set\". So maybe database_character_set()? (No \"get\"\n> > please.)\n>\n> I'm not a native English speaker, so please feel free to choose more\n> appropriate name.\n>\n> BTW, what's wrong with \"encoding\"? I don't think, for example EUC-JP\n> or utf-8, are character set names.\n\nHmm, SQL talks of character sets, it has a CHARACTER_SETS view and such.\nIt's slightly incorrect, I agree.\n\nMaybe we should not touch getdatabaseencoding() right now, given that the\nnames we currently use are apparently almost correct anyway and\nconsidering the pain it creates to alter them, and instead implement the\ninformation schema views in the future?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 23 Aug 2001 17:36:01 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] encoding names"
},
{
"msg_contents": "> > BTW, what's wrong with \"encoding\"? I don't think, for example EUC-JP\n> > or utf-8, are character set names.\n> \n> Hmm, SQL talks of character sets, it has a CHARACTER_SETS view and such.\n> It's slightly incorrect, I agree.\n> \n> Maybe we should not touch getdatabaseencoding() right now, given that the\n> names we currently use are apparently almost correct anyway and\n> considering the pain it creates to alter them, and instead implement the\n> information schema views in the future?\n\nI thought the schema stuff would be introduced in 7.2, but apparently that\nwon't happen...\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 24 Aug 2001 13:21:03 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] encoding names"
},
{
"msg_contents": "> > > BTW, what's wrong with \"encoding\"? I don't think, for example EUC-JP\n> > > or utf-8, are character set names.\n> > \n> > Hmm, SQL talks of character sets, it has a CHARACTER_SETS view and such.\n> > It's slightly incorrect, I agree.\n> > \n> > Maybe we should not touch getdatabaseencoding() right now, given that the\n> > names we currently use are apparently almost correct anyway and\n> > considering the pain it creates to alter them, and instead implement the\n> > information schema views in the future?\n> \n> I thought schema stuffs would be introduced in 7.2 but apparently it\n> would not happen...\n\nI thought I could do it but ran out of time.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 24 Aug 2001 10:43:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] encoding names"
},
{
"msg_contents": "Tatsuo Ishii writes:\n\n> > Maybe we should not touch getdatabaseencoding() right now, given that the\n> > names we currently use are apparently almost correct anyway and\n> > considering the pain it creates to alter them, and instead implement the\n> > information schema views in the future?\n>\n> I thought schema stuffs would be introduced in 7.2 but apparently it\n> would not happen...\n\nTrue, but right now we'd have to do rather elaborate changes just to\nswitch a couple of names to \"more correct\" versions. Accepting them as\ninput is good, but maybe we should hold back on the output part a bit\nuntil we can do it correctly.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 24 Aug 2001 21:29:06 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] encoding names"
},
{
"msg_contents": "> > I thought schema stuffs would be introduced in 7.2 but apparently it\n> > would not happen...\n> \n> True, but right now we'd have to do rather elaborate changes just to\n> switch a couple of names to \"more correct\" versions. Accepting them as\n> input is good, but maybe we should hold back on the output part a bit\n> until we can do it correctly.\n\nAgreed.\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 25 Aug 2001 08:27:18 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] encoding names"
},
{
"msg_contents": "On Fri, Aug 24, 2001 at 09:29:06PM +0200, Peter Eisentraut wrote:\n> Tatsuo Ishii writes:\n> \n> > > Maybe we should not touch getdatabaseencoding() right now, given that the\n> > > names we currently use are apparently almost correct anyway and\n> > > considering the pain it creates to alter them, and instead implement the\n> > > information schema views in the future?\n> >\n> > I thought schema stuffs would be introduced in 7.2 but apparently it\n> > would not happen...\n> \n> True, but right now we'd have to do rather elaborate changes just to\n> switch a couple of names to \"more correct\" versions. Accepting them as\n> input is good, but maybe we should hold back on the output part a bit\n> until we can do it correctly.\n\n Changing the output is very easy (edit the strings in one array). The \nimportant thing is to clean up the internal code for encoding names into faster \nand non-duplicated code (use the same code for FE and BE).\n\n Well, I'll prepare it with fully backward-compatible output for all current\nroutines (pg_char_to_encoding too), and the new names will be visible only through new\nroutines (the suggested database_character_set(), etc). Right?\n\n\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Mon, 27 Aug 2001 09:26:17 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] encoding names"
},
{
"msg_contents": "\nWas this completed?\n\n> \n> I found some other things:\n> \n> - why database encoding for new DB check 'createdb' script and not\n> CREATE DATABASE statement? (means client only encodings, like BIG5)?\n> \n> Bug?\n> \n> \n> - ODBC -- here is some multibyte stuff too. Why ODBC code don't use\n> pg_wchar.h where is all defined? In odbc/multibyte.h is again defined\n> all encoding identificators. \n> \n> IMHO we can use for ODBC same solution as for libpq and compile it\n> with encname.c file too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 Sep 2001 16:11:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: encoding: ODBC, createdb"
},
{
"msg_contents": "On Fri, Sep 07, 2001 at 04:11:25PM -0400, Bruce Momjian wrote:\n> \n> Was this completed?\n> \n> > \n> > I found some other things:\n> > \n> > - why database encoding for new DB check 'createdb' script and not\n> > CREATE DATABASE statement? (means client only encodings, like BIG5)?\n\n It was included in my large multibyte patch and it's complete in\ndbcommands.c (it was an unreported bug in previous releases). \n\n> > - ODBC -- here is some multibyte stuff too. Why ODBC code don't use\n> > pg_wchar.h where is all defined? In odbc/multibyte.h is again defined\n> > all encoding identificators. \n\n Probably done; the ODBC maintainer should check it.\n\n\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Sat, 8 Sep 2001 10:01:55 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: encoding: ODBC, createdb"
}
] |
[
{
"msg_contents": "I want to make a field in a table contain numbers that increment each\ntime a record is updated. How would I go about doing this?\n\n\nExample:\n\nTable1\nCustID balance \nA12 255.32\nB20 132.20\n\nTable2\nCustID transactions \nA12 7\nB20 33\n\n\nEvery time something is inserted or updated in table1 I want to\nincrement that customer's number of transactions in table2. How would\nI go about doing this?\n\n\nTIA, \nJoseph\n",
"msg_date": "17 Aug 2001 06:58:38 -0700",
"msg_from": "joseph.castille@wcom.com (Joseph Castille)",
"msg_from_op": true,
"msg_subject": "How would I make a table of autonumbers/sequences?"
},
{
"msg_contents": "On 17 Aug 2001 06:58:38 -0700, Joseph Castille <joseph.castille@wcom.com> wrote:\n> I want to make a field in a table contain numbers that increment each\n> time a record is updated. How would I go about doing this?\n> \n> \n> Example:\n> \n> Table1\n> CustID balance \n> A12 255.32\n> B20 132.20\n> \n> Table2\n> CustID transactions \n> A12 7\n> B20 33\n> \n> \n> Every time something is inserted or updated in table1 I want to\n> increment that customer's number of transactions in table2. How would\n> I go about doing this?\n> \n\n\nThis sounds like a good place for a trigger.\n\nDid you ask this on the pgsql-general list yet?\nYou will probably have better luck there.\n\n",
"msg_date": "Fri, 17 Aug 2001 22:36:20 +0000 (UTC)",
"msg_from": "missive@frontiernet.net (Lee Harr)",
"msg_from_op": false,
"msg_subject": "Re: How would I make a table of autonumbers/sequences?"
}
] |
[
{
"msg_contents": "I am pretty new to PostgreSQL so please bear with me :-)\n\nWhen issuing the CREATEDB MyDb then creating some tables with CREATE \nTABLE, I then go back and do a search for the file I have just created \n(MyDb) but can't find the physical file.\n\nDoes one actually exist ??\n\nPete\n\n",
"msg_date": "Fri, 17 Aug 2001 14:05:52 GMT",
"msg_from": "Peter Moscatt <pmoscatt@bigpond.net.au>",
"msg_from_op": true,
"msg_subject": "CREATEDB Where ??"
},
{
"msg_contents": "Peter Moscatt <pmoscatt@bigpond.net.au> wrote in message news:<4x9f7.126086$Xr6.689318@news-server.bigpond.net.au>...\n> I am pretty new to PostgreSQL so please bare with me :-)\n> \n> When issuing the CREATEDB MyDb then creating some tables with CREATE \n> TABLE, I then go back and do a search for the file I have just created \n> (MyDb) but can't find the physical file.\n> \n> Does one actually exist ??\n> \n> Pete\n\nSure it does. The problem you are having is that since the\nimplementation of TOAST in PG 7.1, all of the db and table names are\nrepresented by numbers in the physical file system\n(usr/local/pgsql/data/base). So if you tried to do an 'ls' or 'find'\nfor the name of your database, it probably wouldn't show up. However,\njust do a 'psql {db_name}' (where {db_name} is the name of your\ndatabase) and you'll see that everything is kosher.\n\nTo translate the oid numbers to their respective names, use the\noid2name function found in the /contrib under your Postgres source\ncode.\n\n-Tony\n",
"msg_date": "17 Aug 2001 10:16:30 -0700",
"msg_from": "reina@nsi.edu (Tony Reina)",
"msg_from_op": false,
"msg_subject": "Re: CREATEDB Where ??"
},
{
"msg_contents": "Thanks Tony... yes that helps explain why I am not seeing what I expected \nto see.\n\nRight..... If I was developing an application, say with Python and I \nneeded to transport my created database and make it part of an installation \nprocess (create a tar ball with all needed components), do I just include \nthe /usr/local/pgsql/data directory as part of my dist ??\n\nPete\n\n\n\nTony Reina wrote:\n\n> Peter Moscatt <pmoscatt@bigpond.net.au> wrote in message\n> news:<4x9f7.126086$Xr6.689318@news-server.bigpond.net.au>...\n>> I am pretty new to PostgreSQL so please bare with me :-)\n>> \n>> When issuing the CREATEDB MyDb then creating some tables with CREATE\n>> TABLE, I then go back and do a search for the file I have just created\n>> (MyDb) but can't find the physical file.\n>> \n>> Does one actually exist ??\n>> \n>> Pete\n> \n> Sure it does. The problem you are having is that since the\n> implementation of TOAST in PG 7.1, all of the db and table names are\n> represented by numbers in the physical file system\n> (usr/local/pgsql/data/base). So if you tried to do an 'ls' or 'find'\n> for the name of your database, it probably wouldn't show up. However,\n> just do a 'psql {db_name}' (where {db_name} is the name of your\n> database) and you'll see that everything is kosher.\n> \n> To translate the oid numbers to their respective names, use the\n> oid2name function found in the /contrib under your Postgres source\n> code.\n> \n> -Tony\n> \n\n",
"msg_date": "Sat, 18 Aug 2001 02:10:46 GMT",
"msg_from": "Peter Moscatt <pmoscatt@bigpond.net.au>",
"msg_from_op": true,
"msg_subject": "Re: CREATEDB Where ??"
},
{
"msg_contents": "Peter Moscatt <pmoscatt@bigpond.net.au> writes:\n\n> I am pretty new to PostgreSQL so please bear with me :-)\n> \n> When issuing the CREATEDB MyDb then creating some tables with CREATE \n> TABLE, I then go back and do a search for the file I have just created \n> (MyDb) but can't find the physical file.\n> \n> Does one actually exist ??\n\nYes, they're named by OID (integer) in $PGDATA rather than by database\nname. There's a reason for this so don't complain about it. ;)\n\n-Doug\n-- \nFree Dmitry Sklyarov! \nhttp://www.freesklyarov.org/ \n\nWe will return to our regularly scheduled signature shortly.\n",
"msg_date": "20 Aug 2001 11:03:01 -0400",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATEDB Where ??"
},
{
"msg_contents": "Yes and no :-). The files were created, but all postgres data files are now\nidentified by numbers (OIDs, I think), so you will not find a file or\ndirectory anywhere in your filesystem named \"mydb\" or \"mytable\".\n\n----- Original Message -----\nFrom: \"Peter Moscatt\" <pmoscatt@bigpond.net.au>\nTo: <pgsql-hackers@postgresql.org>\nSent: Friday, August 17, 2001 9:05 AM\nSubject: [HACKERS] CREATEDB Where ??\n\n\n> I am pretty new to PostgreSQL so please bear with me :-)\n>\n> When issuing the CREATEDB MyDb then creating some tables with CREATE\n> TABLE, I then go back and do a search for the file I have just created\n> (MyDb) but can't find the physical file.\n>\n> Does one actually exist ??\n>\n> Pete\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n\n",
"msg_date": "Mon, 20 Aug 2001 10:14:19 -0500",
"msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>",
"msg_from_op": false,
"msg_subject": "Re: CREATEDB Where ??"
},
{
"msg_contents": "Peter Moscatt <pmoscatt@bigpond.net.au> writes:\n\n> Thanks Tony... yes that helps explain why I am not seeing what I expected \n> to see.\n> \n> Right..... If I was developing an application, say with Python and I \n> needed to transport my created database and make it part of an installation \n> process (create a tar ball with all needed components), do I just include \n> the /usr/local/pgsql/data directory as part of my dist ??\n\nIt would be a MUCH better idea instead to include an SQL script that's \nrun automatically by the install process.\n\nIt's more flexible and more robust against version skew etc.\n\n-Doug\n-- \nFree Dmitry Sklyarov! \nhttp://www.freesklyarov.org/ \n\nWe will return to our regularly scheduled signature shortly.\n",
"msg_date": "20 Aug 2001 11:55:42 -0400",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: CREATEDB Where ??"
},
{
"msg_contents": "Hi Pete,\n\nWould it be appropriate to do a SQL dump of the created database via\npg_dump, then reload it during the installation via psql or the COPY\ncommand? If you include the whole data/ subdirectory, you'll also get\nthe WAL logfiles and everything, which you probably don't need.\n\nOf course, you'll need to create a process for updating the *.conf\n(postgresql.conf, pg_ident.conf, pg_hba.conf) files correctly too. \nPerl, sed, etc, would all be a starting point here.\n\nRegards and best wishes,\n\nJustin Clift\n\n\nPeter Moscatt wrote:\n> \n> Thanks Tony... yes that helps explain why I am not seeing what I expected\n> to see.\n> \n> Right..... If I was developing an application, say with Python and I\n> needed to transport my created database and make it part of an installation\n> process (create a tar ball with all needed components), do I just include\n> the /usr/local/pgsql/data directory as part of my dist ??\n> \n> Pete\n> \n> Tony Reina wrote:\n> \n> > Peter Moscatt <pmoscatt@bigpond.net.au> wrote in message\n> > news:<4x9f7.126086$Xr6.689318@news-server.bigpond.net.au>...\n> >> I am pretty new to PostgreSQL so please bear with me :-)\n> >>\n> >> When issuing the CREATEDB MyDb then creating some tables with CREATE\n> >> TABLE, I then go back and do a search for the file I have just created\n> >> (MyDb) but can't find the physical file.\n> >>\n> >> Does one actually exist ??\n> >>\n> >> Pete\n> >\n> > Sure it does. The problem you are having is that since the\n> > implementation of TOAST in PG 7.1, all of the db and table names are\n> > represented by numbers in the physical file system\n> > (usr/local/pgsql/data/base). So if you tried to do an 'ls' or 'find'\n> > for the name of your database, it probably wouldn't show up. However,\n> > just do a 'psql {db_name}' (where {db_name} is the name of your\n> > database) and you'll see that everything is kosher.\n> >\n> > To translate the oid numbers to their respective names, use the\n> > oid2name function found in the /contrib under your Postgres source\n> > code.\n> >\n> > -Tony\n> >\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Tue, 21 Aug 2001 02:46:23 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: CREATEDB Where ??"
},
{
"msg_contents": "> Yes and no :-). The files were created but all postgres data files are now\n> identified by numbers (oids I think), so you will not find a file or\n> directory anywhere in your filesystem named \"mydb\", or \"mytable\".\n\n/contrib/oid2name in 7.1.X does the mapping.\n\n> \n> ----- Original Message -----\n> From: \"Peter Moscatt\" <pmoscatt@bigpond.net.au>\n> To: <pgsql-hackers@postgresql.org>\n> Sent: Friday, August 17, 2001 9:05 AM\n> Subject: [HACKERS] CREATEDB Where ??\n> \n> \n> > I am pretty new to PostgreSQL so please bear with me :-)\n> >\n> > When issuing the CREATEDB MyDb then creating some tables with CREATE\n> > TABLE, I then go back and do a search for the file I have just created\n> > (MyDb) but can't find the physical file.\n> >\n> > Does one actually exist ??\n> >\n> > Pete\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://www.postgresql.org/search.mpl\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Aug 2001 15:33:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CREATEDB Where ??"
},
{
"msg_contents": "Peter Moscatt <pmoscatt@bigpond.net.au> writes:\n> Right..... If I was developing an application, say with Python and I \n> needed to transport my created database and make it part of an installation \n> process (create a tar ball with all needed components), do I just include \n> the /usr/local/pgsql/data directory as part of my dist ??\n\nNo --- a tar dump of your directory will be quite useless to anyone else\non a different platform, and even those on the same platform would\nlikely not want to blow away their databases and replace 'em with yours.\n\nInstead, use pg_dump to create an SQL script that can be loaded into an\nexisting database installation.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Aug 2001 17:04:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: CREATEDB Where ?? "
}
] |
[
{
"msg_contents": "As usual i didn't cc the list :)\n\nMagnus\n----- Original Message ----- \nFrom: \"Magnus Naeslund(f)\" <mag@fbab.net>\nTo: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nSent: Friday, August 17, 2001 6:55 PM\nSubject: Re: [PATCHES] Re: [HACKERS] Re: WIN32 errno patch \n\n\n> From: \"Tom Lane\" <tgl@sss.pgh.pa.us>\n> [snip]\n> > FWIW, Magnus says this works:\n> >\n> > #define SOCK_STRERROR my_sock_strerror\n> >\n> [snip]\n> >\n> >\n> > Anyone have any objections to it?\n> >\n> > regards, tom lane\n> >\n> \n> I can make that patch if you'd like, but i need to know what i should be\n> working on (right from CVS?).\n> In what context is it? (the libpq lib maybe).\n> And how to test it :)\n> \n> I can do the legwork.\n> \n> Magnus\n> \n> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n> Programmer/Networker [|] Magnus Naeslund\n> PGP Key: http://www.genline.nu/mag_pgp.txt\n> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n> \n> \n\n",
"msg_date": "Fri, 17 Aug 2001 18:56:35 +0200",
"msg_from": "\"Magnus Naeslund\\(f\\)\" <mag@fbab.net>",
"msg_from_op": true,
"msg_subject": "Fw: [PATCHES] Re: Re: WIN32 errno patch "
}
] |
[
{
"msg_contents": "Hi all. I'm not a good DB admin , thats why I'm posting to this list !\nHow can I figure out the size of a database or table ???\n\n-- \nLeandro Rodrigo Saad Cruz\nInterBusiness tecn. e serv.\nSao Paulo - Brasil",
"msg_date": "Fri, 17 Aug 2001 14:32:05 -0300",
"msg_from": "Leandro Saad <leandro@ibnetwork.com.br>",
"msg_from_op": true,
"msg_subject": "DB size"
},
{
"msg_contents": "> Hi all. I'm not a good DB admin , thats why I'm posting to this list !\n> How can I figure out the size of a database or table ???\n\nIt was easier in older versions of postgresql to perform a `du -h` on\nthe directory that corresponded to the database name. However now they\nare named by some type of object id?\n\nWould it be possible for an item to be added to the to do list to perform\na report on a database?\n\nI.e:\n\n \\d{p|S|l|r} list permissions/system tables/lobjects/ report\n\nReport will contain:\n\n1. Last vacuum time.\n2. Space the database is using on the file system.\n3. Some other juicy statistics?\n\nThanks.\n\n",
"msg_date": "Mon, 20 Aug 2001 10:11:48 +1000 (EST)",
"msg_from": "Grant <grant@rawlinsons.com.au>",
"msg_from_op": false,
"msg_subject": "Re: DB size"
}
] |
[
{
"msg_contents": "\nMorning all ...\n\n\tPostgreSQL v7.1.3 has just been bundled and uploaded onto the\ncentral FTP server, with mirrors to follow over the next several hours ...\n\n\tThe ChangeLog file is/will be viewable on the mirrors under:\n\n\t\t/pub/latest/README.ChangeLog\n\n\n\tBeing a purely bug fix release, there is no need to do a\ndump/restore to upgrade to this release.\n\n",
"msg_date": "Fri, 17 Aug 2001 13:48:24 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "PostgreSQL v7.1.3 bundled and uploaded to central FTP Server"
},
{
"msg_contents": "Hi Marc. Is anonymous FTP just not allowed to the posostgresql.org box\nanymore ? Every time I've tried in the last few months I get \"The maximum\nnumber of concurrent connections has been reached.\"..\n\nI'm wondering simply for situations like the one that we're in right now\nwhere the mirrors won't be updated for a while (and I have some time to do\nsome upgrading)...\n\n*Shrug* not a big deal, I'm just wondering..\n\nThanks!\n\n-Mitch\n\n----- Original Message -----\nFrom: \"Marc G. Fournier\" <scrappy@hub.org>\nTo: <pgsql-hackers@postgresql.org>\nCc: <pgsql-announce@postgresql.org>; <pgsql-general@postgresql.org>\nSent: Friday, August 17, 2001 1:48 PM\nSubject: [GENERAL] PostgreSQL v7.1.3 bundled and uploaded to central FTP\nServer\n\n\n>\n> Morning all ...\n>\n> PostgreSQL v7.1.3 has just been bundled and uploaded onto the\n> central FTP server, with mirrors to follow over the next several hours ...\n>\n> The ChangeLog file is/will be viewable on the mirrors under:\n>\n> /pub/latest/README.ChangeLog\n>\n>\n> Being a purely bug fix release, there is no need to do a\n> dump/restore to upgrade to this release.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n>\n\n",
"msg_date": "Fri, 17 Aug 2001 14:26:53 -0400",
"msg_from": "\"Mitch Vincent\" <mvincent@cablespeed.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL v7.1.3 bundled and uploaded to central FTP Server"
},
{
"msg_contents": "On Fri, Aug 17, 2001 at 02:26:53PM -0400,\n Mitch Vincent <mvincent@cablespeed.com> wrote:\n> Hi Marc. Is anonymous FTP just not allowed to the posostgresql.org box\n> anymore ? Every time I've tried in the last few months I get \"The maximum\n> number of concurrent connections has been reached.\"..\n> \n> I'm wondering simply for situations like the one that we're in right now\n> where the mirrors won't be updated for a while (and I have some time to do\n> some upgrading)...\n> \n> *Shrug* not a big deal, I'm just wondering..\n> \n> Thanks!\n> \n> -Mitch\n\nIt isn't surprizing that a lot of people are hitting the ftp site\nright after the announcement went out.\n\nThere is http access to the ftp area, which is what I used to\nget a copy. I don't have the web url handy, but I found a link for\ngetting 7.1.2 and changed the 2 2s to 3s and got access.\n\n",
"msg_date": "Fri, 17 Aug 2001 14:41:15 -0500",
"msg_from": "Bruno Wolff III <bruno@wolff.to>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL v7.1.3 bundled and uploaded to central FTP Server"
},
{
"msg_contents": "Right, but this has been happening for months and I'm wondering if it's just\nmy terrible timing or if anon access has been shut off..\n\n-Mitch\n\n----- Original Message -----\nFrom: \"Bruno Wolff III\" <bruno@wolff.to>\nTo: \"Mitch Vincent\" <mvincent@cablespeed.com>\nCc: \"Marc G. Fournier\" <scrappy@hub.org>; <pgsql-general@postgresql.org>\nSent: Friday, August 17, 2001 3:41 PM\nSubject: Re: PostgreSQL v7.1.3 bundled and uploaded to central FTP Server\n\n\n> On Fri, Aug 17, 2001 at 02:26:53PM -0400,\n> Mitch Vincent <mvincent@cablespeed.com> wrote:\n> > Hi Marc. Is anonymous FTP just not allowed to the posostgresql.org box\n> > anymore ? Every time I've tried in the last few months I get \"The\nmaximum\n> > number of concurrent connections has been reached.\"..\n> >\n> > I'm wondering simply for situations like the one that we're in right now\n> > where the mirrors won't be updated for a while (and I have some time to\ndo\n> > some upgrading)...\n> >\n> > *Shrug* not a big deal, I'm just wondering..\n> >\n> > Thanks!\n> >\n> > -Mitch\n>\n> It isn't surprizing that a lot of people are hitting the ftp site\n> right after the announcement went out.\n>\n> There is http access to the ftp area, which is what I used to\n> get a copy. I don't have the web url handy, but I found a link for\n> getting 7.1.2 and changed the 2 2s to 3s and got access.\n>\n\n",
"msg_date": "Fri, 17 Aug 2001 15:41:49 -0400",
"msg_from": "\"Mitch Vincent\" <mvincent@cablespeed.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL v7.1.3 bundled and uploaded to central FTP Server"
},
{
"msg_contents": "\"Mitch Vincent\" <mvincent@cablespeed.com> writes:\n\n> Hi Marc. Is anonymous FTP just not allowed to the posostgresql.org box\n> anymore ? \n\nIt is... there's just lots of people wanting it. I got in after 10\nminutes or so, downloaded some of it at 0.99 kB/s and then got the\nfile via other means (thanks :).\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "17 Aug 2001 15:54:58 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL v7.1.3 bundled and uploaded to central FTP Server"
}
] |
[
{
"msg_contents": "I have so far implemented the following:\n\nAn operator class text_binary_ops that does memcmp()-based comparison of\ntext data. The operators are named $<$ etc. for lack of a better idea.\nThat lack is further illustrated by the idea to name them \"binary-<\" etc.,\nwhich wouldn't get through the parser, but it doesn't need to.\n\nThe system will use such an index for the queries in question if the\nlocale is not \"like-safe\", in the terminology of the code (I'll end up\nrenaming that a little). If the locale is plain C/POSIX, it will use the\n\"normal\" index. (Should it be able to use either one in that case?)\n\nOpen issues:\n\nCurrently, this only works on text. Before I go out and duplicate all\nthis code and the catalog entries for varchar and bpchar, is there a\nBetter Way?\n\nIn match_special_index_operator(), the code looks up the required\noperators by name (<, >=). In other places we go out of our way to not\nattach predefined meanings to operator names. (In yet other places we do,\nof course.) Wouldn't it be better to test whether the candidate index is\na btree and then select the operator to use from the btree strategy\nentries? One uglification factor here is that the comment\n\n * We cheat a little by not checking for availability of \"=\" ... any\n * index type should support \"=\", methinks.\n\nno longer holds.\n\n\nPS: I wasn't able to find or reconstruct a concrete test case for this in\nthe archives. Naturally, I'd accept this approach on theoretical purity,\nbut if someone remembers a real tough one, let me know.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 17 Aug 2001 20:25:46 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Progress report on locale safe LIKE indexing"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> I have so far implemented the following:\n> \n> An operator class text_binary_ops that does memcmp()-based comparison of\n> text data. The operators are named $<$ etc. for lack of a better idea.\n> That lack is further illustrated by the idea to name them \"binary-<\" etc.,\n> which wouldn't get through the parser, but it doesn't need to.\n> \n> The system will use such an index for the queries in question if the\n> locale is not \"like-safe\", in the terminology of the code (I'll end up\n> renaming that a little).\n\nThis depends on the assumption that '=' is equivalent in\nany locale. Is it guaranteed ?\nFor example, ( 'a' = 'A' ) isn't allowed in any locale ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Sat, 18 Aug 2001 09:00:57 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Progress report on locale safe LIKE indexing"
},
{
"msg_contents": "Hiroshi Inoue writes:\n\n> > An operator class text_binary_ops that does memcmp()-based comparison of\n> > text data. The operators are named $<$ etc. for lack of a better idea.\n> > That lack is further illustrated by the idea to name them \"binary-<\" etc.,\n> > which wouldn't get through the parser, but it doesn't need to.\n> >\n> > The system will use such an index for the queries in question if the\n> > locale is not \"like-safe\", in the terminology of the code (I'll end up\n> > renaming that a little).\n>\n> This depends on the assumption that '=' is equivalent in\n> any locale. Is it guaranteed ?\n> For example, ( 'a' = 'A' ) isn't allowed in any locale ?\n\nThe whole point here is not to rely on '='. Instead we use a different\nopclass which does \"locale-safe\" comparisons, as said above.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 18 Aug 2001 12:47:06 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Progress report on locale safe LIKE indexing"
},
{
"msg_contents": "> -----Original Message-----\n> From: Peter Eisentraut [mailto:peter_e@gmx.net]\n>\n> Hiroshi Inoue writes:\n>\n> > > An operator class text_binary_ops that does memcmp()-based\n> comparison of\n> > > text data. The operators are named $<$ etc. for lack of a\n> better idea.\n> > > That lack is further illustrated by the idea to name them\n> \"binary-<\" etc.,\n> > > which wouldn't get through the parser, but it doesn't need to.\n> > >\n> > > The system will use such an index for the queries in question if the\n> > > locale is not \"like-safe\", in the terminology of the code (I'll end up\n> > > renaming that a little).\n> >\n> > This depends on the assumption that '=' is equivalent in\n> > any locale. Is it guaranteed ?\n> > For example, ( 'a' = 'A' ) isn't allowed in any locale ?\n>\n> The whole point here is not to rely on '='. Instead we use a different\n> opclass which does \"locale-safe\" comparisons, as said above.\n\nIsn't 'a' LIKE 'A' if 'a' = 'A' ?\nLIKE seems to use the collating sequence.\n\nregards,\nHiroshi Inoue\n\n",
"msg_date": "Sun, 19 Aug 2001 00:56:50 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: Progress report on locale safe LIKE indexing"
},
{
"msg_contents": "Hiroshi Inoue writes:\n\n> Isn't 'a' LIKE 'A' if 'a' = 'A' ?\n\nYes. But 'a' <> 'A'.\n\n> LIKE seems to use the collating sequence.\n\nNo. The collating sequence defines the order of all possible strings.\nLIKE doesn't order anything.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 18 Aug 2001 18:19:59 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "RE: Progress report on locale safe LIKE indexing"
},
{
"msg_contents": "> -----Original Message-----\n> From: Peter Eisentraut [mailto:peter_e@gmx.net]\n> \n> Hiroshi Inoue writes:\n> \n> > Isn't 'a' LIKE 'A' if 'a' = 'A' ?\n> \n> Yes. But 'a' <> 'A'.\n\nPlease look at my first question.\n This depends on the assumption that '=' is equivalent in\n any locale. Is it guaranteed ?\n For example, ( 'a' = 'A' ) isn't allowed in any locale ?. \n\nAnd your answer was\n The whole point here is not to rely on '='. \n\nClearly your theory depends on the assumption that\n If a = b in some locale then a = b in ASCII locale.\n\nAnd where does 'a' <> 'A' come from ?\nThe definition of '=' is a part of collating sequence.\n\n> \n> > LIKE seems to use the collating sequence.\n> \n> No. The collating sequence defines the order of all possible strings.\n> LIKE doesn't order anything.\n\nAgain where does it come from ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Sun, 19 Aug 2001 06:53:06 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: Progress report on locale safe LIKE indexing"
},
{
"msg_contents": "Hiroshi Inoue writes:\n\n> Please look at my first question.\n> This depends on the assumption that '=' is equivalent in\n> any locale. Is it guaranteed ?\n> For example, ( 'a' = 'A' ) isn't allowed in any locale ?.\n>\n> And your answer was\n> The whole point here is not to rely on '='.\n>\n> Clearly your theory depends on the assumption that\n> If a = b in some locale then a = b in ASCII locale.\n>\n> And where does 'a' <> 'A' come from ?\n> The definition of '=' is a part of collating sequence.\n>\n> >\n> > > LIKE seems to use the collating sequence.\n> >\n> > No. The collating sequence defines the order of all possible strings.\n> > LIKE doesn't order anything.\n>\n> Again where does it come from ?\n\nLet me elaborate again:\n\nWe want to be able to use btree indexes for LIKE expressions, under the\ntheory that given the expression col LIKE 'foo%' we can augment the\nexpression col >= 'foo' and col < 'fop', which a btree can handle. Our\nproblem is that this theory was false, because if the operators >= and <\nare locale-aware they can do just about anything. So my solution was that\nI implement an extra set of operators >= and < (named $>=$ and $<$ for the\nheck of it) that are *not* locale-aware so that this whole thing works\nagain.\n\nNow, if you look at the code that does the LIKE pattern matching you'll\nsee that it does not use any locale features, it simply compares\ncharacters for equality based on their character codes, accounting for the\nwildcards. Consequentially, this whole operation has nothing to do with\nlocales. It was an error that it did in the first place, that's why we\nhad all these problems.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 19 Aug 2001 00:56:53 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "RE: Progress report on locale safe LIKE indexing"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Now, if you look at the code that does the LIKE pattern matching you'll\n> see that it does not use any locale features, it simply compares\n> characters for equality based on their character codes, accounting for the\n> wildcards. Consequentially, this whole operation has nothing to do with\n> locales.\n\nBut the LIKE code does know about multibyte character sets. Is it safe\nto assume that memcmp-based sorting will not make any mistakes with\nmultibyte characters?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 Aug 2001 19:34:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Progress report on locale safe LIKE indexing "
},
{
"msg_contents": "> -----Original Message-----\n> From: Peter Eisentraut\n>\n> Hiroshi Inoue writes:\n>\n> > Please look at my first question.\n> > This depends on the assumption that '=' is equivalent in\n> > any locale. Is it guaranteed ?\n> > For example, ( 'a' = 'A' ) isn't allowed in any locale ?.\n> >\n> > And your answer was\n> > The whole point here is not to rely on '='.\n> >\n> > Clearly your theory depends on the assumption that\n> > If a = b in some locale then a = b in ASCII locale.\n> >\n> > And where does 'a' <> 'A' come from ?\n> > The definition of '=' is a part of collating sequence.\n> >\n> > >\n> > > > LIKE seems to use the collating sequence.\n> > >\n> > > No. The collating sequence defines the order of all possible strings.\n> > > LIKE doesn't order anything.\n> >\n> > Again where does it come from ?\n>\n> Let me elaborate again:\n>\n> Now, if you look at the code that does the LIKE pattern matching you'll\n> see that it does not use any locale features, it simply compares\n> characters for equality based on their character codes, accounting for the\n> wildcards. Consequentially, this whole operation has nothing to do with\n> locales.\n\nOh I see your point.\nHmm * string1 = string2 * doesn't imply * string1 LIKE string2 * ?\n\nOtherwise the current criterion of LIKE matching unwittingly assumes\nthat there's no locale that has the different definition of '=' from that of\nASCII locale. I don't think the current implementation is strictly right.\n\nregards,\nHiroshi Inoue\n\n",
"msg_date": "Sun, 19 Aug 2001 17:56:36 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: Progress report on locale safe LIKE indexing"
},
{
"msg_contents": "Hiroshi Inoue writes:\n\n> Hmm * string1 = string2 * doesn't imply * string1 LIKE string2 * ?\n\nIn the current implementation of LIKE, you're right. The SQL standard\nallows for the possibility that \"[d]epending on the collating sequence,\ntwo strings may compare as equal even if they are of different lengths or\ncontain different sequences of characters.\" However, I doubt that this\ncan really happen in practice. For example, in some collating sequences\n(such as en_US), characters with diacritic marks (accents) are \"more\nequal\" than others, but in the end there's always a tie breaker. Or do\nyou know an example where this really happens?\n\n> Otherwise the current criterion of LIKE matching unwittingly assumes\n> that there's no locale that has the different definition of '=' from that of\n> ASCII locale. I don't think the current implementation is strictly right.\n\nStrictly speaking, it isn't. The SQL standard says that literal\ncharacters in the pattern must be matched to the characters in the\nsupplied value according to the collating sequence. (See 8.5 GR 3. d) ii)\n4).)\n\nHowever, I strongly doubt that that would actually be a good idea.\nPattern matching generally doesn't work this way (cf. POSIX regexp), and\nsince locale-aware comparisons are context-sensitive in some cases I don't\neven want to think about whether this could actually work when faced with\nwildcards.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 19 Aug 2001 13:16:34 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "RE: Progress report on locale safe LIKE indexing"
},
{
"msg_contents": "Tom Lane writes:\n\n> But the LIKE code does know about multibyte character sets. Is it safe\n> to assume that memcmp-based sorting will not make any mistakes with\n> multibyte characters?\n\nRemember that this memcmp-based sorting is the same sorting that texteq\nwill do when locale is turned off. So if there were a problem it would\nalready exist, but I'm sure there isn't one.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 19 Aug 2001 14:31:27 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Progress report on locale safe LIKE indexing "
},
{
"msg_contents": "We're about to start up Postgresql in production, and I am looking for a \nfew tools, so I do not have to reinvent the wheel.\n\nI'm looking for tools that:\n\n 1) Grep through log files, looking for particulary nasty items, that \nshould get sent to me in email or to a cell phone. I noticed you can use \nthe syslog daemon, which would be great for sorting. Anyone have a sort list?\n\n 2) Probably, we'll use Bib Brother or Whats Up Gold to monitor system \nresources, like cpu, memory, and disk space. Anyone have suggestions?\n\n 3) Interesting tools any self-respecting DBA should have, that looks for \nparticular \"quirks\" in Postgresql. For example, in Informix, we were \nalways checking fragmentation and space issues.\n\n\nNaomi Walker\nChief Information Officer\nEldorado Computing, Inc.\n602-604-3100\nnwalker@eldocomp.com\n\n",
"msg_date": "Sun, 19 Aug 2001 16:09:23 -0700",
"msg_from": "Naomi Walker <nwalker@eldocomp.com>",
"msg_from_op": false,
"msg_subject": "Tool Search"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Hiroshi Inoue writes:\n> \n> > Hmm * string1 = string2 * doesn't imply * string1 LIKE string2 * ?\n> \n> In the current implementation of LIKE, you're right. The SQL standard\n> allows for the possibility that \"[d]epending on the collating sequence,\n> two strings may compare as equal even if they are of different lengths or\n> contain different sequences of characters.\" However, I doubt that this\n> can really happen in practice. For example, in some collating sequences\n> (such as en_US), characters with diacritic marks (accents) are \"more\n> equal\" than others, but in the end there's always a tie breaker. Or do\n> you know an example where this really happens?\n\nI can see the examples in a documentation M$ SQL Server though\nI can't try it in reality.\nFor example\n ignore case(low/high)\n ignore accents\n \nI don't think they are strange as collating sequences.\n\nYou are establishing a pretty big mechanism and I think\nyou should clarify the assumption.\nPlease tell me the assumption.\nI can think of the followings.\n\n1) Because the current implementaion of LIKE isn't locale-aware,\n we should be compatible with it for ever.\n2) strcoll(str1, str2) == 0 means strcmp(str1, str2) == 0\n in any locale.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Mon, 20 Aug 2001 08:52:12 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Progress report on locale safe LIKE indexing"
},
{
"msg_contents": "Hiroshi Inoue writes:\n\n> 1) Because the current implementaion of LIKE isn't locale-aware,\n> we should be compatible with it for ever.\n\nI'm not sure I intended to say that. The combination of the following\nfactors seems important:\n\na) compatibility is desirable\nb) no requests to the contrary have been made\nc) the LIKE locale awareness as defined by SQL is quite brain-dead\n\n> 2) strcoll(str1, str2) == 0 means strcmp(str1, str2) == 0\n> in any locale.\n\nI *think* that is true, but until something happens about 1) it doesn't\nmatter.\n\n\nWhat is your position: Do you think my solution is short-lived because\nthe LIKE implementation is wrong and should be changed?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 20 Aug 2001 17:28:35 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Progress report on locale safe LIKE indexing"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Hiroshi Inoue writes:\n> \n> > 1) Because the current implementaion of LIKE isn't locale-aware,\n> > we should be compatible with it for ever.\n> \n> I'm not sure I intended to say that. The combination of the following\n> factors seems important:\n> \n> a) compatibility is desirable\n> b) no requests to the contrary have been made\n> c) the LIKE locale awareness as defined by SQL is quite brain-dead\n> \n> > 2) strcoll(str1, str2) == 0 means strcmp(str1, str2) == 0\n> > in any locale.\n> \n> I *think* that is true, but until something happens about 1) it doesn't\n> matter.\n> \n\nTatsuo reported a case that strcoll(str1, str2) == 0 but strcmp(\nstr1, str2) != 0 though it seems to be considered as an OS bug.\n\n> What is your position: Do you think my solution is short-lived because\n> the LIKE implementation is wrong and should be changed?\n\nYes I'm afraid of it though I'm not sure.\nIf your solution is short-lived, your change would be not\nonly useless but also harmful. So I expect locale-aware\npeople to confirm that we are in the right direction.\nI myself don't need the locale support at all. In Japan\nexistent locales themselves seem to be harmful for most\npeople as Tatsuo mentioned already.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 21 Aug 2001 09:12:56 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Progress report on locale safe LIKE indexing"
},
{
"msg_contents": "Hiroshi Inoue writes:\n\n> If your solution is short-lived, your change would be not\n> only useless but also harmful. So I expect locale-aware\n> people to confirm that we are in the right direction.\n\nI am a bit confused here. We have tinkered with LIKE indexing at least a\nyear. Now that a solution is found that *works*, it is claimed that it is\nharmful because LIKE was doing the wrong thing in the first place. OTOH,\nI have not seen anyone independently claim that LIKE is wrong, nor do I\nsee anyone proposing to actually change it. If we install my fix now we\nhave a system that works better than the previous release with no change\nin semantics. If someone wants to change LIKE at a later point, reverting\nmy changes would be the least part of that work.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 21 Aug 2001 17:46:26 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Progress report on locale safe LIKE indexing"
},
{
"msg_contents": "> -----Original Message-----\n> From: Peter Eisentraut\n>\n> Hiroshi Inoue writes:\n>\n> > If your solution is short-lived, your change would be not\n> > only useless but also harmful. So I expect locale-aware\n> > people to confirm that we are in the right direction.\n>\n> I am a bit confused here. We have tinkered with LIKE indexing at least a\n> year. Now that a solution is found that *works*, it is claimed that it is\n> harmful because LIKE was doing the wrong thing in the first place. OTOH,\n> I have not seen anyone independently claim that LIKE is wrong, nor do I\n> see anyone proposing to actually change it.\n\nProbably no one has used such a (useful) locale that\nstrcoll(str1, str2) == 0 but strcmp(str1, str2) != 0\nwith PostgreSQL. As long as the vagueness with LIKE\nis held within utils/adt/like.c I won't complain.\n\n> If we install my fix now we\n> have a system that works better than the previous release with no change\n> in semantics. If someone wants to change LIKE at a later point, reverting\n> my changes would be the least part of that work.\n\nIf your change is useful, the change would be hard to revert.\nWe had better be more careful about the change.\nFor example, you are defining text_binary_ops on text data type\nbut how about introduing a new data type (text collate ASCII) and\ndefine text_ascii_ops on the new type ? We could use operators\nlike =, <=, >= instead of $=$, $<=$, $>=$ ... We may be able to\nto lay the foundation of the collate support for individual column\nby changing existent text type as (text collate some_collate) .\n\nregards,\nHiroshi Inoue\n\n",
"msg_date": "Wed, 22 Aug 2001 23:49:11 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: Progress report on locale safe LIKE indexing"
}
] |
[
{
"msg_contents": "1. Just noted this in contrib/userlock/README.user_locks:\n\n> User locks, by Massimo Dal Zotto <dz@cs.unitn.it>\n> Copyright (C) 1999, Massimo Dal Zotto <dz@cs.unitn.it>\n> \n> This software is distributed under the GNU General Public License\n> either version 2, or (at your option) any later version.\n\nWell, anyone can put code into contrib with whatever license\nhe/she want but \"user locks\" package includes interface\nfunctions in contrib *and* changes in our lock manager, ie\nchanges in backend code. I wonder if backend' part of package\nis covered by the same license above? And is it good if yes?\n\n2. Not good implementation, imho.\n\nIt's too complex (separate lock method table, etc). Much cleaner\nwould be implement this feature the same way as transactions\nwait other transaction commit/abort: by locking objects in\npseudo table. We could get rid of offnum and lockmethod from\nLOCKTAG and add\n\nstruct\n{\n\tOid\tRelId;\n\tOid\tObjId;\n} userObjId;\n\nto objId union of LOCKTAG.\n\nThis way user could lock whatever object he/she want in specified\ntable and note that we would be able to use table access rights to\ncontrol if user allowed to lock objects in table - missed in 1.\n\nOne could object that 1. is good because user locks never wait.\nI argue that \"never waiting\" for lock is same bad as \"always waiting\".\nSomeday we'll have time-wait etc features for general lock method\nand everybody will be happy -:)\n\nComments?\n\nVadim\nP.S. I could add 2. very fast, no matter if we'll keep 1. or not.\n",
"msg_date": "Fri, 17 Aug 2001 11:48:49 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "User locks code"
},
{
"msg_contents": "An interesting method would be to allow users to simply avoid locked\nrows:\n\nSELECT * FROM queue FOR UPDATE LIMIT 1 UNLOCKED;\n\nUnlocked, return immediately, whatever could be used as a keyword to\navoid rows that are locked (skipping over them).\n\nFor update locks the row of course. Currently for the above type of\nthing I issue an ORDER BY random() which avoids common rows enough,\nthe queue agent dies if queries start taking too long (showing it's\nwaiting for other things) and tosses up new copies if it goes a while\nwithout waiting at all (showing increased load).\n\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>\nTo: <pgsql-hackers@postgresql.org>\nSent: Friday, August 17, 2001 2:48 PM\nSubject: [HACKERS] User locks code\n\n\n> 1. Just noted this in contrib/userlock/README.user_locks:\n>\n> > User locks, by Massimo Dal Zotto <dz@cs.unitn.it>\n> > Copyright (C) 1999, Massimo Dal Zotto <dz@cs.unitn.it>\n> >\n> > This software is distributed under the GNU General Public License\n> > either version 2, or (at your option) any later version.\n>\n> Well, anyone can put code into contrib with whatever license\n> he/she want but \"user locks\" package includes interface\n> functions in contrib *and* changes in our lock manager, ie\n> changes in backend code. I wonder if backend' part of package\n> is covered by the same license above? And is it good if yes?\n>\n> 2. Not good implementation, imho.\n>\n> It's too complex (separate lock method table, etc). Much cleaner\n> would be implement this feature the same way as transactions\n> wait other transaction commit/abort: by locking objects in\n> pseudo table. 
We could get rid of offnum and lockmethod from\n> LOCKTAG and add\n>\n> struct\n> {\n> Oid RelId;\n> Oid ObjId;\n> } userObjId;\n>\n> to objId union of LOCKTAG.\n>\n> This way user could lock whatever object he/she want in specified\n> table and note that we would be able to use table access rights to\n> control if user allowed to lock objects in table - missed in 1.\n>\n> One could object that 1. is good because user locks never wait.\n> I argue that \"never waiting\" for lock is same bad as \"always\nwaiting\".\n> Someday we'll have time-wait etc features for general lock method\n> and everybody will be happy -:)\n>\n> Comments?\n>\n> Vadim\n> P.S. I could add 2. very fast, no matter if we'll keep 1. or not.\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Fri, 17 Aug 2001 15:46:01 -0400",
"msg_from": "\"Rod Taylor\" <rod.taylor@inquent.com>",
"msg_from_op": false,
"msg_subject": "Re: User locks code"
}
] |
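Vadim's proposal keys user locks by a (RelId, ObjId) pair, and the defining trait of the existing contrib interface is that acquisition never blocks. A toy Python sketch of that interface (not PostgreSQL's actual lock manager; the ids and pids below are made up):

```python
# Sketch (not PostgreSQL's actual lock manager): user locks keyed by a
# (relation id, object id) pair, as in Vadim's proposed LOCKTAG layout,
# with the "never wait" behavior of the contrib/userlock interface --
# acquisition either succeeds immediately or fails immediately.

class UserLockTable:
    def __init__(self):
        self._held = {}  # (rel_id, obj_id) -> pid of the backend holding it

    def try_lock(self, rel_id, obj_id, pid):
        key = (rel_id, obj_id)
        if key in self._held:
            return False          # never wait: report failure at once
        self._held[key] = pid
        return True

    def unlock(self, rel_id, obj_id, pid):
        key = (rel_id, obj_id)
        if self._held.get(key) == pid:
            del self._held[key]
            return True
        return False              # not held by this backend

locks = UserLockTable()
print(locks.try_lock(16384, 27948, pid=101))  # True: lock granted
print(locks.try_lock(16384, 27948, pid=102))  # False: already held, no waiting
```

Rod's queue idea is the same behavior seen from SQL: skip rows whose lock would make you wait, instead of blocking on them.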
[
{
"msg_contents": "Can I ask why we are mentioning the changelog for the release and not\nthe list from the HISTORY file? And why are we putting the changelog in\nthe tarball anyway? Seems that could easily go on a web site.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 17 Aug 2001 18:48:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Changelog and 7.1.3 release"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Can I ask why we are mentioning the changelog for the release and not\n> the list from the HISTORY file? Any why are we putting the changelog in\n> the tarball anyway? Seems that could easily go on a web site.\n\nThe point of these changelogs was to show the changes between beta and rc\nversions, because those were not necessarily recorded in the HISTORY file.\nHowever, putting these in the tarball is questionable (if you already\ndownloaded the tarball then you might as well proceed with installing it),\nstill having them there now is even more questionable (who cares?), and\nmaking them for minor releases is redundant and confusing. I vote for\nremoving them.\n\nFirst prize for Consistency in Naming, btw.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 24 Aug 2001 16:15:05 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Changelog and 7.1.3 release"
},
{
"msg_contents": "\nCan we come to some kind of decision on this before going beta?\n\n\n> Bruce Momjian writes:\n> \n> > Can I ask why we are mentioning the changelog for the release and not\n> > the list from the HISTORY file? Any why are we putting the changelog in\n> > the tarball anyway? Seems that could easily go on a web site.\n> \n> The point of these changelogs was to show the changes between beta and rc\n> versions, because those were not necessarily recorded in the HISTORY file.\n> However, putting these in the tarball is questionable (if you already\n> downloaded the tarball then you might as well proceed with installing it),\n> still having them there now is even more questionable (who cares?), and\n> making them for minor releases is redundant and confusing. I vote for\n> removing them.\n> \n> First prize for Consistency in Naming, btw.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 Sep 2001 16:17:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Changelog and 7.1.3 release"
},
{
"msg_contents": "> Can we come to some kind of decision on this before going beta?\n\n>> The point of these changelogs was to show the changes between beta and rc\n>> versions, because those were not necessarily recorded in the HISTORY file.\n>> However, putting these in the tarball is questionable (if you already\n>> downloaded the tarball then you might as well proceed with installing it),\n>> still having them there now is even more questionable (who cares?), and\n>> making them for minor releases is redundant and confusing. I vote for\n>> removing them.\n\nI agree with Peter on this; I don't see much value in putting these\nfiles into the distribution, and none at all in preserving them\nindefinitely.\n\nI'd suggest that the future procedure ought to be to pull a changelog\nfrom CVS but put it beside the tarball on the FTP server, not inside\nthe tarball (and certainly not back into CVS --- that's redundant).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 Sep 2001 17:26:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Changelog and 7.1.3 release "
},
{
"msg_contents": "> I'd suggest that the future procedure ought to be to pull a changelog\n> from CVS but put it beside the tarball on the FTP server, not inside\n> the tarball (and certainly not back into CVS --- that's redundant).\n\nWhen we commit the log files to CVS, don't we have to run cvs log again\nand commit a new version, ad infinitum. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 Sep 2001 17:27:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Changelog and 7.1.3 release"
}
] |
[
{
"msg_contents": "Once the mirrors come up to speed, you may download the latest RPMset from:\n/pub/binary/v7.1.3/RPMS\n\nPlease actually read the file 'README.rpm-dist' in that directory BEFORE \ninstallation.\n\nBinaries are, as before, provided for Red Hat 7.1 only at this time. Source \nRPM is provided, and should rebuild just fine on most any LSB-compliant \nRPM-based Linux. README.rpm-dist has instructions on how to rebuild, and \nimproved instructions on building just what packages you need.\n\nChangelog from PostgreSQL-7.1.2-5PGDG RPMset:\n* Fri Aug 17 2001 Lamar Owen <lamar.owen@wgcr.org>\n- 7.1.3-1PGDG\n- Kerberos auth optional.\n- Sync with latest Rawhide RPMset.\n- Minor README.rpm-dist updates.\n- Handle stop with stale pid file.\n- Make packages own their directories.\n\nKerberos-5 support is built by default. Some bugs found during some QA \nprocessing were fixed by this newer set. The new intarray code shipped with \nthe 5PGDG set is now also optional, and not installed by default.\n\nIf you want the postmaster '-i' switch behavior, please read README.rpm-dist \n-- the _right_ way to do this is documented there. Editing the initscript in \n/etc/rc.d/init.d (or /etc/init.d) is NOT recommended, as that file WILL be \noverwritten during an upgrade!\n\nThis was probably the smoothest new build I've done in the 7.1.x series -- a \ntestimony to the improving maturity of the 7.1.x series. One edit of the \nspec file, and the build just _happened_.\n\nHOWEVER, there is one known security issue in this RPMset, specifically with \nthe postgresql-perl client, Pg.so. Due to the way RPM's are built, in \nconjunction with the way the Pg Makefile system works, Pg.so receives an \nRPATH that includes the rpm 'buildroot' in it. This is BAD. This means that \nany user on the box could cause Pg.so to load an arbitrary library. I am \nworking on this, and expect to see an update once fixed.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 17 Aug 2001 20:12:48 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 7.1.3 RPM's available for download."
}
] |
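A quick way to spot the buildroot contamination Lamar describes is to look for the build directory path embedded in the shared object. The sketch below does a naive byte scan (the buildroot path shown is a made-up example; a real check would inspect the RPATH entry in the ELF dynamic section):

```python
# Crude check (illustrative, not the RPM build fix): scan a shared
# object's bytes for a leftover buildroot path of the kind Lamar
# describes.  A stale RPATH pointing into the build directory shows up
# as that literal string inside the ELF file.  The buildroot path below
# is a made-up example.

def contains_buildroot(path: str,
                       buildroot: bytes = b"/var/tmp/postgresql-root") -> bool:
    with open(path, "rb") as f:
        return buildroot in f.read()

# Demonstration with a fake "library" file:
with open("/tmp/fake_pg.so", "wb") as f:
    f.write(b"\x7fELF...RPATH=/var/tmp/postgresql-root/usr/lib...")

print(contains_buildroot("/tmp/fake_pg.so"))  # True: tainted RPATH suspected
```

If the path is found, any local user who can create that directory could plant a library there and have it loaded, which is exactly the risk described above.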
[
{
"msg_contents": "I seem to have lost my CVS access to the repository. I don't know if\nmy password changed for some reason or what. I sent email to Marc but\nno response. Can someone else check my account and let me know what\nhappened please. I have some PostgreSQL changes to make (including\na fix to a very nasty and sneaky bug) and I can't commit.\n \nThanks.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 17 Aug 2001 20:30:23 -0400 (EDT)",
"msg_from": "darcy@druid.net (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "CVS access lost"
},
{
"msg_contents": "\nfixing\n\nOn Fri, 17 Aug 2001, D'Arcy J.M. Cain wrote:\n\n> I seem to have lost my CVS access to the repository. I don't know if\n> my password changed for some reason or what. I sent email to Marc but\n> no response. Can someone else check my account and let me know what\n> happened please. I have some PostgreSQL changes to make (including\n> a fix to a very nasty and sneaky bug) and I can't commit.\n>\n> Thanks.\n>\n> --\n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Fri, 17 Aug 2001 21:51:32 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: CVS access lost"
}
] |
[
{
"msg_contents": "I have recently installed PostgreSQL onto my Linux system. I installed and \nset it up under the account name of 'postgres' where I am able to create \nand manage databases.\n\nHow do I set another account (namely my normal login 'pmoscatt') so that I \ncan create and manage databases instead of logging in as 'postgres' ??\n\nRegards\nPete\n",
"msg_date": "Sat, 18 Aug 2001 06:45:35 GMT",
"msg_from": "Peter Moscatt <pmoscatt@bigpond.net.au>",
"msg_from_op": true,
"msg_subject": "Setting Up User Accounts For PostgreSQL ?"
},
{
"msg_contents": "Perhaps this question would be better directed to -general?\n\nThe documentation for PostgreSQL is surprisingly good as well, but I assume\nyou've read that and are still confused :)\n\nAZ\n\n\"Peter Moscatt\" <pmoscatt@bigpond.net.au> wrote in message\nnews:jaof7.127918$Xr6.699263@news-server.bigpond.net.au...\n> I have recently installed PostgreSQL onto my Linux system. I installed\nand\n> set it up under the account name of 'postgres' where I am able to create\n> and manage databases.\n>\n> How do I set another account (namely my normal login 'pmoscatt') so that I\n> can create and manage databases instead of logging in as 'postgres' ??\n>\n> Regards\n> Pete\n\n\n",
"msg_date": "Sat, 18 Aug 2001 12:56:33 -0400",
"msg_from": "\"August Zajonc\" <junk-pgsql@aontic.com>",
"msg_from_op": false,
"msg_subject": "Re: Setting Up User Accounts For PostgreSQL ?"
}
] |
[
{
"msg_contents": "\nJust a suggestion, how much work would it be to accept multiple parameters on\naggregate functions?\n\nFor instance:\n\nselect fubar(field1, field2) from table one group by field1;\n\nThe reason I think that this is useful is that for some statistical operations,\noften times there is extra \"per record\" data that can affect how you calculate\na value.\n",
"msg_date": "Sat, 18 Aug 2001 14:56:52 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Multiple parameters on aggregates?"
},
{
"msg_contents": "mlw wrote:\n> \n> Just a suggestion, how much work would it be to accept multiple parameters on\n> aggregate functions?\n> \n> For instance:\n> \n> select fubar(field1, field2) from table one group by field1;\n> \n> The reason I think that this is useful is that for some statistical operations,\n> often times there is extra \"per record\" data that can affect how you calculate\n> a value.\n\nThis would also be VERY helpful in a lot of OLAP type processing!\n\ncreate function aggfunct( int4, int4, int4 )\n returns int4\n as '/usr/local/lib/pglib.so', 'aggfunct'\n language 'c' ;\n \ncreate function aggterm( int4 )\n returns int4\n as '/usr/local/lib/pglib.so', 'aggterm'\n language 'c' ;\n \ncreate aggregate agg1 ( basetype = integer,\n sfunc1 = aggfunct, stype1 = integer,\n finalfunc = aggterm,\n initcond1 = 0 );\n\n\nIn the above example, parameters 1 and 2 to aggfunct are the standard aggregate\nparameters required by the \"create aggregate\" syntax. At query time,\nhowever, the additional parameter(s) are passed in as well.\n\nAs a more complex example, one could do something like this:\n\nselect mycube_agg(region, date, sales, product) from salesinfo group by region;\n\n\n\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Sun, 19 Aug 2001 11:23:08 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Multiple parameters on aggregates?"
}
] |
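The semantics mlw is after — a state-transition function that consumes extra per-record columns — can be illustrated outside the backend. A Python sketch of a weighted average, with names borrowed from CREATE AGGREGATE's sfunc/finalfunc terminology (this is an illustration, not backend code):

```python
# Sketch of a multi-argument aggregate: the state transition function
# takes an extra per-record argument (the weight) that changes how each
# row is folded into the running state.  Names mirror CREATE AGGREGATE's
# sfunc/finalfunc terms; this is plain Python, not backend code.

def sfunc(state, value, weight):
    # state is (weighted total, sum of weights)
    total, wsum = state
    return (total + value * weight, wsum + weight)

def finalfunc(state):
    total, wsum = state
    return total / wsum if wsum else None

rows = [(10, 1), (20, 3)]          # (value, weight) pairs
state = (0.0, 0.0)                 # initcond
for value, weight in rows:
    state = sfunc(state, value, weight)
print(finalfunc(state))            # 17.5 == (10*1 + 20*3) / 4
```

A plain one-argument aggregate cannot express this, because the weight column never reaches the transition function — which is exactly the limitation the thread is discussing.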
[
{
"msg_contents": "Hi all,\n\nFor a few months now I've been thinking about whether or not a guide\n('line-by-line') to the Postgres source tree would be of any value. \n\nSuch a guide would, most probably, trace an 'ultimate' query (ie,\none which requires the use of all source level functionality) through the\nsource as well as reference appendices guides to underlying functionality\n(backend lib, transactions, macros) and client interfaces (inc. FE/BE\nprotocol, internals of the libs etc), procedural language interfaces, SPI\nand any other part of the source I have left out.\n\nThe guide would look at all non-trivial functions and code segments in the\nsource. Examination would involve explanation of complex code (at a\nline-by-line level), background information of reasoning behind the\ncode-level design of important functionality, analysis of algorithms and\nwhat ever else seems helpful to people approaching the source.\n\nI think it would be most useful as a non-commercial project intended for\ndistribution from the Postgres Web site.\n\nObviously such a project would take a very long time and would have to\ninvolve more people than myself. So the questions which go unanswered\nare: Would such a work be of any real use? Would it be of use to enough\npeople? Is this the right way to go about introducing people to the\nsource? Is it desirable to introduce (lots of) people to the source?\n\nFor my part, I think such a project would be a good way of countering two\nthings which are affecting Postgres's popularity. Firstly, in Australia\n(and I imagine other parts of the world) University courses dealing with\ndatabase/information systems (whether it be as basic as an introduction to\nSQL or as detailed as developing complicated/sophisticated data storage\nsystems) are more often than not sponsored by vendors (Oracle, IBM,\nSybase) or 'consultancy' companies who may as well be sales reps for\nvendors. 
Whichever it is, in the end courses are full of marketing drivel\nand very little analysis and exploration of real concepts/problems. \n\nSecondly, where Postgres really outperforms proprietary databases is in\nits source being open. Problems which cause major functionality problems\nand downtime in critical vendor installations could often easily be\nresolved if developers had the source. I think a thorough source guide\nwould go some way to bolstering the appeal of Postgres to such developers\nwhilst countering some of the arguments for throwing hefty sums of money\nat support.\n\nSo, what do people think?\n\nGavin\n\n",
"msg_date": "Sun, 19 Aug 2001 17:40:33 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": true,
"msg_subject": "Guide to PostgreSQL source tree"
},
{
"msg_contents": "\n----- Original Message ----- \nFrom: Gavin Sherry <swm@linuxworld.com.au>\nSent: Sunday, August 19, 2001 3:40 AM\n\n\n> Hi all,\n> \n> For a few months now I've been thinking about whether or not a guide\n> ('line-by-line') to the Postgres source tree would be of any value. \n\n[snip] \n\n> So, what do people think?\n\nSuch a guide would be nice to have handy as a reference, but how \nare you going to keep up with frequent code changes\nand the new code added? Keeping this thing up-to-date\nis an enormous effort even if you're not the only\none involved.\n\nAnd what's wrong with automated guide generation? Like\nif you put specially formatted comments (the responsibility of the author of the piece) \nfor the guide in the source, and run your guide generator through the source tree\nfrom time to time (like on every beta, RC, and release perhaps?).\nThis way the guide maintenance will be much easier, but the quality will solely\ndepend on the comments supplied by the author.\n\nMy two Canadian cents.\n\nSerguei\n\n\n",
"msg_date": "Sun, 19 Aug 2001 10:47:03 -0400",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": false,
"msg_subject": "Re: Guide to PostgreSQL source tree"
},
{
"msg_contents": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca> writes:\n> And what's wrong in automated guide generation?\n\nEven more to the point, what's wrong with looking at the source code?\n(Why would you want a \"line by line\" guide if you're not looking at the\nsource code, anyway?)\n\nWe could probably do with more extensive high-level documentation than\nwe have, to point people to the parts of the code that they need to read\nfor a particular purpose. But I agree with Sergei that it's hopeless\nto try to divorce low-level documentation from the code itself.\n\nOne thing that I find absolutely essential for dealing with any large\nproject is a full-text indexer (I use Glimpse, but I think there are\nothers out there). Being able to quickly look at every use of a\nparticular identifier goes a long way towards answering questions.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 Aug 2001 11:22:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Guide to PostgreSQL source tree "
},
{
"msg_contents": "On Sun, 19 Aug 2001, Tom Lane wrote:\n> One thing that I find absolutely essential for dealing with any large\n> project is a full-text indexer (I use Glimpse, but I think there are\n> others out there). Being able to quickly look at every use of a\n> particular identifier goes a long way towards answering questions.\n\nAgreed -- you can't find your way around PostgreSQL without such a\nprogram. Personally, I use Source Navigator which you can grab at\nhttp://sources.redhat.com/sourcenav/ . The really useful thing about\nsource navigator is that it parses the source into functions, variables,\netc. rather than just indexing it all as text. This means when you are\nlooking at a source file with it, you can do neat things like click on a\nfunction call and then see things like the declaration and a x-ref\ntree. Very handy.\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300,\nToronto, ON M4P 2C9\n\n",
"msg_date": "Sun, 19 Aug 2001 13:10:50 -0400 (EDT)",
"msg_from": "Neil Padgett <npadgett@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Guide to PostgreSQL source tree "
},
{
"msg_contents": "> On Sun, 19 Aug 2001, Tom Lane wrote:\n> > One thing that I find absolutely essential for dealing with any large\n> > project is a full-text indexer (I use Glimpse, but I think there are\n> > others out there). Being able to quickly look at every use of a\n> > particular identifier goes a long way towards answering questions.\n> \n> Agreed -- you can't find your way around PostgreSQL without such a\n> program. Personally, I use Source Navigator which you can grab at\n> http://sources.redhat.com/sourcenav/ . The really useful thing about\n\nThis is the cygnus programmer's editor, right? It is a nice piece of\nsoftware.\n\n> source navigator is that it parses the source into functions, variables,\n> etc. rather than just indexing it all as text. This means when you are\n> looking at a source file with it, you can do neat things like click on a\n> function call and then see things like the declaration and a x-ref\n> tree. Very handy.\n\nI have found I need tags support too. You can do project-wide\nfunction/identifier searching either in your editor, if it supports\nthat, or using an external program like glimpse or idutils. Syntax\ncolorization helps too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Aug 2001 11:27:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Guide to PostgreSQL source tree"
},
{
"msg_contents": "> Hi all,\n> \n> For a few months now I've been thinking about whether or not a guide\n> ('line-by-line') to the Postgres source tree would be of any value. \n> \n> Such a guide would, most probably, trace an 'ultimate' query (ie,\n> one which requires the use of all source level functionality) through the\n> source as well as reference appendices guides to underlying functionality\n> (backend lib, transactions, macros) and client interfaces (inc. FE/BE\n> protocol, internals of the libs etc), procedural language interfaces, SPI\n> and any other part of the source I have left out.\n\nI wonder if we should just work down from where we are now. We have the\nFAQ, backend flowchart, and \"PostgreSQL Internals through Pictures\" that\nI wrote, plus a presentation by Tom Lane. Where do we go from there?\n\n> Secondly, where Postgres really out performs proprietary databases is in\n> its source being open. Problems which cause major functionality problems\n> and downtime in critical vendor installations could often easily be\n> resolved if developers had the source. I think a thorough source guide\n> would go some way to bolstering the appeal of Postgres to such developers\n> whilst countering some of the arguments for throwing hefty sums of money\n> at support.\n\nI didn't realize that was happening.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Aug 2001 11:29:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Guide to PostgreSQL source tree"
}
] |
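The identifier cross-referencing that Glimpse and Source Navigator provide can be approximated very simply: index which files mention each identifier, so "every use of a particular identifier" becomes one lookup. A toy Python sketch (the file contents below are invented snippets, not real backend code):

```python
# Toy version of the identifier cross-referencing the thread recommends:
# build an index from identifier to the set of files that mention it.

import re
from collections import defaultdict

def build_xref(files):
    """files: mapping of file name -> source text."""
    xref = defaultdict(set)  # identifier -> set of file names
    for name, text in files.items():
        for ident in re.findall(r"[A-Za-z_][A-Za-z0-9_]*", text):
            xref[ident].add(name)
    return xref

# Invented example sources, for illustration only:
sources = {
    "like.c": "static bool MatchText(char *t, char *p) { ... }",
    "selfuncs.c": "result = MatchText(s, pattern);",
}
xref = build_xref(sources)
print(sorted(xref["MatchText"]))  # ['like.c', 'selfuncs.c']
```

Real tools go further — parsing declarations versus uses, as Source Navigator does — but even this flat index answers "where is this symbol touched?" instantly.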
[
{
"msg_contents": "Here's the patch for review. It adds the non-locale operator classes for\nall three character types and the analogous selectivity estimation\nchanges. Basically, I'm confident this works, but as some people seem to\nhave doubts I show it here first.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter",
"msg_date": "Sun, 19 Aug 2001 18:28:26 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "LIKE indexing"
},
{
"msg_contents": "I'm fairly close to committing a wholesale rearrangement of pg_opclass\nand friends (per previous discussions, mostly with Oleg). This is going\nto create some conflicts with your patch :-(. Who gets to go first?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 Aug 2001 13:56:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LIKE indexing "
},
{
"msg_contents": "Tom Lane writes:\n\n> I'm fairly close to committing a wholesale rearrangement of pg_opclass\n> and friends (per previous discussions, mostly with Oleg). This is going\n> to create some conflicts with your patch :-(. Who gets to go first?\n\nIf your changes only conflict in the system catalog headers I can redo\nthose when you're done. (Presuming there's documentation coming along.)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 20 Aug 2001 00:25:16 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: LIKE indexing "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> If your changes only conflict in the system catalog headers I can redo\n> those when you're done. (Presuming there's documentation coming along.)\n\nActually, all those uses of op_class() are going to conflict too...\nop_class doesn't need an AM OID parameter anymore (and besides which,\nI renamed it to op_in_opclass).\n\nIf you feel ready to commit, do so, and I'll do the merge.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Aug 2001 00:08:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LIKE indexing "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Here's the patch for review.\n\nA few gripes:\n\n+ The optimizer can also use a B-Tree index for queries involving the\n+ pattern matching operators <literal>LIKE</>,\n+ <literal>ILIKE</literal>, <literal>~</literal>, and\n+ <literal>~*</literal>, <emphasis>if</emphasis> the pattern is\n+ anchored to the beginning of the string, e.g., <literal>col LIKE\n+ 'foo%'</literal> or <literal>col ~ '^foo'</literal>, but not\n+ <literal>col LIKE 'bar'</literal>. However, if your server does\n\nThe \"but not\" part is wrong: col LIKE 'bar' works perfectly fine as\nan indexable LIKE query. Perhaps you meant \"but not col LIKE '%foo'\".\n\nWhile it's okay to treat text and varchar alike, I object to treating\nbpchar as equivalent to the other two. Shouldn't the bpchar versions of\nthese functions strip trailing spaces before comparing?\n\nSeems to me you should provide \"$<>$\" operators for completeness, even\nthough they're not essential for btree opclasses. I think that these\noperators may be useful for more than just this one purpose, so we\nshouldn't set up artificial roadblocks.\n\nI don't like the fact that you added expected-output rows to opr_sanity;\nseems like tweaking the queries to allow $<$ etc as expected names would\nbe more appropriate.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Aug 2001 00:33:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LIKE indexing "
},
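The indexable-LIKE technique the patch builds on is worth spelling out: an anchored pattern such as col LIKE 'foo%' can be served by a btree range scan on the fixed prefix. A Python sketch of the rewrite (simplified — it ignores escape characters and byte overflow, and is not PostgreSQL's exact make_greater_string logic; it is only sound when the index orders strings bytewise, hence the locale-independent operator classes):

```python
# Sketch of the anchored-LIKE-to-range rewrite that makes a btree usable
# for col LIKE 'foo%': extract the fixed prefix, then bound the scan with
# the smallest string greater than everything starting with that prefix.
# Simplified: no escape handling, no byte-overflow handling.

def like_prefix(pattern: str) -> str:
    # The fixed part of the pattern before the first wildcard.
    prefix = []
    for ch in pattern:
        if ch in "%_":
            break
        prefix.append(ch)
    return "".join(prefix)

def prefix_bounds(pattern: str):
    prefix = like_prefix(pattern)
    if not prefix:
        return None                      # unanchored: no index help
    # Increment the last character to form an exclusive upper bound.
    upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    return prefix, upper                 # scan: col >= lower AND col < upper

print(prefix_bounds("foo%"))   # ('foo', 'fop')
print(prefix_bounds("%foo"))   # None -- not anchored, needs a seq scan
```

The whole thread's locale problem follows from this: the bounds are only correct if the index's sort order agrees with byte order, which strcoll()-based collations do not guarantee.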
{
"msg_contents": "Tom Lane writes:\n\n> The \"but not\" part is wrong: col LIKE 'bar' works perfectly fine as\n> an indexable LIKE query. Perhaps you meant \"but not col LIKE '%foo'\".\n\nThanks. That was a mixup with the POSIX regexp style.\n\n> While it's okay to treat text and varchar alike, I object to treating\n> bpchar as equivalent to the other two. Shouldn't the bpchar versions of\n> these functions strip trailing spaces before comparing?\n\nI had thought a long time about this and I couldn't see a reason why.\nThe reason is that the LIKE operator for bpchar does take the blanks into\naccount, so it effectively doesn't care whether the blanks are the result\nof padding or explicit input. E.g.,\n\npeter=# set enable_indexscan to off;\nSET VARIABLE\npeter=# create table test1 (a char(5));\nCREATE\npeter=# insert into test1 values ('four');\nINSERT 16560 1\npeter=# select * from test1 where a like 'four'::bpchar;\n a\n---\n(0 rows)\n\n/*\n * If we had stripped spaces here we would have gotten a false positive.\n */\n\npeter=# select * from test1 where a like 'fou_'::bpchar;\n a\n---\n(0 rows)\n\n/*\n * Since the padding here is after the wildcard character and is thus\n * stripped in the analysis, the augmented expression still holds.\n */\n\npeter=# select * from test1 where a like 'fou%'::bpchar;\n a\n-------\n four\n(1 row)\n\n/* same here */\n\nI would also argue that the notion of a direct binary comparision would\nnot benefit from space stripping.\n\n> Seems to me you should provide \"$<>$\" operators for completeness, even\n> though they're not essential for btree opclasses.\n\nWill do.\n\n> I don't like the fact that you added expected-output rows to opr_sanity;\n> seems like tweaking the queries to allow $<$ etc as expected names would\n> be more appropriate.\n\nOk.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 20 Aug 2001 17:45:02 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: LIKE indexing "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> peter=# create table test1 (a char(5));\n> CREATE\n> peter=# insert into test1 values ('four');\n> INSERT 16560 1\n> peter=# select * from test1 where a like 'four'::bpchar;\n> a\n> ---\n> (0 rows)\n\nI think this is an erroneous result, actually, seeing as how\n\nregression=# select 'four '::bpchar = 'four'::bpchar;\n ?column?\n----------\n t\n(1 row)\n\nHow can A = B not imply A LIKE B? (This may be related to Hiroshi's\nconcerns.)\n\nI dug into the spec to see what it has to say, and came up with this\nrather opaque prose:\n\n 4) If the i-th substring specifier of PCV is neither an\n arbitrary character specifier nor an arbitrary string\n specifier, then the i-th substring of MCV is equal to\n that substring specifier according to the collating\n sequence of the <like predicate>, without the appending\n of <space> characters to MCV, and has the same length as\n that substring specifier.\n\nThe bit about \"without the appending of <space> characters\" *might*\nmean that LIKE is always supposed to treat trailing blanks as\nsignificant, but I'm not sure. The text does seem to say that it's okay\nto add trailing blanks to the pattern to produce a match, when the\ncollating sequence is PAD SPACE type (bpchar in our terms).\n\nIn any case, Hiroshi is dead right that LIKE is supposed to perform\ncollating-sequence-dependent comparison, and this probably means that\nthis whole approach is a dead end :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Aug 2001 11:58:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LIKE indexing "
},
{
"msg_contents": "Tom Lane writes:\n\n> How can A = B not imply A LIKE B?\n\nWell, according to my reading of the spec, it apparently can. Space\npadding can be weird that way. But see below why I think there are much\nworse alternatives.\n\n> 4) If the i-th substring specifier of PCV is neither an\n> arbitrary character specifier nor an arbitrary string\n> specifier, then the i-th substring of MCV is equal to\n> that substring specifier according to the collating\n> sequence of the <like predicate>, without the appending\n> of <space> characters to MCV, and has the same length as\n> that substring specifier.\n>\n> The bit about \"without the appending of <space> characters\" *might*\n> mean that LIKE is always supposed to treat trailing blanks as\n> significant, but I'm not sure.\n\nThat's how I read it.\n\n> The text does seem to say that it's okay to add trailing blanks to the\n> pattern to produce a match, when the collating sequence is PAD SPACE\n> type (bpchar in our terms).\n\nI can't find that.\n\n> In any case, Hiroshi is dead right that LIKE is supposed to perform\n> collating-sequence-dependent comparison,\n\nAs I have answered to Hiroshi, I think that would really be brain-dead.\nIt would alienate LIKE from how pattern matching normally operates. If we\nmake the assumption that strcoll(A, B) can be 0 for wildly different\nvalues of A and B (for an appropriate definition of \"different\"), then the\nfollowing things could happen:\n\n-> A = B does not imply A ~ B\n\n-> A LIKE 'foobar%' does not imply A LIKE 'foo%' (because 'foobar' is a\nsingle collating element that sorts like 'xyz').\n\n-> A LIKE '%foo%' does not imply that POSITION('foo' IN A) <> 0 (The SQL\nPOSITION function does not mention using the collating sequence.)\n\nI'm also quite suspicious about the wording \"...and has the same length as\nthat substring specifier.\" For instance, it might be nearly reasonable to\ndefine a German locale where � (u umlaut) and ue are equivalent. 
But then\nwhile 'x�y' = 'xuey' (a strict interpretation of the SQL standard might\ndeny this because of the padding, but \"The result of the comparison of X\nand Y is given by the collating sequence CS.\", and I define mine that\nway), but 'x�y' NOT LIKE 'xuey' because of that rule. Voil�, it can\nhappen after all.\n\nI think this rule is a mistake designed by committee and must be struck\ndown by community. ;-)\n\n> and this probably means that this whole approach is a dead end :-(\n\nBlech... ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 20 Aug 2001 19:04:56 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] LIKE indexing "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> The text does seem to say that it's okay to add trailing blanks to the\n>> pattern to produce a match, when the collating sequence is PAD SPACE\n>> type (bpchar in our terms).\n\n> I can't find that.\n\nThe spec says \"without the appending of <space> characters to MCV\",\nnot \"without the appending of <space> characters\" full stop. I read\nthat to imply that it *is* okay to append spaces to the pattern PCV.\nBut it's not exactly transparently written in any case.\n\n> I'm also quite suspicious about the wording \"...and has the same length as\n> that substring specifier.\"\n\nYeah, me too. Does that say that space padding isn't allowed? If so,\nwhy the thrashing-about earlier in the sentence? In any case, it seems\nto allow a locale-dependent case conversion, for instance.\n\n\n>> In any case, Hiroshi is dead right that LIKE is supposed to perform\n>> collating-sequence-dependent comparison,\n\n> As I have answered to Hiroshi, I think that would really be brain-dead.\n> It would alienate LIKE from how pattern matching normally operates.\n\nWell, you make some good arguments, but I'd like to see a fairly strong\nconsensus that we think the SQL definition of LIKE is broken before we\ndecide to build a lot of superstructure on our definition of LIKE. So\nfar I haven't seen many comments on this thread...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Aug 2001 17:21:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] LIKE indexing "
}
] |
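Editor's note: the thread above turns on rewriting an indexable `col LIKE 'fou%'` into a range scan (`col >= 'fou' AND col < 'fov'`), which is only sound under a plain byte-wise (C-locale) collation — exactly the assumption Hiroshi and Tom end up questioning. A minimal sketch of that rewrite, ignoring collation and wrap-around edge cases (the function names here are illustrative, not PostgreSQL's):

```python
# Editor's sketch (not part of the original thread): the prefix-range
# rewrite behind indexable LIKE, assuming a plain byte-wise (C-locale)
# collation -- the discussion above is precisely about why this breaks
# once locale-dependent collations enter the picture.

def like_prefix_bounds(prefix: str) -> tuple:
    """Return (low, high) such that s matches 'prefix%' iff low <= s < high.

    high is formed by incrementing the last character of the prefix,
    loosely mirroring the planner's make_greater_string().
    """
    if not prefix:
        raise ValueError("an empty prefix matches everything; no useful bounds")
    # Increment the last character (ignores wrap-around edge cases).
    return prefix, prefix[:-1] + chr(ord(prefix[-1]) + 1)

def matches_via_range(s: str, prefix: str) -> bool:
    """Evaluate s LIKE 'prefix%' through the derived range comparison."""
    low, high = like_prefix_bounds(prefix)
    return low <= s < high
```

Under a C collation this agrees with plain prefix matching; under a strcoll()-style collation the two inequalities and the pattern can disagree, which is the dead end Tom points out.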
[
{
"msg_contents": "We've had some problem reports that the current practice of initdb\nassigning to the postgres user the same usesysid as the user id of the\nUnix user running initdb has caused some clashes.\n\n(Imagine this scenario: A few years ago you installed BluePants Linux\n5.0, created a user for PostgreSQL, id 501, created a database. Later you\ncreated a few real users, which get uids 502, 503, etc. Then you\npg_dumpall that database (which saves the sysid). Now you install\nBluePants Linux 7.0 on a new box, create a new user for PostgreSQL, which\nturns out to be 502, because foolishly you created a user for TheirSQL\nfirst. So now you replay your pg_dumpall and you have two users with id\n502. Boom.)\n\nOne idea to resolve this, by getting rid of the usesysid column in favour\nof the oid column has fallen by the wayside (for some valid reasons), so\nthe problem remains. I think the simplest fix would be to assign a fixed\nusesysid of 1. There still is the possibility of changing that with an\ninitdb option, as there always has been. (We could also ensure that\nCREATE USER never assigns ids below, say, 10, so that if for who knows\nwhat reason we decide to add more users into the bootstrap installation we\nhave some room.)\n\nComments?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 19 Aug 2001 18:52:38 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "A fixed user id for the postgres user?"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I think the simplest fix would be to assign a fixed usesysid of 1.\n\nSlightly more flexible: make the ID number an initdb option, with a\ndefault of 1. This would let people do it the old way if they wanted.\nDoesn't seem very critical though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 Aug 2001 13:50:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: A fixed user id for the postgres user? "
},
{
"msg_contents": "> One idea to resolve this, by getting rid of the usesysid column in favour\n> of the oid column has fallen by the wayside (for some valid reasons), so\n> the problem remains. I think the simplest fix would be to assign a fixed\n> usesysid of 1. There still is the possibility of changing that with an\n> initdb option, as there always has been. (We could also ensure that\n> CREATE USER never assigns ids below, say, 10, so that if for who knows\n> what reason we decide to add more users into the bootstrap installation we\n> have some room.)\n\nDo we do any mapping from uid to usesysid in the code? It is all by\nuser name right, so changing it will not affect authentication.\n\nRemember local PEER/CRED authentication passes ownership by uid, not\nname and people mostly use sameuser mapping. Does this work?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 20 Aug 2001 11:36:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: A fixed user id for the postgres user?"
}
] |
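Editor's note: the policy Peter proposes above — a fixed sysid for the bootstrap superuser and a reserved low range that CREATE USER never assigns from — can be sketched as a toy allocator. The numbers (1 and 10) come from the mail; the class and everything else are purely illustrative, not PostgreSQL's actual implementation:

```python
# Editor's sketch (not in the original thread): bootstrap superuser
# pinned to a fixed id, CREATE USER never assigning below a reserved
# floor.  Names and structure are illustrative only.

BOOTSTRAP_SYSID = 1    # fixed id for the postgres user
RESERVED_FLOOR = 10    # auto-assigned ids never go below this

class SysidAllocator:
    def __init__(self):
        self.users = {"postgres": BOOTSTRAP_SYSID}

    def create_user(self, name, sysid=None):
        if name in self.users:
            raise ValueError("user %r already exists" % name)
        if sysid is None:
            # Next free id, but never below the reserved floor.
            sysid = max([RESERVED_FLOOR - 1, *self.users.values()]) + 1
        if sysid in self.users.values():
            raise ValueError("sysid %d already taken" % sysid)
        self.users[name] = sysid
        return sysid
```

This sidesteps the "BluePants Linux" clash scenario: dumped sysids of ordinary users start at 10 regardless of which Unix uid happened to run initdb.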
[
{
"msg_contents": "It occurred to me that a server with locale features that is started in\nthe C locale is going to behave the same as a server without locale\nfeatures. The exception are a few extra memory moving operations. (I\nsincerely hope that all systems' libcs have optimized paths for the C\nlocale.) So we could get rid of this --enable-locale switch altogether.\nGiven our international user base, this would be an appropriate step and\nmove the locale support out of the \"cumbersome secondary feature\"\ncompartment. What do you think?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 19 Aug 2001 19:15:44 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Locale by default?"
},
{
"msg_contents": "Hi Peter,\n\nAny idea of how many \"extra memory moving operations\" that would be?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nPeter Eisentraut wrote:\n> \n> It occurred to me that a server with locale features that is started in\n> the C locale is going to behave the same as a server without locale\n> features. The exception are a few extra memory moving operations. (I\n> sincerely hope that all systems' libcs have optimized paths for the C\n> locale.) So we could get rid of this --enable-locale switch altogether.\n> Given our international user base, this would be an appropriate step and\n> move the locale support out of the \"cumbersome secondary feature\"\n> compartment. What do you think?\n> \n> --\n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Mon, 20 Aug 2001 03:31:16 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Locale by default?"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> (I sincerely hope that all systems' libcs have optimized paths for the C\n> locale.) So we could get rid of this --enable-locale switch\n> altogether.\n\nSome experimental evidence to support the claim that --enable-locale has\nzero cost would be good before taking this step.\n\nIf any hotspots turn up, we could possibly do runtime checks:\n\n\tif (locale_is_c())\n\t\tstrcmp()\n\telse\n\t\tstrcoll()\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 Aug 2001 13:53:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Locale by default? "
},
{
"msg_contents": "If it's of any assistance, I'm working with the Open Source Database\nBenchmark guys (osdb.sourceforge.net) to get an AS3AP-based benchmark\nfor PostgreSQL 7.1.x+ up-and-running reliably.\n\nIt's working on my Mandrake Linux 8.0 system here, but I need the main\nOSDB guy to get back from holidays to review and commit things to their\nCVS. ETA of around a week from right now. :)\n\nMy point is, if we've got decent benchmarking software (and we can\nactually freely use it), we can do real-world validation tests when\nconsidering things like Peter's suggestion.\n\nSounds good to me.\n\nRegards and best wishes,\n\nJustin Clift\n\n\nTom Lane wrote:\n> \n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > (I sincerely hope that all systems' libcs have optimized paths for the C\n> > locale.) So we could get rid of this --enable-locale switch\n> > altogether.\n> \n> Some experimental evidence to support the claim that --enable-locale has\n> zero cost would be good before taking this step.\n> \n> If any hotspots turn up, we could possibly do runtime checks:\n> \n> if (locale_is_c())\n> strcmp()\n> else\n> strcoll()\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Mon, 20 Aug 2001 04:06:51 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Locale by default?"
},
{
"msg_contents": "> It occurred to me that a server with locale features that is started in\n> the C locale is going to behave the same as a server without locale\n> features. The exception are a few extra memory moving operations. (I\n> sincerely hope that all systems' libcs have optimized paths for the C\n> locale.) So we could get rid of this --enable-locale switch altogether.\n> Given our international user base, this would be an appropriate step and\n> move the locale support out of the \"cumbersome secondary feature\"\n> compartment. What do you think?\n\nI wouldn't object it if there is a way to disable locale support. We\nin Japan are always troubled by borken Japanese locales on some\nsystems. I'm afraid to hear more complains if there is no way to\ndisable the locale support. Moreover, collation of locales for\nJapanese are broken on all platforms as far as I know. I'm not sure\nabout other Asian languages though.\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 20 Aug 2001 10:12:01 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Locale by default?"
},
{
"msg_contents": "Tatsuo Ishii writes:\n\n> I wouldn't object it if there is a way to disable locale support.\n\nexport LC_ALL=C\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 20 Aug 2001 17:12:00 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Locale by default?"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Tatsuo Ishii writes:\n> \n> > I wouldn't object it if there is a way to disable locale support.\n> \n> export LC_ALL=C\n\nI would object even if there's such a way.\nPeople in Japan have hardly noticed that the strange\nbehabior is due to the strange locale(LC_COLLATE).\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 21 Aug 2001 09:26:38 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Locale by default?"
},
{
"msg_contents": "> Tatsuo Ishii writes:\n> \n> > I wouldn't object it if there is a way to disable locale support.\n> \n> export LC_ALL=C\n\nIt's not a solution. My point is people should not be troubled by the\nuseless feature (at least for Japanese) even if they set their locale\nother than C.\n--\nTatsuo Ishii\n\n",
"msg_date": "Tue, 21 Aug 2001 10:00:33 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Locale by default?"
},
{
"msg_contents": "Tatsuo Ishii writes:\n\n> > Tatsuo Ishii writes:\n> >\n> > > I wouldn't object it if there is a way to disable locale support.\n> >\n> > export LC_ALL=C\n>\n> It's not a solution. My point is people should not be troubled by the\n> useless feature (at least for Japanese) even if they set their locale\n> other than C.\n\nIf people set their locale to something other than C they have evidently\njudged that locale is not useless. Why would they set it otherwise? I\ndon't think hiding away a feature because you think it's useless is a good\nidea. If people don't like it, allow them to turn it off. If there are\npotential problems related to the feature, document them.\n\nFace it, everything has locale support these day. PostgreSQL is one of\nthe few packages that even has it as an option to turn it off. Users of\nbinary packages of PostgreSQL are all invariably faced with locale\nfeatures. So it's not like sudden unasked-for locale support is going to\nbe a major shock.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 21 Aug 2001 17:39:07 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Locale by default?"
},
{
"msg_contents": "Hiroshi Inoue writes:\n\n> I would object even if there's such a way.\n> People in Japan have hardly noticed that the strange\n> behabior is due to the strange locale(LC_COLLATE).\n\nI don't think we should design our systems in a way that inconveniences\nmany users because some users are using broken operating systems. If\nJapanese users have not realized yet that the locale support they are\nusing is broken, then it's not the right solution to disable it in\nPostgreSQL by default. In that case the problem would just persist for\nthe system as a whole. The right solution is for them to turn off locale\nsupport in their operating system, the way it's supposed to be done.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 21 Aug 2001 17:41:56 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Locale by default?"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Hiroshi Inoue writes:\n> \n> > I would object even if there's such a way.\n> > People in Japan have hardly noticed that the strange\n> > behabior is due to the strange locale(LC_COLLATE).\n> \n> I don't think we should design our systems in a way that inconveniences\n> many users because some users are using broken operating systems. If\n> Japanese users have not realized yet that the locale support they are\n> using is broken,\n\nI don't know if the locale support is broken in Japan.\nI can't think of any reasonable Japanese collating sequence\nat once(maybe for ever). I don't think people should know \nabout the existence of collating sequences in Japan.\n\n> then it's not the right solution to disable it in\n> PostgreSQL by default. In that case the problem would just persist for\n> the system as a whole. The right solution is for them to turn off locale\n> support in their operating system, the way it's supposed to be done.\n\nDBMS should be independent from the OS settings as far as\npossible especially in the handling of data. Currently we\ncould hardly judge if we are running on a locale or not from\nthe dbms POV and it doesn't seem a dbms kind of thing in the\nfirst place. I'm a dbms guy not an OS guy and really dislike\nthe requirement for users to export LC_ALL=C. \n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 22 Aug 2001 09:05:41 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Locale by default?"
},
{
"msg_contents": "> If people set their locale to something other than C they have evidently\n> judged that locale is not useless. Why would they set it otherwise?\n\nAs Hiroshi pointed out, the broken thing is the LC_COLLATE, other\nthings in the local are working.\n\n> I\n> don't think hiding away a feature because you think it's useless is a good\n> idea. If people don't like it, allow them to turn it off. If there are\n> potential problems related to the feature, document them.\n\nI don't object the idea letting users turn it off. I said we need a\nway to turn it off in the configuration/compile time.\n\n> Face it, everything has locale support these day. PostgreSQL is one of\n> the few packages that even has it as an option to turn it off. Users of\n> binary packages of PostgreSQL are all invariably faced with locale\n> features. So it's not like sudden unasked-for locale support is going to\n> be a major shock.\n\nI would say it's a misunderstanding that the locale (more precisely\nLC_COLLATE) is usefull for *any* Language/encodings.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 22 Aug 2001 10:34:13 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Locale by default?"
},
{
"msg_contents": "> Hiroshi Inoue writes:\n> \n> > I would object even if there's such a way.\n> > People in Japan have hardly noticed that the strange\n> > behabior is due to the strange locale(LC_COLLATE).\n> \n> I don't think we should design our systems in a way that inconveniences\n> many users because some users are using broken operating systems. If\n\nI don't understand why you object the idea giving PostgreSQL the\nability to turn off the locale support in configuration/compile\ntime. In that way, there's no inconveniences for \"many users\".\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 22 Aug 2001 10:34:37 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Locale by default?"
},
{
"msg_contents": "Tatsuo Ishii writes:\n\n> I don't understand why you object the idea giving PostgreSQL the\n> ability to turn off the locale support in configuration/compile\n> time. In that way, there's no inconveniences for \"many users\".\n\nI don't mind at all the ability to turn it off. My point is that the\ncompile time is the wrong time to do it. Many users use binary\npackages these days, many more users would like to use binary packages.\nBut the creators of these packages have to make configuration choices to\nsatisfy all of their users. So they turn on the locale support, because\nthat way if you don't want it you can turn if off. The other way around\ndoesn't work.\n\nThe more appropriate way to handle this situation is to make it a runtime\noption. I agree that the LC_ALL/LC_COLLATE/LANG lattice is confusing and\nfragile. But there can be other ways, e.g.,\n\ninitdb --locale=en_US\ninitdb --locale-collate=C --locale-ctype=en_US\ninitdb # defaults to --locale=C\n\nor in postgresql.conf\n\nlocale=C\nlocale_numeric=en_US\netc.\n\nor\n\nSHOW locale;\nSHOW locale_numeric;\n\nThat way you always know exactly what situation you're in. I think this\nwas Hiroshi's main concern, the reliance on export LC_ALL, and I agree\nthat this is bad.\n\nYou say locale in Japan works, except for LC_COLLATE. This concern would\nbe satisfied by the above approach.\n\nComments?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 22 Aug 2001 17:50:32 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Locale by default?"
},
{
"msg_contents": "> Face it, everything has locale support these day. PostgreSQL is one of\n> the few packages that even has it as an option to turn it off. Users of\n> binary packages of PostgreSQL are all invariably faced with locale\n> features. So it's not like sudden unasked-for locale support is going to\n> be a major shock.\n\nCertainly everyone would agree that \"locale support\" is desirable.\nTatsuo has been one of the earliest and most vocal participants in\ndesign speculations on how to support the SQL9x concept of character\nsets and collations, which for purposes of long range planning seem to\nbe synonymous with \"locale\" afaict.\n\nThe question is whether and how to continue to extend the use of\nOS-supplied features to accomplish this support, with the severe\nrestrictions (from an SQL9x pov) which come with the OS implementation.\n\n - Thomas\n",
"msg_date": "Tue, 28 Aug 2001 02:02:16 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Locale by default?"
},
{
"msg_contents": "> > Face it, everything has locale support these day. PostgreSQL is one of\n> > the few packages that even has it as an option to turn it off. Users of\n> > binary packages of PostgreSQL are all invariably faced with locale\n> > features. So it's not like sudden unasked-for locale support is going to\n> > be a major shock.\n> \n> Certainly everyone would agree that \"locale support\" is desirable.\n\nNo. At least for Japanese, LC_COLLATE is not usefull at all. Let me\nexplain why. Japanese has three kind of characters: The first one is\ncalled \"Kanji\", Scond one is \"Hiragana\". The last one is \"Katakana\".\nMany pary of data stored in database are usually written in Kanji. The\nproblem is, Kanji is an ideogram and there is no algorithm to guess\nthe correct pronunciation for Kanji letters. The only solution for\nthis is add a separate column having Hiragana or Katakana letters\nwhich represents the pronunciation for the Kanji column (Hiragana and\nKataka are phonogram). Sorting is also done by the additional\nHiragana/Katakan column, that can be done according to the code point\nof Hiragana/Katakana. So no locale support (LC_COLLATE) is neccessary\nat all for Japanese.\n\n> Tatsuo has been one of the earliest and most vocal participants in\n> design speculations on how to support the SQL9x concept of character\n> sets and collations, which for purposes of long range planning seem to\n> be synonymous with \"locale\" afaict.\n> \n> The question is whether and how to continue to extend the use of\n> OS-supplied features to accomplish this support, with the severe\n> restrictions (from an SQL9x pov) which come with the OS implementation.\n\nIn my opinion, with the SQL99 collate support, the current locale\nsupport should be vanished.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 28 Aug 2001 11:50:02 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Locale by default?"
},
{
"msg_contents": "> > Certainly everyone would agree that \"locale support\" is desirable.\n> No...\n\nThat is why I put \"locale support\" in double-quotes. Sorry that I was\ncryptic, but I do understand your concern that OS-specific locale\nsupport is suspect for some languages.\n\n> In my opinion, with the SQL99 collate support, the current locale\n> support should be vanished.\n\nRight. That is my opinion too.\n\n - Thomas\n",
"msg_date": "Tue, 28 Aug 2001 03:35:03 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Locale by default?"
}
] |
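Editor's note: Tom's runtime-dispatch idea (`if (locale_is_c()) strcmp() else strcoll()`) can be mimicked from Python's locale module. This is a sketch of the dispatch only; it exercises just the always-available "C" locale, since other locale names vary between systems:

```python
# Editor's sketch (not in the original thread) of the runtime dispatch:
# cheap byte-wise comparison when LC_COLLATE is C, strcoll() otherwise.
import locale

def compare(a: str, b: str) -> int:
    """Return <0, 0 or >0, strcoll()-style."""
    # setlocale() with no locale argument queries the current setting.
    if locale.setlocale(locale.LC_COLLATE) in ("C", "POSIX"):
        return (a > b) - (a < b)      # strcmp-like fast path
    return locale.strcoll(a, b)       # locale-aware slow path

locale.setlocale(locale.LC_COLLATE, "C")
```

With LC_COLLATE set to C, the fast path gives the plain byte order that the thread treats as the zero-cost baseline.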
[
{
"msg_contents": "> Hi all. I'm not a good DB admin , thats why I'm posting to this list !\n> How can I figure out the size of a database or table ???\n\nIt was easier in older versions of postgresql to perform a `du -h` on\nthe directory that corresponded to the database name. However now they\nare named by some type of object id?\n\nWould it be possible for an item to be added to the to do list to perform\na report on a database?\n\nI.e:\n\n \\d{p|S|l|r} list permissions/system tables/lobjects/ report\n\nReport will contain:\n\n1. Last vacuum time.\n2. Space the database is using on the file system.\n3. Some other juicy statistics?\n\nThanks.\n\n",
"msg_date": "Mon, 20 Aug 2001 10:17:16 +1000 (EST)",
"msg_from": "Grant <grant@conprojan.com.au>",
"msg_from_op": true,
"msg_subject": "Re: DB size"
}
] |
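Editor's note: since the on-disk directories are now named by numeric ids (as the message above observes), the `du -h <dbname>` trick needs an id lookup first, but the measuring half can still be scripted portably. A generic sketch follows; the `$PGDATA/base` layout is an assumption here, so adjust the path for your installation:

```python
# Editor's sketch (not in the original thread): sum file sizes under a
# directory, like `du -sb`, e.g. a per-database directory under
# $PGDATA/base once you have looked up its numeric id.
import os

def dir_size_bytes(path: str) -> int:
    """Sum the sizes of regular files under path, skipping symlinks."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total
```
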
[
{
"msg_contents": "Possibly create a timeout for psql. pg_dump, pg_restore and other clients.\nIf they can not connect to a certain host within a certain period it will\nquit with an error. I have psql's still running for 6 days from crontab\nthat could not connect to a bogus IP address.\n\nI checked the idocs and searched on google for \"pg_dump timeout\".\n\nThankyou.\n\n",
"msg_date": "Mon, 20 Aug 2001 15:07:11 +1000 (EST)",
"msg_from": "Grant <grant@conprojan.com.au>",
"msg_from_op": true,
"msg_subject": "Suggestion for To Do List - Client timeout please."
},
{
"msg_contents": "Grant <grant@conprojan.com.au> writes:\n> Possibly create a timeout for psql. pg_dump, pg_restore and other clients.\n> If they can not connect to a certain host within a certain period it will\n> quit with an error. I have psql's still running for 6 days from crontab\n> that could not connect to a bogus IP address.\n\nThere is something wrong with your system, not with Postgres. Any\nreasonable TCP stack will time out within circa 1 minute if no response.\n\nExample (sss is a machine on my LAN that's not presently up):\n\n$ time psql -h sss\npsql: PQconnectPoll() -- connect() failed: Connection timed out\n Is the postmaster running (with -i) at 'sss'\n and accepting connections on TCP/IP port '5432'?\n\nreal 1m14.27s\nuser 0m0.01s\nsys 0m0.01s\n$\n\nThis particular timeout length is probably specific to HPUX, but the\npoint is that you have a local system problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Aug 2001 09:48:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Suggestion for To Do List - Client timeout please. "
},
{
"msg_contents": "Tom Lane writes:\n\n> There is something wrong with your system, not with Postgres. Any\n> reasonable TCP stack will time out within circa 1 minute if no response.\n>\n> Example (sss is a machine on my LAN that's not presently up):\n>\n> $ time psql -h sss\n> psql: PQconnectPoll() -- connect() failed: Connection timed out\n> Is the postmaster running (with -i) at 'sss'\n> and accepting connections on TCP/IP port '5432'?\n>\n> real 1m14.27s\n> user 0m0.01s\n> sys 0m0.01s\n> $\n>\n> This particular timeout length is probably specific to HPUX, but the\n> point is that you have a local system problem.\n\nI can observe something peculiar:\n\nWith current sources:\n\npeter ~$ time pg-install/bin/psql -h ralph\npsql: could not connect to server: No route to host\n Is the server running on host ralph and accepting\n TCP/IP connections on port 5432?\n\nreal 0m3.013s\nuser 0m0.010s\nsys 0m0.010s\n\nWith 7.0.2:\n\npeter ~$ time psql -h ralph\npsql: PQconnectPoll() -- connect() failed: No route to host\n Is the postmaster running (with -i) at 'ralph'\n and accepting connections on TCP/IP port '5432'?\n\n[hangs]\n\nThe backtrace shows:\n\n#0 0x401d9a0e in __select () from /lib/libc.so.6\n#1 0x4002f3b0 in b2c3 () from /usr/lib/libpq.so.2.1\n#2 0x4002666e in pqFlush () from /usr/lib/libpq.so.2.1\n#3 0x40022bc2 in closePGconn () from /usr/lib/libpq.so.2.1\n#4 0x40022c67 in PQfinish () from /usr/lib/libpq.so.2.1\n#5 0x805167d in main ()\n\nI suspect that this may be because of the questionable TCP implementation\nin Linux that you argued about with Alan Cox et al. a while ago, though I\ndon't pretend to fathom the details. Apparently something in libpq\nchanged in between, however.\n\nBut that reinforces your point that \"something is wrong with your system\".\n;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 20 Aug 2001 18:03:20 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Suggestion for To Do List - Client timeout please. "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I can observe something peculiar:\n> [7.0.2 works different from current]\n\nInteresting. The psql that I exhibited my test with was in fact 7.0.2\n[quick check ... yes, current sources act the same]. So it does seem\nthere's something Linux-specific here.\n\n> The backtrace shows:\n\n> #0 0x401d9a0e in __select () from /lib/libc.so.6\n> #1 0x4002f3b0 in b2c3 () from /usr/lib/libpq.so.2.1\n> #2 0x4002666e in pqFlush () from /usr/lib/libpq.so.2.1\n> #3 0x40022bc2 in closePGconn () from /usr/lib/libpq.so.2.1\n> #4 0x40022c67 in PQfinish () from /usr/lib/libpq.so.2.1\n> #5 0x805167d in main ()\n\n> I suspect that this may be because of the questionable TCP implementation\n> in Linux that you argued about with Alan Cox et al. a while ago, though I\n> don't pretend to fathom the details. Apparently something in libpq\n> changed in between, however.\n\nYou changed it. I'll bet the difference you are seeing is that\nclosePGconn no longer tries to send an 'X' message when closing the\nsocket, if we haven't reached CONNECTION_OK state. The hang is clearly\noccuring while trying to flush out that extra byte.\n\nI would agree that this is evidence of a broken TCP stack, however ---\nat worst you should incur a second timeout delay here, not an indefinite\nhang. Anyone want to file a bug report with the Linux TCP boys?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Aug 2001 12:10:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Suggestion for To Do List - Client timeout please. "
}
] |
[
{
"msg_contents": "Hello, I'm pretty new to PostgreSQL, (I'm a (young) oracle DBA)\n\nWe need to work with Oracle (some heavy client of our society want Oracle\nfor security or maintenance), but we want to work with PostgreSQL too, cause\nit's more performant, less administration, ... in few words : It's the\nfuture\nBut before PostgreSQL rule the world, we want to develop once and deploy\ntwice, so I create some function ( now() and nextval() under oracle ) to\nenhanced Oracle and PostgreSQL compatibily ...\nBut I have a big problem : PostgreSQL doesn't allow Oracle style outer join,\nand Oracle doesn't allow Postgres style ... So, if someone can do something,\nI will be very greatfull ...\n\nOther compatibily problems : reserved words, create function syntax,\ndatatypes are not a problem, I can workaround.\n\nThanx in advance ...\n\n",
"msg_date": "Mon, 20 Aug 2001 15:55:45 +0200",
"msg_from": "\"Nicolas Verger\" <nicolas@verger.net>",
"msg_from_op": true,
"msg_subject": "Select parser at runtime ...."
},
{
"msg_contents": "\"Nicolas Verger\" <nicolas@verger.net> writes:\n> But I have a big problem : PostgreSQL doesn't allow Oracle style outer join,\n> and Oracle doesn't allow Postgres style ...\n\nAre you sure about that? Postgres supports ISO standard (ISO/IEC 9075,\nSQL 1992) outer joins. Oracle claims to be compliant with that\nstandard. If they don't accept the standard syntax for outer joins,\nthen their claim of compliance is faulty. But last I heard, they did.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Aug 2001 11:33:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Select parser at runtime .... "
},
{
"msg_contents": "> Tom Lane writes :\n> Oracle claims to be compliant with that\n> standard. If they don't accept the standard syntax for outer joins,\n> then their claim of compliance is faulty. But last I heard, they did.\n\nWell, I can't find it into the documentation, and the postgres syntax does\nnot work so I think Oracle does not support SQL92 outer join syntax... :(\nAnd Oracle keywords does not contains 'outer' nor 'left' .... Arg\n\n",
"msg_date": "Mon, 20 Aug 2001 18:11:28 +0200",
"msg_from": "\"Nicolas Verger\" <nicolas@verger.net>",
"msg_from_op": true,
"msg_subject": "RE: Select parser at runtime .... "
},
{
"msg_contents": "> De : Tom Lane\n> Are you sure about that? Postgres supports ISO standard (ISO/IEC 9075,\n> SQL 1992) outer joins. Oracle claims to be compliant with that\n> standard. If they don't accept the standard syntax for outer joins,\n> then their claim of compliance is faulty. But last I heard, they did.\n\nI find this on an Oracle Technical Forum :\n\n- SQL 92 is a standard with 3 levels: entrey level, intermediate level and a\nthird level I - can't remember (sorry...)\n-\n- As far as I know, Oracle is SQL 92 entry level compliant.\n- INNER JOIN, OUTER JOIN, LEFT JOIN etc belong to SQL 92\n- intermediate level sintax and, therefore, will not work on Oracle.\n-\n- Regards,\n- Danilo Gimenez\n- Oracle DBA\n\nSo I haven't any solution.... Can I hope about a future implementation of\nOracle outer join style ?\n\n\n",
"msg_date": "Mon, 20 Aug 2001 18:40:32 +0200",
"msg_from": "\"Nicolas Verger\" <nicolas@verger.net>",
"msg_from_op": true,
"msg_subject": "RE: Select parser at runtime .... "
}
] |
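To make the dialect gap discussed in this thread concrete, here is a sketch of the same query in both notations (the `emp`/`dept` tables and columns are hypothetical). PostgreSQL accepts the ISO form; Oracle releases of that era accepted only the proprietary `(+)` notation, though Oracle 9i later added support for the ISO join syntax.

```sql
-- ISO / SQL92 outer join (PostgreSQL; Oracle 9i and later):
SELECT e.name, d.dept_name
FROM emp e
LEFT OUTER JOIN dept d ON e.dept_id = d.dept_id;

-- Oracle's proprietary notation: (+) marks the side that may be NULL-extended
SELECT e.name, d.dept_name
FROM emp e, dept d
WHERE e.dept_id = d.dept_id (+);
```

A develop-once-deploy-twice scheme therefore has to generate one of these two forms per target, since neither engine of that era parsed the other's syntax.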
[
{
"msg_contents": "> Would your suggested implementation allow locking on an\n> arbitrary string?\n\nWell, placing string in LOCKTAG is not good so we could\ncreate auxilary hash table in shmem to keep such strings\nand use string' address as part of LOCKTAG. New function\n(LockRelationKey?) in lmgr.c would first find/place\nkey in that table, than construct LOCKTAG and call\nLockAcquire.\nPossible syntax:\n\nLOCK TABLE relation IN {SHARE | EXCLUSIVE} MODE\n\tON KEY user-string [FOR TRANSACTION | FOR SESSION];\n\nUNLOCK (RELEASE?) TABLE relation {SHARE | EXCLUSIVE} LOCK\n\tON KEY user-string;\n\n(or just some built-in functions).\n\n> If it does then one of the things I'd use it for is to insert\n> unique data without having to lock the table or rollback on\n> failed insert (unique index still kept as a guarantee).\n\n(Classic example how could be used SAVEPOINTs -:))\n\nSo, in your application you would first lock a key in excl mode\n(for duration of transaction), than try to select and insert unless\nfound? (Note that this will not work with serializable isolevel.)\n\nComments?\n\nVadim\n",
"msg_date": "Mon, 20 Aug 2001 09:39:42 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: User locks code"
}
] |
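As a sketch of the pattern Vadim describes (all table, column, and key names hypothetical; the proposed `LOCK ... ON KEY` syntax was never released, but later PostgreSQL versions exposed a comparable facility as advisory locks):

```sql
BEGIN;
-- take a transaction-scoped exclusive lock keyed on an arbitrary string
SELECT pg_advisory_xact_lock(hashtext('tablename.colname.val=1'));

-- with the key held, check-then-insert cannot race another session
-- that locks the same key first
INSERT INTO tablename (colname)
SELECT 1
WHERE NOT EXISTS (SELECT 1 FROM tablename WHERE colname = 1);
COMMIT;  -- the advisory lock is released automatically at transaction end
```

The unique index stays in place as the hard guarantee; the key lock only avoids the failed-insert-and-rollback path.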
[
{
"msg_contents": "Hi all,\n\nJust wondering if it'd be worthwhile creating a wrapper script people\ncan run which would automatically generate/update the\npg_hba.conf/pg_ident.conf files? Just to make it easier for the\nend-user.\n\nCalled \"bin/pg_auth\" or something.\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Tue, 21 Aug 2001 02:50:11 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Idea: Worthwhile creating a wrapper script to automate pg_hba.conf\n\tentries?"
}
] |
[
{
"msg_contents": "ipcclean(1) currently says:\n\n| ipcclean cleans up shared memory and semaphore space from aborted backends\n| by deleting all instances owned by user postgres. Only the DBA should\n| execute this program as it can cause bizarre behavior (i.e., crashes) if\n| run during multi-user execution. This program should be executed if\n| messages such as semget: No space left on device are encountered when\n| starting up the postmaster or the backend server.\n\nAFAIR, with the 7.1 release the postmaster automatically recovers from\nthis situation. Can someone come up with a better description of what\nipcclean is useful for, if there still is such a thing?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 20 Aug 2001 20:12:10 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Status of ipcclean"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> AFAIR, with the 7.1 release the postmaster automatically recovers from\n> this situation. Can someone come up with a better description of what\n> ipcclean is useful for, if there still is such a thing?\n\nI believe that ipcclean is no longer needed for preparing to start a new\npostmaster. It might possibly be useful if you wanted to clean up after\na dead postmaster that you did *not* intend to restart.\n\nHowever, given the lack of portability and lack of robustness of the\nscript (including inability to deal with multiple-postmaster\nsituations), I think I'd vote for removing it altogether.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Aug 2001 20:46:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Status of ipcclean "
},
{
"msg_contents": "Tom Lane writes:\n\n> I believe that ipcclean is no longer needed for preparing to start a new\n> postmaster. It might possibly be useful if you wanted to clean up after\n> a dead postmaster that you did *not* intend to restart.\n>\n> However, given the lack of portability and lack of robustness of the\n> script (including inability to deal with multiple-postmaster\n> situations), I think I'd vote for removing it altogether.\n\nCan other people voice their opinions what to do with ipcclean?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 28 Aug 2001 17:38:39 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Status of ipcclean "
}
] |
[
{
"msg_contents": "This is from the jdbc list. Is there any way to get fully qualified\ncolumn names?\n\nDave\n\n-----Original Message-----\nFrom: pgsql-jdbc-owner@postgresql.org\n[mailto:pgsql-jdbc-owner@postgresql.org] On Behalf Of Ben Carterette\nSent: August 16, 2001 11:02 AM\nTo: Rene Pijlman\nCc: pgsql-jdbc@postgresql.org\nSubject: Re: [JDBC] select on multiple tables\n\n\nThis won't work because I don't know in advance of the SELECT which \ntables I'm going to be selecting from. The SELECT is done in a servlet \nthat determines the tables based on request parameters. I tried \"SELECT\n\ntable1.*, table2.* FROM table1, table2\", but it still can't tell the \ndifference between columns with the same name.\n\nThanks for your help\n\nben\n\n\nOn Wednesday, August 15, 2001, at 06:29 PM, Rene Pijlman wrote:\n\n> On Wed, 15 Aug 2001 16:43:31 -0500, Ben Carterette wrote:\n>> I have a query like \"SELECT * FROM table1, table2\" and I want to read\n>> values\n>> out of a ResultSet. What if the two tables have column names in \n>> common and\n>> I can't predict the column numbers? Is there any way to get\ntable1.id \n>> and\n>> table2.id? rs.getString tells me \"The column name table1.id not \n>> found.\"\n>\n> Does this also happen when you explicitly name the columns?\n>\n> SELECT table1.id, ..., table2.id, ...\n> FROM table1, table2\n>\n> Or if that doesn't help, try if a column label with the AS clause \n> works:\n>\n> SELECT [ ALL | DISTINCT [ ON ( expression [, ...] ) ] ]\n> * | expression [ AS output_name ] [, ...] \n> http://www.postgresql.org/idocs/index.php?sql-select.html\n>\n> SELECT table.id AS id1, ..., table2.id AS id2\n> FROM table1, table2\n>\n> And then rs.getString(\"id1\");\n>\n> I think both solutions should work. Please let us know if they don't.\n>\n> Regards,\n> René Pijlman\n>\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n\n",
"msg_date": "Mon, 20 Aug 2001 15:02:13 -0400",
"msg_from": "\"Dave Cramer\" <Dave@micro-automation.net>",
"msg_from_op": true,
"msg_subject": "Fully qualified column names"
},
{
"msg_contents": "\nThis is from the jdbc list. Is there any way to get fully qualified\ncolumn names?\n\nDave\n\n-----Original Message-----\nFrom: pgsql-jdbc-owner@postgresql.org\n[mailto:pgsql-jdbc-owner@postgresql.org] On Behalf Of Ben Carterette\nSent: August 16, 2001 11:02 AM\nTo: Rene Pijlman\nCc: pgsql-jdbc@postgresql.org\nSubject: Re: [JDBC] select on multiple tables\n\n\nThis won't work because I don't know in advance of the SELECT which \ntables I'm going to be selecting from. The SELECT is done in a servlet \nthat determines the tables based on request parameters. I tried \"SELECT\n\ntable1.*, table2.* FROM table1, table2\", but it still can't tell the \ndifference between columns with the same name.\n\nThanks for your help\n\nben\n\n\nOn Wednesday, August 15, 2001, at 06:29 PM, Rene Pijlman wrote:\n\n> On Wed, 15 Aug 2001 16:43:31 -0500, Ben Carterette wrote:\n>> I have a query like \"SELECT * FROM table1, table2\" and I want to read\n\n>> values out of a ResultSet. What if the two tables have column names \n>> in common and\n>> I can't predict the column numbers? Is there any way to get\ntable1.id \n>> and\n>> table2.id? rs.getString tells me \"The column name table1.id not\n>> found.\"\n>\n> Does this also happen when you explicitly name the columns?\n>\n> SELECT table1.id, ..., table2.id, ...\n> FROM table1, table2\n>\n> Or if that doesn't help, try if a column label with the AS clause\n> works:\n>\n> SELECT [ ALL | DISTINCT [ ON ( expression [, ...] ) ] ]\n> * | expression [ AS output_name ] [, ...]\n> http://www.postgresql.org/idocs/index.php?sql-select.html\n>\n> SELECT table.id AS id1, ..., table2.id AS id2\n> FROM table1, table2\n>\n> And then rs.getString(\"id1\");\n>\n> I think both solutions should work. Please let us know if they don't.\n>\n> Regards,\n> René Pijlman\n>\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n\n\n",
"msg_date": "Mon, 20 Aug 2001 20:08:54 -0400",
"msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Fully qualified column names"
}
] |
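The AS-clause workaround suggested in the quoted thread is the reliable fix when column names collide across tables: alias each column to a unique output name and fetch by that label. A sketch (table and column names hypothetical):

```sql
SELECT t1.id AS table1_id, t2.id AS table2_id
FROM table1 t1, table2 t2;
```

On the JDBC side, `rs.getString("table1_id")` and `rs.getString("table2_id")` then retrieve the two values unambiguously, since the ResultSet only sees the output labels, not the originating table names.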
[
{
"msg_contents": "At 09:39 AM 20-08-2001 -0700, Mikheev, Vadim wrote:\n>> If it does then one of the things I'd use it for is to insert\n>> unique data without having to lock the table or rollback on\n>> failed insert (unique index still kept as a guarantee).\n>\n>(Classic example how could be used SAVEPOINTs -:))\n\nI guess so. But this could be faster.\n\n>So, in your application you would first lock a key in excl mode\n>(for duration of transaction), than try to select and insert unless\n>found? (Note that this will not work with serializable isolevel.)\n\nyep:\nlock \"tablename.colname.val=1\"\nselect count(*) from tablename where colname=1\nIf no rows, insert, else update.\n(dunno if the locks would scale to a scenario with hundreds of concurrent\ninserts - how many user locks max?).\n\nWhy wouldn't it work with serializable isolevel?\n\nAnyway, I believe that isolevel doesn't really serialise things in this\ncase (inserting a unique row) so it wouldn't matter to me.\n\nRegards,\nLink.\n\n",
"msg_date": "Tue, 21 Aug 2001 14:26:06 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": true,
"msg_subject": "RE: User locks code"
},
{
"msg_contents": "Hi all,\n\nI'm currently trying to develop a log analyzer for PostgreSQL logs and at\nthe first\nstage I'm finding a little problem with the postgresql.conf option\nlog_timestamp.\n\nThe problem is that if this option is set to false we have no idea of when\nthe backend\nis started:\n\nDEBUG: database system was shut down at 2001-08-20 21:51:54 CEST\nDEBUG: CheckPoint record at (0, 126088776)\nDEBUG: Redo record at (0, 126088776); Undo record at (0, 0); Shutdown TRUE\nDEBUG: NextTransactionId: 489793; NextOid: 77577\nDEBUG: database system is in production state\n\nIs it possible to have it into the last line as we have the information of\nthe database\nshutdown timestamp in the first line ?\n\nAlso, an other question is why using timestamp into the other log instead of\nthe value\nof time in seconds since the Epoch like the time() function do ?\n\nI don't know if it is speedest or not but if timestamp is system local\ndependant\nI think it should be very difficult to me to have a portable log analyzer...\n\nRegards,\n\nGilles Darold\n\n",
"msg_date": "Tue, 21 Aug 2001 09:45:50 +0200",
"msg_from": "Gilles DAROLD <gilles@darold.net>",
"msg_from_op": false,
"msg_subject": "Postgresql log analyzer"
},
{
"msg_contents": "Gilles DAROLD writes:\n\n> Is it possible to have it into the last line as we have the information of\n> the database\n> shutdown timestamp in the first line ?\n\nWe not just turn time stamping on?\n\n> Also, an other question is why using timestamp into the other log instead of\n> the value\n> of time in seconds since the Epoch like the time() function do ?\n\nBecause humans generally reckon time the former way.\n\n> I don't know if it is speedest or not but if timestamp is system local\n> dependant\n> I think it should be very difficult to me to have a portable log analyzer...\n\nIn the current system, the timestamp is not locale dependent, but that\ndoesn't mean that it could be in the future. (I wouldn't find it\nparticularly useful, because the current format is internationally\nreadable.)\n\nWhat *is* locale dependent are the messages, though. Not sure how you\nplan to deal with that.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 21 Aug 2001 17:49:42 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql log analyzer"
},
{
"msg_contents": "Hi all,\n\nHere is a first draft generated by a log analyzer for postgres I've wrote today:\n\n http://www.samse.fr/GPL/log_report/\n\nIn all this html report there is what I'm able to extract minus the statistics.\n\nI need to know what people want to see reported to have a powerfull log analyzer,\n\nI can spend this week do do that...\n\nPeter, sorry for my poor english that what I mean is that if you don't activated\nthe\nlog_timestamp option we have no idea when the postmaster have been started\n(or at least doing a ls -la on /tmp/.s.PGSQL.5432). Other things was just\nquestion\nand you answer them, thanks.\n\nRegards\n\n\n",
"msg_date": "Tue, 21 Aug 2001 20:35:25 +0200",
"msg_from": "Gilles DAROLD <gilles@darold.net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgresql log analyzer"
},
{
"msg_contents": "Gilles DAROLD wrote:\n> \n> Hi all,\n> \n> Here is a first draft generated by a log analyzer for postgres I've wrote today:\n> \n> http://www.samse.fr/GPL/log_report/\n> \n> In all this html report there is what I'm able to extract minus the statistics.\n> \n> I need to know what people want to see reported to have a powerfull log analyzer,\n\nI like what you have there so far.\n\nFor my own use I would like to see the ability to turn some of these off,\nand also perhaps a summary page that you would click through to the more\ndetailed reports.\n\nThe 'query' page is kind of complicated too. Would it be possible to put\nthat into a table layout as well?\n+-------------------------------+\n|select... |\n+----+----+----+--------+-------+\n|stat|stat|stat|stat ...| |\n+----+----+----+--------+-------+\n\nsort of layout.\n\nIt would be nice to see an EXPLAIN on the query page, but you would want\nthis to be an option, I guess. I imagine you could do this by getting the\nEXPLAIN at log analysis time if it isn't in the logs.\n\nCheers,\n\t\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: Andrew @ catalyst . net . nz\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64(21)635-694, Fax:+64(4)499-5596, Office: +64(4)499-2267xtn709\n",
"msg_date": "Wed, 22 Aug 2001 08:03:26 +1200",
"msg_from": "Andrew McMillan <andrew@catalyst.net.nz>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Postgresql log analyzer"
},
{
"msg_contents": "Hi all,\n\nI have updated the drafts for pg log analyzer especially for EXPLAIN output.\nWhat do you want to see as statistics result. Currently I only output the following:\n\n- scan type\n- startup cost\n- total cost\n- number of rows returned\n- and the width\n\nThere's certainly other usefull information but I don't know. Please let me know !\n\nNote:\n\nThis is a simple draft to show what can be done, as a general purpose it will include:\n\n- A configuration file (to choose what should be reported, paths, etc...)\n or/and command line args\n- An index page with resume of all reports\n- Incremental scan working on full or rotate log\n\nFor other good requests it's done...\n\nLet me know any other requests otherwise I will publish the first release at least on\nmonday\nif not tomorow !\n\n http://www.samse.fr/GPL/log_report/\n\nRegards,\n\nGilles Darold\n\nAndrew McMillan wrote:\n\n> Gilles DAROLD wrote:\n> >\n> > Hi all,\n> >\n> > Here is a first draft generated by a log analyzer for postgres I've wrote today:\n> >\n> > http://www.samse.fr/GPL/log_report/\n> >\n> > In all this html report there is what I'm able to extract minus the statistics.\n> >\n> > I need to know what people want to see reported to have a powerfull log analyzer,\n>\n> I like what you have there so far.\n>\n> For my own use I would like to see the ability to turn some of these off,\n> and also perhaps a summary page that you would click through to the more\n> detailed reports.\n>\n> The 'query' page is kind of complicated too. Would it be possible to put\n> that into a table layout as well?\n> +-------------------------------+\n> |select... |\n> +----+----+----+--------+-------+\n> |stat|stat|stat|stat ...| |\n> +----+----+----+--------+-------+\n>\n> sort of layout.\n>\n> It would be nice to see an EXPLAIN on the query page, but you would want\n> this to be an option, I guess. I imagine you could do this by getting the\n> EXPLAIN at log analysis time if it isn't in the logs.\n>\n> Cheers,\n> Andrew.\n> --\n> _____________________________________________________________________\n> Andrew McMillan, e-mail: Andrew @ catalyst . net . nz\n> Catalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\n> Me: +64(21)635-694, Fax:+64(4)499-5596, Office: +64(4)499-2267xtn709\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n\n\n",
"msg_date": "Thu, 23 Aug 2001 14:35:47 +0200",
"msg_from": "Gilles DAROLD <gilles@darold.net>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql log analyzer"
}
] |
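For reference, the five fields Gilles lists map one-to-one onto a single 7.x-era EXPLAIN plan line; the query below reuses a table name from an earlier thread and the cost figures are made up for illustration:

```sql
EXPLAIN SELECT * FROM chembase WHERE id = 27948;
-- NOTICE:  QUERY PLAN:
--
-- Index Scan using chembase_pkey on chembase  (cost=0.00..4.82 rows=1 width=12)
--
-- scan type    = Index Scan
-- startup cost = 0.00
-- total cost   = 4.82
-- rows         = 1
-- width        = 12
```

Nested plans indent additional lines of the same shape under the top node, so a per-node parse of `(cost=A..B rows=N width=W)` recovers everything the report needs.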
[
{
"msg_contents": "Where is the MX RPM ? I didn't see this in the 7.1.3 RPM, for RH 7.1 and also Mdk 8.0.\nAnd by the way, it was asked when I tried to install the PostgreSQL Python module.\nI know 7.1.2 RH 7.1 has this MX RPM.\n\nThank's\nAndy",
"msg_date": "Tue, 21 Aug 2001 16:30:56 +0700",
"msg_from": "\"Andy\" <andysamuel@geocities.com>",
"msg_from_op": true,
"msg_subject": "mx.rpm"
},
{
"msg_contents": "On Tuesday 21 August 2001 05:30, Andy wrote:\n> Where is the MX RPM ? I didn't see this in the 7.1.3 RPM, for RH 7.1 and\n> also Mdk 8.0. And by the way, it was asked when I tried to install the\n> PostgreSQL Python module. I know 7.1.2 RH 7.1 has this MX RPM.\n\nThe 7.1 DB-API 2.0 Python client _requires_ mx to be installed, whether from \nRPM or from source. As a stopgap measure, I uploaded a version of the mx rpm \nwith the last 7.1.2 RPMset.\n\nHowever, I can't really provide those dependencies forever. The mx rpm \nprovided in the 7.1.2 download is still ok to use. I don't think I cleaned \nit out.\n\nCheck on 'www.rpmfind.net' for an mx RPM for your distribution.\n\nIncidentally, the older PyGreSQL module doesn't require this -- but the newer \npgdb module does. This is not documented in the main PostgreSQL \ndocumentation, either.\n\nD'Arcy may be able to provide a link to where this is documented properly.\n\n>From rpmfind.net: \n(http://www.rpmfind.net//linux/RPM/rawhide/1.0/i386/RedHat/RPMS/mx-2.0.1-1.i386.html)\nThe mx extensions for Python are a collection of Python software tools\nwhich enhance Python's usability in many areas.\n\nAny OS that ships PostgreSQL 7.1.x and doesn't ship mx has a broken \nPostgreSQL Python DB-API 2.0 client, AFAICT.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 21 Aug 2001 11:26:20 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] mx.rpm"
}
] |
[
{
"msg_contents": "How is posible show the date in european format. By defalut it's ISO. I\ncan show with \"SET datestyle=postgres\" but is validate from one sesion\nonly.\n\nThanks\n",
"msg_date": "Tue, 21 Aug 2001 11:52:03 +0200",
"msg_from": "Juan Manuel =?iso-8859-1?Q?Garc=EDa?= Arias <desarrollo@centrored.com>",
"msg_from_op": true,
"msg_subject": "Problem with postgres's date"
}
] |
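A sketch of the options for making the European style stick beyond a single session (the per-user and per-database forms appeared in releases after the one discussed here; user and database names are hypothetical):

```sql
-- per session only, as in the question:
SET datestyle TO 'Postgres, European';

-- in postgresql.conf, as a server-wide default (later releases):
--   datestyle = 'Postgres, European'

-- per-user or per-database default (later releases):
ALTER USER someuser SET datestyle = 'Postgres, European';
ALTER DATABASE somedb SET datestyle = 'Postgres, European';
```

On the client side, libpq applications also honor the `PGDATESTYLE` environment variable, which has the same effect as issuing the SET at connection start.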
[
{
"msg_contents": "If anyone was concerned about our bug database being visible and giving\nthe impression we don't fix any bugs, see this URL:\n\n\thttp://www.isthisthingon.org/nisca/postgres.html\n\nNot only does it show the problems he had with PostgreSQL, he uses our\nbug list as an example of how PostgreSQL isn't advancing or interested\nin fixing bug.\n\nWe better remove that web page soon:\n\n\thttp://www.ca.postgresql.org/bugs/bugs.php?2\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Aug 2001 07:05:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Link to bug webpage"
},
{
"msg_contents": ">\n>We better remove that web page soon:\n>\n>\thttp://www.ca.postgresql.org/bugs/bugs.php?2\n>\n\nDo we have any pages to alter the status of bugs, or assign them? There are\na number of bugs in the list that I know are fixed.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 21 Aug 2001 21:30:49 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "On Tue, 21 Aug 2001, Bruce Momjian wrote:\n\n> If anyone was concerned about our bug database being visible and giving\n> the impression we don't fix any bugs, see this URL:\n>\n> \thttp://www.isthisthingon.org/nisca/postgres.html\n>\n> Not only does it show the problems he had with PostgreSQL, he uses our\n> bug list as an example of how PostgreSQL isn't advancing or interested\n> in fixing bug.\n>\n> We better remove that web page soon:\n>\n> \thttp://www.ca.postgresql.org/bugs/bugs.php?2\n\nI removed the link to the page a few days ago. I guess I should disable\nit as well. Woulda been a whole lot easier if the database was just\nupdated periodically.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Aug 2001 08:22:39 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "On Tue, 21 Aug 2001, Bruce Momjian wrote:\n\n> If anyone was concerned about our bug database being visible and giving\n> the impression we don't fix any bugs, see this URL:\n>\n> \thttp://www.isthisthingon.org/nisca/postgres.html\n>\n> Not only does it show the problems he had with PostgreSQL, he uses our\n> bug list as an example of how PostgreSQL isn't advancing or interested\n> in fixing bug.\n>\n> We better remove that web page soon:\n>\n> \thttp://www.ca.postgresql.org/bugs/bugs.php?2\n>\n>\n\nOk the functionality as well as the menu item are gone. You do realize\nit's going to give the impression that we're trying to hide something,\ndon't you?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Aug 2001 08:29:22 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "On Tue, 21 Aug 2001, Philip Warner wrote:\n\n> >\n> >We better remove that web page soon:\n> >\n> >\thttp://www.ca.postgresql.org/bugs/bugs.php?2\n> >\n>\n> Do we have any pages to alter the status of bugs, or assign them? There are\n> a number of bugs in the list that I know are fixed.\n\nYes but noone was interested in it. It's still there but you're really\nthe first to show interest in about a year.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Aug 2001 08:32:17 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "At 08:32 21/08/01 -0400, Vince Vielhaber wrote:\n>\n>Yes but noone was interested in it. It's still there but you're really\n>the first to show interest in about a year.\n>\n\nThat's good (and depressing); where are they?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 21 Aug 2001 22:39:57 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "> On Tue, 21 Aug 2001, Bruce Momjian wrote:\n> \n> > If anyone was concerned about our bug database being visible and giving\n> > the impression we don't fix any bugs, see this URL:\n> >\n> > \thttp://www.isthisthingon.org/nisca/postgres.html\n> >\n> > Not only does it show the problems he had with PostgreSQL, he uses our\n> > bug list as an example of how PostgreSQL isn't advancing or interested\n> > in fixing bug.\n> >\n> > We better remove that web page soon:\n> >\n> > \thttp://www.ca.postgresql.org/bugs/bugs.php?2\n> >\n> >\n> \n> Ok the functionality as well as the menu item are gone. You do realize\n> it's going to give the impression that we're trying to hide something,\n> don't you?\n\nUh, what choices do we have? Do we want to update that database, seeing\nas only a small percentage of bug reports come in through that\ninterface?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Aug 2001 08:48:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "At 08:22 21/08/01 -0400, Vince Vielhaber wrote:\n>\n>I removed the link to the page a few days ago. I guess I should disable\n>it as well. Woulda been a whole lot easier if the database was just\n>updated periodically.\n>\n\nI don't think this is a good solution. We really do need a list of bugs. We\nprobably need to list status and the releases they apply to.\n\nI don't think anybody but the most naieve (or biased) users expect software\nto be bug free, and the number of bugs grows with the complexity of the\ncomponents. The fact we have a lot of bugs is to be expected. The fact that\nwe don't mark them as fixed is just sloppy.\n\nPlease reinstate the page, and allow some facility to edit them. I will try\nto work through them *slowly* to verify they are reproducible/not\nreproducible in 7.1.3 and in the current CVS, then mark them as fixed in\nthe appropriate release. Hopefully other people will do the same with bugs\nthey know about.\n\nDoes this seem reasonable?\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 21 Aug 2001 22:48:15 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "On Tue, 21 Aug 2001, Bruce Momjian wrote:\n\n> > On Tue, 21 Aug 2001, Bruce Momjian wrote:\n> >\n> > > If anyone was concerned about our bug database being visible and giving\n> > > the impression we don't fix any bugs, see this URL:\n> > >\n> > > \thttp://www.isthisthingon.org/nisca/postgres.html\n> > >\n> > > Not only does it show the problems he had with PostgreSQL, he uses our\n> > > bug list as an example of how PostgreSQL isn't advancing or interested\n> > > in fixing bugs.\n> > >\n> > > We better remove that web page soon:\n> > >\n> > > \thttp://www.ca.postgresql.org/bugs/bugs.php?2\n> > >\n> > >\n> >\n> > Ok the functionality as well as the menu item are gone. You do realize\n> > it's going to give the impression that we're trying to hide something,\n> > don't you?\n>\n> Uh, what choices do we have? Do we want to update that database, seeing\n> as only a small percentage of bug reports come in through that\n> interface?\n\nThere are over 400 in the database. If that's a small percentage then\nso be it, but it's still over 400 bugs that appear to have been ignored.\nHaving a place to look up possible problems and seeing if there was a\nsolution seems to be a plus to me, but if you don't want it it doesn't\nbother me either way. The lookups are currently disabled, ball's in\nyour court.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Aug 2001 09:00:26 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "We could install the Postgres version of Bugzilla.\nYes, there's a version that runs on Postgres rather than MySQL.\nThat way we don't have to maintain the bug system.\n\n> Ok the functionality as well as the menu item are gone. You do realize\n> it's going to give the impression that we're trying to hide something,\n> don't you?\n>\n> Vince.\n\nCheers,\n\nColin\n\n\n",
"msg_date": "Tue, 21 Aug 2001 15:05:13 +0200",
"msg_from": "\"Colin 't Hart\" <cthart@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "> > > Ok the functionality as well as the menu item are gone. You do realize\n> > > it's going to give the impression that we're trying to hide something,\n> > > don't you?\n> >\n> > Uh, what choices do we have? Do we want to update that database, seeing\n> > as only a small percentage of bug reports come in through that\n> > interface?\n> \n> There are over 400 in the database. If that's a small percentage then\n> so be it, but it's still over 400 bugs that appear to have been ignored.\n> Having a place to look up possible problems and seeing if there was a\n> solution seems to be a plus to me, but if you don't want it it doesn't\n> bother me either way. The lookups are currently disabled, ball's in\n> your court.\n\nIt's up to the group to decide. If we have a database of bugs, I think\nit has to be complete. I think a partial list is worse than no list at\nall.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Aug 2001 09:15:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "On Tue, 21 Aug 2001, Colin 't Hart wrote:\n\n> We could install the Postgres version of Bugzilla.\n> Yes, there's a version that runs on Postgres rather than MySQL.\n> That way we don't have to maintain the bug system.\n\nAnd how does it know when bugs are fixed?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Aug 2001 09:46:22 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Link to bug webpage"
},
{
"msg_contents": ">\n>It's up to the group to decide. If we have a database of bugs, I think\n>it has to be complete. I think a partial list is worse than no list at\n>all.\n>\n\nI disagree. Unless you are omniscient, we will only ever have a partial list. \n\nPerhaps more importantly, the more common ones will be in the list, because\nthe more often it's reported, the more likely someone will use the bug\ntool. If we develop a culture that says 'if it's on the bug list, it will\nget looked at', then more people will report bugs via the correct channels\netc.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 21 Aug 2001 23:49:05 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "> >\n> >It's up to the group to decide. If we have a database of bugs, I think\n> >it has to be complete. I think a partial list is worse than no list at\n> >all.\n> >\n> \n> I disagree. Unless you are omniscient, we will only ever have a partial list. \n> \n> Perhaps more importantly, the more common ones will be in the list, because\n> the more often it's reported, the more likely someone will use the bug\n> tool. If we develop a culture that says 'if it's on the bug list, it will\n> get looked at', then more people will report bugs via the correct channels\n> etc.\n\nThat is the real question. Do we want to rely more heavily on a bug\ndatabase rather than the email lists? I haven't heard many say they\nwant that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Aug 2001 09:51:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "On Tue, 21 Aug 2001, Bruce Momjian wrote:\n\n> > >\n> > >It's up to the group to decide. If we have a database of bugs, I think\n> > >it has to be complete. I think a partial list is worse than no list at\n> > >all.\n> > >\n> >\n> > I disagree. Unless you are omniscient, we will only ever have a partial list.\n> >\n> > Perhaps more importantly, the more common ones will be in the list, because\n> > the more often it's reported, the more likely someone will use the bug\n> > tool. If we develop a culture that says 'if it's on the bug list, it will\n> > get looked at', then more people will report bugs via the correct channels\n> > etc.\n>\n> That is the real question. Do we want to rely more heavily on a bug\n> database rather than the email lists? I haven't heard many say they\n> want that.\n\nThe database keeps track of it. When someone uses the bugtool to\nreport a bug it's mailed to the bugs list.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Aug 2001 10:02:01 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "\nPhilip Warner wrote:\n> I don't think this is a good solution. We really do need a list of bugs. We\n> probably need to list status and the releases they apply to.\n\nBugzilla can do this -- it has the concept of a Milestone and a Version.\n\n> I don't think anybody but the most naive (or biased) users expect software\n> to be bug free, and the number of bugs grows with the complexity of the\n> components. The fact we have a lot of bugs is to be expected. The fact that\n> we don't mark them as fixed is just sloppy.\n\nBugzilla makes it fairly painless to mark a bug as fixed.\n\n> Please reinstate the page, and allow some facility to edit them. I will try\n> to work through them *slowly* to verify they are reproducible/not\n> reproducible in 7.1.3 and in the current CVS, then mark them as fixed in\n> the appropriate release. Hopefully other people will do the same with bugs\n> they know about.\n>\n> Does this seem reasonable?\n\nIf we install Bugzilla (running on Postgres, not MySQL, obviously) we save\nourselves the hassle of maintaining the bug system, and we can showcase\nthat Postgres *can* be used to back a web-based system :-)\n\nCheers,\n\nColin\n\n\n",
"msg_date": "Tue, 21 Aug 2001 16:03:15 +0200",
"msg_from": "\"Colin 't Hart\" <cthart@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "> > That is the real question. Do we want to rely more heavily on a bug\n> > database rather than the email lists? I haven't heard many say they\n> > want that.\n> \n> The database keeps track of it. When someone uses the bugtool to\n> report a bug it's mailed to the bugs list.\n\nYes, but we have to add items that don't come in through the database,\nand mark them as done/duplicates if we want it to be useful.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Aug 2001 10:03:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "On Tue, 21 Aug 2001, Bruce Momjian wrote:\n\n> > > That is the real question. Do we want to rely more heavily on a bug\n> > > database rather than the email lists? I haven't heard many say they\n> > > want that.\n> >\n> > The database keeps track of it. When someone uses the bugtool to\n> > report a bug it's mailed to the bugs list.\n>\n> Yes, but we have to add items that don't come in through the database,\n> and mark them as done/duplicates if we want it to be useful.\n\nNot necessarily. If someone discovers one that's not in the database\nthey'll add it. If it's already fixed it'll get closed out but will\nstill be in the database. It's not intended to be a todo/isdone list\nor a development history reference. We have a TODO list and CVS for\nthat stuff.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Aug 2001 10:10:08 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "> > Yes, but we have to add items that don't come in through the database,\n> > and mark them as done/duplicates if we want it to be useful.\n> \n> Not necessarily. If someone discovers one that's not in the database\n> they'll add it. If it's already fixed it'll get closed out but will\n> still be in the database. It's not intended to be a todo/isdone list\n> or a development history reference. We have a TODO list and CVS for\n> that stuff.\n\nHow do you communicate that to people looking at the content? Do you\nput in big letters at the top, \"This list is not complete.\" The fact an\nitems is missing from the list (new bug) is just as important as an item\nappearing on the list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Aug 2001 10:14:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Please reinstate the page, and allow some facility to edit them. I will try\n> to work through them *slowly* to verify they are reproducible/not\n> reproducible in 7.1.3 and in the current CVS, then mark them as fixed in\n> the appropriate release. Hopefully other people will do the same with bugs\n> they know about.\n\nI think you are wasting your time, unless you can get the community as a\nwhole to buy into the notion that it's a profitable use of our time to\ntry to maintain this bug database.\n\nPersonally I won't spend any time on it, because it has exactly the\nsame flaws that made our previous experiment in bug-tracking go down in\nflames: it's incomplete (doesn't track bugs reported via the mailing\nlists) and at the same time too complete (tracks everything sent in\nvia that web form, which includes a lot of non-bugs).\n\nVince, if I were you I'd just make the page point to the pgsql-bugs\narchives (http://www.ca.postgresql.org/mhonarc/pgsql-bugs/), which\nat least gives people the right impression about activity.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Aug 2001 10:31:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage "
},
{
"msg_contents": "----- Original Message ----- \nFrom: Bruce Momjian <pgman@candle.pha.pa.us>\nSent: Tuesday, August 21, 2001 8:48 AM\n\n\n> > On Tue, 21 Aug 2001, Bruce Momjian wrote:\n> > \n> > >\n> > > Not only does it show the problems he had with PostgreSQL, he uses our\n> > > bug list as an example of how PostgreSQL isn't advancing or interested\n> > > in fixing bugs.\n> > >\n> > > We better remove that web page soon:\n> > >\n> > > http://www.ca.postgresql.org/bugs/bugs.php?2\n> > >\n> > \n> > Ok the functionality as well as the menu item are gone. You do realize\n> > it's going to give the impression that we're trying to hide something,\n> > don't you?\n> \n> Uh, what choices do we have? Do we want to update that database, seeing\n> as only a small percentage of bug reports come in through that\n> interface?\n\nMaybe a better solution for the short run would be to\nreturn the page to where it was, and put links to the pgsql-bugs and \npgsql-hackers archives with some sort of explanatory note saying that \"this is\na *complete* (it must be complete of course) list of bugs, which are \nbeing extensively discussed in these lists and fixed. Please, visit/search\nthese mail archives for the most up-to-date information blah blah blah\" One can\ninvent some appropriate wording, I guess. But this at least will show people\nthat there's work actually going on, and on a daily basis. It would also be a good idea\nto have a \"Last updated\" time stamp on the page too... so it doesn't seem\nto be forgotten for ages..\n\nS.\n\n\n",
"msg_date": "Tue, 21 Aug 2001 10:37:48 -0400",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "How about we trial it, but with the understanding that bugs we fix will\nbe marked as such?\n\nAfter all, every bug is given an ID, so whoever fixes the bug with that\nID should also mark it off.\n\nLooking at the present situation, it seems we began a good idea, but\nnever really followed through with it. Maybe it's time to.\n\nPhilip seems to be volunteering for taking care of what's presently in\nthere, can we also ask those [HACKERS] who commit fixes to mark them\noff?\n\n+ Justin\n\n\nBruce Momjian wrote:\n> \n> > >\n> > >It's up to the group to decide. If we have a database of bugs, I think\n> > >it has to be complete. I think a partial list is worse than no list at\n> > >all.\n> > >\n> >\n> > I disagree. Unless you are omniscient, we will only ever have a partial list.\n> >\n> > Perhaps more importantly, the more common ones will be in the list, because\n> > the more often it's reported, the more likely someone will use the bug\n> > tool. If we develop a culture that says 'if it's on the bug list, it will\n> > get looked at', then more people will report bugs via the correct channels\n> > etc.\n> \n> That is the real question. Do we want to rely more heavily on a bug\n> database rather than the email lists? I haven't heard many say they\n> want that.\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Wed, 22 Aug 2001 00:43:33 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "> How about we trial it, but with the understanding that bugs we fix will\n> be marked as such?\n> \n> After all, every bug is given an ID, so whomever fixes the bug with that\n> ID should also mark it off.\n> \n> Looking at the present situation, it seems we began a good idea, but\n> never really followed through with it. Maybe it's time to.\n> \n> Philip seems to be volunteering for taking care of what's presently in\n> there, can we also ask those [HACKERS] who commit fixes to mark them\n> off?\n\nCan someone point me to a bug that is _not_ on the TODO list? If not,\nwhat does a complete bug database do for us except list reported bugs\nand possible workarounds.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Aug 2001 10:54:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "On Tue, 21 Aug 2001, Bruce Momjian wrote:\n\n> > > Yes, but we have to add items that don't come in through the database,\n> > > and mark them as done/duplicates if we want it to be useful.\n> >\n> > Not necessarily. If someone discovers one that's not in the database\n> > they'll add it. If it's already fixed it'll get closed out but will\n> > still be in the database. It's not intended to be a todo/isdone list\n> > or a development history reference. We have a TODO list and CVS for\n> > that stuff.\n>\n> How do you communicate that to people looking at the content?\n\nActually right now I'm just trying to communicate that to you.\n\n\n> Do you\n> put in big letters at the top, \"This list is not complete.\" The fact an\n> item is missing from the list (new bug) is just as important as an item\n> appearing on the list.\n\nIn any situation the list not being complete should be more than obvious.\nThe important part here that seems to be slipping away is that no one is\nupdating it when things are fixed (sans Philip).\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Aug 2001 10:57:34 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "MySQL has to first add some features in order to have some bugs, don't they?\n:-)\n\nSome people crack me up in their opinions.. If it took him 6 hours to figure\nout \"int8\" then I'm not really interested in anything else he has to say...\nLord...\n\n\n-Mitch\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"PostgreSQL-development\" <pgsql-hackers@postgresql.org>\nSent: Tuesday, August 21, 2001 7:05 AM\nSubject: [HACKERS] Link to bug webpage\n\n\n> If anyone was concerned about our bug database being visible and giving\n> the impression we don't fix any bugs, see this URL:\n>\n> http://www.isthisthingon.org/nisca/postgres.html\n>\n> Not only does it show the problems he had with PostgreSQL, he uses our\n> bug list as an example of how PostgreSQL isn't advancing or interested\n> in fixing bugs.\n>\n> We better remove that web page soon:\n>\n> http://www.ca.postgresql.org/bugs/bugs.php?2\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Tue, 21 Aug 2001 11:06:10 -0400",
"msg_from": "\"Mitch Vincent\" <mvincent@cablespeed.com>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "On Tue, 21 Aug 2001, Bruce Momjian wrote:\n\n> > How about we trial it, but with the understanding that bugs we fix will\n> > be marked as such?\n> >\n> > After all, every bug is given an ID, so whoever fixes the bug with that\n> > ID should also mark it off.\n> >\n> > Looking at the present situation, it seems we began a good idea, but\n> > never really followed through with it. Maybe it's time to.\n> >\n> > Philip seems to be volunteering for taking care of what's presently in\n> > there, can we also ask those [HACKERS] who commit fixes to mark them\n> > off?\n>\n> Can someone point me to a bug that is _not_ on the TODO list? If not,\n> what does a complete bug database do for us except list reported bugs\n> and possible workarounds.\n\nDo you actually expect someone to go thru the 400+ items in the database\nand compare them to the TODO list? Seems to me that's something the\nmaintainer of the TODO list would be doing. Can you point me to the form\nthat gets something on the TODO list that the average user can use? Can\nyou guarantee every bug will end up on the TODO list? Can you point me\nto the place on the TODO list where a user can look to see if a bug has\nbeen fixed or even reported?\n\nYou're making more of an issue with all of this than there is. The TODO\nlist has a purpose, as does CVS, as do the mailing lists, as does the\nregression database, as does the bugs database, as do the interactive\ndocs, ...\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Aug 2001 11:09:01 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "On Tue, Aug 21, 2001 at 09:51:29AM -0400, Bruce Momjian wrote:\n> > >\n> > >It's up to the group to decide. If we have a database of bugs, I think\n> > >it has to be complete. I think a partial list is worse than no list at\n> > >all.\n> > >\n> > \n> > I disagree. Unless you are omniscient, we will only ever have a partial list. \n> > \n> > Perhaps more importantly, the more common ones will be in the list, because\n> > the more often it's reported, the more likely someone will use the bug\n> > tool. If we develop a culture that says 'if it's on the bug list, it will\n> > get looked at', then more people will report bugs via the correct channels\n> > etc.\n> \n> That is the real question. Do we want to rely more heavily on a bug\n> database rather than the email lists? I haven't heard many say they\n> want that.\n> \n\nI think this is related to the discussions about what to do with\nextensions, etc. The project is outgrowing its infrastructure. A bug\ndatabase is one of those things that is hard to justify and maintain\nwhen all the developers can keep up with all the mailing lists, but\nthat hasn't been true for a while, now. The _need_ for a bug database\nwas recognized, but there wasn't enough interest for someone to take on\nthe maintenance.\n\nAs new developers have come in and taken over things like the JDBC\ndriver, and the ODBC driver, no one has the entire state of the code in\ntheir head anymore. And, I think the project has reached critical mass:\nattracting new developers is not a big problem - other projects are\ntaking PostgreSQL up as their base, without any selling from the core\ndevelopers. Perhaps trying again (either with the existing system,\nor the PostgreSQL based Bugzilla) is the right answer. Personally,\nI think a bug database that coordinates with the email archives is the\nbest of both worlds.\nThat way, discussion about a bug, and its state\nall get archived in the same location, without a lot of extra bother on\nthe parts of the developers, just make sure to CC: the bug system.\n\nCall for volunteers who wish to be involved in the maintenance, open up\nthe system, and put it on the list of TODO for release: 'check status\nand usefulness of current bug reporting system' so it gets looked at\nagain at regular, appropriate intervals.\n\nRoss\n",
"msg_date": "Tue, 21 Aug 2001 10:09:23 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n>> That is the real question. Do we want to rely more heavily on a bug\n>> database rather than the email lists? I haven't heard many say they\n>> want that.\n\n> The database keeps track of it. When someone uses the bugtool to\n> report a bug it's mailed to the bugs list.\n\nBut the problem is the lack of feedback the other way. In my mind,\nand I think in the minds of the rest of the developers, the pgsql-bugs\nlist is where bug discussions happen. That's not something I have any\ninterest in trying to change. Thus, I see little point in trying to\nmaintain a bug database that's separate from the pgsql-bugs archives.\n\nI still think a link to a searchable pgsql-bugs archive could replace\nthis bug database very easily.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Aug 2001 11:10:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage "
},
{
"msg_contents": "> > Can someone point me to a bug that is _not_ on the TODO list? If not,\n> > what does a complete bug database do for us except list reported bugs\n> > and possible workarounds.\n> \n> Do you actually expect someone to go thru the 400+ items in the database\n> and compare them to the TODO list? Seems to me that's something the\n> maintainer of the TODO list would be doing. Can you point me to the form\n> that gets something on the TODO list that the average user can use? Can\n> you guarantee every bug will end up on the TODO list? Can you point me\n> to the place on the TODO list where a user can look to see if a bug has\n> been fixed or even reported?\n\nI was just asking. If I have been missing stuff, I can see more value\nto a bug database.\n\n> You're making more of an issue with all of this than there is. The TODO\n> list has a purpose, as does CVS, as do the mailing lists, as does the\n> regression database, as does the bugs database, as do the interactive\n> docs, ...\n\nOK, what value does a bug database have over a TODO list?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Aug 2001 11:11:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "On Tue, 21 Aug 2001, Bruce Momjian wrote:\n\n> > > Yes, but we have to add items that don't come in through the database,\n> > > and mark them as done/duplicates if we want it to be useful.\n> >\n> > Not necessarily. If someone discovers one that's not in the database\n> > they'll add it. If it's already fixed it'll get closed out but will\n> > still be in the database. It's not intended to be a todo/isdone list\n> > or a development history reference. We have a TODO list and CVS for\n> > that stuff.\n>\n> How do you communicate that to people looking at the content? Do you\n> put in big letters at the top, \"This list is not complete.\" The fact an\n> item is missing from the list (new bug) is just as important as an item\n> appearing on the list.\n\nHuh? A list of bugs is only as complete as those submitting to it ... it's\nno more, or less, complete than a mailing list *shrug*\n\n\n",
"msg_date": "Tue, 21 Aug 2001 11:13:13 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "> On Tue, 21 Aug 2001, Bruce Momjian wrote:\n> \n> > > > Yes, but we have to add items that don't come in through the database,\n> > > > and mark them as done/duplicates if we want it to be useful.\n> > >\n> > > Not necessarily. If someone discovers one that's not in the database\n> > > they'll add it. If it's already fixed it'll get closed out but will\n> > > still be in the database. It's not intended to be a todo/isdone list\n> > > or a development history reference. We have a TODO list and CVS for\n> > > that stuff.\n> >\n> > How do you communicate that to people looking at the content? Do you\n> > put in big letters at the top, \"This list is not complete.\" The fact an\n> > item is missing from the list (new bug) is just as important as an item\n> > appearing on the list.\n> \n> Huh? A list of bugs is only as complete as those submitting to it ... it's\n> no more, or less, complete than a mailing list *shrug*\n\nNot really. The bug database only gets submissions from the web form,\nas far as I know. It doesn't get direct postings to the bugs list, and\neven then, lots of bugs aren't reported on the bugs list. If we open it\nup to all lists, it becomes identical to our mailing list archives.\n\nThe comment above was saying that showing some bugs is better than nothing,\nmeaning do you show the list even if you know it isn't being maintained\nor updated. I think that is worse than not showing it at all.\n\nNow, if it was updated with all bugs as they come in, and updated as\ncompleted, that would be nice, but it is a lot of work and I am not sure\nif it is worth doing.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Aug 2001 11:17:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "A web-based interface allows people to submit bug reports they might\notherwise not be able to report. Not everyone is able/willing to\nsign up to a mailing list, nor have newsfeed access.\n\nThe one we have (had) allows the reporting, but has the flaw of not\nshowing when something has been done about a bug. That could be fixed\nby either not keeping a history, or marking their statuses as closed\nwhen done.\n\nAt a minimum, I reckon we should have the web interface there, even if\nit just pipes the report to the mailing list and doesn't keep a history\nat all.\n\n+ Justin\n\nBruce Momjian wrote:\n> \n> > > Yes, but we have to add items that don't come in through the database,\n> > > and mark them as done/duplicates if we want it to be useful.\n> >\n> > Not necessarily. If someone discovers one that's not in the database\n> > they'll add it. If it's already fixed it'll get closed out but will\n> > still be in the database. It's not intended to be a todo/isdone list\n> > or a development history reference. We have a TODO list and CVS for\n> > that stuff.\n> \n> How do you communicate that to people looking at the content? Do you\n> put in big letters at the top, \"This list is not complete.\" The fact an\n> items is missing from the list (new bug) is just as important as an item\n> appearing on the list.\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Wed, 22 Aug 2001 01:20:22 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "On Tue, 21 Aug 2001, Bruce Momjian wrote:\n\n> > > Can someone point me to a bug that is _not_ on the TODO list? If not,\n> > > what does a complete bug database do for us except list reported bugs\n> > > and possible workarounds.\n> >\n> > Do you actually expect someone to go thru the 400+ items in the database\n> > and compare them to the TODO list? Seems to me that's something the\n> > maintainer of the TODO list would be doing. Can you point me to the form\n> > that gets something on the TODO list that the average user can use? Can\n> > you guarantee every bug will end up on the TODO list? Can you point me\n> > to the place on the TODO list when a user can look at to see if a bug has\n> > been fixed or even reported?\n>\n> I was just asking. If I have been missing stuff, I can see more value\n> to a bug database.\n>\n> > You're making more of an issue with all of this than there is. The TODO\n> > list has a purpose, as does CVS, as do the mailing lists, as does the\n> > regression database, as does the bugs database, as do the interactive\n> > docs, ...\n>\n> OK, what value does a bug database have over a TODO list?\n\nHistory. Searchability. Doesn't include features to be added.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Aug 2001 11:20:36 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "> > > Do you actually expect someone to go thru the 400+ items in the database\n> > > and compare them to the TODO list? Seems to me that's something the\n> > > maintainer of the TODO list would be doing. Can you point me to the form\n> > > that gets something on the TODO list that the average user can use? Can\n> > > you guarantee every bug will end up on the TODO list? Can you point me\n> > > to the place on the TODO list when a user can look at to see if a bug has\n> > > been fixed or even reported?\n> >\n> > I was just asking. If I have been missing stuff, I can see more value\n> > to a bug database.\n> >\n> > > You're making more of an issue with all of this than there is. The TODO\n> > > list has a purpose, as does CVS, as do the mailing lists, as does the\n> > > regression database, as does the bugs database, as do the interactive\n> > > docs, ...\n> >\n> > OK, what value does a bug database have over a TODO list?\n> \n> History. Searchability. Doesn't include features to be addeed.\n\nYea, they would be nice.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Aug 2001 11:22:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "On Tue, 21 Aug 2001, Tom Lane wrote:\n\n> Vince Vielhaber <vev@michvhf.com> writes:\n> >> That is the real question. Do we want to rely more heavily on a bug\n> >> database rather than the email lists? I haven't heard many say they\n> >> want that.\n>\n> > The database keeps track of it. When someone uses the bugtool to\n> > report a bug it's mailed to the bugs list.\n>\n> But the problem is the lack of feedback the other way. In my mind,\n> and I think in the minds of the rest of the developers, the pgsql-bugs\n> list is where bug discussions happen. That's not something I have any\n> interest in trying to change. Thus, I see little point in trying to\n> maintain a bug database that's separate from the pgsql-bugs archives.\n>\n> I still think a link to a searchable pgsql-bugs archive could replace\n> this bug database very easily.\n\nSome of the discussions could go on for weeks. Are you saying that\nwading thru a few hundred posts to find out what a solution was is\nbetter than a quick searchable summary?\n\nPersonally I don't care if the bugs form, database, or any of that\ngets used. But if anyone expects to get meaningful bug reports you\nbetter plan on coming up with something a lot easier than a mailing\nlist you have to subscribe to (or wait till approved) for your info.\nYou need to ask yourself how many bugs wouldn't be reported if it\nwas harder to report them.\n\nPostgreSQL is gaining in the number of users; it's gonna get a whole\nlot worse before it gets any better.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Aug 2001 11:27:31 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage "
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> Some of the discussions could go on for weeks. Are you saying that\n> wading thru a few hundred posts to find out what a solution was is\n> better than a quick searchable summary?\n\nGiven a threaded index, you aren't wading through \"a few hundred posts\".\nAgreed, a nice canned database entry might be easier to look at, but\nwho's going to expend the time to maintain the database? Unless someone\nactively takes responsibility for keeping the DB up to date, it'll be\njunk. So far I heard Philip say he'd be willing to check over some\nfraction of the existing entries, but I don't hear anyone wanting to\ntake it on as a long-term commitment.\n\n> You need to ask yourself how many bugs wouldn't be reported if it\n> was harder to report them.\n\nWho said anything about making it harder to report them? The webform\nfeeding into pgsql-bugs is just fine with me. Seems to me the\ndiscussion here is about how we keep track of the results of pgsql-bugs\nactivity.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Aug 2001 11:33:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage "
},
{
"msg_contents": "> Vince Vielhaber <vev@michvhf.com> writes:\n> > Some of the discussions could go on for weeks. Are you saying that\n> > wading thru a few hundred posts to find out what a solution was is\n> > better than a quick searchable summary?\n> \n> Given a threaded index, you aren't wading through \"a few hundred posts\".\n> Agreed, a nice canned database entry might be easier to look at, but\n> who's going to expend the time to maintain the database? Unless someone\n> actively takes responsibility for keeping the DB up to date, it'll be\n> junk. So far I heard Philip say he'd be willing to check over some\n> fraction of the existing entries, but I don't hear anyone wanting to\n> take it on as a long-term commitment.\n\nWe could try going the other way, attaching URLs to the TODO items so\npeople can get more information about an existing bug. We already have\nthat for TODO.detail but we could certainly expand on that.\n\nIn fact, it may be interesting to load the TODO list into a database and\nstart attaching emails to individual items.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Aug 2001 11:36:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> We could try going the other way, attaching URL's to the TODO items so\n> people can get more information about an existing bug.\n\nThat might be worth doing, but I think it's mostly orthogonal to the\nquestion of a bug database. The set of problems that are (still) on the\nTODO list is just a small fraction of the set of bugs that someone might\nlook in a bug database for --- anything that's already fixed in current\nsources is probably not going to be on TODO, if it ever got there at\nall (which easily-fixed problems do not).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Aug 2001 11:41:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > We could try going the other way, attaching URL's to the TODO items so\n> > people can get more information about an existing bug.\n> \n> That might be worth doing, but I think it's mostly orthogonal to the\n> question of a bug database. The set of problems that are (still) on the\n> TODO list is just a small fraction of the set of bugs that someone might\n> look in a bug database for --- anything that's already fixed in current\n> sources is probably not going to be on TODO, if it ever got there at\n> all (which easily-fixed problems do not).\n\nThat is a good point. I remove items after we make a release, and we\ndon't record quickly fixed items. I wonder if I should keep the HISTORY\nfile updated more frequently so people can see what has been fixed from\nthe previous release.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Aug 2001 11:44:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "On Tuesday 21 August 2001 11:11, Bruce Momjian wrote:\n> OK, what value does a bug database have over a TODO list?\n\nThe TODO list isn't just a list of bugs that need fixing.\n\nA bug database is just that -- a list of bugs in existing features. While \nRequests for Enhancements certainly can be accommodated through a bug database, \nthat isn't its primary function.\n\nBugzilla, while clunky for some things, does have some great benefits, \nincluding the ability to discuss that bug in context; attach patches, logs, \nstack traces, or whatnot; mark the bug's status; assign the bug to someone; \nas well as many other features.\n\nThe TODO list as well as the mailing lists, while both great for what they \ndo, make it all too easy to lose a bug's context, being that both lists are \nflat-files trying to track relational things. :-)\n\nRed Hat makes mission-critical use of bugzilla running on Oracle. See \nbugzilla.redhat.com. And ask the Red Hat people on these lists their \nopinions of bugzilla.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 21 Aug 2001 11:54:51 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "On Tue, 21 Aug 2001, Lamar Owen wrote:\n\n> On Tuesday 21 August 2001 11:11, Bruce Momjian wrote:\n> > OK, what value does a bug database have over a TODO list?\n>\n> The TODO list isn't just a list of bugs that need fixing.\n>\n> A bug database is just that -- a list of bugs in existing features. While\n> Requests of Enhancements certainly can be accomodated through a bug database,\n> that isn't its primary function.\n>\n> Bugzilla, while clunky for some things, does have some great benefits,\n> including the ability to discuss that bug in context; attach patches, logs,\n> stack traces, or whatnot; mark the bug's status; assign the bug to someone;\n> as well as many other features.\n>\n> The TODO list as well as the mailing lists, while both great for what they\n> do, make it all too easy to loose a bug's context, being that both lists are\n> flat-files trying to track relational things. :-)\n>\n> Red Hat makes mission-critical use of bugzilla running on Oracle. See\n> bugzilla.redhat.com. And ask the Red Hat people on these lists their\n> opinions of bugzilla.\n\nWhat who thinks of what has actually become irrelevant. The following\nis clear:\n\n\to No tool will replace the mailing lists\n\to The mailing lists are where discussion will be held\n\to Many/most maintainers have no desire to update bug reports\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Aug 2001 11:59:38 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, what value does a bug database have over a TODO list?\n\nA TODO list is forward-looking. Many of the entries in a bug database\nwould be backward-looking (already fixed). We shouldn't try to make\neither one serve the purpose of the other.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Aug 2001 12:02:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage "
},
{
"msg_contents": "Bruce Momjian writes:\n\n> OK, what value does a bug database have over a TODO list?\n\nThe former is a database, the latter is a flat-text file. The former is\nmulti-user, the latter is single-user. You figure out the rest. ;-)\n\nSeriously, IMHO a real bug database would be useful. A number of\nsolutions for this are available, including the one Vince developed. But\nI will refuse to participate in a bug database that ordinary users have\nwrite access to. The sort of stuff that comes in over the bug list and\nthrough whatever else just isn't filtered enough.\n\nBut in an organization that claims to be professional, when users report\nan actual problem they expect to be able to track this problem. I keep\ntrack of the problems that seem important to me myself, because I have no\nother place to put this information.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 21 Aug 2001 18:08:02 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "Justin Clift <justin@postgresql.org> writes:\n> After all, every bug is given an ID, so whomever fixes the bug with that\n> ID should also mark it off.\n\nOh? I've never seen a bug ID. Certainly the traffic in pgsql-bugs\ndoesn't show any such thing.\n\nThis isn't going to happen unless there's some fairly convenient\nmechanism to make it happen *in the context of responding to email\non the pgsql-bugs list*. At the very least, the webform should add\na link to an appropriate status-update page to the mail that it forwards\nto the list.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Aug 2001 12:46:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage "
},
{
"msg_contents": "> Justin Clift <justin@postgresql.org> writes:\n> > After all, every bug is given an ID, so whomever fixes the bug with that\n> > ID should also mark it off.\n> \n> Oh? I've never seen a bug ID. Certainly the traffic in pgsql-bugs\n> doesn't show any such thing.\n> \n> This isn't going to happen unless there's some fairly convenient\n> mechanism to make it happen *in the context of responding to email\n> on the pgsql-bugs list*. At the very least, the webform should add\n> a link to an appropriate status-update page to the mail that it forwards\n> to the list.\n\nThat would be pretty cool, using the mailing list archives as an\n_answer_ to the bug report.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Aug 2001 12:47:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "fwiw, the PHP group uses a pretty simple PHP/MySQL-based bug tracking \nsystem that consists of something like 3 or 4 PHP pages, two tables in \nMySQL and has some pretty decent features. It wouldn't be much of a problem \nto port it to PostgreSQL (which I might be doing soon, anyways, \n'cause I want a bug database myself). \n\nCheck it out: http://www.php.net/bugs\n\nJ\n\nmlw wrote:\n\n> Has anyone thought of using Bugzilla? (It is MySQL based, of course) but\n> it might answer the bug database issues. (If you guys want a bug database)\n> \n> RedHat has a version which can use Oracle, but it seems there is a file:\n> ftp://people.redhat.com/dkl/pgzilla-latest.tar.gz that my be interesting.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n\n",
"msg_date": "Tue, 21 Aug 2001 19:21:40 +0228",
"msg_from": "J Smith <dark_panda@hushmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage / Bugzilla?"
},
{
"msg_contents": "On Tuesday 21 August 2001 11:59, Vince Vielhaber wrote:\n> On Tue, 21 Aug 2001, Lamar Owen wrote:\n> > Red Hat makes mission-critical use of bugzilla running on Oracle. See\n> > bugzilla.redhat.com. And ask the Red Hat people on these lists their\n> > opinions of bugzilla.\n\n> What who thinks of what has actually become irrelevant. \n\nNot really. I like to see what works for other projects before passing \njudgment -- and bugzilla and bug trackers of like bent are working very well \nfor other projects. Red Hat is just one of the largest such 'projects.'\n\n>The following\n> is clear:\n\n> \to No tool will replace the mailing lists\n> \to The mailing lists are where discussion will be held\n> \to Many/most maintainers have no desire to update bug reports\n\nOk, having been involved in a project that has both an active mailing list \nAND a bug tracker, I can comment on this.\n\nI am not interested in finding a mailing list _replacement_. I am, however, \ninterested in finding an augmentative solution that does well what mailing \nlists do not do well.\n\nMailing lists do many things well, but they do not do the business of \nbug tracking well. Particularly as the size and scope of the project go up.\n\nBug trackers do not do the thing of discussion well. They do, however, do \nthe thing of bug status reporting _very_ well. They also do the thing of \nrelating bugs to OS versions, releases, and libraries much easier than with a \nlist. Reference the '7.1.2-3' patch bug sent last week. Much was said that \nwas not at all related to the real problem -- Windows 98.\n\nWhile the new fts mailing list search is VERY nice, mailing list bug reports \nand the discussion that follows may or may not help someone down the road. \nIOW, just how useful are our [BUGS] archives from two years ago? Just how \nuseful are any of our archives, for that matter? I have found them useful on \noccasion -- but those occasions are rather rare and usually are just to \nremember what _I_ said about something. Or to see what Tom Lane's first post \nwas. Or to see how long somebody has been with the project. Etc. I don't \nwant to see the searchable archives go away, of course -- but I am \nquestioning how useful they are to _non-developers_.\n\nA dynamic tracker would show bugs that were fixed at various versions. It \nwould also make the project seem more responsive to those not on the lists \n--the vast majority of our users likely don't even know these lists are as \nuseful as they are. I had used PostgreSQL for two years before I found out \nthat the mailing lists are where _it's_ happening. I was _amazed_ at these \nlists and their contents. And I ran a Usenet site ten years ago, at that. \nMaybe that was the source of my amazement, OTOH. I've been through \nnews.groups wars, news.admin wars, etc. There weren't any real 'wars' here \n--and that was _different_.\n\nJMHO, of course.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 21 Aug 2001 13:05:45 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "> On Tue, 21 Aug 2001, Lamar Owen wrote:\n[...]\n> \n> What who thinks of what has actually become irrelevant. The following\n> is clear:\n> \n> \to No tool will replace the mailing lists\n> \to The mailing lists are where discussion will be held\n> \to Many/most maintainers have no desire to update bug reports\nDisadvantages of a mailing list:\n- easy problems are solved by 10 people in 5 minutes, hard ones often by \nnone\n- not clear who is the \"owner\" of a problem\n\nOK so what we need is an enhanced mailing list with a web interface. I've \nused wreq (http://www.math.duke.edu/~yu/wreq/) in the past for something \nsimilar. Features:\n- web and mail interface\n- each problem gets an assigned owner\n- status of entered items is clear\n- not much extra work in comparison to a mailing list.\n- outstanding bugs stay visible until closed (instead of forgotten)\n\nIt may not be ideal for this kind of thing, but it is a start. Does anyone \nhave suggestions for a better tool?\n\nReinoud\n\n",
"msg_date": "Tue, 21 Aug 2001 19:14:09 +0200 (CEST)",
"msg_from": "\"Reinoud van Leeuwen\" <reinoud@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "\nI know I am not on the kernel team, but I have been a software developer for\nalmost 20 years. ;-)\n\nA bug database is a useful tool IF it has been setup to be so. If it is a\nbare bones repository for bug reports it will not work. People won't use it.\nA \"good\" bug database, i.e. one which will be used, must be administered,\nand that administration must be easier than dealing with the various items\nseparately. Also, there should be \"approved\" severity numbers, and a\ndifference between \"entered\" and \"confirmed\" bugs, especially if you allow\nexternal access to the database.\n\nFor what it is worth, a \"good\" bug database, which is used and reliable,\nwould do much for PostgreSQL's reputation in the commercial IT space. The\ndanger is that a bad or unused bug database would probably do more harm than\nnot having one.\n\n\nBruce Momjian wrote:\n\n> If anyone was concerned about our bug database being visible and giving\n> the impression we don't fix any bugs, see this URL:\n>\n> http://www.isthisthingon.org/nisca/postgres.html\n>\n> Not only does it show the problems he had with PostgreSQL, he uses our\n> bug list as an example of how PostgreSQL isn't advancing or interested\n> in fixing bug.\n>\n> We better remove that web page soon:\n>\n> http://www.ca.postgresql.org/bugs/bugs.php?2\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n",
"msg_date": "Tue, 21 Aug 2001 13:14:37 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "On Tuesday 21 August 2001 11:06, Mitch Vincent wrote:\n> Some people crack me up in their opinions.. If it took him 6 hours to\n> figure out \"int8\" then I'm not really interested in anything else he has to\n> say... Lord...\n\nHmmm...\n\nLet's look at the guy's bulleted list.\n\nThe first item he can't stand is that you can't add a column after any \narbitrary column, that it goes at the end. Well, this is really clueless, as \nyou order the columns when you SELECT or when the application presents the \ndata.\n\nThe second item, however, has some real meat in it. Don't tell me that I \nshould have a correct design before writing any application code. Any \nprogrammer knows that the user's needs change over time -- and the database \nshould be able to keep up without any problems. I have myself run into \nPostgreSQL's ALTER-hostile environment. I'm patient, however, as I need the \nfeatureset. Our ALTER needs real muscle. Some things are already on our \nTODO list to fix this, though -- and this guy should have checked that. But \nmaybe he didn't find our TODO list. And 7.1 is much better than 7.0.3, the \nversion he looked at.\n\nThat third item, about int8. Can a clueless newbie who's heard that \nPostgreSQL is so great, knowing NOTHING about it, find things reasonably well \nin the docs? Only clueless newbies should answer that question -- neither I nor any \ndeveloper qualifies to answer that question.\n\nThe fourth item looks like whining, IMHO. The problem he describes is merely \nannoying to him -- yet it's bulleted. Sounds like a MySQL partisan who's \nupset that PostgreSQL is better at many things and is trying to justify not \nsupporting PostgreSQL out of personal bias. However, if it weren't too \ndifficult to support index creation at table creation time, why NOT allow \nthat? Do we just not _want_ to do it? I didn't see it in my read of TODO. \nOf course, the guy didn't ask on the lists to have it put in TODO. But how \nwould he know to ask to have something put in TODO?\n\nOur development process is very simple, but is also rather opaque to \noutsiders. Maybe that's a good thing; maybe that's bad. Should we let just \nany user know that if they want a feature, they need to ask to have it placed \non TODO? Or are people really not reading the docs? (Experienced admins know \nthe answer to THAT question.....)\n\nOur documentation is, however, much better now than when I started. Kudos to \nThomas and all the rest that have contributed. I also like the direction \ntechdocs.postgresql.org is going.\n\nThe last worthwhile item on this guy's list is changing ownership of a \ndatabase. Well, I haven't yet had to do this: can we do this easily?\n\nJust because someone is clueless and even obnoxious in their comments doesn't \nautomatically disqualify what they say from validity.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 21 Aug 2001 13:30:44 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "Has anyone thought of using Bugzilla? (It is MySQL based, of course) but it\nmight answer the bug database issues. (If you guys want a bug database)\n\nRedHat has a version which can use Oracle, but it seems there is a file:\nftp://people.redhat.com/dkl/pgzilla-latest.tar.gz that may be interesting.\n\n\n",
"msg_date": "Tue, 21 Aug 2001 13:37:47 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage / Bugzilla?"
},
{
"msg_contents": "> > > I disagree. Unless you are omniscient, we will only ever have a partial\n> > > list.\n> but there wasn't enough interest for someone to take on\n> the maintenance.\n\nWe need someone willing to be a kibo. Or is that too arcane a reference?\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 21 Aug 2001 13:46:29 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "On Tuesday 21 August 2001 12:47, Bruce Momjian wrote:\n> > Justin Clift <justin@postgresql.org> writes:\n> > > After all, every bug is given an ID, so whomever fixes the bug with\n> > > that ID should also mark it off.\n\n> That would be pretty cool, using the mailing list archives as an\n> _answer_ to the bug report.\n\nX-PostgreSQL-bug-ID: anyone? Or leave the bug ID number in the subject? Then \nthe replies can be properly inserted into the database as belonging to an ID.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 21 Aug 2001 14:02:48 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n> The project is outgrowing its infrastructure.\n\nPerhaps so. I think what's *really* needed here is someone who is\nwilling to take responsibility for maintaining a bug database, ie,\nremoving cruft (non-bug messages), making sure that old bugs are\nmarked closed when a developer forgets to do it, etc etc. It doesn't\nmatter what automatic systems we have in place unless a human is\nwilling to take responsibility for quality control. But given a\nvolunteer, a bug database could be a really nice thing to have.\nIf we're getting as big as all that, a volunteer or three to do this\nshouldn't be impossible to come by.\n\n> I think a bug database that coordinates with the email archives is the\n> best of both worlds. That way, discussion about a bug, and it's state\n> all get archived in the same location, without a lot of extra bother on\n> the parts of the developers, just make sure to CC: the bug system.\n\nI like that idea a lot: just cc: to some bug-input address to add or\nupdate the collected mail for any one bug.\n\nPeter remarked that he wouldn't use a bug database unless it has some\ninput filtering to remove all the non-bug issues that currently clutter\nthe pgsql-bug archives. I tend to agree with him. A possible way to\nhandle that is to set up bug-input like a closed mailing list: only\naccept mail from designated people (developers and people nominated to\nhelp run the bug database). So, a bug database entry would start life\nwhen some one of these people replies to an emailed bug report\nconfirming that there is a bug, or forwards the verified report to\nbug-input, or whatever.\n\nIt'd still need a maintainer, but something like this would fit\ncomfortably into our existing habits, which a pure web-based system\nwon't.\n\nSo: any volunteers to set this up and help run it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Aug 2001 14:17:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage "
},
{
"msg_contents": "Vince Vielhaber wrote:\n> \n> What who thinks of what has actually become irrelevant. The following\n> is clear:\n> \n> o No tool will replace the mailing lists\n> o The mailing lists are where discussion will be held\n> o Many/most maintainers have no desire to update bug reports\n\nIf anyone is interested, I am willing to undertake to be the link between\nthe bugs mailing list and a bugs database. This should allow developers to\ncontinue to deal with the mailing list, just CCing a special e-mail address\nwhenever a bug was fixed. I would then take care of finding the\nappropriate bug(s) in the database and marking them as fixed.\n\nThere are two large, well-used bugs databases that I am aware of with\nsomewhat different strengths:\n - The Debian Bug Tracking System\n - Bugzilla\nthere are a gazillion others, of course, but let's just consider those two\nfor the moment.\n\nIn some ways the Debian bug tracking system is a closer fit to the way\nPostgreSQL currently works, since it drives into a mailing list, bug\nsubmission is via e-mail and bug control is via e-mail as well.\n\nBugzilla is probably a closer fit in reality, since it is more focused\naround bugs for a single application. If Bugzilla were installed I'm sure\nsome functionality could be added into it along the lines of the Debian BTS\ntoo.\n\nRegards,\n\t\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: Andrew @ catalyst . net . nz\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64(21)635-694, Fax:+64(4)499-5596, Office: +64(4)499-2267xtn709\n",
"msg_date": "Wed, 22 Aug 2001 07:52:00 +1200",
"msg_from": "Andrew McMillan <andrew@catalyst.net.nz>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Let's look at the guy's bulleted list.\n\n> The first item he can't stand is that you can't add a column after any\n> arbitrary column, that it goes at the end. Well, this is really\n> clueless, as you order the columns when you SELECT or when the\n> application presents the data.\n\nWell, I can see some value in it --- but not enough to justify the\nimplementation pain. It certainly is pretty weak as a leadoff gripe.\n\n> The second item, however, has some real meat in it.\n\nAgreed, we need better ALTER capability. As you say, it's on the TODO\nlist.\n\n> That third item, about int8. Can a clueless newbie who's heard that \n> PostgreSQL is so great, knowing NOTHING about it, find things\n> reasonably well in the docs?\n\nHe apparently didn't get as far as looking at Table 3-1, on the first\npage of the user's guide chapter on datatypes. Still, improving the\ndocs is an ever-important task.\n\n> However, if it weren't too \n> difficult to support index creation at table creation time, why NOT allow \n> that? Do we just not _want_ to do it?\n\nWe do support it, for UNIQUE indexes (see UNIQUE and PRIMARY KEY\nconstraints). As for why not plain indexes too, the main answer is that\nUNIQUE constraints are SQL92 and any syntax to create indexes otherwise\nis not. Of course a CREATE INDEX command is not to be found in SQL92\neither, but on the whole I agree with you; this is hard to read as\nanything except MySQL's-way-is-the-only-way partisanship.\n\nThere hasn't been a lot of talk recently about adopting MySQL-isms, at\nleast not anywhere near as much as about adopting Oracle-isms. I'd tend\nto treat either sort of proposal with suspicion, but we ought to be open\nto the idea if we are interested in attracting users of other DBMSs.\nReal question is, who out there is excited enough about this point to do\nthe work?\n\n> Of course, the guy didn't ask on the lists to have it put in TODO. But how \nwould he know to ask to have something put in TODO?\n\nI see no evidence that this guy wants to learn about or contribute to\nPostgres development at all; he's just looking for things to rag on.\n(And not even doing very well at that --- I could name ten worse\nproblems than these without taking a breath...) The TODO list is\nmentioned prominently on the website, for example.\n\n> The last worthwhile item on this guy's list is changing ownership of a \n> database. Well, I haven't yet had to do this: can we do this easily?\n\nIt could be better. See recent \"Multiple Servers\" thread over in\npg-admin, notably\nhttp://fts.postgresql.org/db/mw/msg.html?mid=1031042\n(which the FTS server seems not to have linked into the thread for some\nreason)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Aug 2001 16:23:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage "
},
{
"msg_contents": "On Tue, 21 Aug 2001, Lamar Owen wrote:\n\n> > > > I disagree. Unless you are omniscient, we will only ever have a partial\n> > > > list.\n> > but there wasn't enough interest for someone to take on\n> > the maintenance.\n>\n> We need someone willing to be a kibo. Or is that too arcane a reference?\n\nGotta admit, I haven't heard that in a while. But I think I'm nearing\na solution. Stay tuned.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Aug 2001 17:51:38 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Can someone point me to a bug that is _not_ on the TODO list?\n\nJust looking through pgsql-bugs of the last two weeks, the following all\nlook reasonable.\n\nhttp://www.ca.postgresql.org/mhonarc/pgsql-bugs/2001-08/msg00088.html\nhttp://www.ca.postgresql.org/mhonarc/pgsql-bugs/2001-08/msg00084.html\nhttp://www.ca.postgresql.org/mhonarc/pgsql-bugs/2001-08/msg00078.html\nhttp://www.ca.postgresql.org/mhonarc/pgsql-bugs/2001-08/msg00089.html\nhttp://www.ca.postgresql.org/mhonarc/pgsql-bugs/2001-08/msg00086.html\nhttp://www.ca.postgresql.org/mhonarc/pgsql-bugs/2001-08/msg00042.html\nhttp://www.ca.postgresql.org/mhonarc/pgsql-bugs/2001-08/msg00036.html\nhttp://www.ca.postgresql.org/mhonarc/pgsql-bugs/2001-08/msg00035.html\nhttp://www.ca.postgresql.org/mhonarc/pgsql-bugs/2001-08/msg00012.html\n\nAt least a couple of them I would want to have recorded somewhere.\n\n> If not, what does a complete bug database do for us except list\n> reported bugs and possible workarounds.\n\nhistory, searchability\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 22 Aug 2001 00:01:38 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "> I see no evidence that this guy wants to learn about or contribute to\n> Postgres development at all; he's just looking for things to rag on.\n> (And not even doing very well at that --- I could name ten worse\n> problems than these without taking a breath...) The TODO list is\n> mentioned prominently on the website, for example.\n\nSeeing as it was written about 7.0.X and it took >1 year for anyone to\neven mention it here, no one else is reading it either.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Aug 2001 18:22:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > Can someone point me to a bug that is _not_ on the TODO list?\n> \n> Just looking through pgsql-bugs of the last two weeks, the following all\n> look reasonable.\n> \n> http://www.ca.postgresql.org/mhonarc/pgsql-bugs/2001-08/msg00088.html\n> http://www.ca.postgresql.org/mhonarc/pgsql-bugs/2001-08/msg00084.html\n> http://www.ca.postgresql.org/mhonarc/pgsql-bugs/2001-08/msg00078.html\n> http://www.ca.postgresql.org/mhonarc/pgsql-bugs/2001-08/msg00089.html\n> http://www.ca.postgresql.org/mhonarc/pgsql-bugs/2001-08/msg00086.html\n> http://www.ca.postgresql.org/mhonarc/pgsql-bugs/2001-08/msg00042.html\n> http://www.ca.postgresql.org/mhonarc/pgsql-bugs/2001-08/msg00036.html\n> http://www.ca.postgresql.org/mhonarc/pgsql-bugs/2001-08/msg00035.html\n> http://www.ca.postgresql.org/mhonarc/pgsql-bugs/2001-08/msg00012.html\n\nYes, these are all valid bugs that the TODO list didn't capture. Good\npoint.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 21 Aug 2001 19:26:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "> > > > > I disagree. Unless you are omniscient, we will only ever have a\npartial\n> > > > > list.\n> > > but there wasn't enough interest for someone to take on\n> > > the maintenance.\n> >\n> > We need someone willing to be a kibo. Or is that too arcane a reference?\n>\n> Gotta admit, I haven't heard that in a while. But I think I'm nearing\n> a solution. Stay tuned.\n>\nI don't know what a kibo is, but I would be willing to put in some time\nhelping maintain a bug reporting system. One of the helpful things with\nthe bugzilla setup on some other big projects is that the bug gets assigned to\na developer and the bug submitter gets emailed updates any time there is a\nstatus change.\n\nI agree that a bug database is not a replacement for the mailing lists, but\nI do think it could serve the project well if it is done correctly. I think\nmost users look for a bugzilla-type bug reporting tool these days.\n\n",
"msg_date": "Tue, 21 Aug 2001 19:08:13 -0500",
"msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "mlw wrote:\n> Has anyone thought of using Bugzilla? (It is MySQL based, of course) but it\n> might answer the bug database issues. (If you guys want a bug database)\n\n Bug tracking software that doesn't use transactions and\n referential integrity in a multiuser environment? Sounds like\n a bug by design to me, which are known not to be traceable by\n software. So the system might trace it's own bugs while never\n catching the biggies ...\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 22 Aug 2001 00:02:13 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Link to bug webpage / Bugzilla?"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> mlw wrote:\n> > Has anyone thought of using Bugzilla? (It is MySQL based, of course) but it\n> > might answer the bug database issues. (If you guys want a bug database)\n> \n> Bug tracking software that doesn't use transactions and\n> referential integrity in a multiuser environment? Sounds like\n> a bug by design to me, which are known not to be traceable by\n> software. So the system might trace it's own bugs while never\n> catching the biggies ...\n> \n\nRedHat has ported bugzilla to postgres, which I alluded too, but maybe I should\nbe a bit more explicit.\n\nftp://people.redhat.com/dkl/pgzilla-latest.tar.gz \n\n(They have also ported it to Oracle.)\n\nLike I said before, a \"good\" i.e. used, up to date, and complete, bug\ndatabase/tracking system impresses many people that make IT decisions. (but it\nbetter be good.)\n\nSomething like Bugzilla, or PVCS Tracker, or what ever, may need to be the next\nstage in PostgreSQL's evolution from a university project to a full\nprofessional solution.\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Wed, 22 Aug 2001 07:55:30 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Link to bug webpage / Bugzilla?"
},
{
"msg_contents": "Matthew T. O'Connor volunteered:\n\n> I don't know what a kibo is, but I would be willing to put in some time\n> helping maintaing a bug reporting system. One of the helpful things with\n> bugzilla setup with some other big projects is that the bug gets assigned\nto\n> a developer and the bug submitter gets emailed updates any time there is a\n> status change.\n\nI have some experience in setting up Bugzilla, although we currently run it\non MySQL, but we are looking to move it off MySQL and probably onto\nPostgres anyway.\n\nI'd also volunteer to help admin a Bugzilla setup.\n\nDo we have a third person?\n\n\nCheers,\n\nColin\n\n\n",
"msg_date": "Wed, 22 Aug 2001 16:46:29 +0200",
"msg_from": "\"Colin 't Hart\" <cthart@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "Jan Wieck said:\n\n> > Has anyone thought of using Bugzilla? (It is MySQL based, of course) but\nit\n> > might answer the bug database issues. (If you guys want a bug database)\n>\n> Bug tracking software that doesn't use transactions and\n> referential integrity in a multiuser environment? Sounds like\n> a bug by design to me, which are known not to be traceable by\n> software. So the system might trace it's own bugs while never\n> catching the biggies ...\n\nI agree, of course. That's why we'd use a Postgres port of Bugzilla:\n\nhttp://groups.google.com/groups?hl=en&safe=off&th=9efb66b03a69b9fd,1\n\nand available at\n\nftp://people.redhat.com/dkl/pgzilla-latest.tar.gz\n\nCheers,\n\nColin\n\n\n",
"msg_date": "Wed, 22 Aug 2001 16:55:46 +0200",
"msg_from": "\"Colin 't Hart\" <cthart@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Link to bug webpage / Bugzilla?"
},
{
"msg_contents": "On Tuesday 21 August 2001 17:51, Vince Vielhaber wrote:\n> On Tue, 21 Aug 2001, Lamar Owen wrote:\n> > > > > I disagree. Unless you are omniscient, \n\n> > We need someone willing to be a kibo. Or is that too arcane a reference?\n\n> Gotta admit, I haven't heard that in a while.\n\nEgads! An Internet where people don't remember Kibo.... Vince remembers, as \ndoes Marc, more than likely. But I got several emails asking 'What is a \nkibo'. Wow.\n\nJames 'Kibo' Parry is, well, infamous in Usenet circles (or at least he used \nto be). Mention 'kibo' in a newsgroup posting (this used to work years ago), \nand 'Kibo' would reply. With 100MB of news per day, Kibo replied (in a \nsanctimonious way, as if he were a god or something) to almost every single \nmention of the name 'kibo'. I was halfway expecting a reply via the \nnewsgroup gateway, but kibo must have been asleep. Or the 100GB of news per \nday has overwhelmed his bandwidth.... :-)\n\nAnd the most oddball newsgroups would get replied to. People began putting \nthe string 'kibo' into their .sig's, prompting message after message.\n\nKibo appeared to be omniscient -- thus the reference to kibo. There was \nactually a time where _every_ new posting with the word 'kibo' in it was \nreplied to.\n\nIn reality, kibo was driven by an intelligent regex running on a backbone \nserver (and, at the time, Software Tool and Die _was_ a backbone server). \nAll news postings referencing kibo were either auto-replied to, or Kibo \nhimself would reply.\n\nA 'cult' of kibology developed, wondering what criteria would prompt personal \nreplies from Kibo (at this point, the capital K was being used....).\n\nSee www.kibo.com for all the flippant details. :-) Or ask kibo himself at \nkibo@world.std.com.\n\nWow, now I _know_ I've been on Usenet too long....\n\nBut we DO need someone willing to be a postgresql-kibo. I could think of no \nbetter example of the completeness and attention to detail that was any \nbetter than that, that I thought people here could relate to.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 22 Aug 2001 11:52:15 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Egads! An Internet where people don't remember Kibo...\n\nYup, I do. I think he gave up years ago, though.\n\nI useta be a small-time kibozer myself --- back in the early days of\nJPEG, when a lot of people didn't really understand the format, I had\na little perl script that trolled a hundred or so likely newsgroups\nfor references to JPEG. And I followed up where appropriate. Didn't\nhave an automatic responder though, and I never tried to scan the whole\nfeed.\n\n\t\t\tregards, tom lane\n\t\t\torganizer, Independent JPEG Group\n",
"msg_date": "Wed, 22 Aug 2001 14:58:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "[WAY OT] Re: Link to bug webpage "
},
{
"msg_contents": "David Ford wrote:\n> \n> Bruce Momjian wrote:\n> \n> >>>That is the real question. Do we want to rely more heavily on a bug\n> >>>database rather than the email lists? I haven't heard many say they\n> >>>want that.\n> >>>\n> \n> I'd very much like a bugzilla because I can do research on bugs past or\n> present now as well as know the status of them. Right now if I had a\n> bug I would have to dig through web page after web page or use wget and\n> grep.\n\nUsing bugzilla seems the best option for me too.\n\nNo need to roll our own bug tracking system when we could spend the same \neffort on making Bugzilla/PostgreSQL work better.\n\n---------------\nHannu\n",
"msg_date": "Fri, 24 Aug 2001 07:57:20 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "David Ford wrote:\n> \n> Tom Lane wrote:\n> \n> >Peter remarked that he wouldn't use a bug database unless it has some\n> >input filtering to remove all the non-bug issues that currently clutter\n> >the pgsql-bug archives.\n\nSo the first thing to decide is the purpose of the bug database: do we\nwant \nto have\n\na) a marketing tool to show that we are bugfree and all bugs are fixed\nfast\n (at least this is what started this thread :)\n\nor\n\nb) a convenient place to look up all open issues - here bugzilla would\nbe great.\n\n\nWe have been using bugzilla with good results for our own projects\n(mainly\nAmphora http://www.amphora.ee/eng/ - a quite large groupware product\nbased on\nZope, PostgreSQL and other freeware technologies). \n\nSo if it does suit both a smaller-than-PGSQL group of\nprogrammers/testers/users\nlike us and a huge group like mozilla it should also suit a medium-sized\ngroup \nlike postgreSQL.\n\n-----------\nHannu\n",
"msg_date": "Fri, 24 Aug 2001 08:40:40 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "Honestly I wasn't aware postgres had any bugs... tongue in cheek.\n\nWhat I mean is PG works very nicely for me and I haven't had any \nproblems with it, so that means \"no bugs\". Yes there are bugs and \nthings to be solved, but from my perspective it is already a pretty darn \ngood piece of software.\n\n-d\n\nPhilip Warner wrote:\n\n>At 08:32 21/08/01 -0400, Vince Vielhaber wrote:\n>\n>>Yes but noone was interested in it. It's still there but you're really\n>>the first to show interest in about a year.\n>>\n>\n>That's good (and depressing); where are they?\n>\n\n\n",
"msg_date": "Fri, 24 Aug 2001 01:32:22 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "I vote for pgsql bugzilla. If I have a bug to report I'll file it. I \nfile plenty of moz bugs and aid in resolving them.\n\n-d\n\nBruce Momjian wrote:\n\n>>There are over 400 in the database. If that's a small percentage then\n>>so be it, but it's still over 400 bugs that appear to have been ignored.\n>>Having a place to look up possible problems and seeing if there was a\n>>solution seems to be a plus to me, but if you don't want it it doesn't\n>>bother me either way. The lookups are currently disabled, ball's in\n>>your court.\n>>\n>\n>It's up to the group to decide. If we have a database of bugs, I think\n>it has to be complete. I think a partial list is worse than no list at\n>all.\n>\n\n\n\n",
"msg_date": "Fri, 24 Aug 2001 01:34:27 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n>>>That is the real question. Do we want to rely more heavily on a bug\n>>>database rather than the email lists? I haven't heard many say they\n>>>want that.\n>>>\n\nI'd very much like a bugzilla because I can do research on bugs past or \npresent now as well as know the status of them. Right now if I had a \nbug I would have to dig through web page after web page or use wget and \ngrep.\n\n-d\n\n\n",
"msg_date": "Fri, 24 Aug 2001 01:36:18 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n>How do you communicate that to people looking at the content? Do you\n>put in big letters at the top, \"This list is not complete.\" The fact an\n>items is missing from the list (new bug) is just as important as an item\n>appearing on the list.\n>\n\nHow do you distinguish that from what we have now? I can't look at my \npgsql email box and see how many and of what.\n\nA bugzilla is a more accurate representation of bugs and future features \nfor the group.\n\n-d\n\n\n\n\n",
"msg_date": "Fri, 24 Aug 2001 01:38:30 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "Serguei Mokhov wrote:\n\n>Maybe a better solution for the short run would be to\n>return the page where it was, and put links to the pgsql-bugs and \n>pgsql-hackers archives with some sort of explanatory note saying that \"this is\n>a *complete* (it must be complete of course) list of bugs, which are \n>being extensively discussed in these lists and fixed. Please, visit/search\n>these mail archives for the most up-to-date information blah blah blah\" One can\n>invent some appropriate wording, I guess. But this at least will show people\n>that there's work actually going on, on a daily basis. And it's also a good idea\n>to have a \"Last updated\" time stamp on the page too... so it doesn't seem\n>to be forgotten for ages..\n>\n\nThe archives are a 'flat' database of bugs which require a lot of work \nfor a researcher to figure out if a bug is already documented and what \nthe status is. It is also not 100% accurate as not all bugs get \nreported there.\n\n-d\n\n\n",
"msg_date": "Fri, 24 Aug 2001 01:41:35 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n>OK, what value does a bug database have over a TODO list?\n>\nhistory of a bug, entire discussion about that bug on the same page with \nhyperlinked patches and other attachments.\n\nability of everyone to add to the bug documentation without submitting \nit to the TODO maintainer.\n\ncategorization and \"it works on X\", \"it's broken if Y\" etc\n\nI really could go on and on.\n\n-d\n\n\n\n\n",
"msg_date": "Fri, 24 Aug 2001 01:46:37 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "Tom Lane wrote:\n\n>Given a threaded index, you aren't wading through \"a few hundred posts\".\n>Agreed, a nice canned database entry might be easier to look at, but\n>who's going to expend the time to maintain the database? Unless someone\n>actively takes responsibility for keeping the DB up to date, it'll be\n>junk. So far I heard Philip say he'd be willing to check over some\n>fraction of the existing entries, but I don't hear anyone wanting to\n>take it on as a long-term commitment.\n>\n\nI've often had a hard time searching for results in email archives \nbecause the datum used for indexing changes. Different people change \nthe subject line etc. You can't index by date a fair bit of the time \nbecause there will be lapses in the discussion.\n\nOne of the better things about using a bugzilla [e.g.] is that it \nbecomes a community responsibility, not a single person or small group. \nAnyone can now update the 'TODO' list.\n\n-d\n\n\n\n",
"msg_date": "Fri, 24 Aug 2001 01:50:46 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "Tom Lane wrote:\n\n>Peter remarked that he wouldn't use a bug database unless it has some\n>input filtering to remove all the non-bug issues that currently clutter\n>the pgsql-bug archives. I tend to agree with him. A possible way to\n>handle that is to set up bug-input like a closed mailing list: only\n>accept mail from designated people (developers and people nominated to\n>help run the bug database). So, a bug database entry would start life\n>when some one of these people replies to an emailed bug report\n>confirming that there is a bug, or forwards the verified report to\n>bug-input, or whatever.\n>\n\nHere I respectfully disagree. If I have to wait on 'approval' to submit \na bug or carry on a discussion about it, most of the time I'm going to \nsilently drop it and find some other way to make my project work.\n\nI like Mozilla's bugzilla because I can instantly and with very little \neffort classify all sorts of things and describe my bug. Then along \ncomes a person who can assign it to someone, confirm it, mark it up as \nclueless user, or whatever is needed. Everyone associated with this bug \n# gets a copy of every transaction that happens to this bug. You can \neasily cc this into the pgsql-bugs.\n\nA lot of projects grow and develop little things like 'it works for all \nof us so it's not a bug', I run into that now and then in an obscure \nissue...libtool comes to mind...and -nobody- has information on it \nexcept 4 other webpages in this universe where 1 person reports the \nproblem, two people say the 1st person shouldn't be using gcc 2.96 \nand the fourth person has a fix which meant gcc wasn't at fault in the \nfirst place.\n\nYou can't have a really effective 100% bug database without allowing \neveryone to add to it. If I had to submit to mozilla \"tables are one \npixel off starting about Aug 21st\" and get approval before it went in, \nI'd likely say screw it and simply make my tables one pixel bigger. As \nit is, I post the bug and 20 minutes later due to the magic of bugzilla, \nthe right person put the fix in. I don't have to adapt my tables.\n\n:)\n\n-d\n\n\n",
"msg_date": "Fri, 24 Aug 2001 02:05:15 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "Tom Lane wrote:\n\n>>The last worthwhile item on this guy's list is changing ownership of a \n>>database. Well, I haven't yet had to do this: can we do this easily?\n>>\n>It could be better. See recent \"Multiple Servers\" thread over in\n>pg-admin, notably\n>http://fts.postgresql.org/db/mw/msg.html?mid=1031042\n>(which the FTS server seems not to have linked into the thread for some\n>reason)\n>\n\nHere is where the Indexing by date/thread.... fails. If I were \nsearching for changing ownership, how would I even begin to consider \n\"Multiple Servers\" in my searches?\n\nA bug db here would have been perfect.\n\n-d\n\n\n",
"msg_date": "Fri, 24 Aug 2001 02:12:08 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "One other aspect of bugzilla that I have not yet seen mentioned on this\nthread is the ability to vote for a particular feature or bug.\n\nI have often seen people on this asking \"Is this an important feature to\nthe users?\".\n\nWith voting, you can easily see what users think would be an important\nthing to fix.\n\n\t-rocco\n\n",
"msg_date": "Fri, 24 Aug 2001 16:57:58 -0400 (EDT)",
"msg_from": "Rocco Altier <roccoa@routescape.com>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "*whimper* I've been out of town for a week, and will not be able to\ncatch up with ~2000 email messages. So I can't even get to the end of\nthis thread. But I must agree that PostgreSQL development is pushing the\nlimits of what a person can keep up with.\n\n> I am not interested in finding a mailing list _replacement_. I am, however,\n> interested in finding a augmentative solution that does well what mailing\n> lists do not do well.\n\nThe fundamental problem with bug tracking has been that the available\ntools do not fit with our obviously successful mailing-list centered\ndevelopment process. I certainly would consider it a distraction to\nconsult that tool to be able to participate in development.\n\nI *could* see some combination of bug tracker, volunteer maintainers,\nand our beta release cycle as a mechanism for having us all participate\nin clearing those reported bugs and non-bugs.\n\n - Thomas\n",
"msg_date": "Tue, 28 Aug 2001 02:29:46 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Link to bug webpage"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> The fundamental problem with bug tracking has been that the available\n> tools do not fit with our obviously successful mailing-list centered\n> development process. I certainly would consider it a distraction to\n> consult that tool to be able to participate in development.\n\nAs usual, Thomas cuts to the heart of the matter ...\n\nThe above is an accurate statement of the problem from a developer point\nof view. ISTM that what we're missing is a window into the process for\npeople who are not following the mailing lists. While we have archives,\nsearching the archives is not a great answer for a number of reasons\n(most notably that there's no mechanism to ensure that closure of a bug\nis recorded in the same thread(s) that report it).\n\nThe trick for a bug database will be to provide a more coherent view\nwithout being a drag on our proven development process.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 28 Aug 2001 00:06:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Link to bug webpage "
},
{
"msg_contents": "On Tue, 28 Aug 2001, Tom Lane wrote:\n\n> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> > The fundamental problem with bug tracking has been that the available\n> > tools do not fit with our obviously successful mailing-list centered\n> > development process. I certainly would consider it a distraction to\n> > consult that tool to be able to participate in development.\n>\n> As usual, Thomas cuts to the heart of the matter ...\n>\n> The above is an accurate statement of the problem from a developer point\n> of view. ISTM that what we're missing is a window into the process for\n> people who are not following the mailing lists. While we have archives,\n> searching the archives is not a great answer for a number of reasons\n> (most notably that there's no mechanism to ensure that closure of a bug\n> is recorded in the same thread(s) that report it).\n>\n> The trick for a bug database will be to provide a more coherent view\n> without being a drag on our proven development process.\n\nWe're already working on it, lets not get this thread started again.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 28 Aug 2001 06:00:58 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Link to bug webpage "
}
] |
[
{
"msg_contents": "gcc -g -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include \n -c -o auth.o auth.c\nIn file included from auth.c:22:\n/usr/include/sys/ucred.h:50: `NGROUPS' undeclared here (not in a function)\n\n% uname -a\nFreeBSD xor 4.3-STABLE FreeBSD 4.3-STABLE #2: Thu May 24 14:05:34 MSD 2001 \nteodor@xor:/usr/src/sys/compile/XOR i386\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n",
"msg_date": "Tue, 21 Aug 2001 19:10:30 +0400",
"msg_from": "Teodor Sigaev <teodor@stack.net>",
"msg_from_op": true,
"msg_subject": "Current CVS is broken"
}
] |
[
{
"msg_contents": "> yep:\n> lock \"tablename.colname.val=1\"\n> select count(*) from tablename where colname=1\n> If no rows, insert, else update.\n> (dunno if the locks would scale to a scenario with hundreds\n> of concurrent inserts - how many user locks max?).\n\nI don't see problem here - just a few bytes in shmem for\nkey. Auxiliary table would keep refcounters for keys.\n\n> Why wouldn't it work with serializable isolevel?\n\nBecause of selects see old database snapshot and so you\nwouldn't see key inserted+committed by concurrent tx.\n\nVadim\n \n",
"msg_date": "Tue, 21 Aug 2001 09:56:19 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: User locks code"
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> (dunno if the locks would scale to a scenario with hundreds\n>> of concurrent inserts - how many user locks max?).\n\n> I don't see problem here - just a few bytes in shmem for\n> key. Auxiliary table would keep refcounters for keys.\n\nI think that running out of shmem *would* be a problem for such a\nfacility. We have a hard enough time now sizing the lock table for\nsystem locks, even though they use fixed-size keys and the system as\na whole is designed to ensure that not too many locks will be held\nsimultaneously. (For example, SELECT FOR UPDATE doesn't try to use\nper-tuple locks.) Earlier in this thread, someone proposed using\nuser locks as a substitute for SELECT FOR UPDATE. I can guarantee\nyou that that someone will run out of shared memory before long,\nif the userlock table resides in shared memory.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Aug 2001 13:32:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RE: User locks code "
}
] |
[
{
"msg_contents": "Hi All,\n\nLooking at my message about the bug webpage and\nsome other posts, I see that it was delayed for \nabout 2h and a half. Some of the post were\ndelayed for days... Why is that? Looks like\nthe list has problems of some sort which cause\nthese irregular delays.\n\nJust an annoying observation.\n\nS.\n\n\n",
"msg_date": "Tue, 21 Aug 2001 12:59:24 -0400",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": true,
"msg_subject": "List response time..."
},
{
"msg_contents": "On Tuesday 21 August 2001 12:59, Serguei Mokhov wrote:\n> Looking at my message about the bug webpage and\n> some other posts, I see that it was delayed for\n> about 2h and a half. Some of the post were\n> delayed for days... Why is that? Looks like\n> the list has problems of some sort which cause\n> these irregular delays.\n\nMailing lists don't scale well to large numbers of subscribers. I see this \ndelay constantly,on multiple lists. The bigger the list gets, the slower the \nlist gets (and the more loaded the server gets, right Marc? :-)). Newsgroups \nscale a little better, but Usenet propagation delay was a problem even when \nfull feeds were below 100MB per day. Actually, Usenet propagation is better \nnow than then, now that the majority of sites aren't uucp and fed batched \nwith C-News. (or BNews, even....). Usenet propagation delays used to be \nmeasured in days and sometimes weeks. To get a message in two days was great \ntime!\n\nBut I can remember when Usenet propagation delays were how you judged EXPIRE \ntimes and newsspool size. And I also remember nasty tricks used when servers \nthat didn't respect 'distribution:' were hit with 'expires:' headers with \nvalues below the mean propagation delay.....and I can recall getting CANCELS \nfor postings two days before the posting to be canceled came trickling in....\n\nWe're still not as bad as BugTraq, though. Not only is the message delayed \ntwo to three days, other people will have already replied to it, and \ndiscussion will have been closed off before I ever get a chance to say \nanything. Well, maybe that's a good thing. :-)\n\nI guess this IS one of the few advantages of having to use reply-all on this \nlist.... Although then the discussion has moved on before the general list \nmembership has had a chance to read most of the recent replies....\n\nThe best thing to do is simply to expect propagation delay.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 21 Aug 2001 13:59:50 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: List response time..."
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> The best thing to do is simply to expect propagation delay.\n\nActually, I just sent a gripe off to Marc about this. I've been\nnoticing large and variable propagation delay for a few months now,\nbut I just today realized that the problem is entirely local to hub.org.\nFor example, look at the headers on your message:\n\nReceived: from postgresql.org (webmail.postgresql.org [216.126.85.28])\n\tby sss.pgh.pa.us (8.11.4/8.11.4) with ESMTP id f7LJKpY10196\n\tfor <tgl@sss.pgh.pa.us>; Tue, 21 Aug 2001 15:20:51 -0400 (EDT)\nReceived: from postgresql.org.org (webmail.postgresql.org [216.126.85.28])\n\tby postgresql.org (8.11.3/8.11.4) with SMTP id f7LJKpP46374;\n\tTue, 21 Aug 2001 15:20:52 -0400 (EDT)\n\t(envelope-from pgsql-hackers-owner+M12441@postgresql.org)\nReceived: from www.wgcr.org (www.wgcr.org [206.74.232.194])\n\tby postgresql.org (8.11.3/8.11.4) with ESMTP id f7LHxnP15711\n\tfor <pgsql-hackers@postgresql.org>; Tue, 21 Aug 2001 13:59:49 -0400 (EDT)\n\t(envelope-from lamar.owen@wgcr.org)\nReceived: from lowen.wgcr.org (IDENT:lowen@[10.1.2.3])\n\tby www.wgcr.org (8.9.3/8.9.3/WGCR) with SMTP id NAA25357;\n\tTue, 21 Aug 2001 13:59:40 -0400\n\nAll the delay seems to be in transferring the message from\npostgresql.org to webmail.postgresql.org ... which are the same\nmachine, or at least the same IP address. What's up with that?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Aug 2001 15:34:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: List response time... "
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n\n> Mailing lists don't scale well to large numbers of subscribers. I see this \n> delay constantly,on multiple lists. The bigger the list gets, the slower the \n> list gets (and the more loaded the server gets, right Marc? :-)).\n\nNote that the postgresql.org mail server is still running sendmail.\nIn my personal experience with sources.redhat.com, qmail is a much\nbetter choice to handle large mailing lists. When we switched from\nsendmail to qmail, mailing list delays dropped from hours, or\nsometimes even days, to seconds.\n\nIan\n",
"msg_date": "21 Aug 2001 13:24:27 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: List response time..."
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> All the delay seems to be in transferring the message from\n> postgresql.org to webmail.postgresql.org ... which are the same\n> machine, or at least the same IP address. What's up with that?\n\nYou are seeing sendmail's poorly designed queuing behaviour in action.\nsendmail limits itself by outgoing messages, rather than outgoing\ndeliveries. This causes one slow delivery to hold up many fast\ndeliveries.\n\nIan\n",
"msg_date": "21 Aug 2001 13:26:26 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: List response time..."
},
{
"msg_contents": "\nI've had great luck with Postfix as well.\n\n-Mitch\n\n----- Original Message -----\nFrom: \"Ian Lance Taylor\" <ian@airs.com>\nTo: \"Lamar Owen\" <lamar.owen@wgcr.org>\nCc: \"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>; \"PostgreSQL Hackers\"\n<pgsql-hackers@postgresql.org>\nSent: Tuesday, August 21, 2001 4:24 PM\nSubject: [HACKERS] Re: List response time...\n\n\n> Note that the postgresql.org mail server is still running sendmail.\n> In my personal experience with sources.redhat.com, qmail is a much\n> better choice to handle large mailing lists. When we switched from\n> sendmail to qmail, mailing list delays dropped from hours, or\n> sometimes even days, to seconds.\n>\n> Ian\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n>\n\n",
"msg_date": "Tue, 21 Aug 2001 17:23:51 -0400",
"msg_from": "\"Mitch Vincent\" <mvincent@cablespeed.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: List response time..."
},
{
"msg_contents": "On 21 Aug 2001, Ian Lance Taylor wrote:\n\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n>\n> > Mailing lists don't scale well to large numbers of subscribers. I see this\n> > delay constantly,on multiple lists. The bigger the list gets, the slower the\n> > list gets (and the more loaded the server gets, right Marc? :-)).\n>\n> Note that the postgresql.org mail server is still running sendmail.\n> In my personal experience with sources.redhat.com, qmail is a much\n> better choice to handle large mailing lists. When we switched from\n> sendmail to qmail, mailing list delays dropped from hours, or\n> sometimes even days, to seconds.\n\nooooooooooooooooooooooohhhhhhhhhhhhhhhhhhhhhh.... I've been raggin on\nMarc on that one for well over a year, maybe two.. I started using\nqmail when it was still in .7something beta and never looked back. The\nfolks at Security Focus have moved all of the lists to ezmlm (part of\nqmail) and have had nothing but success... But don't tell Marc.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Aug 2001 18:02:45 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: List response time..."
},
{
"msg_contents": "On Tue, 21 Aug 2001, Lamar Owen wrote:\n\n> On Tuesday 21 August 2001 12:59, Serguei Mokhov wrote:\n> > Looking at my message about the bug webpage and\n> > some other posts, I see that it was delayed for\n> > about 2h and a half. Some of the post were\n> > delayed for days... Why is that? Looks like\n> > the list has problems of some sort which cause\n> > these irregular delays.\n>\n> Mailing lists don't scale well to large numbers of subscribers. I see this\n> delay constantly,on multiple lists. The bigger the list gets, the slower the\n> list gets (and the more loaded the server gets, right Marc? :-)). Newsgroups\n\nActually, the 'multi-day' delay is generally related to posts from ppl\nthat aren't subscribed to the lists that I have to approve manually ...\n\n.. as far as server load is concerned, causing several hour delay, we're\njust about to put online a dual processor server that has been donated to\nthe project ... its first task is going to be mailing list distribution\nand a open news server for the newsgroups.\n\n\n",
"msg_date": "Tue, 21 Aug 2001 19:45:20 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: List response time..."
},
{
"msg_contents": "\nHuh? Two different machines altogether ... but, I do have work to do once\nthe new server goes online ...\n\n> nslookup postgresql.org\nServer: localhost.hub.org\nAddress: 127.0.0.1\n\nName: postgresql.org\nAddress: 216.126.84.28\n\n> nslookup webmail.postgresql.org\nServer: localhost.hub.org\nAddress: 127.0.0.1\n\nName: mail.postgresql.org\nAddress: 216.126.85.28\nAliases: webmail.postgresql.org\n\n\nOn Tue, 21 Aug 2001, Tom Lane wrote:\n\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > The best thing to do is simply to expect propagation delay.\n>\n> Actually, I just sent a gripe off to Marc about this. I've been\n> noticing large and variable propagation delay for a few months now,\n> but I just today realized that the problem is entirely local to hub.org.\n> For example, look at the headers on your message:\n>\n> Received: from postgresql.org (webmail.postgresql.org [216.126.85.28])\n> \tby sss.pgh.pa.us (8.11.4/8.11.4) with ESMTP id f7LJKpY10196\n> \tfor <tgl@sss.pgh.pa.us>; Tue, 21 Aug 2001 15:20:51 -0400 (EDT)\n> Received: from postgresql.org.org (webmail.postgresql.org [216.126.85.28])\n> \tby postgresql.org (8.11.3/8.11.4) with SMTP id f7LJKpP46374;\n> \tTue, 21 Aug 2001 15:20:52 -0400 (EDT)\n> \t(envelope-from pgsql-hackers-owner+M12441@postgresql.org)\n> Received: from www.wgcr.org (www.wgcr.org [206.74.232.194])\n> \tby postgresql.org (8.11.3/8.11.4) with ESMTP id f7LHxnP15711\n> \tfor <pgsql-hackers@postgresql.org>; Tue, 21 Aug 2001 13:59:49 -0400 (EDT)\n> \t(envelope-from lamar.owen@wgcr.org)\n> Received: from lowen.wgcr.org (IDENT:lowen@[10.1.2.3])\n> \tby www.wgcr.org (8.9.3/8.9.3/WGCR) with SMTP id NAA25357;\n> \tTue, 21 Aug 2001 13:59:40 -0400\n>\n> All the delay seems to be in transferring the message from\n> postgresql.org to webmail.postgresql.org ... which are the same\n> machine, or at least the same IP address. What's up with that?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n>\n\n",
"msg_date": "Tue, 21 Aug 2001 19:48:11 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: List response time... "
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> Huh? Two different machines altogether ...\n\nHmm. Maybe the problem is this:\n\n$ nslookup -q=mx postgresql.org\nServer: localhost\nAddress: 127.0.0.1\n\nNon-authoritative answer:\npostgresql.org preference = 0, mail exchanger = mail.postgresql.org\npostgresql.org preference = 20, mail exchanger = mail1.hub.org\npostgresql.org preference = 20, mail exchanger = mailserv.hub.org\n\nAuthoritative answers can be found from:\npostgresql.org nameserver = NS.TRENDS.CA\npostgresql.org nameserver = NS.hub.org\n\nmail.postgresql.org internet address = 216.126.85.28\nmail1.hub.org internet address = 216.126.85.1\nmailserv.hub.org internet address = 216.126.84.253\nNS.TRENDS.CA internet address = 209.47.148.2\nNS.hub.org internet address = 216.126.84.1\n\n From out here, the primary mail acceptor for 'postgresql.org' shows\nas mail.postgresql.org = 216.126.85.28. Should we be preferring\n216.126.84.28 instead? It sure looks like there's an unnecessary\nhop happening inside hub.org.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Aug 2001 19:58:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: List response time... "
},
{
"msg_contents": "On Tue, 21 Aug 2001, Marc G. Fournier wrote:\n\n> On Tue, 21 Aug 2001, Lamar Owen wrote:\n>\n> > On Tuesday 21 August 2001 12:59, Serguei Mokhov wrote:\n> > > Looking at my message about the bug webpage and\n> > > some other posts, I see that it was delayed for\n> > > about 2h and a half. Some of the post were\n> > > delayed for days... Why is that? Looks like\n> > > the list has problems of some sort which cause\n> > > these irregular delays.\n> >\n> > Mailing lists don't scale well to large numbers of subscribers. I see this\n> > delay constantly,on multiple lists. The bigger the list gets, the slower the\n> > list gets (and the more loaded the server gets, right Marc? :-)). Newsgroups\n>\n> Actually, the 'multi-day' delay is generally related to posts from ppl\n> that aren't subscribed to the lists that I have to approve manually ...\n>\n> .. as far as server load is concerned, causing several hour delay, we're\n> just about to put online a dual processor server that has been donated to\n> the project ... its first task is going to be mailing list distribution\n> and a open news server for the newsgroups.\n\nCan I put qmail on it first?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Aug 2001 20:59:16 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: List response time..."
},
{
"msg_contents": "\nNope, but thanks for the offer ;)\n\nOn Tue, 21 Aug 2001, Vince Vielhaber wrote:\n\n> On Tue, 21 Aug 2001, Marc G. Fournier wrote:\n>\n> > On Tue, 21 Aug 2001, Lamar Owen wrote:\n> >\n> > > On Tuesday 21 August 2001 12:59, Serguei Mokhov wrote:\n> > > > Looking at my message about the bug webpage and\n> > > > some other posts, I see that it was delayed for\n> > > > about 2h and a half. Some of the post were\n> > > > delayed for days... Why is that? Looks like\n> > > > the list has problems of some sort which cause\n> > > > these irregular delays.\n> > >\n> > > Mailing lists don't scale well to large numbers of subscribers. I see this\n> > > delay constantly,on multiple lists. The bigger the list gets, the slower the\n> > > list gets (and the more loaded the server gets, right Marc? :-)). Newsgroups\n> >\n> > Actually, the 'multi-day' delay is generally related to posts from ppl\n> > that aren't subscribed to the lists that I have to approve manually ...\n> >\n> > .. as far as server load is concerned, causing several hour delay, we're\n> > just about to put online a dual processor server that has been donated to\n> > the project ... its first task is going to be mailing list distribution\n> > and a open news server for the newsgroups.\n>\n> Can I put qmail on it first?\n>\n> Vince.\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n>\n>\n>\n>\n\n",
"msg_date": "Tue, 21 Aug 2001 21:13:41 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: List response time..."
},
{
"msg_contents": "On Tue, 21 Aug 2001, Marc G. Fournier wrote:\n\n>\n> Nope, but thanks for the offer ;)\n\nPleeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeze?????\n\nYou won't be sorry or disappointed!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n\n\n\n>\n> On Tue, 21 Aug 2001, Vince Vielhaber wrote:\n>\n> > On Tue, 21 Aug 2001, Marc G. Fournier wrote:\n> >\n> > > On Tue, 21 Aug 2001, Lamar Owen wrote:\n> > >\n> > > > On Tuesday 21 August 2001 12:59, Serguei Mokhov wrote:\n> > > > > Looking at my message about the bug webpage and\n> > > > > some other posts, I see that it was delayed for\n> > > > > about 2h and a half. Some of the post were\n> > > > > delayed for days... Why is that? Looks like\n> > > > > the list has problems of some sort which cause\n> > > > > these irregular delays.\n> > > >\n> > > > Mailing lists don't scale well to large numbers of subscribers. I see this\n> > > > delay constantly,on multiple lists. The bigger the list gets, the slower the\n> > > > list gets (and the more loaded the server gets, right Marc? :-)). Newsgroups\n> > >\n> > > Actually, the 'multi-day' delay is generally related to posts from ppl\n> > > that aren't subscribed to the lists that I have to approve manually ...\n> > >\n> > > .. as far as server load is concerned, causing several hour delay, we're\n> > > just about to put online a dual processor server that has been donated to\n> > > the project ... its first task is going to be mailing list distribution\n> > > and a open news server for the newsgroups.\n> >\n> > Can I put qmail on it first?\n> >\n> > Vince.\n> > --\n> > ==========================================================================\n> > Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> > 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> > Online Campground Directory http://www.camping-usa.com\n> > Online Giftshop Superstore http://www.cloudninegifts.com\n> > ==========================================================================\n> >\n> >\n> >\n> >\n>\n>\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Aug 2001 21:15:00 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: List response time..."
},
{
"msg_contents": "Ian Lance Taylor <ian@airs.com> writes:\n\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> \n> > Mailing lists don't scale well to large numbers of subscribers. I see this \n> > delay constantly,on multiple lists. The bigger the list gets, the slower the \n> > list gets (and the more loaded the server gets, right Marc? :-)).\n> \n> Note that the postgresql.org mail server is still running sendmail.\n> In my personal experience with sources.redhat.com, qmail is a much\n> better choice to handle large mailing lists. When we switched from\n> sendmail to qmail, mailing list delays dropped from hours, or\n> sometimes even days, to seconds.\n\nThe MTA used for various redhat.com mailing lists is postfix (and\nmailman as listmanager)\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "21 Aug 2001 21:55:22 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Re: List response time..."
},
{
"msg_contents": "\nIf it was a sendmail issue, by all means, but it isn't so no :)\n\nOn Tue, 21 Aug 2001, Vince Vielhaber wrote:\n\n> On Tue, 21 Aug 2001, Marc G. Fournier wrote:\n>\n> >\n> > Nope, but thanks for the offer ;)\n>\n> Pleeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeze?????\n>\n> You won't be sorry or disappointed!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n>\n>\n>\n> >\n> > On Tue, 21 Aug 2001, Vince Vielhaber wrote:\n> >\n> > > On Tue, 21 Aug 2001, Marc G. Fournier wrote:\n> > >\n> > > > On Tue, 21 Aug 2001, Lamar Owen wrote:\n> > > >\n> > > > > On Tuesday 21 August 2001 12:59, Serguei Mokhov wrote:\n> > > > > > Looking at my message about the bug webpage and\n> > > > > > some other posts, I see that it was delayed for\n> > > > > > about 2h and a half. Some of the post were\n> > > > > > delayed for days... Why is that? Looks like\n> > > > > > the list has problems of some sort which cause\n> > > > > > these irregular delays.\n> > > > >\n> > > > > Mailing lists don't scale well to large numbers of subscribers. I see this\n> > > > > delay constantly,on multiple lists. The bigger the list gets, the slower the\n> > > > > list gets (and the more loaded the server gets, right Marc? :-)). Newsgroups\n> > > >\n> > > > Actually, the 'multi-day' delay is generally related to posts from ppl\n> > > > that aren't subscribed to the lists that I have to approve manually ...\n> > > >\n> > > > .. as far as server load is concerned, causing several hour delay, we're\n> > > > just about to put online a dual processor server that has been donated to\n> > > > the project ... its first task is going to be mailing list distribution\n> > > > and a open news server for the newsgroups.\n> > >\n> > > Can I put qmail on it first?\n> > >\n> > > Vince.\n> > > --\n> > > ==========================================================================\n> > > Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> > > 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> > > Online Campground Directory http://www.camping-usa.com\n> > > Online Giftshop Superstore http://www.cloudninegifts.com\n> > > ==========================================================================\n> > >\n> > >\n> > >\n> > >\n> >\n> >\n>\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n>\n>\n>\n>\n\n",
"msg_date": "Tue, 21 Aug 2001 22:14:27 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: List response time..."
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n\n> If it was a sendmail issue, by all means, but it isn't so no :)\n\nBoth qmail and postfix radically outperform sendmail for large mailing\nlist delivery on identical hardware. It seems strange to me to say\nthat there is no sendmail issue when sendmail itself is the issue.\nThe queuing structure sendmail uses is simply wrong when a single\nmessage has many recipients. I've run moderately serious (1000 users,\ndozens of messages per day) mailing lists using both sendmail and\nqmail, and there really is no comparison.\n\nIan\n",
"msg_date": "21 Aug 2001 20:35:02 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: List response time..."
},
{
"msg_contents": "Marc wrote:\n> Actually, the 'multi-day' delay is generally related to posts from ppl\n> that aren't subscribed to the lists that I have to approve manually ...\n\nIs there a quick(er) way to 'subscribe, set nomail' on all the mailing lists\nthat are mirrored to news.postgresql.org?\n\nI prefer to read/post through the news server and I've had to subscribe\nmanually to most lists.\n\nCheers,\n\nColin\n\n\n",
"msg_date": "Wed, 22 Aug 2001 17:05:44 +0200",
"msg_from": "\"Colin 't Hart\" <cthart@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: List response time..."
},
{
"msg_contents": "On Tuesday 21 August 2001 19:45, Marc G. Fournier wrote:\n> Actually, the 'multi-day' delay is generally related to posts from ppl\n> that aren't subscribed to the lists that I have to approve manually ...\n\nI have been getting delayed duplicates from people (ie, Tom Lane) addressed \nto only the hackers list (which I know he's subscribed to). Up to a week \nafter reading it once already.\n\nMy mail spool filesystem has severl GB of free space, too. Unless my \nsendmail installation is doing funny things. :-)\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 22 Aug 2001 11:22:09 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: List response time..."
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> On Tuesday 21 August 2001 19:45, Marc G. Fournier wrote:\n>> Actually, the 'multi-day' delay is generally related to posts from ppl\n>> that aren't subscribed to the lists that I have to approve manually ...\n\n> I have been getting delayed duplicates from people (ie, Tom Lane) addressed \n> to only the hackers list (which I know he's subscribed to).\n\nThe approval delay isn't only for people who are not subscribed. Marc\nalso has a number of filters in there that are intended to shunt off\nadministrivia (ie, people who send uns*bscribe commands to the whole\nlist). This is not a bad idea, but unfortunately, his filter patterns\nare WAY too loose IMHO. I've had posts delayed because of references\nto c*ncel, s*b-SELECT, and some other words that I could hardly even\nsee the connection to administrivia requests.\n\nHoping that this post gets through without being delayed ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Aug 2001 12:25:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: List response time... "
},
{
"msg_contents": "> > Actually, the 'multi-day' delay is generally related to posts from ppl\n> > that aren't subscribed to the lists that I have to approve manually ...\n>\n> I have been getting delayed duplicates from people (ie, Tom Lane)\naddressed\n> to only the hackers list (which I know he's subscribed to). Up to a week\n> after reading it once already.\n>\n\nI can confirm this also. I have seen delayed (up to several days later)\nduplicates of emails I have already received.\n\n",
"msg_date": "Wed, 22 Aug 2001 13:35:16 -0500",
"msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>",
"msg_from_op": false,
"msg_subject": "Re: List response time..."
},
{
"msg_contents": "David Ford wrote:\n> \n> Ian Lance Taylor wrote:\n> \n> >>Mailing lists don't scale well to large numbers of subscribers. I see this\n> >>delay constantly,on multiple lists. The bigger the list gets, the slower the\n> >>list gets (and the more loaded the server gets, right Marc? :-)).\n> >>\n> >\n> >Note that the postgresql.org mail server is still running sendmail.\n> >In my personal experience with sources.redhat.com, qmail is a much\n> >better choice to handle large mailing lists. When we switched from\n> >sendmail to qmail, mailing list delays dropped from hours, or\n> >sometimes even days, to seconds.\n> >\n> \n> It's all in the configuration. I slam mails around dozens of machines\n> in seconds using sendmail and I process a lot of mail.\n\nNot only configuration. A friend of mine upgraded a computer that was\nunable \nto handle the mail feed from P200 to PIII 800 going from sendmail to\nqmail at \nthe same time. The load average dropped from \"allways very busy\" to\n0.02. \n\nIt is possible that it is mainly from better conf and faster processor\nbut then \nI'd claim that qmail is easier to configure for big load.\n\n----------------\nHannu\n",
"msg_date": "Fri, 24 Aug 2001 08:47:51 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Re: List response time..."
},
{
"msg_contents": ">\n>\n>All the delay seems to be in transferring the message from\n>postgresql.org to webmail.postgresql.org ... which are the same\n>machine, or at least the same IP address. What's up with that?\n>\n\nLooks like sendmail? Change your queue runs to be more aggressive. I \nhave an mc file on http://blue-labs.org/clue/bluelabs.mc that has some \naggressive queue definitions.\n\nDavid\n\n\n\n",
"msg_date": "Fri, 24 Aug 2001 02:06:49 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: List response time..."
},
{
"msg_contents": "Ian Lance Taylor wrote:\n\n>>Mailing lists don't scale well to large numbers of subscribers. I see this \n>>delay constantly,on multiple lists. The bigger the list gets, the slower the \n>>list gets (and the more loaded the server gets, right Marc? :-)).\n>>\n>\n>Note that the postgresql.org mail server is still running sendmail.\n>In my personal experience with sources.redhat.com, qmail is a much\n>better choice to handle large mailing lists. When we switched from\n>sendmail to qmail, mailing list delays dropped from hours, or\n>sometimes even days, to seconds.\n>\n\nIt's all in the configuration. I slam mails around dozens of machines \nin seconds using sendmail and I process a lot of mail.\n\nDavid\n\n\n",
"msg_date": "Fri, 24 Aug 2001 02:07:57 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: List response time..."
},
{
"msg_contents": ">\n>\n>You are seeing sendmail's poorly designed queuing behaviour in action.\n>sendmail limits itself by outgoing messages, rather than outgoing\n>deliveries. This causes one slow delivery to hold up many fast\n>deliveries.\n>\n\nAgain, all in the configuration....rinse, repeat.\n\nSimply change your queue priority.\n\nDavid\n\n\n",
"msg_date": "Fri, 24 Aug 2001 02:09:03 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: List response time..."
},
{
"msg_contents": "Vince Vielhaber wrote:\n\n>ooooooooooooooooooooooohhhhhhhhhhhhhhhhhhhhhh.... I've been raggin on\n>Marc on that one for well over a year, maybe two.. I started using\n>qmail when it was still in .7something beta and never looked back. The\n>folks at Security Focus have moved all of the lists to ezmlm (part of\n>qmail) and have had nothing but success... But don't tell Marc.\n>\n\nAnd ezlm is -ever- so quick to tell you your mail is bouncing when your \nlink goes down for a few hours or is sporadic. I know of several others \nthat simply send you the emails that are in queue.\n\n-d\n\n\n",
"msg_date": "Fri, 24 Aug 2001 02:13:52 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: List response time..."
},
{
"msg_contents": "Ian Lance Taylor wrote:\n\n>Both qmail and postfix radically outperform sendmail for large mailing\n>list delivery on identical hardware. It seems strange to me to say\n>that there is no sendmail issue when sendmail itself is the issue.\n>The queuing structure sendmail uses is simply wrong when a single\n>message has many recipients. I've run moderately serious (1000 users,\n>dozens of messages per day) mailing lists using both sendmail and\n>qmail, and there really is no comparison.\n>\n\nIan, please\n\nIt's in the configuration. I run much more than the above and have no \nissues at all.\n\n-d\n\n\n",
"msg_date": "Fri, 24 Aug 2001 02:18:35 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: List response time..."
},
{
"msg_contents": "> It's in the configuration. I run much more than the above and have no \n> issues at all.\n\nYeah, some people shouldn't have root even if they own the machine.\n\n",
"msg_date": "Fri, 24 Aug 2001 17:57:44 +1000 (EST)",
"msg_from": "speedboy <speedboy@nomicrosoft.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: List response time..."
},
{
"msg_contents": "On Fri, 24 Aug 2001, David Ford wrote:\n\n> It's all in the configuration. I slam mails around dozens of machines\n> in seconds using sendmail and I process a lot of mail.\n\nSo have you patched for the latest of the many sendmail root exploits?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 24 Aug 2001 07:17:48 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: List response time..."
},
{
"msg_contents": "Vince Vielhaber wrote:\n\n>On Fri, 24 Aug 2001, David Ford wrote:\n>\n>>It's all in the configuration. I slam mails around dozens of machines\n>>in seconds using sendmail and I process a lot of mail.\n>>\n>\n>So have you patched for the latest of the many sendmail root exploits?\n>\n>Vince.\n>\n\nI keep my systems up to latest and greatest that passes the lab. \n Currently 8.12.0b19. Since I keep things up to date and read the \ndocumentation... I tend to avoid most security problems. Do keep in \nmind that most of the latest issues are symbiotic problems due to issues \nfound in LK capabilities.\n\n-d\n\n\n",
"msg_date": "Fri, 24 Aug 2001 13:31:30 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: [OT] Re: List response time..."
},
{
"msg_contents": "speedboy <speedboy@nomicrosoft.org> writes:\n\n> > It's in the configuration. I run much more than the above and have no \n> > issues at all.\n> \n> Yeah, some people shouldn't have root even if they own the machine.\n\nSince I was the original poster I'm going to take minor umbrage. I've\nbeen writing and distributing free software for over ten years, and my\nwork can be found in every Linux and *BSD distribution. What have you\ndone for the world lately?\n\nI also do know how to configure sendmail, another thing I did for over\nten years until I switched to qmail in 1998. I will have to\nrespectfully disagree with David Ford, with the proviso that it is\ncertainly possible that recent sendmail releases have better queuing\nbehaviour. David, have you ever tried qmail or postfix? Why not?\n\nIan\n",
"msg_date": "24 Aug 2001 15:22:47 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: List response time..."
},
{
"msg_contents": "David Ford <david@blue-labs.org> writes:\n\n> >ooooooooooooooooooooooohhhhhhhhhhhhhhhhhhhhhh.... I've been raggin on\n> >Marc on that one for well over a year, maybe two.. I started using\n> >qmail when it was still in .7something beta and never looked back. The\n> >folks at Security Focus have moved all of the lists to ezmlm (part of\n> >qmail) and have had nothing but success... But don't tell Marc.\n> >\n> \n> And ezlm is -ever- so quick to tell you your mail is bouncing when\n> your link goes down for a few hours or is sporadic. I know of several\n> others that simply send you the emails that are in queue.\n\nI don't know what you are referring to here. ezmlm simply handles\nbounces generated by the MTA. qmail does not bounce mail merely\nbecause a link goes down for a few hours or is sporadic.\n\nThere is an issue here which you may be referring to: vanilla ezmlm\ndoes not handle temporary failure DSN notices very well--it treats\nthem as bounces. This is easily fixable, and in fact I believe that\nezmlm+idx (which is what most people use) does handle them correctly\nby default.\n\nIan\n",
"msg_date": "24 Aug 2001 15:26:12 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: List response time..."
},
{
"msg_contents": "Hey guys,\n\nCan you move this thread elsewhere?\n\nIt's EXTREMELY off topic now.\n\n:(\n\nRegards and best wishes,\n\nJustin Clift\n\n\nIan Lance Taylor wrote:\n> \n> David Ford <david@blue-labs.org> writes:\n> \n> > >ooooooooooooooooooooooohhhhhhhhhhhhhhhhhhhhhh.... I've been raggin on\n> > >Marc on that one for well over a year, maybe two.. I started using\n> > >qmail when it was still in .7something beta and never looked back. The\n> > >folks at Security Focus have moved all of the lists to ezmlm (part of\n> > >qmail) and have had nothing but success... But don't tell Marc.\n> > >\n> >\n> > And ezlm is -ever- so quick to tell you your mail is bouncing when\n> > your link goes down for a few hours or is sporadic. I know of several\n> > others that simply send you the emails that are in queue.\n> \n> I don't know what you are referring to here. ezmlm simply handles\n> bounces generated by the MTA. qmail does not bounce mail merely\n> because a link goes down for a few hours or is sporadic.\n> \n> There is an issue here which you may be referring to: vanilla ezmlm\n> does not handle temporary failure DSN notices very well--it treats\n> them as bounces. This is easily fixable, and in fact I believe that\n> ezmlm+idx (which is what most people use) does handle them correctly\n> by default.\n> \n> Ian\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Sun, 26 Aug 2001 01:02:29 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: List response time..."
}
] |
[
{
"msg_contents": "> Regarding the licencing of the code, I always release my code\n> under GPL, which is the licence I prefer, but my code in the\n> backend is obviously released under the original postgres\n> licence. Since the module is loaded dynamically and not linked\n> into the backend I don't see a problem here.\n\nThe problem is how to use user-locks in commercial projects.\nSome loadable interface functions are required to use in-backend\nuser lock code, but interface is so simple - if one would write\nnew functions they would look the same as yours covered by GPL.\n\n> If the licence becomes a problem I can easily change it, but\n> I prefer the GPL if possible.\n\nActually I don't see why to cover your contrib module by GPL.\nNot so much IP (intellectual property) there. Real new things\nwhich make new feature possible are in lock manager.\n\nVadim\n",
"msg_date": "Tue, 21 Aug 2001 10:12:20 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: User locks code"
}
] |
[
{
"msg_contents": "\n> > I would object even if there's such a way.\n> > People in Japan have hardly noticed that the strange\n> > behabior is due to the strange locale(LC_COLLATE).\n> \n> I don't think we should design our systems in a way that\ninconveniences\n> many users because some users are using broken operating systems. If\n> Japanese users have not realized yet that the locale support they are\n> using is broken, then it's not the right solution to disable it in\n> PostgreSQL by default. In that case the problem would just persist\nfor\n> the system as a whole. The right solution is for them to turn off\nlocale\n> support in their operating system, the way it's supposed to be done.\n\nI do not agree with your above statement, I would also want a way to\nturn \nit off in PostreSQL alone and leave the OS and rest as is (without a\nneed \nto worry about). (Our admins use C, En_US, De_DE, De_AT here, but no\nlocale \nsupport in the db)\n\nImho we also need to keep in mind that other DB's don't create locale\naware\nchar columns by default eighter (they have nchar or some other extended \ncreate table syntax).\n\nAndreas\n",
"msg_date": "Tue, 21 Aug 2001 19:50:34 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "RE: Locale by default?"
}
] |
[
{
"msg_contents": "> > I don't see problem here - just a few bytes in shmem for\n> > key. Auxiliary table would keep refcounters for keys.\n> \n> I think that running out of shmem *would* be a problem for such a\n> facility. We have a hard enough time now sizing the lock table for\n\nAuxiliary table would have fixed size and so no new keys would be\nadded if no space. I don't see problem with default 8Kb aux table,\ndo you?\n\n> system locks, even though they use fixed-size keys and the system as\n> a whole is designed to ensure that not too many locks will be held\n> simultaneously. (For example, SELECT FOR UPDATE doesn't try to use\n> per-tuple locks.) Earlier in this thread, someone proposed using\n> user locks as a substitute for SELECT FOR UPDATE. I can guarantee\n> you that that someone will run out of shared memory before long,\n> if the userlock table resides in shared memory.\n\nHow is the proposed \"key locking\" different from user locks we\nhave right now? Anyone can try to acquire many-many user locks.\n\nVadim\n",
"msg_date": "Tue, 21 Aug 2001 10:54:45 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: RE: User locks code "
}
] |
[
{
"msg_contents": "\n> Face it, everything has locale support these day. PostgreSQL is one\nof\n> the few packages that even has it as an option to turn it off. Users\nof\n> binary packages of PostgreSQL are all invariably faced with locale\n> features. So it's not like sudden unasked-for locale support is going\nto\n> be a major shock.\n\nWhat makes you so opposed to a GUC for disabling locale support ?\n\nAndreas\n",
"msg_date": "Tue, 21 Aug 2001 19:58:47 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "RE: Locale by default?"
},
{
"msg_contents": "Zeugswetter Andreas SB SD writes:\n\n> What makes you so opposed to a GUC for disabling locale support ?\n\nNothing. It may in fact be the best solution.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 21 Aug 2001 23:14:06 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "RE: Locale by default?"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Zeugswetter Andreas SB SD writes:\n>> What makes you so opposed to a GUC for disabling locale support ?\n\n> Nothing. It may in fact be the best solution.\n\nAs long as locale has to be an initdb-time setting, a GUC var won't\nhelp much.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Aug 2001 19:46:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Locale by default? "
}
] |
[
{
"msg_contents": "\n> > If your solution is short-lived, your change would be not\n> > only useless but also harmful. So I expect locale-aware\n> > people to confirm that we are in the right direction.\n> \n> I am a bit confused here. We have tinkered with LIKE indexing at\nleast a\n> year. Now that a solution is found that *works*, it is claimed that\nit is\n> harmful because LIKE was doing the wrong thing in the first place.\nOTOH,\n> I have not seen anyone independently claim that LIKE is wrong, nor do\nI\n> see anyone proposing to actually change it.\n\nBecause we configure --without-locale ? How should we see or object to \nwrong behavior in a part of the software we don't need or use ?\n\nAndreas\n",
"msg_date": "Tue, 21 Aug 2001 20:03:22 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "RE: Progress report on locale safe LIKE indexing"
},
{
"msg_contents": "Zeugswetter Andreas SB SD writes:\n\n> > I am a bit confused here. We have tinkered with LIKE indexing at\n> least a\n> > year. Now that a solution is found that *works*, it is claimed that\n> it is\n> > harmful because LIKE was doing the wrong thing in the first place.\n> OTOH,\n> > I have not seen anyone independently claim that LIKE is wrong, nor do\n> I\n> > see anyone proposing to actually change it.\n>\n> Because we configure --without-locale ? How should we see or object to\n> wrong behavior in a part of the software we don't need or use ?\n\nWrong answer. Read my second sentence again and compare it to your first\nsentence.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 21 Aug 2001 20:16:58 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "RE: Progress report on locale safe LIKE indexing"
}
] |
[
{
"msg_contents": "\nI need to manage a large number of various databases, some assigned to\ndifferent people, others only internal to the company I work for. I would\nlike to be able to update the pg_hba.conf file from inside a psql console\nconnection but only if I an connected to template1 AND the main postgres\nuser. I don't want to have to connect to the database machine via a\ndifferent means because it's a step that can be forgotten very easily.\n\nIf I had the option to be able to create a new pg_hba.conf entry then I\ncould remember to do it right after I create a new database, and a user\nfor it.\n\n\n Chris Bowlby,\n -----------------------------------------------------\n Web Developer @ Hub.org.\n excalibur@hub.org\n www.hub.org\n 1-902-542-3657\n -----------------------------------------------------\n\n",
"msg_date": "Tue, 21 Aug 2001 14:20:11 -0400 (EDT)",
"msg_from": "Chris Bowlby <excalibur@hub.org>",
"msg_from_op": true,
"msg_subject": "request, if not already available."
}
] |
[
{
"msg_contents": "Hi;\n\nI am reverse engineering a PostgreSQL database by querying catalog\ntables. I have run into a problem where I can not determine the exact\ninfo used in i.e. the CREATE TABLE statement. For example; how to\ndetermine the exact precision/length and scale used in a NUMERIC(p,s)\ncolumn def.\n\nLooking at the PostgreSQL ODBC driver I see that it does some funky\nstuff such as reporting VARCHAR(maxlen) instead of i.e. VARCHAR(50) or\nwhatever the column def was. So it appears that the author had similar\nproblems. The result is that the ODBC driver does not appear to be\nabsolutely accurate.\n\nIs this information available somewhere in the catalog tables?\n\nPeter\n\n\n\n",
"msg_date": "Tue, 21 Aug 2001 11:49:54 -0700",
"msg_from": "Peter Harvey <pharvey@codebydesign.com>",
"msg_from_op": true,
"msg_subject": "query column def"
},
{
"msg_contents": "Peter Harvey <pharvey@codebydesign.com> writes:\n> Is this information availible somewhere in the catalog tables?\n\nYes, in the atttypmod column of pg_attribute.\n\nI'd recommend looking at psql's \\d commands (describe.c), or at\npg_dump, to see the approved way to retrieve catalog info. Those\nare kept up to date pretty faithfully, whereas other interfaces\naren't necessarily. (Feel free to submit a patch to fix ODBC...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Aug 2001 15:22:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: query column def "
}
] |
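Tom's answer points at pg_attribute.atttypmod, which is an encoded value rather than the literal precision. As a rough sketch of how a reverse-engineering tool might unpack it (the class is hypothetical; the encoding assumed here — a 4-byte VARHDRSZ offset, with NUMERIC packing precision and scale into the high and low 16 bits — is how PostgreSQL stores these two type modifiers):

```java
// Hypothetical helper: decode pg_attribute.atttypmod values for
// NUMERIC(p,s) and VARCHAR(n) columns. The raw value comes from e.g.
//   SELECT atttypmod FROM pg_attribute
//   WHERE attrelid = 'mytable'::regclass AND attname = 'mycol';
public class AttTypmodDecoder {
    static final int VARHDRSZ = 4;   // variable-length header size added to the modifier

    // NUMERIC(p,s) stores atttypmod = ((p << 16) | s) + VARHDRSZ
    public static int numericPrecision(int atttypmod) {
        return ((atttypmod - VARHDRSZ) >> 16) & 0xFFFF;
    }

    public static int numericScale(int atttypmod) {
        return (atttypmod - VARHDRSZ) & 0xFFFF;
    }

    // VARCHAR(n) stores atttypmod = n + VARHDRSZ; -1 means "no modifier given"
    public static int varcharLength(int atttypmod) {
        return atttypmod == -1 ? -1 : atttypmod - VARHDRSZ;
    }

    public static void main(String[] args) {
        // NUMERIC(10,2) -> atttypmod = (10 << 16 | 2) + 4 = 655366
        System.out.println("NUMERIC(" + numericPrecision(655366) + ","
                + numericScale(655366) + ")");            // NUMERIC(10,2)
        // VARCHAR(50) -> atttypmod = 50 + 4 = 54
        System.out.println("VARCHAR(" + varcharLength(54) + ")"); // VARCHAR(50)
    }
}
```

This is what psql's \d and pg_dump do internally when they print a column's declared type, which is why Tom recommends cribbing from describe.c rather than re-deriving the format.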
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nFolks-\n\n I'm looking into some modifications for the Castor project on how we get\nidenties from inserted objects into the database. Specifically, if we have\nused the SERIAL type to link in a sequence number during row insertion, it would\nbe great to get that sequence number when we do the insert. Both Oracle and\nmssql have something to provide this. Getting the OID would be second-best\nsolution for us.\n\n Now, I understand that in the Statement class, we have getInsertedOID() in the\ntable. However, the problem we run into is that this isn't accessiable if we\nuse something like poolman to provide database pooling of connections. (You \nget the poolMan Statement object which is wraps the Statement classes of the\ndriver.) \n\n So, what I'm looking for is some of the following ways to get the primary key\nback, or the OID if nothing else...\n\n 1) Being able to use the RETURNING clause in prepared statements, like this \n \"INSERT INTO tableName (key1,...) \n VALUES (value1,...)\n RETURNING primKeyName INTO ?\"\n Which is what Oracle provides.\n\n 2) Working like Sybase and call \"select @@OID\" on the next call, which would\n be trapped by the statment object to return a resultset with the oid of\n the last inserted oid.\n\n 3) or, someother way that where one doesn't need direct access to method\n calls to get 'getInsertedOID()', but indirect ones.\n\nThoughts?\n\n\nVirtually, \nNed Wolpert <ned.wolpert@knowledgenet.com>\n\nD08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7gr83iysnOdCML0URAj+QAJ91RZXOoEA+xnu68YhIA4euNfIWOgCfcG1/\n5L5ATkdL/wPwTnbQy0NI2jI=\n=l/7B\n-----END PGP SIGNATURE-----\n",
"msg_date": "Tue, 21 Aug 2001 13:06:15 -0700 (MST)",
"msg_from": "Ned Wolpert <ned.wolpert@knowledgenet.com>",
"msg_from_op": true,
"msg_subject": "JDBC changes for 7.2... some questions..."
},
{
"msg_contents": "Ned Wolpert <ned.wolpert@knowledgenet.com> writes:\n> 1) Being able to use the RETURNING clause in prepared statements, like this\n> \"INSERT INTO tableName (key1,...) \n> VALUES (value1,...)\n> RETURNING primKeyName INTO ?\"\n> Which is what Oracle provides.\n\nINSERT ... RETURNING was discussed recently, and I think people agreed\nit's a good idea, but it got hung up on some unresolved issues about how\nit should interact with ON INSERT rules for views. Search the pghackers\nmailing list archives for details. At this point I think it's probably\ntoo late to consider it for 7.2, but I'm still open to doing it in 7.3\nif we can come up with a bulletproof spec.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Aug 2001 16:50:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: JDBC changes for 7.2... some questions... "
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nWhat about the 'select @@last_oid' to make the getInsertedOID() call available\neven when the driver is wrapped by a pooling manager?\n\nHow do people feel about this? (I would be happy to contribute to the codebase\nfor this one.) I understand the RETURNING clause is more work... and I would\nlike to help out on that for 7.3.\n\n\nOn 21-Aug-2001 Tom Lane wrote:\n> Ned Wolpert <ned.wolpert@knowledgenet.com> writes:\n>> 1) Being able to use the RETURNING clause in prepared statements, like\n>> this\n>> \"INSERT INTO tableName (key1,...) \n>> VALUES (value1,...)\n>> RETURNING primKeyName INTO ?\"\n>> Which is what Oracle provides.\n> \n> INSERT ... RETURNING was discussed recently, and I think people agreed\n> it's a good idea, but it got hung up on some unresolved issues about how\n> it should interact with ON INSERT rules for views. Search the pghackers\n> mailing list archives for details. At this point I think it's probably\n> too late to consider it for 7.2, but I'm still open to doing it in 7.3\n> if we can come up with a bulletproof spec.\n> \n> regards, tom lane\n\n\nVirtually, \nNed Wolpert <ned.wolpert@knowledgenet.com>\n\nD08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7gtIoiysnOdCML0URAmgIAJwJVf2BGhFq88bXHY3yni9qzGohegCdHmPf\n7Xnb57gfiP2xoMC8x5mIhWU=\n=p5Bx\n-----END PGP SIGNATURE-----\n",
"msg_date": "Tue, 21 Aug 2001 14:27:04 -0700 (MST)",
"msg_from": "Ned Wolpert <ned.wolpert@knowledgenet.com>",
"msg_from_op": true,
"msg_subject": "Re: JDBC changes for 7.2... some questions..."
},
{
"msg_contents": "Ned Wolpert <ned.wolpert@knowledgenet.com> writes:\n> What about the 'select @@last_oid' to make the getInsertedOID() call\n> available even when the driver is wrapped by a pooling manager?\n\n> How do people feel about this?\n\nYech. At least, not with *that* syntax. @@ is a valid operator name\nin Postgres.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Aug 2001 17:31:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: JDBC changes for 7.2... some questions... "
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n> it should interact with ON INSERT rules for views. Search the pghackers\n> mailing list archives for details. At this point I think it's probably\n\nUgh... a quick search shows me asking this same thing in May, when the 'big\nTODO' list was being commented on. Now that I remember, I thought that the\nRETURNING clause was being implemented. I didn't realize there wasn't\nagreement on it.\n\nI need to pay more attention to what I write. :-)\n\n\nVirtually, \nNed Wolpert <ned.wolpert@knowledgenet.com>\n\nD08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7gtSBiysnOdCML0URAtXNAJ4+W3AgpEk5QZM5IKCFFMQ2tGP5UQCeM1Lx\nTXQw4pL4ew65B1iH+EfmZhk=\n=Ho0k\n-----END PGP SIGNATURE-----\n",
"msg_date": "Tue, 21 Aug 2001 14:37:05 -0700 (MST)",
"msg_from": "Ned Wolpert <ned.wolpert@knowledgenet.com>",
"msg_from_op": true,
"msg_subject": "Re: JDBC changes for 7.2... some questions..."
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\nOk, so you're not opposed to the idea then, just the syntax. Does anyone\noppose having this concept in the JDBC driver? And what syntax is acceptable?\nCould we just do \n'select getInsertedOID()'\nwhich would break people who have functions called getInsertedOID() of course...\n\n\n\nOn 21-Aug-2001 Tom Lane wrote:\n> Ned Wolpert <ned.wolpert@knowledgenet.com> writes:\n>> What about the 'select @@last_oid' to make the getInsertedOID() call\n>> available even when the driver is wrapped by a pooling manager?\n> \n>> How do people feel about this?\n> \n> Yech. At least, not with *that* syntax. @@ is a valid operator name\n> in Postgres.\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n\nVirtually, \nNed Wolpert <ned.wolpert@knowledgenet.com>\n\nD08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7gv5yiysnOdCML0URAq7qAJkBRhAcE9wctn7bUAv7UMwN3n9+nwCeJR4V\nymYTw8l3f9WU4V5idFsibAE=\n=UQ2M\n-----END PGP SIGNATURE-----\n",
"msg_date": "Tue, 21 Aug 2001 17:36:02 -0700 (MST)",
"msg_from": "Ned Wolpert <ned.wolpert@knowledgenet.com>",
"msg_from_op": true,
"msg_subject": "Re: JDBC changes for 7.2... some questions..."
},
{
"msg_contents": "I am assuming that this would be a new function in the server. \nTherefore this wouldn't be jdbc specific and would be available to all \nclient interfaces. I don't see what this really has to do with JDBC.\n\nthanks,\n--Barry\n\nNed Wolpert wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> \n> Ok, so you're not opposed to the idea then, just the syntax. Does anyone\n> oppose having this concept in the JDBC driver? And what syntax is acceptable?\n> Could we just do \n> 'select getInsertedOID()'\n> which would break people who have functions called getInsertedOID() of course...\n> \n> \n> \n> On 21-Aug-2001 Tom Lane wrote:\n> \n>>Ned Wolpert <ned.wolpert@knowledgenet.com> writes:\n>>\n>>>What about the 'select @@last_oid' to make the getInsertedOID() call\n>>>available even when the driver is wrapped by a pooling manager?\n>>>\n>>>How do people feel about this?\n>>>\n>>Yech. At least, not with *that* syntax. @@ is a valid operator name\n>>in Postgres.\n>>\n>> regards, tom lane\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 5: Have you checked our extensive FAQ?\n>>\n>>http://www.postgresql.org/users-lounge/docs/faq.html\n>>\n> \n> \n> Virtually, \n> Ned Wolpert <ned.wolpert@knowledgenet.com>\n> \n> D08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.0.6 (GNU/Linux)\n> Comment: For info see http://www.gnupg.org\n> \n> iD8DBQE7gv5yiysnOdCML0URAq7qAJkBRhAcE9wctn7bUAv7UMwN3n9+nwCeJR4V\n> ymYTw8l3f9WU4V5idFsibAE=\n> =UQ2M\n> -----END PGP SIGNATURE-----\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n\n\n",
"msg_date": "Wed, 22 Aug 2001 03:31:41 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: JDBC changes for 7.2... some questions..."
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nActually, it's not a new function on the server... I'm just trying to find a\nway to access the getInsertedOID() method in the statement object without\nhaving direct access to the statement object. (My use-case is when the JDBC\ndriver is wrapped in a pooling manager, like PoolMan, it wraps all classes.)\n\nI wanted to catch the line 'select @@last_oid' which would return a result set\nwith the single entry based on the results of the method call 'getInsertedOID'\nbut some people didn't like the syntax. So, I'm seeking comments to see if\nthere is a better syntax people like, as long as they don't mind the\nfunctionality. One option is the 'faking' of the function call on the server,\nwhich really isn't a good option in itself, but outside of my original 'catch'\nline, its all I got.\n\nWhat's your thoughts? Do you see the need for the functionality? Do you have\na solution that I need?\n\nOn 22-Aug-2001 Barry Lind wrote:\n> I am assuming that this would be a new function in the server. \n> Therefore this wouldn't be jdbc specific and would be available to all \n> client interfaces. I don't see what this really has to do with JDBC.\n> \n> thanks,\n> --Barry\n> \n> Ned Wolpert wrote:\n>> -----BEGIN PGP SIGNED MESSAGE-----\n>> Hash: SHA1\n>> \n>> \n>> Ok, so you're not opposed to the idea then, just the syntax. Does anyone\n>> oppose having this concept in the JDBC driver? And what syntax is\n>> acceptable?\n>> Could we just do \n>> 'select getInsertedOID()'\n>> which would break people who have functions called getInsertedOID() of\n>> course...\n>> \n>> \n>> \n>> On 21-Aug-2001 Tom Lane wrote:\n>> \n>>>Ned Wolpert <ned.wolpert@knowledgenet.com> writes:\n>>>\n>>>>What about the 'select @@last_oid' to make the getInsertedOID() call\n>>>>available even when the driver is wrapped by a pooling manager?\n>>>>\n>>>>How do people feel about this?\n>>>>\n>>>Yech. At least, not with *that* syntax. @@ is a valid operator name\n>>>in Postgres.\n>>>\n>>> regards, tom lane\n>>>\n>>>---------------------------(end of broadcast)---------------------------\n>>>TIP 5: Have you checked our extensive FAQ?\n>>>\n>>>http://www.postgresql.org/users-lounge/docs/faq.html\n>>>\n>> \n>> \n>> Virtually, \n>> Ned Wolpert <ned.wolpert@knowledgenet.com>\n>> \n>> D08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n>> -----BEGIN PGP SIGNATURE-----\n>> Version: GnuPG v1.0.6 (GNU/Linux)\n>> Comment: For info see http://www.gnupg.org\n>> \n>> iD8DBQE7gv5yiysnOdCML0URAq7qAJkBRhAcE9wctn7bUAv7UMwN3n9+nwCeJR4V\n>> ymYTw8l3f9WU4V5idFsibAE=\n>> =UQ2M\n>> -----END PGP SIGNATURE-----\n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 2: you can get off all lists at once with the unregister command\n>> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>> \n>> \n\n\nVirtually, \nNed Wolpert <ned.wolpert@knowledgenet.com>\n\nD08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7g+agiysnOdCML0URAvbvAJ9GO/spmwQYZessjk4IenhtPuguSwCdHRQN\nxH+tnGqKpmg/UOSnxOevek0=\n=pcr+\n-----END PGP SIGNATURE-----\n",
"msg_date": "Wed, 22 Aug 2001 10:06:40 -0700 (MST)",
"msg_from": "Ned Wolpert <ned.wolpert@knowledgenet.com>",
"msg_from_op": true,
"msg_subject": "Re: JDBC changes for 7.2... some questions..."
},
{
"msg_contents": ">\n> What's your thoughts? Do you see the need for the functionality? Do you have\n> a solution that I need?\n\nDefinitely need the functionality. It's one of the things holding up me\nporting an Informix system. Laziness is a bigger holdup of course - the\nInformix system is so bulletproof and I'm slowly re-writing all the 4GL in\nJava.\n\nFWIW, Informix returns the new SERIAL value through a structure\nSQLCA.SQLERRD[3] from memory. Not applicable to a PostgreSQL solution I'd\nsay.\n\n>\n> On 22-Aug-2001 Barry Lind wrote:\n> > I am assuming that this would be a new function in the server.\n> > Therefore this wouldn't be jdbc specific and would be available to all\n> > client interfaces. I don't see what this really has to do with JDBC.\n> >\n> > thanks,\n> > --Barry\n> >\n> > Ned Wolpert wrote:\n> >> -----BEGIN PGP SIGNED MESSAGE-----\n> >> Hash: SHA1\n> >>\n> >>\n> >> Ok, so you're not opposed to the idea then, just the syntax. Does anyone\n> >> oppose having this concept in the JDBC driver? And what syntax is\n> >> acceptable?\n> >> Could we just do\n> >> 'select getInsertedOID()'\n> >> which would break people who have functions called getInsertedOID() of\n> >> course...\n> >>\n> >>\n> >>\n> >> On 21-Aug-2001 Tom Lane wrote:\n> >>\n> >>>Ned Wolpert <ned.wolpert@knowledgenet.com> writes:\n> >>>\n> >>>>What about the 'select @@last_oid' to make the getInsertedOID() call\n> >>>>available even when the driver is wrapped by a pooling manager?\n> >>>>\n> >>>>How do people feel about this?\n> >>>>\n> >>>Yech. At least, not with *that* syntax. @@ is a valid operator name\n> >>>in Postgres.\n> >>>\n> >>> regards, tom lane\n> >>>\n> >>>---------------------------(end of broadcast)---------------------------\n> >>>TIP 5: Have you checked our extensive FAQ?\n> >>>\n> >>>http://www.postgresql.org/users-lounge/docs/faq.html\n> >>>\n> >>\n> >>\n> >> Virtually,\n> >> Ned Wolpert <ned.wolpert@knowledgenet.com>\n> >>\n> >> D08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45\n> >> -----BEGIN PGP SIGNATURE-----\n> >> Version: GnuPG v1.0.6 (GNU/Linux)\n> >> Comment: For info see http://www.gnupg.org\n> >>\n> >> iD8DBQE7gv5yiysnOdCML0URAq7qAJkBRhAcE9wctn7bUAv7UMwN3n9+nwCeJR4V\n> >> ymYTw8l3f9WU4V5idFsibAE=\n> >> =UQ2M\n> >> -----END PGP SIGNATURE-----\n> >>\n> >> ---------------------------(end of broadcast)---------------------------\n> >> TIP 2: you can get off all lists at once with the unregister command\n> >> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >>\n> >>\n>\n>\n> Virtually,\n> Ned Wolpert <ned.wolpert@knowledgenet.com>\n>\n> D08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.0.6 (GNU/Linux)\n> Comment: For info see http://www.gnupg.org\n>\n> iD8DBQE7g+agiysnOdCML0URAvbvAJ9GO/spmwQYZessjk4IenhtPuguSwCdHRQN\n> xH+tnGqKpmg/UOSnxOevek0=\n> =pcr+\n> -----END PGP SIGNATURE-----\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n",
"msg_date": "Thu, 23 Aug 2001 08:33:28 +1000 (EST)",
"msg_from": "Peter Wiley <wiley@mmspl.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Re: JDBC changes for 7.2... some questions..."
},
{
"msg_contents": "> >\n> > What's your thoughts? Do you see the need for the functionality? Do you have\n> > a solution that I need?\n> \n> Definitely need the functionality. It's one of the things holding up me\n> porting an Informix system. Laziness is a bigger holdup of course - the\n> Informix system is so bulletproof and I'm slowly re-writing all the 4GL in\n> Java.\n> \n> FWIW, Informix returns the new SERIAL value through a structure\n> SQLCA.SQLERRD[3] from memory. Not applicable to a PostgreSQL solution I'd\n> say.\n\nIf I remember correctly, that is the ANSI standard embedded C way to\nreturn such values.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 22 Aug 2001 19:05:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: JDBC changes for 7.2... some questions..."
},
{
"msg_contents": "Ned,\n\n\nI would only agree to this functionality if it were a backend function. \n By putting it in the front end, you now need the front end to \nunderstand the special function. (And then we are likely going to \nhave requests that this special function be available in all the front \nends JDBC, ODBC, perl, etc.).\n\nFor the front end to understand the function it needs to parse the SQL \nstatement. Thus under your proposal every select statement needs to be \nparsed to see if one of the selected items is the special function. I \nstrive to ensure that the jdbc code does not need to parse the SQL \nstatements and understand the grammar of the SQL language. Since \nfunctions can appear in select lists, where clauses, orderbys, and even \ninsert and update statements, you quickly end up with the client needing \nto reimplement the entire parser that is already in the backend. You \ncould argue that this really could be special cased (i.e. it must be \nexactly 'select getInsertedOID()' case and whitespace makes a \ndifference), but then all you really have is a kludge for a specific \nproblem, not a framework to solve other similar problems. I would argue \nthat a framework does exist to solve this and other problems and that \nframework is to add additional functions into the backend.\n\nGiven that the JDBC driver already does provide the information via the \ngetLastOID() method, we are really dealing with a small isolated \nproblem here because of the use of a connection pool that doesn't let \nyou get at that method. (Unfortunately for you, it is the problem you \nare facing).\n\nIn general I don't think the benefits of this feature (i.e. 
providing \ninformation that is currently available, except when using certain 3rd \nparty connection pooling mechanisms) are worth the long term costs.\n\nIf the function was implemented in the backend, I think it would be a \ngood idea.\n\nthanks,\n--Barry\n\nNed Wolpert wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> Actually, it's not a new function on the server... I'm just trying to find a\n> way to access the getInsertedOID() method in the statement object without\n> having direct access to the statement object. (My use-case is when the JDBC\n> driver is wrapped in a pooling manager, like PoolMan, it wraps all classes.)\n> \n> I wanted to catch the line 'select @@last_oid' which would return a result set\n> with the single entry based on the results of the method call 'getInsertedOID'\n> but some people didn't like the syntax. So, I'm seaking comments to see if\n> there is a better syntax people like, as long as they don't mind the\n> functionality. One option is the 'faking' of the function call on the server,\n> which really isn't a good option in itself, but outside of my original 'catch'\n> line, its all I got.\n> \n> What's your thoughts? Do you see the need for the functionality? Do you have\n> a solution that I need?\n> \n> On 22-Aug-2001 Barry Lind wrote:\n> \n>>I am assuming that this would be a new function in the server. \n>>Therefore this wouldn't be jdbc specific and would be available to all \n>>client interfaces. I don't see what this really has to do with JDBC.\n>>\n>>thanks,\n>>--Barry\n>>\n>>Ned Wolpert wrote:\n>>\n>>>-----BEGIN PGP SIGNED MESSAGE-----\n>>>Hash: SHA1\n>>>\n>>>\n>>>Ok, so you're not opposed to the idea then, just the syntax. Does anyone\n>>>oppose having this concept in the JDBC driver? 
And what syntax is\n>>>acceptable?\n>>>Could we just do \n>>>'select getInsertedOID()'\n>>>which would break people who have functions called getInsertedOID() of\n>>>course...\n>>>\n>>>\n>>>\n>>>On 21-Aug-2001 Tom Lane wrote:\n>>>\n>>>\n>>>>Ned Wolpert <ned.wolpert@knowledgenet.com> writes:\n>>>>\n>>>>\n>>>>>What about the 'select @@last_oid' to make the getInsertedOID() call\n>>>>>available even when the driver is wrapped by a pooling manager?\n>>>>>\n>>>>>How do people feel about this?\n>>>>>\n>>>>>\n>>>>Yech. At least, not with *that* syntax. @@ is a valid operator name\n>>>>in Postgres.\n>>>>\n>>>> regards, tom lane\n>>>>\n>>>>---------------------------(end of broadcast)---------------------------\n>>>>TIP 5: Have you checked our extensive FAQ?\n>>>>\n>>>>http://www.postgresql.org/users-lounge/docs/faq.html\n>>>>\n>>>>\n>>>\n>>>Virtually, \n>>>Ned Wolpert <ned.wolpert@knowledgenet.com>\n>>>\n>>>D08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n>>>-----BEGIN PGP SIGNATURE-----\n>>>Version: GnuPG v1.0.6 (GNU/Linux)\n>>>Comment: For info see http://www.gnupg.org\n>>>\n>>>iD8DBQE7gv5yiysnOdCML0URAq7qAJkBRhAcE9wctn7bUAv7UMwN3n9+nwCeJR4V\n>>>ymYTw8l3f9WU4V5idFsibAE=\n>>>=UQ2M\n>>>-----END PGP SIGNATURE-----\n>>>\n>>>---------------------------(end of broadcast)---------------------------\n>>>TIP 2: you can get off all lists at once with the unregister command\n>>> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>>>\n>>>\n>>>\n> \n> \n> Virtually, \n> Ned Wolpert <ned.wolpert@knowledgenet.com>\n> \n> D08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.0.6 (GNU/Linux)\n> Comment: For info see http://www.gnupg.org\n> \n> iD8DBQE7g+agiysnOdCML0URAvbvAJ9GO/spmwQYZessjk4IenhtPuguSwCdHRQN\n> xH+tnGqKpmg/UOSnxOevek0=\n> =pcr+\n> -----END PGP SIGNATURE-----\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive 
FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> \n\n\n",
"msg_date": "Thu, 23 Aug 2001 10:11:18 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: JDBC changes for 7.2... some questions..."
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n(For those unfamiliar with the topic, looking for a way to get the last\ninserted OID through a sql call, rather than a method call off the JDBC\ndriver)\n\nOn 23-Aug-2001 Barry Lind wrote:\n> I would only agree to this functionality if it where a backend function. \n\nFor me, the best method to deal with this problem is implement the returning\nclause on the backend... but isn't an option in the short term....\n\n> For the front end to understand the function it needs to parse the SQL \n> statement. Thus under your proposal every select statement needs to be \n> parsed to see if one of the selected items is the special function. I \n> strive to ensure that the jdbc code does not need to parse the SQL \n> statements and understand the grammer of the SQL language. Since \n> functions can appear in select lists, where clauses, orderbys, and even \n\nThis is a problem, I agree. In short, supporting 'select @@last_oid' (my\noriginal syntax) is not a framework within itself, but a short-term \"kludge\" as\nyou mentioned. \n\nBut what are the options that should be pursued? I want to solve this one way\nor another. (And willing to work on an acceptable solution.)\n\n> Given that the JDBC driver already does provide the information via the \n> getLastOID() method, we are really dealing with a small isolated \n> problem here because of the use of a connection pool that doesn't let \n> you get at that method. (Unfortunatly for you, it is the problem you \n> are facing).\n\nWell, it's not really that isolated. The method call 'getLastOID()' isn't in\nthe backend either. That's the problem. This method provides functionality\nwhich is very useful, but not in a uniformally applied way where each driver\ncan benefit either.\n \n> If the function was implemented in the backend, I think it would be a \n> good idea.\n\nPerhaps this is the solution after all. 
(And the reason I forward this to the\npghackers list as well) Should the backend support the function\ngetLastInsertedOID() or even getLastInsertedPrimaryKey() (or both)? Now, I can\ntry to write the functions, and see if it can be separated into the contrib\nsection of the psql repository if people would like.\n\nDoes this work for you? (And anyone else reading this)\n\n\n\nVirtually, \nNed Wolpert <ned.wolpert@knowledgenet.com>\n\nD08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7hUCLiysnOdCML0URArdAAJ4kI4S00AVzGgazsGS5nTMu+0X8CwCeOLQ8\nTVTTzaQdEt6uJrbVAm0Dd4s=\n=U3aZ\n-----END PGP SIGNATURE-----\n",
"msg_date": "Thu, 23 Aug 2001 10:42:35 -0700 (MST)",
"msg_from": "Ned Wolpert <ned.wolpert@knowledgenet.com>",
"msg_from_op": true,
"msg_subject": "New backend functions? [was Re: JDBC changes for 7.2... some\n\tquestions...]"
},
{
"msg_contents": "Ned Wolpert <ned.wolpert@knowledgenet.com> writes:\n> Should the backend support the function getLastInsertedOID() or even\n> getLastInsertedPrimaryKey() (or both)?\n\nI don't think you have any chance of doing the latter --- for one thing,\nhow are you going to declare that function's return type? But the\nformer seems doable and reasonable to me: whenever an OID is returned\nto the client in an INSERT or UPDATE command result, also stash it in\na static variable that can be picked up by this function.\n\nPlease pick a more SQL-friendly (ie, case insensitive) naming\nconvention, though. And note that it'd apply to both INSERT and UPDATE.\nMaybe get_last_returned_oid() ?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Aug 2001 14:44:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] New backend functions? [was Re: JDBC changes for 7.2...\n\tsome questions...]"
},
{
"msg_contents": "On Thu, 23 Aug 2001 14:44:19 -0400, you wrote:\n>Ned Wolpert <ned.wolpert@knowledgenet.com> writes:\n>> Should the backend support the function getLastInsertedOID()?\n>\n>seems doable and reasonable to me: whenever an OID is returned\n>to the client in an INSERT or UPDATE command result, also stash it in\n>a static variable that can be picked up by this function.\n\nWhat should the semantics be exactly?\n\nHow about the multiple INSERT's i've been reading about on\nhackers? ... Only the OID of the last row inserted by the\nstatement?\n\nHow about an UPDATE statement that updates multiple rows?\n\nHow about JDBC batchExecute() when it performs multiple\nINSERT/UPDATE's? ... Only the OID of the last UPDATE or INSERT\nstatement in the batch?\n\nHow about triggers that insert/update extra rows? ... Only the\nOID of the row directly inserted by the client statement?\n\nHow about Large Objects? Should inserting or updating a large\nobject affect getLastInsertedOID()?\n\nI assume this OID would be associated with a client connection.\nIs this going to work with client side connection pooling?\n\nHow about transaction semantics? INSERT row 1, Commit, INSERT\nrow 2, Rollback... what should getLastInsertedOID() return? Can\nit, with a static variable?\n\nRegards,\nRené Pijlman\n",
"msg_date": "Thu, 23 Aug 2001 21:45:13 +0200",
"msg_from": "Rene Pijlman <rpijlman@wanadoo.nl>",
"msg_from_op": false,
"msg_subject": "Re: Re: [JDBC] New backend functions? [was Re: JDBC changes for\n\t7.2... some questions...]"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 23-Aug-2001 Rene Pijlman wrote:\n> What should the semantics be exactly?\n> \n> How about the multiple INSERT's i've been reading about on\n> hackers? ... Only the OID of the last row inserted by the\n> statement?\n> \n> How about an UPDATE statement that updates multiple rows?\n\nWell, here are my thoughts on this...\n\nThe functionality would be that the very last inserted or updated OID would be\nstored in this static variable that is associated with the connection/session. \nSo, in multiple inserts or updates, it is the last oid affected where this\nvariable would be updated. \n \n> How about triggers that insert/update extra rows? ... Only the\n> OID of the row directly inserted by the client statement?\n\nIt would be the last updated request caused by any insert or update, regardless\nof whether it's a trigger, preparedStatement, etc.\n \n> How about Large Objects? Should inserting or updating a large\n> object affect getLastInsertedOID()?\n\nYes.\n\n> I assume this OID would be associated with a client connection.\n> Is this going to work with client side connection pooling?\n\nIt must... that's the reason for this. Specifically, the JDBC driver has a\nmethod in it that is called getInsertedOID() which provides the last\nsuccessfully inserted row's OID. This is specific to the driver, and JDBC\npooling techniques do not allow access to this method. (It's not part of the\nJDBC spec) So, to make this data accessible to the users in a pooling\ncondition, the call \"select getLastOID()\" needs to return the OID that is\nspecific to the session.\n\nIn Java, pooled connections generally are acquired, then released, as dependent\non the client or timeout procedures, and not randomly used for individual\nqueries. (Mostly because of the need for the same driver during a transaction\nthat takes multiple queries.)\n \n> How about transaction semantics? INSERT row 1, Commit, INSERT\n> row 2, Rollback... 
what should getLastInsertedOID() return? Can\n> it, with a static variable?\n\nGood question. I'd start with rollback not affecting this value. Reason being\nthat this function would be mostly used in a transaction anyways. I would not\nobject to making this method only available during a transaction block if that\nhelps.\n\n\nVirtually, \nNed Wolpert <ned.wolpert@knowledgenet.com>\n\nD08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7hWEwiysnOdCML0URAk3xAJ92nYoy22mP4Yk8xk53vojlF42w5gCfbnZf\nuexoQ9yqexctXvQM0yx+g2Y=\n=yK6n\n-----END PGP SIGNATURE-----\n",
"msg_date": "Thu, 23 Aug 2001 13:01:52 -0700 (MST)",
"msg_from": "Ned Wolpert <ned.wolpert@knowledgenet.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: [JDBC] New backend functions? [was Re: JDBC ch"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\nI like your function name, get_last_returned_oid(). That works for me.\n\nOn 23-Aug-2001 Tom Lane wrote:\n> Ned Wolpert <ned.wolpert@knowledgenet.com> writes:\n>> Should the backend support the function getLastInsertedOID() or even\n>> getLastInsertedPrimaryKey() (or both)?\n> \n> I don't think you have any chance of doing the latter --- for one thing,\n> how are you going to declare that function's return type? But the\n> former seems doable and reasonable to me: whenever an OID is returned\n> to the client in an INSERT or UPDATE command result, also stash it in\n> a static variable that can be picked up by this function.\n> \n> Please pick a more SQL-friendly (ie, case insensitive) naming\n> convention, though. And note that it'd apply to both INSERT and UPDATE.\n> Maybe get_last_returned_oid() ?\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\nVirtually, \nNed Wolpert <ned.wolpert@knowledgenet.com>\n\nD08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7hWGbiysnOdCML0URAkqAAJ9Liv8VS+CPMYozG1q1tuy7vGLuEACfUJRM\nHdbns8MxyOVgurx5ztV8YZU=\n=BbF3\n-----END PGP SIGNATURE-----\n",
"msg_date": "Thu, 23 Aug 2001 13:03:39 -0700 (MST)",
"msg_from": "Ned Wolpert <ned.wolpert@knowledgenet.com>",
"msg_from_op": true,
"msg_subject": "Re: [JDBC] New backend functions? [was Re: JDBC changes for 7.2."
},
{
"msg_contents": "Rene Pijlman <rpijlman@wanadoo.nl> writes:\n> On Thu, 23 Aug 2001 14:44:19 -0400, you wrote:\n>> seems doable and reasonable to me: whenever an OID is returned\n>> to the client in an INSERT or UPDATE command result, also stash it in\n>> a static variable that can be picked up by this function.\n\n> What should the semantics be exactly?\n\nJust the same as the command result string.\n\n> How about the multiple INSERT's i've been reading about on\n> hackers? ... Only the OID of the last row inserted by the\n> statement?\n\nNo OID is returned when multiple rows are inserted or updated. I'd say\nthat should be the semantics of this function, too.\n\n> How about JDBC batchExecute() when it performs multiple\n> INSERT/UPDATE's?\n\nBy definition, this is a backend function. It cannot know anything of\nJDBC.\n\n> I assume this OID would be associated with a client connection.\n> Is this going to work with client side connection pooling?\n\nGood point. Will this really get around the original poster's problem??\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Aug 2001 17:56:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [JDBC] New backend functions? [was Re: JDBC changes for\n\t7.2... some questions...]"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 23-Aug-2001 Tom Lane wrote:\n>> I assume this OID would be associated with a client connection.\n>> Is this going to work with client side connection pooling?\n> \n> Good point. Will this really get around the original poster's problem??\n\nIt must. If transactions are on, any pooling mechanism needs to continue to\nuse that connection for the client until the transaction is done. (Most require\nthe client to either tell the pool manager the connection is no longer needed,\nvia a close() call, or a pool-manager specific call, precisely because the\nclient needs it to complete the transaction.)\n\nMy feeling is that if this is a problem, then this method call may need to be\nlimited to the transaction context, but I hope that this is not the case.\nMost pool managers (and I'm only speaking about Java here) require some\nactivity on the client to give up the connection, either directly or\nindirectly. \n\n\nVirtually, \nNed Wolpert <ned.wolpert@knowledgenet.com>\n\nD08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7hYNRiysnOdCML0URAre3AJ94x/4mfeaJX3IQjRtyTWafeaR/BgCeIB4V\nliQyRjblBSuX38R0kq+NvVw=\n=ltfC\n-----END PGP SIGNATURE-----\n",
"msg_date": "Thu, 23 Aug 2001 15:27:29 -0700 (MST)",
"msg_from": "Ned Wolpert <ned.wolpert@knowledgenet.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: [JDBC] New backend functions? [was Re: JDBC ch"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Ned Wolpert <ned.wolpert@knowledgenet.com> writes:\n> > Should the backend support the function getLastInsertedOID() or even\n> > getLastInsertedPrimaryKey() (or both)?\n> \n> I don't think you have any chance of doing the latter --- for one thing,\n> how are you going to declare that function's return type? But the\n> former seems doable and reasonable to me: whenever an OID is returned\n\nHmm, OIDs would be optional in 7.2.\nIs it known (announced) to the pgsql-jdbc list?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 24 Aug 2001 09:54:31 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Re: [JDBC] New backend functions? [was Re: JDBC changes\n\tfor 7.2... some questions...]"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Hmm OIDs would be optional in 7.2.\n> Is it known(announced) to pgsql-jdbc list ?\n\nDoesn't seem particularly relevant to this issue though. An application\nthat's using OIDs to identify rows would certainly not choose to create\nits tables without OIDs.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Aug 2001 20:56:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [JDBC] New backend functions? [was Re: JDBC changes for\n\t7.2... some questions...]"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nSounds like there aren't objections to my requested function,\nget_last_returned_oid(). I'm going to work on the function call for\npostgres this weekend. \n\nPurpose: Retain the last oid returned (if any) in a variable\n associated with the connection on the backend, so the next request to\n the backend has access to it. It will be set when an insert or update\n is completed, so preparedStatements and stored procedures can use it.\n\nIf I'm able to provide patches by Monday, and if it works fine (without causing\ngeneral meltdowns :-) would this be able to be in the 7.2 beta, or will it\nneed to be part of 7.3? (The answer to this question will help me decide on\nhow much time I should spend on it this weekend.)\n\n\n> On 23-Aug-2001 Tom Lane wrote:\n>>> I assume this OID would be associated with a client connection.\n>>> Is this going to work with client side connection pooling?\n>> \n>> Good point. Will this really get around the original poster's problem??\n> \n> It must. If transactions are on, any pooling mechanism needs to continue to\n> use that connection for the client unti the transaction is done. (Most\n> require\n> the client to either tell the pool manager the connection is no longer need,\n> via a close() call, or a pool-manager specific call, precisely because the\n> client needs it to complete the transaction.\n> \n> My feeling is that if this is a problem, then this method call may need to be\n> limited to the transaction context, but I hope that this is not the case.\n> Most pool managers (and I'm only speaking about Java here) require some\n> activity on the client to give up the connection, either directly or\n> indirectly. 
\n\n\nVirtually, \nNed Wolpert <ned.wolpert@knowledgenet.com>\n\nD08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7hpkHiysnOdCML0URAhPnAJ9z/aWCR88kk60WmZJRalusOYm78ACeLPl7\njRlgOPLcuPd7JCsJy5JomUA=\n=ruJB\n-----END PGP SIGNATURE-----\n",
"msg_date": "Fri, 24 Aug 2001 11:12:23 -0700 (MST)",
"msg_from": "Ned Wolpert <ned.wolpert@knowledgenet.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: [JDBC] New backend functions?"
},
{
"msg_contents": "Ned Wolpert <ned.wolpert@knowledgenet.com> writes:\n> If I'm able to provide patches by Monday, and if it works fine (without causing\n> general meltdowns :-) would this be able to be in the 7.2 beta, or will it\n> need to be part of 7.3?\n\n7.2 is still wide open; in fact I'm working on major restructuring of\npg_log that I have every intention of committing before 7.2 beta ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Aug 2001 15:37:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [JDBC] New backend functions? "
},
{
"msg_contents": "Ned Wolpert writes:\n\n> Now, I understand that in the Statement class, we have getInsertedOID() in the\n> table. However, the problem we run into is that this isn't accessiable if we\n> use something like poolman to provide database pooling of connections. (You\n> get the poolMan Statement object which is wraps the Statement classes of the\n> driver.)\n\nI think no one has asked yet *why* it isn't \"accessible\".\n\nMaybe the getInsertedOID function needs to be moved to some other class?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 24 Aug 2001 21:44:26 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: JDBC changes for 7.2... some questions..."
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nlast_oid() works for me. I have no specific concern on the exact name, just\nthe functionality. (Call it fred() if you like. :-)\n\nOn 24-Aug-2001 Peter Eisentraut wrote:\n> Ned Wolpert writes:\n> \n>> Sounds like there aren't objections to my requested function,\n>> get_last_returned_oid(). I'm going to work on the function call for\n>> postgres this weekend.\n> \n> Please don't name functions get_*. All functions \"get\" something.\n> \n> Btw., if you call get_last_returned_oid() and then call it again, do you\n> get the \"last returned oid\" returned by get_last_returned_oid? The\n> \"returned\" needs to be revised. last_oid should be sufficient.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n\nVirtually, \nNed Wolpert <ned.wolpert@knowledgenet.com>\n\nD08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7hrARiysnOdCML0URAtv/AJ9UmsbpbA4M6VVOYD90LMrV0FelTwCcCo/r\n+aMk4VQ11xWYktgbBWCyOM0=\n=qQry\n-----END PGP SIGNATURE-----\n",
"msg_date": "Fri, 24 Aug 2001 12:50:41 -0700 (MST)",
"msg_from": "Ned Wolpert <ned.wolpert@knowledgenet.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: [JDBC] New backend functions?"
},
{
"msg_contents": "Ned Wolpert writes:\n\n> Sounds like there aren't objections to my requested function,\n> get_last_returned_oid(). I'm going to work on the function call for\n> postgres this weekend.\n\nPlease don't name functions get_*. All functions \"get\" something.\n\nBtw., if you call get_last_returned_oid() and then call it again, do you\nget the \"last returned oid\" returned by get_last_returned_oid? The\n\"returned\" needs to be revised. last_oid should be sufficient.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 24 Aug 2001 21:51:23 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: [JDBC] New backend functions?"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nIt's not accessible since it's not in the JDBC spec. Specifically, when you\nuse PoolMan (www.codestudio.com) as the pooling manager, PoolMan has its own\nJDBC classes that wrap any JDBC compliant driver. Since this method is\nspecific to PostgreSQL, it's not in the standard JDBC classes. Most JDBC\npoolers wrap classes similarly. (Well, except oracle's pool manager. They\nprovide their own pooling manager.)\n\nIn any case, I think this functionality is useful beyond just JDBC.\n\nOn 24-Aug-2001 Peter Eisentraut wrote:\n> Ned Wolpert writes:\n> \n>> Now, I understand that in the Statement class, we have getInsertedOID() in\n>> the\n>> table. However, the problem we run into is that this isn't accessiable if\n>> we\n>> use something like poolman to provide database pooling of connections. (You\n>> get the poolMan Statement object which is wraps the Statement classes of the\n>> driver.)\n> \n> I think no one has asked yet *why* it isn't \"accessible\".\n> \n> Maybe the getInsertedOID function needs to be moved to some other class?\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n\nVirtually, \nNed Wolpert <ned.wolpert@knowledgenet.com>\n\nD08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7hrDTiysnOdCML0URAj93AJ9fPjtXEuVj6Wvc6bbOp/eEV6EdtACdEqe+\n/MCOd239rhEXfb4j1yajuoQ=\n=nem4\n-----END PGP SIGNATURE-----\n",
"msg_date": "Fri, 24 Aug 2001 12:53:55 -0700 (MST)",
"msg_from": "Ned Wolpert <ned.wolpert@knowledgenet.com>",
"msg_from_op": true,
"msg_subject": "Re: JDBC changes for 7.2... some questions..."
},
{
"msg_contents": "The reason why it isn't accessible is that some implementations of \nconnection pools (including one I have written), return a wrapper object \naround the connection object. This wrapper object just implements the \njava.sql.Connection interface defined by the jdbc spec. If the wrapped \nconnection (an org.postgresql.Connection object in this case) has extra \nmethods (such as getInsertedOID()) there is no way to access those extra \nmethods as the wrapper does not contain them.\n\nIf you were using the postgres connection object directly you would \nsimply cast the object to an org.postgresql.Connection and then you \nwould be able to access the extra methods. But you can't in this case \nbecause you are dealing with a wrapper object (i.e. something like \ncom.foo.connectionpool.Connection).\n\nthanks,\n--Barry\n\nPeter Eisentraut wrote:\n> Ned Wolpert writes:\n> \n> \n>> Now, I understand that in the Statement class, we have getInsertedOID() in the\n>>table. However, the problem we run into is that this isn't accessiable if we\n>>use something like poolman to provide database pooling of connections. (You\n>>get the poolMan Statement object which is wraps the Statement classes of the\n>>driver.)\n>>\n> \n> I think no one has asked yet *why* it isn't \"accessible\".\n> \n> Maybe the getInsertedOID function needs to be moved to some other class?\n> \n> \n\n\n",
"msg_date": "Fri, 24 Aug 2001 13:12:14 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: JDBC changes for 7.2... some questions..."
},
{
"msg_contents": "On Fri, 24 Aug 2001 12:53:55 -0700 (MST), you wrote:\n>It's not accessiable since its not in the JDBC spec. Specifically, when you\n>use PoolMan (www.codestudio.com) as the pooling manager, PoolMan has its own\n>JDBC classes that wraps any JDBC compliant driver. \n\nDo they also wrap Connection? Since the last oid is associated\nwith a client connection to the backend, it might be a solution\nto add a getLastOID() method to Connection.\n\nRegards,\nRené Pijlman\n",
"msg_date": "Sat, 25 Aug 2001 00:02:02 +0200",
"msg_from": "Rene Pijlman <rpijlman@wanadoo.nl>",
"msg_from_op": false,
"msg_subject": "Re: JDBC changes for 7.2... some questions..."
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nYes. Every class is wrapped; that is how most connection pooling managers in\nJava tend to work.\n\nSpecifically, you load PoolMan directly (via DriverManager) or indirectly\n(JNDI) and it reads its configuration properties to figure out what classes to\nload. You only have access to PoolMan's classes, which are a proxy to the\nPostgreSQL JDBC classes. (Or which-ever JDBC classes that are in use. PoolMan\nworks with and JDBC-compliant driver and datasource.)\n\nOn 24-Aug-2001 Rene Pijlman wrote:\n> Do they also wrap Connection? Since the last oid is associated\n> with a client connection to the backend, it might be a solution\n> to add a getLastOID() method to Connection.\n> \n> Regards,\n> Ren��� Pijlman\n\n\nVirtually, \nNed Wolpert <ned.wolpert@knowledgenet.com>\n\nD08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7htPviysnOdCML0URApzMAKCAp5PpDRXkQVRgkin46+QvibJFogCdFQsl\nsFesIHynhct9C+CEQSK6WZs=\n=wnEQ\n-----END PGP SIGNATURE-----\n",
"msg_date": "Fri, 24 Aug 2001 15:23:43 -0700 (MST)",
"msg_from": "Ned Wolpert <ned.wolpert@knowledgenet.com>",
"msg_from_op": true,
"msg_subject": "Re: JDBC changes for 7.2... some questions..."
},
{
"msg_contents": "At 03:02 PM 8/24/2001, Rene Pijlman wrote:\n>Do they also wrap Connection? Since the last oid is associated\n>with a client connection to the backend, it might be a solution\n>to add a getLastOID() method to Connection.\n\nPoolMan wraps Connection, Statement, PreparedStatement, CallableStatement \nand ResultSet with their own \"smart\" versions. The wrappers do not provide \naccessors to the underlying objects.\n\nNote that I'm not using PoolMan. I evaluated it several months ago and \ndecided to just write a *very* simple ConnectionPool for the time being.\n\nDave Harkness\n\n",
"msg_date": "Fri, 24 Aug 2001 16:57:11 -0700",
"msg_from": "Dave Harkness <daveh@MEconomy.com>",
"msg_from_op": false,
"msg_subject": "Re: JDBC changes for 7.2... some questions..."
},
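The wrapper problem described above can be sketched in a few lines of Java. All class names below (GenericConnection, PgConnection, PooledConnection) are hypothetical stand-ins for java.sql.Connection, org.postgresql.Connection and a pool's proxy class; this illustrates the pattern the thread discusses, not PoolMan's or the driver's actual code:

```java
// Why a pooling wrapper hides driver-specific extensions: the wrapper
// implements only the generic interface, so extension methods on the
// wrapped object are unreachable without access to that object itself.
interface GenericConnection {                     // stands in for java.sql.Connection
    void execute(String sql);
}

class PgConnection implements GenericConnection { // stands in for org.postgresql.Connection
    private long lastOid = 0;
    public void execute(String sql) { lastOid = 42; } // pretend the INSERT produced OID 42
    public long getInsertedOID() { return lastOid; }  // driver-specific extension
}

class PooledConnection implements GenericConnection { // stands in for the pool's wrapper
    private final GenericConnection real;
    PooledConnection(GenericConnection real) { this.real = real; }
    public void execute(String sql) { real.execute(sql); }
    // no getInsertedOID() here, and no accessor for 'real'
}

public class WrapperDemo {
    static long directOid() {
        GenericConnection direct = new PgConnection();
        direct.execute("INSERT ...");
        // Using the driver directly, the cast succeeds:
        return ((PgConnection) direct).getInsertedOID();
    }

    static boolean pooledIsPgConnection() {
        GenericConnection pooled = new PooledConnection(new PgConnection());
        pooled.execute("INSERT ...");
        // Through the pool, the object is not a PgConnection;
        // casting it would throw ClassCastException.
        return pooled instanceof PgConnection;
    }

    public static void main(String[] args) {
        System.out.println(directOid());            // 42
        System.out.println(pooledIsPgConnection()); // false
    }
}
```

Rene's getLastOID()-on-Connection idea runs into the same wall unless the wrapper forwards the method; much later JDBC revisions added java.sql.Wrapper.unwrap() to standardize exactly this kind of escape hatch.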
{
"msg_contents": "\nIt's been mentioned before, but a set of error numbers for database errors\nwould make trapping exceptions and dealing with them gracefully a LOT\nsimpler. I have java code that runs against Oracle, Informix, PostgreSQL,\nMS SQL Server and Cloudscape. All(?) the others have an error code as well\nas an error message and it's a lot easier to get the error code.\n\nOf course, they all have *different* error codes for the same error (ie\nprimary key violation). Nothing is ever simple.\n\nPeter Wiley\n\n",
"msg_date": "Mon, 27 Aug 2001 08:48:52 +1000 (EST)",
"msg_from": "Peter Wiley <wiley@mmspl.com.au>",
"msg_from_op": false,
"msg_subject": "JDBC changes for 7.2 - wish list item"
},
{
"msg_contents": "This is on the TODO list for the backend. As a post a couple of days \nago stated, when the backend has error code support, it will also be \nadded to JDBC. Until then....\n\nthanks,\n--Barry\n\nPeter Wiley wrote:\n> It's been mentioned before, but a set of error numbers for database errors\n> would make trapping exceptions and dealing with them gracefully a LOT\n> simpler. I have java code that runs against Oracle, Informix, PostgreSQL,\n> MS SQL Server and Cloudscape. All(?) the others have an error code as well\n> as an error message and it's a lot easier to get the error code.\n> \n> Of course, they all have *different* error codes for the same error (ie\n> primary key violation). Nothing is ever simple.\n> \n> Peter Wiley\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n> \n\n\n",
"msg_date": "Mon, 27 Aug 2001 09:29:58 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: JDBC changes for 7.2 - wish list item"
},
{
"msg_contents": "On Mon, 27 Aug 2001 08:48:52 +1000 (EST), you wrote:\n>It's been mentioned before, but a set of error numbers for database errors\n>would make trapping exceptions and dealing with them gracefully a LOT\n>simpler. I have java code that runs against Oracle, Informix, PostgreSQL,\n>MS SQL Server and Cloudscape. All(?) the others have an error code as well\n>as an error message and it's a lot easier to get the error code.\n\nI agree. Its on the list on\nhttp://lab.applinet.nl/postgresql-jdbc/#SQLException. This\nrequires new functionality in the backend.\n\n>Of course, they all have *different* error codes for the same error (ie\n>primary key violation). Nothing is ever simple.\n\nPerhaps the SQLState string in SQLException can make this easier\n(if we can support this with PostgreSQL). This is supposed to\ncontain a string identifying the exception, following the Open\nGroup SQLState conventions. I'm not sure how useful these are.\n\nRegards,\nRen� Pijlman\n",
"msg_date": "Mon, 27 Aug 2001 20:01:18 +0200",
"msg_from": "Rene Pijlman <rpijlman@wanadoo.nl>",
"msg_from_op": false,
"msg_subject": "Re: JDBC changes for 7.2 - wish list item"
}
] |
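On the SQLState suggestion in the last message of this thread: the SQL standard's five-character SQLSTATE begins with a two-character class code, so client code can branch on the class rather than on vendor-specific error numbers (class "23" is integrity constraint violation, which covers primary-key violations). At the time of the thread the backend did not yet send these, so the sketch below constructs SQLExceptions by hand purely for illustration:

```java
import java.sql.SQLException;

public class SqlStateDemo {
    // Map an exception to a portable category via its SQLSTATE class code.
    static String classify(SQLException e) {
        String state = e.getSQLState();
        if (state == null) return "unknown";   // drivers may leave the state unset
        if (state.startsWith("23")) return "constraint-violation";
        if (state.startsWith("08")) return "connection-error";
        return "other (" + state + ")";
    }

    public static void main(String[] args) {
        // Hand-built exceptions standing in for ones thrown by Statement.execute();
        // "23505" is the standard-style state a unique-key violation would carry.
        System.out.println(classify(new SQLException("duplicate key", "23505")));
        System.out.println(classify(new SQLException("driver gave no state", (String) null)));
    }
}
```

This prints "constraint-violation" and then "unknown"; the point is that one portable branch replaces a per-vendor table of integer error codes, which is what the thread's participants were asking for.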
[
{
"msg_contents": "Hi,\n\n fortunately the problems with a malfunctioning client during\n the authentication don't cause the v7.2 postmaster to hang\n any more (thanks to Peter and Tom). The client authentication\n is moved into the forked off process.\n\n Now one little problem remains. If a bogus client causes a\n child to hang before becoming a real backend, this child is\n in the backend list of the postmaster, but has all signals\n blocked. Thus, preventing the postmaster from beeing able to\n shutdown.\n\n I think the correct behaviour should be to enable SIGTERM and\n SIGQUIT during client authentication and simply exit(0) if\n they occur. If so, what would be the best way to get these\n two signals out of the block mask?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Tue, 21 Aug 2001 17:47:44 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Signals blocked during auth"
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> Now one little problem remains. If a bogus client causes a\n> child to hang before becoming a real backend, this child is\n> in the backend list of the postmaster, but has all signals\n> blocked. Thus, preventing the postmaster from beeing able to\n> shutdown.\n\nI think this is fairly irrelevant, because a not-yet-backend should\nhave a fairly short timeout (a few seconds) before just shutting\ndown anyway, so that malfunctioning clients can't cause denial of\nservice; the particular case you mention is just one scenario.\n\nI have been intending to implement this soon if Peter didn't.\n\nOTOH, it'd be easy enough to turn on SIGTERM/SIGQUIT too, if you\nthink there's really any value in it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Aug 2001 18:54:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Signals blocked during auth "
},
{
"msg_contents": "Tom Lane writes:\n\n> I think this is fairly irrelevant, because a not-yet-backend should\n> have a fairly short timeout (a few seconds) before just shutting\n> down anyway, so that malfunctioning clients can't cause denial of\n> service; the particular case you mention is just one scenario.\n\nI have a note here about an authentication timeout on the order of a few\nminutes. You never know what sort of things PAM or Kerberos can go\nthrough behind the scenes.\n\n> OTOH, it'd be easy enough to turn on SIGTERM/SIGQUIT too, if you\n> think there's really any value in it.\n\nI think that would be reasonable.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 22 Aug 2001 17:55:58 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Signals blocked during auth "
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Tom Lane writes:\n>\n> > I think this is fairly irrelevant, because a not-yet-backend should\n> > have a fairly short timeout (a few seconds) before just shutting\n> > down anyway, so that malfunctioning clients can't cause denial of\n> > service; the particular case you mention is just one scenario.\n>\n> I have a note here about an authentication timeout on the order of a few\n> minutes. You never know what sort of things PAM or Kerberos can go\n> through behind the scenes.\n>\n> > OTOH, it'd be easy enough to turn on SIGTERM/SIGQUIT too, if you\n> > think there's really any value in it.\n>\n> I think that would be reasonable.\n\n OK, I'll go ahead and enable these two during authentication\n with a special signal handler that simply does exit(0). The\n postmaster expects all it's children to suicide anytime soon\n more or less bloody depending on if he send's TERM or QUIT.\n But at least, they have to terminate without waiting for the\n client or otherwise infinitely.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 22 Aug 2001 13:14:57 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Signals blocked during auth"
}
] |
[
{
"msg_contents": "Peter Eisentraut wrote:\n> We've had some problem reports that the current practice of initdb\n> assigning to the postgres user the same usesysid as the user id of the\n> Unix user running initdb has caused some clashes.\n> ...\n> I think the simplest fix would be to assign a fixed usesysid of 1.\n\nI was initially lukewarm about this idea, but I've just thought of a\nreason to like it ;-).\n\nI've been thinking a little bit about how one might recover from Really\nStupid Mistakes, like deleting one's only superuser pg_shadow entry.\n(Let's see ... you can't make another one ... and you can't easily run\npg_dump without a superuser identity ... is your database a lost cause?)\n\nI think that the only way to get around this kind of thing in extremis\nis to shut down the postmaster and run a standalone backend, in which\nyou can do a CREATE USER or whatever other surgery you need to perform.\nAccordingly, a standalone backend should not do any permission-checking;\nthe fact that you are able to start a backend with access to the\ndatabase files should be good enough evidence that you are the\nsuperuser.\n\nHowever there's still a problem, if you've made this particular variety\nof Really Stupid Mistake: the standalone backend won't fire up.\n\n$ postgres template1\nDEBUG: database system was shut down at 2001-08-21 17:56:07 EDT\nDEBUG: checkpoint record is at (0, 39113800)\nDEBUG: redo record is at (0, 39113800); undo record is at (0, 0); shutdown TRUE\n\nDEBUG: next transaction id: 8595; next oid: 262492\nDEBUG: database system is ready\nFATAL 1: user \"postgres\" does not exist\nDEBUG: shutting down\nDEBUG: database system is shut down\n\nWhat I'm thinking is that if we hard-wired usesysid = 1 for the\nsuperuser, it'd be possible to arrange for standalone backends to fire\nup with that sysid and superuserness assumed, and not consult pg_shadow\nat all. 
Then you'd have a platform in which you could do CREATE USER.\n\nThoughts?\n\nNext mind-bending problem: recover from DROP TABLE pg_class ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Aug 2001 18:17:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: A fixed user id for the postgres user?"
},
{
"msg_contents": "Tom Lane writes:\n\n> What I'm thinking is that if we hard-wired usesysid = 1 for the\n> superuser, it'd be possible to arrange for standalone backends to fire\n> up with that sysid and superuserness assumed, and not consult pg_shadow\n> at all. Then you'd have a platform in which you could do CREATE USER.\n\nI had always figured that you could use bki to recover from these things,\nbut a quick attempt shows that you can't.\n\nYou proposal makes sense from a Unix admin point of view (booting into\nsingle user mode without password). Since we have a check against root\naccess and against too liberal PGDATA permissions, I think this would be\nsafe. Possibly we need to guard against setgid problems as well.\n\n> Next mind-bending problem: recover from DROP TABLE pg_class ;-)\n\nDefinitely BKI land. But that usecatupd field does make some sense,\napparently.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 22 Aug 2001 18:03:33 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: A fixed user id for the postgres user?"
},
{
"msg_contents": "Tom Lane writes:\n\n> I've been thinking a little bit about how one might recover from Really\n> Stupid Mistakes, like deleting one's only superuser pg_shadow entry.\n\n> What I'm thinking is that if we hard-wired usesysid = 1 for the\n> superuser, it'd be possible to arrange for standalone backends to fire\n> up with that sysid and superuserness assumed, and not consult pg_shadow\n> at all. Then you'd have a platform in which you could do CREATE USER.\n\nFYI: I'm working on this now. It seems to work out nicely; at least I\nwas able to recover from DELETE FROM pg_shadow; without a problem.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 8 Sep 2001 00:19:18 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: A fixed user id for the postgres user?"
}
] |
[
{
"msg_contents": "\nIn the seemingly hundreds of thousands of messages on the bug database\ntopic I think I've come up with the following..\n\nNeeds\n-----------------------------------------------------------------------\neasy reporting of bugs - sent to bugs list\neasy lookup of previous bugs\nsummary of fix or workaround\ndetail of fix or work around\nlittle to no intervention of\n developers\nability of developer to add\n comments\n\n\nThat should sum it up.\n\nNow some history.. Over the last couple of years we've tried a\nnumber (5 I think) of bug tracking packages. Either Marc or me\nor both have had to learn it, install it, get it going and the\nresult has been the same - the maintainers don't want to update\nit, it's a pain in the ass to administrator, set up, etc.\n\nThe current bugtool.\n\nAfter a bunch of these failures I asked for input on what was\nneeded in a tool. Web input interface, ability to track the\nbug report, email notification to the bug list, email notification\nto the reporter of the bug.\n\nThe current bugtool does this, however the maintainers don't want\nto close the reports. I'm not faulting them, they're doing their\njobs by fixing the bugs and reporting them to the bugs list.\n\nUpdating the database.\n\nWe've had a couple of volunteers to keep the database up to date.\nIs it enough? I dunno, if I were to guess I'd have to look at\nprevious experience and say probably not. But I don't want that\nto discourage anything or anyone.\n\nRealities\n\nPostgreSQL is growing by leaps and bounds. Ross pointed out this\nfact earlier today. A solution has to happen and it has to happen\nnow. If a tool is to be adapted to this task it will be the one\nI'm most familiar with - the current one.\n\nSolution..\n\nIs implementing yet another bugtool going to be the solution?\nProbably not. Do I want to go for number six? 
No.\n\nOf the ideas posted, these stick out:\n\n\to Web input\n\to Minimal staff involvement\n\to Maximal mailing list reporting\n\to History\n\to Searchability\n\n\n\nHere's what I propose.\n\nThe current tool has a form - we keep it.\nThe current tool mails to the bugs list - we keep it.\n\nRather than searching the bugs list for open bugs that may not even\nbe open, the search tool will need to search not only the database\nbut it needs to also search the archives. For now (until the 400+\nare classified) the search should/will search the bugs mailing list\nrather than the database.\n\nRecruit more than two people to help update the bugs database.\n\nAfter the database is somewhat up to date, include it into the\nnormal search mechanism.\n\nNow then.. The folks that actually fix things, will this suffice\nas a start to our shortcomingss? If not, what is missing?? If\nso, let me know and I'll implement this in the short term. Silence\nat this time is definitely NOT A GOOD THING!!!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 21 Aug 2001 20:08:19 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "bugs - lets call an exterminator!"
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n\n> Now some history.. Over the last couple of years we've tried a\n> number (5 I think) of bug tracking packages. Either Marc or me\n> or both have had to learn it, install it, get it going and the\n> result has been the same - the maintainers don't want to update\n> it, it's a pain in the ass to administrator, set up, etc.\n> \n> The current bugtool.\n> \n> After a bunch of these failures I asked for input on what was\n> needed in a tool. Web input interface, ability to track the\n> bug report, email notification to the bug list, email notification\n> to the reporter of the bug.\n\nFTR, we're using bugzilla for this and it works great. We're working\non porting it PostgreSQL.\n\n ftp://people.redhat.com/dkl/ should contain a recent state \n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "21 Aug 2001 22:02:24 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: bugs - lets call an exterminator!"
},
{
"msg_contents": "On Tue, 21 Aug 2001, Vince Vielhaber wrote:\n\n>\n> In the seemingly hundreds of thousands of messages on the bug database\n> topic I think I've come up with the following..\n>\n> Solution..\n>\n> Is implementing yet another bugtool going to be the solution?\n> Probably not. Do I want to go for number six? No.\n>\n>\n> The current tool has a form - we keep it.\n> The current tool mails to the bugs list - we keep it.\n\nYou are correct on implementing another bug reporting tool - why re-invent\nthe wheel? Why not use the bugzilla project for bug tracking? I do\nbelieve it has a postgresql backend by now and if it doesn't - I am sure\nit will soon or would be trivial to make a backend and contribute it back.\nThis tool has been popularized by Mozilla and RedHat ... saying that I am\nsure the couple of RedHat employees on the list wouldn't mind giving a\nhand with setup and what not (though they will have to speak for\nthemselves on this issues).\n\n-- \n//========================================================\\\\\n|| D. Hageman <dhageman@dracken.com> ||\n\\\\========================================================//\n\n",
"msg_date": "Tue, 21 Aug 2001 21:07:03 -0500 (CDT)",
"msg_from": "\"D. Hageman\" <dhageman@dracken.com>",
"msg_from_op": false,
"msg_subject": "Re: bugs - lets call an exterminator!"
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> Needs\n\n> easy reporting of bugs - sent to bugs list\n> easy lookup of previous bugs\n> summary of fix or workaround\n> detail of fix or work around\n> little to no intervention of\n> developers\n> ability of developer to add\n> comments\n\n> That should sum it up.\n\nCheck.\n\n> We've had a couple of volunteers to keep the database up to date.\n> Is it enough? I dunno, if I were to guess I'd have to look at\n> previous experience and say probably not.\n\nAFAIR, we had *zero* people paying any attention to the state of the\nbug database up to now. A couple of people ought to make a big\ndifference.\n\n> Is implementing yet another bugtool going to be the solution?\n> Probably not. Do I want to go for number six? No.\n\nIf you're the man maintaining it then I'm certainly not going to tell\nyou how to do your job. OTOH --- it does seem like a lot of people\nlike Bugzilla. Might be worth at least a cursory look.\n\n> Here's what I propose.\n\n> The current tool has a form - we keep it.\n> The current tool mails to the bugs list - we keep it.\n\nThose are both fine. How do we get feedback in the other direction,\nie mailing lists to bug database? That's the $64 question in my mind\nat the moment.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Aug 2001 23:30:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: bugs - lets call an exterminator! "
},
{
"msg_contents": "I think it'd be not so difficult to extend our mailware\n(fts.postgresql.org) to handle bug-list. Actually,\nmailware has much more features, it's already has search/read,\ntrack features. Adding post is trivial. Developers (who actually\nfix a bugs) usually read mailing lists and reply's to BUG, which\nshould be automatically go to corresponding bug-thread.\n\n\tOleg\nOn Tue, 21 Aug 2001, Tom Lane wrote:\n\n> Vince Vielhaber <vev@michvhf.com> writes:\n> > Needs\n>\n> > easy reporting of bugs - sent to bugs list\n> > easy lookup of previous bugs\n> > summary of fix or workaround\n> > detail of fix or work around\n> > little to no intervention of\n> > developers\n> > ability of developer to add\n> > comments\n>\n> > That should sum it up.\n>\n> Check.\n>\n> > We've had a couple of volunteers to keep the database up to date.\n> > Is it enough? I dunno, if I were to guess I'd have to look at\n> > previous experience and say probably not.\n>\n> AFAIR, we had *zero* people paying any attention to the state of the\n> bug database up to now. A couple of people ought to make a big\n> difference.\n>\n> > Is implementing yet another bugtool going to be the solution?\n> > Probably not. Do I want to go for number six? No.\n>\n> If you're the man maintaining it then I'm certainly not going to tell\n> you how to do your job. OTOH --- it does seem like a lot of people\n> like Bugzilla. Might be worth at least a cursory look.\n>\n> > Here's what I propose.\n>\n> > The current tool has a form - we keep it.\n> > The current tool mails to the bugs list - we keep it.\n>\n> Those are both fine. How do we get feedback in the other direction,\n> ie mailing lists to bug database? 
That's the $64 question in my mind\n> at the moment.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 22 Aug 2001 15:01:40 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: bugs - lets call an exterminator! "
},
{
"msg_contents": "On Tue, 21 Aug 2001, D. Hageman wrote:\n\n> On Tue, 21 Aug 2001, Vince Vielhaber wrote:\n>\n> >\n> > In the seemingly hundreds of thousands of messages on the bug database\n> > topic I think I've come up with the following..\n> >\n> > Solution..\n> >\n> > Is implementing yet another bugtool going to be the solution?\n> > Probably not. Do I want to go for number six? No.\n> >\n> >\n> > The current tool has a form - we keep it.\n> > The current tool mails to the bugs list - we keep it.\n>\n> You are correct on implementing another bug reporting tool - why re-invent\n> the wheel? Why not use the bugzilla project for bug tracking? I do\n> believe it has a postgresql backend by now and if it doesn't - I am sure\n> it will soon or would be trivial to make a backend and contribute it back.\n> This tool has been popularized by Mozilla and RedHat ... saying that I am\n> sure the couple of RedHat employees on the list wouldn't mind giving a\n> hand with setup and what not (though they will have to speak for\n> themselves on this issues).\n\nEverybody keeps saying bugzilla. What EXACTLY will bugzilla do for us\nthat would make me want to learn it and install it? BTW, the current\nwheel was invented a year ago 'cuze nothing really fit what we needed.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 22 Aug 2001 14:56:06 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "Re: bugs - lets call an exterminator!"
},
{
"msg_contents": "Vince asks:\n\n> Everybody keeps saying bugzilla. What EXACTLY will bugzilla do for us\n> that would make me want to learn it and install it? BTW, the current\n> wheel was invented a year ago 'cuze nothing really fit what we needed.\n\nThe reasons I would choose Bugzilla:\n\n1. It's *not* written by us so (in theory) we don't have to waste time\ndeveloping yet another bug tracking solution.\n\n2. It sends email to people involved with a bug whenever the detail\nassociated with that bug is modified. This includes the reporter, who\noften will feedback that it now works, at which time the fixer or the\nreporter can mark the bug as fixed.\n\n3. It complains when a NEW bug hasn't been looked at for /n/ days --\nthis means that any not-a-bug's will be closed, while any that are\nreally bugs will be accepted.\n\n4. Good query facilities, if a little complex to use.\n\n5. I think Bugzilla's concepts of products, components and versions fit\nthe way we work.\nI envisage that 'Postgres', 'Interfaces', 'Languages' might be products\nthat we would have.\nWithin 'Postgres' we would have the various subsystems that make up the\ncore.\nWithin 'Interfaces' we would have 'JDBC', 'ODBC' etc.\nWithin 'Languages' we would have 'PL/pgSQL' etc.\n\n\nArguments accepted.\n\n\nThere are other tools the Mozilla project uses that we could also use:\n\nTinderbox -- continuous automated builds, including subsequent regression\ntests\n(useful for seeing who broke CVS).\nBonsai -- CVS integration for Bugzilla\n\n\nCheers,\n\nColin\n\n\n",
"msg_date": "Thu, 23 Aug 2001 10:00:40 +0200",
"msg_from": "\"Colin 't Hart\" <cthart@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: bugs - lets call an exterminator!"
},
{
"msg_contents": "On Thu, 23 Aug 2001, Colin 't Hart wrote:\n\n> Vince asks:\n>\n> > Everybody keeps saying bugzilla. What EXACTLY will bugzilla do for us\n> > that would make me want to learn it and install it? BTW, the current\n> > wheel was invented a year ago 'cuze nothing really fit what we needed.\n>\n> The reasons I would choose Bugzilla:\n>\n> 1. It's *not* written by us so (in theory) we don't have to waste time\n> developing yet another bug tracking solution.\n\nWhat we have is already developed and refining it isn't a problem.\n\n> 2. It sends email to people involved with a bug whenever the detail\n> associated with that bug is modified. This includes the reporter, who\n> often will feedback that it now works, at which time the fixer or the\n> reporter can mark the bug as fixed.\n\nWhat we have already does this, but noone was using it.\n\n> 3. It complains when a NEW bug hasn't been looked at for /n/ days --\n> this means that any not-a-bug's will be closed, while any that are\n> really bugs will be accepted.\n\nThis would piss off the developers.\n\n> 4. Good query facilities, if a little complex to use.\n\nPlease elaborate.\n\n> 5. 
I think Bugzilla's concepts of products, components and versions fit\n> the way we work.\n> I envisage that 'Postgres', 'Interfaces', 'Languages' might be products\n> that we would have.\n> Within 'Postgres' we would have the various subsystems that make up the\n> core.\n> Within 'Interfaces' we would have 'JDBC', 'ODBC' etc.\n> Within 'Languages' we would have 'PL/pgSQL' etc.\n\nI can see a little benefit to this, but for the most part the same\npeople that are working on the core pieces of PostgreSQL are also\nworking on the interfaces and languages.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 23 Aug 2001 06:18:27 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: bugs - lets call an exterminator!"
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> On Thu, 23 Aug 2001, Colin 't Hart wrote:\n>> 5. I think Bugzilla's concepts of products, components and versions fit\n>> the way we work.\n>> I envisage that 'Postgres', 'Interfaces', 'Languages' might be products\n>> that we would have.\n>> Within 'Postgres' we would have the various subsystems that make up the\n>> core.\n>> Within 'Interfaces' we would have 'JDBC', 'ODBC' etc.\n>> Within 'Languages' we would have 'PL/pgSQL' etc.\n\n> I can see a little benefit to this, but for the most part the same\n> people that are working on the core pieces of PostgreSQL are also\n> working on the interfaces and languages.\n\nI would argue against subdividing a bug database at all. I don't think\nthe project is large enough to require it (we are in no danger of\nbecoming the size of Mozilla anytime soon). But more importantly,\nsubdivision introduces the risk of misclassification of a bug --- and\nin my experience the initial reporter of a bug *very* frequently\nmisidentifies where the problem is. So unless additional effort is\nexpended to reclassify bugs (is that even possible in Bugzilla?), the\nclassification will degenerate to the point of being a hindrance rather\nthan a help in locating things. Overall I just don't see that much\nbenefit from a classification system.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Aug 2001 09:25:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: bugs - lets call an exterminator! "
},
{
"msg_contents": "Tom Lane wrote:\n\n> it does seem like a lot of people\n> like Bugzilla. Might be worth at least a cursory look.\n\nWe do use Bugzilla and I believe is a very good tool, which should fit\nnicely with the open development style of PostgreSQL community. New\nversion is due in a few weeks and it's been already noted that a\nPostgreSQL backend is almost ready. The Bugzilla community is growing\nfast, BTW.\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Tue, 28 Aug 2001 12:13:21 +0300",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: bugs - lets call an exterminator!"
},
{
"msg_contents": "Vince Vielhaber wrote:\n\n> Everybody keeps saying bugzilla. What EXACTLY will bugzilla do for us\n> that would make me want to learn it and install it? BTW, the current\n> wheel was invented a year ago 'cuze nothing really fit what we needed.\n\nI believe the greatest advantage for the PostgreSQL is that a Bugzilla\ninstallation would allow end-users as well as developers to check if a\nbug has already been reported, look for existing bugs, submit patches,\nadd comments, see progress in development. This is very similar to the\nopen-development style of the core development team.\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Tue, 28 Aug 2001 12:18:28 +0300",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: bugs - lets call an exterminator!"
},
{
"msg_contents": "Tom Lane wrote:\n\n>Vince Vielhaber <vev@michvhf.com> writes:\n>\n>>On Thu, 23 Aug 2001, Colin 't Hart wrote:\n>>\n>>>5. I think Bugzilla's concepts of products, components and versions fit\n>>>the way we work.\n>>>I envisage that 'Postgres', 'Interfaces', 'Languages' might be products\n>>>that we would have.\n>>>Within 'Postgres' we would have the various subsystems that make up the\n>>>core.\n>>>Within 'Interfaces' we would have 'JDBC', 'ODBC' etc.\n>>>Within 'Languages' we would have 'PL/pgSQL' etc.\n>>>\n>\n>>I can see a little benefit to this, but for the most part the same\n>>people that are working on the core pieces of PostgreSQL are also\n>>working on the interfaces and languages.\n>>\n>\n>I would argue against subdividing a bug database at all. I don't think\n>the project is large enough to require it (we are in no danger of\n>becoming the size of Mozilla anytime soon). But more importantly,\n>subdivision introduces the risk of misclassification of a bug --- and\n>in my experience the initial reporter of a bug *very* frequently\n>misidentifies where the problem is. So unless additional effort is\n>expended to reclassify bugs (is that even possible in Bugzilla?), the\n>classification will degenerate to the point of being a hindrance rather\n>than a help in locating things. Overall I just don't see that much\n>benefit from a classification system.\n>\nBugzilla does provide for the reclassification bugs. I have \nmisidentified where bugs were in Mozilla and have had them reclassified \ninto different areas/components of that project.\n\n\n\n\n\n\nTom Lane wrote:\n\nVince Vielhaber <vev@michvhf.com> writes:\n\nOn Thu, 23 Aug 2001, Colin 't Hart wrote:\n\n5. 
I think Bugzilla's concepts of products, components and versions fitthe way we work.I envisage that 'Postgres', 'Interfaces', 'Languages' might be productsthat we would have.Within 'Postgres' we would have the various subsystems that make up thecore.Within 'Interfaces' we would have 'JDBC', 'ODBC' etc.Within 'Languages' we would have 'PL/pgSQL' etc.\n\n\n\n\nI can see a little benefit to this, but for the most part the samepeople that are working on the core pieces of PostgreSQL are alsoworking on the interfaces and languages.\n\nI would argue against subdividing a bug database at all. I don't thinkthe project is large enough to require it (we are in no danger ofbecoming the size of Mozilla anytime soon). But more importantly,subdivision introduces the risk of misclassification of a bug --- andin my experience the initial reporter of a bug *very* frequentlymisidentifies where the problem is. So unless additional effort isexpended to reclassify bugs (is that even possible in Bugzilla?), theclassification will degenerate to the point of being a hindrance ratherthan a help in locating things. Overall I just don't see that muchbenefit from a classification system.\n\nBugzilla does provide for the reclassification bugs. I have misidentified\nwhere bugs were in Mozilla and have had them reclassified into different\nareas/components of that project.",
"msg_date": "Wed, 29 Aug 2001 04:56:20 -0500",
"msg_from": "Thomas Swan <tswan@olemiss.edu>",
"msg_from_op": false,
"msg_subject": "Re: bugs - lets call an exterminator!"
}
] |
[
{
"msg_contents": "At 20:08 21/08/01 -0400, Vince Vielhaber wrote:\n>\n>In the seemingly hundreds of thousands of messages on the bug database\n>topic I think I've come up with the following..\n\nYour first pass needs to have a simple mail and web based ssystem for\ndevelopers to at least close bugs. The CC idea is probably fine. You might\neven want to put an X-header in the mail message, then for each bugs list\nmessage automatically generate a footer with a web link to close the bug,\nor some such - a bit like the TIPs we get all the time. This way, a\ndeveloper can go to any message relating to the bug, click on the link &\nclose it. This should also send off a message to the list etc.\n\nAlso, to make the jobs of the volunteers easier, it would be good for each\nbug details page sho show a list of bugs mailing list trafic relating to\nthe bug, sorted by inverse date order (or, better, sub-threads). Then we\ncan look in the most recent two or three to check the status...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 22 Aug 2001 12:32:17 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Re: bugs - lets call an exterminator!"
}
] |
[
{
"msg_contents": "Hi people!.I developed a work for an university course, which I wish to share with you.\n\nI extended the foreign key clause in the create table in order to permit insertions and updates on a referencing table (a table with foreign key\nattributes) with all kind of actions (the existing ones plus CASCADE, SET NULL and SET DEFAULT).\n\nI think it is important to handle situations where the referencing data is available but it cannot be inserted due to the lack of the referenced\ntuple. It is ugly, for example, to request the user to create a dummy referenced entry previous to the insertion since it can be done\nautomatically with the proposed functionality.\nApplying it in the context of a product with a well-defined execution model of triggers, like PostgreSQL, I do not introduce any kind of\nindetermination in the sequence of verification of the referential constraints, because we know beforehand, depending on the order of creation of\nthe tables and constraints, which will be the resulting order of the chain of verifications. So, when a referencing table is updated or tuples\nare added to it, even when this table is the origin of various referential chains of verifications, the resulting behavior only depends on the\norder of creation mentioned above. (I insist with the theme of determinism because I think this is the main problem for which no database product\nincludes this characteristic). 
I tested the code with examples of such cases (taking modified problematical examples from a text of Markowitz)\nand it works well.\n\nThe new syntax for the column_constraint_clause (and table_constraint_clause) of the CREATE TABLE statement that I propose (and implement) is:\n\n ...\n [ ON INSERT action ]\n [ ON DELETE action ]\n [ ON UPDATE_LEFT action ]\n [ ON UPDATE_RIGHT action ]\n ...\nwhere\n\n\"ON DELETE action\"\n stays the same as before (it refers to deletes in the referenced table),\n\n\"ON UPDATE_RIGHT action\"\n is the original ON UPDATE action (like before, it refers to modifications in the referenced table),\n\n\"ON UPDATE_LEFT action\"\n specifies the action to do when a referencing column (a FK_column) in the referencing table is being updated to a new value, and this new\nvalue do not exist like pk_value in the pk_table. If the row is updated, but the referencing column is not changed, no action is done. There are\nthe following actions.\n\n NO ACTION\n Disallows update of row.\n\n RESTRICT\n Disallows update of row.\n\n CASCADE\n Updates the value of the referenced column (the pk_column) to the new value of the\n referencing column (the fk_column).\n\n SET NULL\n Sets the referencing column values to NULL.\n\n SET DEFAULT\n Sets the referencing column values to their default value.\n\n\"ON INSERT action\"\n specifies the action to do when a referencing row (a FK_row) in the referencing table is being inserted, and the new fk_values do not exist\nlike pk_values in the referenced table (pk_table). 
There are the following actions.\n\n NO ACTION\n Disallows insert of row.\n\n RESTRICT\n Disallows insert of row.\n\n CASCADE\n Inserts a new row into the referenced table which pk_columns take the values of the new fk_columns, and the other attributes are set to\nNULL values (if it is allowed).\n\n SET NULL\n Sets the referencing column values to NULL.\n\n SET DEFAULT\n Sets the referencing column values to their default value.\n\nI have not added new files, just modified the existing ones (so the makefiles stay like before). I send a diff (-c) against the version 7.0.2\n(the one I worked with).\n\nIn summary, the patch contains:\n\n* modifications to the grammar to include the new syntax of the CREATE TABLE statement (to recognize the new tokens and do the appropriate\nstuff).\n\n* Addition of definitions of flags and masks for FOREIGN KEY constraints in CreateStmt.\n\n* the new generic trigger procedures for referential integrity constraint checks.\n\n* modifications to the parser stage to accept them (in procedures transformCreateStmt() and\n transformAlterTableStmt() ).\n\n* update to declarations for operations on built-in types.\n\n* extension of the definition of the system \"procedure\" relation (pg_proc) along with the\n relation's initial contents.\n\n* modifications to the TRIGGERs support code to accept the new characteristics.\n\nMany thanks in advance to those who read and (maybe) consider all this, regards\n\n Jose Luis Ozzano (jozzano@exa.unicen.edu.ar)\n",
"msg_date": "Wed, 22 Aug 2001 10:45:14 +0300 (GMT+03:00)",
"msg_from": "jozzano <jozzano@exa.unicen.edu.ar>",
"msg_from_op": true,
"msg_subject": "PATCH proposed with new features for CREATE TABLE"
},
{
"msg_contents": "\nCan someone comment on this?\n\n> Hi people!.I developed a work for an university course, which I wish to share with you.\n> \n> I extended the foreign key clause in the create table in order to permit insertions and updates on a referencing table (a table with foreign key\n> attributes) with all kind of actions (the existing ones plus CASCADE, SET NULL and SET DEFAULT).\n> \n> I think it is important to handle situations where the referencing data is available but it cannot be inserted due to the lack of the referenced\n> tuple. It is ugly, for example, to request the user to create a dummy referenced entry previous to the insertion since it can be done\n> automatically with the proposed functionality.\n> Applying it in the context of a product with a well-defined execution model of triggers, like PostgreSQL, I do not introduce any kind of\n> indetermination in the sequence of verification of the referential constraints, because we know beforehand, depending on the order of creation of\n> the tables and constraints, which will be the resulting order of the chain of verifications. So, when a referencing table is updated or tuples\n> are added to it, even when this table is the origin of various referential chains of verifications, the resulting behavior only depends on the\n> order of creation mentioned above. (I insist with the theme of determinism because I think this is the main problem for which no database product\n> includes this characteristic). 
I tested the code with examples of such cases (taking modified problematical examples from a text of Markowitz)\n> and it works well.\n> \n> The new syntax for the column_constraint_clause (and table_constraint_clause) of the CREATE TABLE statement that I propose (and implement) is:\n> \n> ...\n> [ ON INSERT action ]\n> [ ON DELETE action ]\n> [ ON UPDATE_LEFT action ]\n> [ ON UPDATE_RIGHT action ]\n> ...\n> where\n> \n> \"ON DELETE action\"\n> stays the same as before (it refers to deletes in the referenced table),\n> \n> \"ON UPDATE_RIGHT action\"\n> is the original ON UPDATE action (like before, it refers to modifications in the referenced table),\n> \n> \"ON UPDATE_LEFT action\"\n> specifies the action to do when a referencing column (a FK_column) in the referencing table is being updated to a new value, and this new\n> value do not exist like pk_value in the pk_table. If the row is updated, but the referencing column is not changed, no action is done. There are\n> the following actions.\n> \n> NO ACTION\n> Disallows update of row.\n> \n> RESTRICT\n> Disallows update of row.\n> \n> CASCADE\n> Updates the value of the referenced column (the pk_column) to the new value of the\n> referencing column (the fk_column).\n> \n> SET NULL\n> Sets the referencing column values to NULL.\n> \n> SET DEFAULT\n> Sets the referencing column values to their default value.\n> \n> \"ON INSERT action\"\n> specifies the action to do when a referencing row (a FK_row) in the referencing table is being inserted, and the new fk_values do not exist\n> like pk_values in the referenced table (pk_table). 
There are the following actions.\n> \n> NO ACTION\n> Disallows insert of row.\n> \n> RESTRICT\n> Disallows insert of row.\n> \n> CASCADE\n> Inserts a new row into the referenced table which pk_columns take the values of the new fk_columns, and the other attributes are set to\n> NULL values (if it is allowed).\n> \n> SET NULL\n> Sets the referencing column values to NULL.\n> \n> SET DEFAULT\n> Sets the referencing column values to their default value.\n> \n> I have not added new files, just modified the existing ones (so the makefiles stay like before). I send a diff (-c) against the version 7.0.2\n> (the one I worked with).\n> \n> In summary, the patch contains:\n> \n> * modifications to the grammar to include the new syntax of the CREATE TABLE statement (to recognize the new tokens and do the appropriate\n> stuff).\n> \n> * Addition of definitions of flags and masks for FOREIGN KEY constraints in CreateStmt.\n> \n> * the new generic trigger procedures for referential integrity constraint checks.\n> \n> * modifications to the parser stage to accept them (in procedures transformCreateStmt() and\n> transformAlterTableStmt() ).\n> \n> * update to declarations for operations on built-in types.\n> \n> * extension of the definition of the system \"procedure\" relation (pg_proc) along with the\n> relation's initial contents.\n> \n> * modifications to the TRIGGERs support code to accept the new characteristics.\n> \n> Many thanks in advance to those who read and (maybe) consider all this, regards\n> \n> Jose Luis Ozzano (jozzano@exa.unicen.edu.ar)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 24 Aug 2001 12:06:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PATCH proposed with new features for CREATE TABLE"
},
{
"msg_contents": "On Fri, 24 Aug 2001, Bruce Momjian wrote:\n\n> Can someone comment on this?\n\nI sent him some concerns I had (including the fact that we\ncan't rename ON UPDATE since it's in the spec). I'm working\nthrough some more behavioral concerns I have, but I haven't\ndecided whether or not they're actually problems. The patch\nis pretty long and I haven't had a chance to look through it,\nbut my guess is that it won't apply entirely cleanly since \nthere's been work done on alot of the sections it touches\nsince 7.0.x, but it probably shouldn't be too hard to beat\nit into submission.\n\n",
"msg_date": "Fri, 24 Aug 2001 11:50:05 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: PATCH proposed with new features for CREATE TABLE"
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> On Fri, 24 Aug 2001, Bruce Momjian wrote:\n>> Can someone comment on this?\n\n> I sent him some concerns I had (including the fact that we\n> can't rename ON UPDATE since it's in the spec).\n\nI was also concerned about the fact that it didn't seem to have a lot\nto do with the SQL-mandated behaviors... even though we don't currently\nhave all the SQL features, adding non-spec stuff now might create\nproblems when we want to add the spec features.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Aug 2001 16:04:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PATCH proposed with new features for CREATE TABLE "
},
{
"msg_contents": "> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > On Fri, 24 Aug 2001, Bruce Momjian wrote:\n> >> Can someone comment on this?\n> \n> > I sent him some concerns I had (including the fact that we\n> > can't rename ON UPDATE since it's in the spec).\n> \n> I was also concerned about the fact that it didn't seem to have a lot\n> to do with the SQL-mandated behaviors... even though we don't currently\n> have all the SQL features, adding non-spec stuff now might create\n> problems when we want to add the spec features.\n\nYea, I know, but already have lots of non-standard stuff that is going\nto cause problems when get go to standards and this features is missing\nfrom every release, and people ask for it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 24 Aug 2001 16:53:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PATCH proposed with new features for CREATE TABLE"
},
{
"msg_contents": "\nTo get this applied, we will need to hear from people who want this\nfunctionality. Sorry.\n\n\n> Hi people!.I developed a work for an university course, which I wish to share with you.\n> \n> I extended the foreign key clause in the create table in order to permit insertions and updates on a referencing table (a table with foreign key\n> attributes) with all kind of actions (the existing ones plus CASCADE, SET NULL and SET DEFAULT).\n> \n> I think it is important to handle situations where the referencing data is available but it cannot be inserted due to the lack of the referenced\n> tuple. It is ugly, for example, to request the user to create a dummy referenced entry previous to the insertion since it can be done\n> automatically with the proposed functionality.\n> Applying it in the context of a product with a well-defined execution model of triggers, like PostgreSQL, I do not introduce any kind of\n> indetermination in the sequence of verification of the referential constraints, because we know beforehand, depending on the order of creation of\n> the tables and constraints, which will be the resulting order of the chain of verifications. So, when a referencing table is updated or tuples\n> are added to it, even when this table is the origin of various referential chains of verifications, the resulting behavior only depends on the\n> order of creation mentioned above. (I insist with the theme of determinism because I think this is the main problem for which no database product\n> includes this characteristic). 
I tested the code with examples of such cases (taking modified problematical examples from a text of Markowitz)\n> and it works well.\n> \n> The new syntax for the column_constraint_clause (and table_constraint_clause) of the CREATE TABLE statement that I propose (and implement) is:\n> \n> ...\n> [ ON INSERT action ]\n> [ ON DELETE action ]\n> [ ON UPDATE_LEFT action ]\n> [ ON UPDATE_RIGHT action ]\n> ...\n> where\n> \n> \"ON DELETE action\"\n> stays the same as before (it refers to deletes in the referenced table),\n> \n> \"ON UPDATE_RIGHT action\"\n> is the original ON UPDATE action (like before, it refers to modifications in the referenced table),\n> \n> \"ON UPDATE_LEFT action\"\n> specifies the action to do when a referencing column (a FK_column) in the referencing table is being updated to a new value, and this new\n> value do not exist like pk_value in the pk_table. If the row is updated, but the referencing column is not changed, no action is done. There are\n> the following actions.\n> \n> NO ACTION\n> Disallows update of row.\n> \n> RESTRICT\n> Disallows update of row.\n> \n> CASCADE\n> Updates the value of the referenced column (the pk_column) to the new value of the\n> referencing column (the fk_column).\n> \n> SET NULL\n> Sets the referencing column values to NULL.\n> \n> SET DEFAULT\n> Sets the referencing column values to their default value.\n> \n> \"ON INSERT action\"\n> specifies the action to do when a referencing row (a FK_row) in the referencing table is being inserted, and the new fk_values do not exist\n> like pk_values in the referenced table (pk_table). 
There are the following actions.\n> \n> NO ACTION\n> Disallows insert of row.\n> \n> RESTRICT\n> Disallows insert of row.\n> \n> CASCADE\n> Inserts a new row into the referenced table which pk_columns take the values of the new fk_columns, and the other attributes are set to\n> NULL values (if it is allowed).\n> \n> SET NULL\n> Sets the referencing column values to NULL.\n> \n> SET DEFAULT\n> Sets the referencing column values to their default value.\n> \n> I have not added new files, just modified the existing ones (so the makefiles stay like before). I send a diff (-c) against the version 7.0.2\n> (the one I worked with).\n> \n> In summary, the patch contains:\n> \n> * modifications to the grammar to include the new syntax of the CREATE TABLE statement (to recognize the new tokens and do the appropriate\n> stuff).\n> \n> * Addition of definitions of flags and masks for FOREIGN KEY constraints in CreateStmt.\n> \n> * the new generic trigger procedures for referential integrity constraint checks.\n> \n> * modifications to the parser stage to accept them (in procedures transformCreateStmt() and\n> transformAlterTableStmt() ).\n> \n> * update to declarations for operations on built-in types.\n> \n> * extension of the definition of the system \"procedure\" relation (pg_proc) along with the\n> relation's initial contents.\n> \n> * modifications to the TRIGGERs support code to accept the new characteristics.\n> \n> Many thanks in advance to those who read and (maybe) consider all this, regards\n> \n> Jose Luis Ozzano (jozzano@exa.unicen.edu.ar)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 Sep 2001 16:13:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PATCH proposed with new features for CREATE TABLE"
}
] |