[
{
"msg_contents": "I have implemented this TODO item:\n\n\t* Add getpid() function to backend\n\nThere were a large number of pg_stat functions that access pids and\nbackends slots so I added it there:\n\t\n\ttest=> select pg_stat_get_backend_mypid();\n\t pg_stat_get_backend_mypid \n\t---------------------------\n\t 2757\n\t(1 row)\n\nApplied.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/utils/adt/pgstatfuncs.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/utils/adt/pgstatfuncs.c,v\nretrieving revision 1.4\ndiff -c -r1.4 pgstatfuncs.c\n*** src/backend/utils/adt/pgstatfuncs.c\t25 Oct 2001 05:49:45 -0000\t1.4\n--- src/backend/utils/adt/pgstatfuncs.c\t31 Jul 2002 00:36:27 -0000\n***************\n*** 19,24 ****\n--- 19,25 ----\n extern Datum pg_stat_get_blocks_hit(PG_FUNCTION_ARGS);\n \n extern Datum pg_stat_get_backend_idset(PG_FUNCTION_ARGS);\n+ extern Datum pg_stat_get_backend_mypid(PG_FUNCTION_ARGS);\n extern Datum pg_stat_get_backend_pid(PG_FUNCTION_ARGS);\n extern Datum pg_stat_get_backend_dbid(PG_FUNCTION_ARGS);\n extern Datum pg_stat_get_backend_userid(PG_FUNCTION_ARGS);\n***************\n*** 208,213 ****\n--- 209,221 ----\n \n \t((ReturnSetInfo *) (fcinfo->resultinfo))->isDone = ExprMultipleResult;\n \tPG_RETURN_INT32(result);\n+ }\n+ \n+ \n+ Datum\n+ pg_stat_get_backend_mypid(PG_FUNCTION_ARGS)\n+ {\n+ \tPG_RETURN_INT32(MyProcPid);\n }\n \n \nIndex: src/include/catalog/pg_proc.h\n===================================================================\nRCS file: /cvsroot/pgsql/src/include/catalog/pg_proc.h,v\nretrieving revision 1.246\ndiff -c -r1.246 pg_proc.h\n*** src/include/catalog/pg_proc.h\t24 Jul 2002 19:11:13 -0000\t1.246\n--- src/include/catalog/pg_proc.h\t31 Jul 2002 00:36:36 -0000\n***************\n*** 2703,2708 
****\n--- 2703,2710 ----\n DESCR(\"Statistics: Number of blocks found in cache\");\n DATA(insert OID = 1936 ( pg_stat_get_backend_idset\t\tPGNSP PGUID 12 f f t t s 0 23 \"\"\tpg_stat_get_backend_idset - _null_ ));\n DESCR(\"Statistics: Currently active backend IDs\");\n+ DATA(insert OID = 2026 ( pg_stat_get_backend_mypid\t\tPGNSP PGUID 12 f f t f s 0 23 \"\"\tpg_stat_get_backend_mypid - _null_ ));\n+ DESCR(\"Statistics: My backend ID\");\n DATA(insert OID = 1937 ( pg_stat_get_backend_pid\t\tPGNSP PGUID 12 f f t f s 1 23 \"23\" pg_stat_get_backend_pid - _null_ ));\n DESCR(\"Statistics: PID of backend\");\n DATA(insert OID = 1938 ( pg_stat_get_backend_dbid\t\tPGNSP PGUID 12 f f t f s 1 26 \"23\" pg_stat_get_backend_dbid - _null_ ));",
"msg_date": "Tue, 30 Jul 2002 20:40:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "getpid() function"
},
{
"msg_contents": "On Tue, Jul 30, 2002 at 08:40:13PM -0400, Bruce Momjian wrote:\n> I have implemented this TODO item:\n> \n> \t* Add getpid() function to backend\n> \n> There were a large number of pg_stat functions that access pids and\n> backends slots so I added it there:\n> \t\n> \ttest=> select pg_stat_get_backend_mypid();\n\nIf we're going to add it to pg_stat_*, why is 'backend' part of the\nname? All the existing backend_* function fetch some piece of data\nabout a given backend -- whereas this function does not (it takes\nno arguments).\n\nIMHO, a better name would be something like 'backend_process_id()',\nor 'unix_pid', or 'backend_pid()'.\n\nAlso, can you add some documentation on this?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Tue, 30 Jul 2002 21:17:58 -0400",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "OK, renamed to backend_pid() to match the libpq name. I was unsure\nabout merging it into the stats stuff myself.\n\t\n\tsetest=> select backend_pid();\n\t backend_pid \n\t-------------\n\t 12996\n\t(1 row)\n\nWhere does the mention belong in the docs? I have it in the monitoring\nsection in the stats section right now.\n\n---------------------------------------------------------------------------\n\nNeil Conway wrote:\n> On Tue, Jul 30, 2002 at 08:40:13PM -0400, Bruce Momjian wrote:\n> > I have implemented this TODO item:\n> > \n> > \t* Add getpid() function to backend\n> > \n> > There were a large number of pg_stat functions that access pids and\n> > backends slots so I added it there:\n> > \t\n> > \ttest=> select pg_stat_get_backend_mypid();\n> \n> If we're going to add it to pg_stat_*, why is 'backend' part of the\n> name? All the existing backend_* function fetch some piece of data\n> about a given backend -- whereas this function does not (it takes\n> no arguments).\n> \n> IMHO, a better name would be something like 'backend_process_id()',\n> or 'unix_pid', or 'backend_pid()'.\n> \n> Also, can you add some documentation on this?\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: doc/src/sgml/monitoring.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/monitoring.sgml,v\nretrieving revision 1.7\ndiff -c -r1.7 monitoring.sgml\n*** doc/src/sgml/monitoring.sgml\t22 Mar 2002 19:20:14 -0000\t1.7\n--- doc/src/sgml/monitoring.sgml\t31 Jul 2002 01:42:11 -0000\n***************\n*** 481,490 ****\n </row>\n \n <row>\n <entry><function>pg_stat_get_backend_pid</function>(<type>integer</type>)</entry>\n <entry><type>integer</type></entry>\n <entry>\n! PID of backend process\n </entry>\n </row>\n \n--- 481,498 ----\n </row>\n \n <row>\n+ <entry><function>backend_pid</function>()</entry>\n+ <entry><type>integer</type></entry>\n+ <entry>\n+ Process ID of current backend\n+ </entry>\n+ </row>\n+ \n+ <row>\n <entry><function>pg_stat_get_backend_pid</function>(<type>integer</type>)</entry>\n <entry><type>integer</type></entry>\n <entry>\n! Process ID of all backend processes\n </entry>\n </row>\n \nIndex: src/backend/utils/adt/pgstatfuncs.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/utils/adt/pgstatfuncs.c,v\nretrieving revision 1.5\ndiff -c -r1.5 pgstatfuncs.c\n*** src/backend/utils/adt/pgstatfuncs.c\t31 Jul 2002 00:40:40 -0000\t1.5\n--- src/backend/utils/adt/pgstatfuncs.c\t31 Jul 2002 01:42:12 -0000\n***************\n*** 19,25 ****\n extern Datum pg_stat_get_blocks_hit(PG_FUNCTION_ARGS);\n \n extern Datum pg_stat_get_backend_idset(PG_FUNCTION_ARGS);\n! extern Datum pg_stat_get_backend_mypid(PG_FUNCTION_ARGS);\n extern Datum pg_stat_get_backend_pid(PG_FUNCTION_ARGS);\n extern Datum pg_stat_get_backend_dbid(PG_FUNCTION_ARGS);\n extern Datum pg_stat_get_backend_userid(PG_FUNCTION_ARGS);\n--- 19,25 ----\n extern Datum pg_stat_get_blocks_hit(PG_FUNCTION_ARGS);\n \n extern Datum pg_stat_get_backend_idset(PG_FUNCTION_ARGS);\n! 
extern Datum backend_pid(PG_FUNCTION_ARGS);\n extern Datum pg_stat_get_backend_pid(PG_FUNCTION_ARGS);\n extern Datum pg_stat_get_backend_dbid(PG_FUNCTION_ARGS);\n extern Datum pg_stat_get_backend_userid(PG_FUNCTION_ARGS);\n***************\n*** 213,219 ****\n \n \n Datum\n! pg_stat_get_backend_mypid(PG_FUNCTION_ARGS)\n {\n \tPG_RETURN_INT32(MyProcPid);\n }\n--- 213,219 ----\n \n \n Datum\n! backend_pid(PG_FUNCTION_ARGS)\n {\n \tPG_RETURN_INT32(MyProcPid);\n }\nIndex: src/include/catalog/pg_proc.h\n===================================================================\nRCS file: /cvsroot/pgsql/src/include/catalog/pg_proc.h,v\nretrieving revision 1.247\ndiff -c -r1.247 pg_proc.h\n*** src/include/catalog/pg_proc.h\t31 Jul 2002 00:40:40 -0000\t1.247\n--- src/include/catalog/pg_proc.h\t31 Jul 2002 01:42:20 -0000\n***************\n*** 2703,2710 ****\n DESCR(\"Statistics: Number of blocks found in cache\");\n DATA(insert OID = 1936 ( pg_stat_get_backend_idset\t\tPGNSP PGUID 12 f f t t s 0 23 \"\"\tpg_stat_get_backend_idset - _null_ ));\n DESCR(\"Statistics: Currently active backend IDs\");\n! DATA(insert OID = 2026 ( pg_stat_get_backend_mypid\t\tPGNSP PGUID 12 f f t f s 0 23 \"\"\tpg_stat_get_backend_mypid - _null_ ));\n! DESCR(\"Statistics: My backend ID\");\n DATA(insert OID = 1937 ( pg_stat_get_backend_pid\t\tPGNSP PGUID 12 f f t f s 1 23 \"23\" pg_stat_get_backend_pid - _null_ ));\n DESCR(\"Statistics: PID of backend\");\n DATA(insert OID = 1938 ( pg_stat_get_backend_dbid\t\tPGNSP PGUID 12 f f t f s 1 26 \"23\" pg_stat_get_backend_dbid - _null_ ));\n--- 2703,2710 ----\n DESCR(\"Statistics: Number of blocks found in cache\");\n DATA(insert OID = 1936 ( pg_stat_get_backend_idset\t\tPGNSP PGUID 12 f f t t s 0 23 \"\"\tpg_stat_get_backend_idset - _null_ ));\n DESCR(\"Statistics: Currently active backend IDs\");\n! DATA(insert OID = 2026 ( backend_pid\t\t\t\t\tPGNSP PGUID 12 f f t f s 0 23 \"\"\tbackend_pid - _null_ ));\n! 
DESCR(\"Statistics: Current backend ID\");\n DATA(insert OID = 1937 ( pg_stat_get_backend_pid\t\tPGNSP PGUID 12 f f t f s 1 23 \"23\" pg_stat_get_backend_pid - _null_ ));\n DESCR(\"Statistics: PID of backend\");\n DATA(insert OID = 1938 ( pg_stat_get_backend_dbid\t\tPGNSP PGUID 12 f f t f s 1 26 \"23\" pg_stat_get_backend_dbid - _null_ ));",
"msg_date": "Tue, 30 Jul 2002 21:48:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "On Tue, Jul 30, 2002 at 09:48:42PM -0400, Bruce Momjian wrote:\n> OK, renamed to backend_pid() to match the libpq name.\n\nOk, thanks.\n\n> Where does the mention belong in the docs? I have it in the monitoring\n> section in the stats section right now.\n\nI'd vote for User's Guide -> Functions & Operators -> Misc. Functions. I\ndon't think it belongs in the monitoring section, since it isn't part of\nthe stats collector and isn't really used for monitoring.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Tue, 30 Jul 2002 21:59:29 -0400",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "Neil Conway wrote:\n> On Tue, Jul 30, 2002 at 09:48:42PM -0400, Bruce Momjian wrote:\n> > OK, renamed to backend_pid() to match the libpq name.\n> \n> Ok, thanks.\n> \n> > Where does the mention belong in the docs? I have it in the monitoring\n> > section in the stats section right now.\n> \n> I'd vote for User's Guide -> Functions & Operators -> Misc. Functions. I\n> don't think it belongs in the monitoring section, since it isn't part of\n> the stats collector and isn't really used for monitoring.\n\nOK, docs moved to new section. I kept the function in pgstatfuncs.c. \nNot sure where else to put it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 30 Jul 2002 22:43:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n> On Tue, Jul 30, 2002 at 09:48:42PM -0400, Bruce Momjian wrote:\n>> Where does the mention belong in the docs? I have it in the monitoring\n>> section in the stats section right now.\n\n> I'd vote for User's Guide -> Functions & Operators -> Misc. Functions.\n\nThere is a table in that section for \"session information functions\",\nwhich seems the correct choice.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 30 Jul 2002 23:29:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: getpid() function "
},
{
"msg_contents": "Tom Lane wrote:\n> nconway@klamath.dyndns.org (Neil Conway) writes:\n> > On Tue, Jul 30, 2002 at 09:48:42PM -0400, Bruce Momjian wrote:\n> >> Where does the mention belong in the docs? I have it in the monitoring\n> >> section in the stats section right now.\n> \n> > I'd vote for User's Guide -> Functions & Operators -> Misc. Functions.\n> \n> There is a table in that section for \"session information functions\",\n> which seems the correct choice.\n\nThat's where I put it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 30 Jul 2002 23:30:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "On Tue, Jul 30, 2002 at 09:48:42PM -0400, Bruce Momjian wrote:\n> \n> OK, renamed to backend_pid() to match the libpq name. I was unsure\n> about merging it into the stats stuff myself.\n> \t\n> \tsetest=> select backend_pid();\n> \t backend_pid \n> \t-------------\n> \t 12996\n> \t(1 row)\n\n Is there some common convention of names? Why not pg_backend_pid()?\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Thu, 1 Aug 2002 12:01:52 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "On Thu, Aug 01, 2002 at 12:01:52PM +0200, Karel Zak wrote:\n> Is there some common convention of names?\n\nNo, there isn't (for example, pg_stat_backend_id() versus\ncurrent_schema() -- or pg_get_viewdef() versus obj_description() ).\nNow that we have table functions, we might be using more built-in\nfunctions to provide information to the user -- so there will be\nan increasing need for some kind of naming convention for built-in\nfunctions. However, establishing a naming convention without\nbreaking backwards compatibility might be tricky.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Thu, 1 Aug 2002 10:44:23 -0400",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "On Thu, 2002-08-01 at 10:44, Neil Conway wrote:\n> On Thu, Aug 01, 2002 at 12:01:52PM +0200, Karel Zak wrote:\n> > Is there some common convention of names?\n\n> functions. However, establishing a naming convention without\n> breaking backwards compatibility might be tricky.\n\nSupporting both names for a release with comments in the release notes\nstating the old names will disappear soon should be enough.\n\nMost of the time it'll be a simple replacement command. Providing a\nfind -exec sed statement may help.\n\n",
"msg_date": "01 Aug 2002 10:48:32 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "On Thu, Aug 01, 2002 at 10:44:23AM -0400, Neil Conway wrote:\n> On Thu, Aug 01, 2002 at 12:01:52PM +0200, Karel Zak wrote:\n> > Is there some common convention of names?\n> \n> No, there isn't (for example, pg_stat_backend_id() versus\n\n I know -- for this I asked. IMHO for large project like PostgreSQL\n it's important. It's not good if there is possible speculate about\n name of new function. It must be unmistakable -- for this is needful\n make some convension. If somebody add new function and it's released,\n it's in the PostgreSQL almost forever.\n\n> current_schema() -- or pg_get_viewdef() versus obj_description() ).\n> Now that we have table functions, we might be using more built-in\n> functions to provide information to the user -- so there will be\n> an increasing need for some kind of naming convention for built-in\n> functions. However, establishing a naming convention without\n> breaking backwards compatibility might be tricky.\n \n Yes, but we can try be clean for new stuff.\n \n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Thu, 1 Aug 2002 17:09:52 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "\nAdded to TODO:\n\n\t* Consistently name server-side internal functions\n\n---------------------------------------------------------------------------\n\nKarel Zak wrote:\n> On Thu, Aug 01, 2002 at 10:44:23AM -0400, Neil Conway wrote:\n> > On Thu, Aug 01, 2002 at 12:01:52PM +0200, Karel Zak wrote:\n> > > Is there some common convention of names?\n> > \n> > No, there isn't (for example, pg_stat_backend_id() versus\n> \n> I know -- for this I asked. IMHO for large project like PostgreSQL\n> it's important. It's not good if there is possible speculate about\n> name of new function. It must be unmistakable -- for this is needful\n> make some convension. If somebody add new function and it's released,\n> it's in the PostgreSQL almost forever.\n> \n> > current_schema() -- or pg_get_viewdef() versus obj_description() ).\n> > Now that we have table functions, we might be using more built-in\n> > functions to provide information to the user -- so there will be\n> > an increasing need for some kind of naming convention for built-in\n> > functions. However, establishing a naming convention without\n> > breaking backwards compatibility might be tricky.\n> \n> Yes, but we can try be clean for new stuff.\n> \n> Karel\n> \n> -- \n> Karel Zak <zakkr@zf.jcu.cz>\n> http://home.zf.jcu.cz/~zakkr/\n> \n> C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 13:41:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "\nI can rename backend_pid if people want. I just made it consistent\nwith the other functions in that docs area. Comments?\n\n---------------------------------------------------------------------------\n\nKarel Zak wrote:\n> On Thu, Aug 01, 2002 at 10:44:23AM -0400, Neil Conway wrote:\n> > On Thu, Aug 01, 2002 at 12:01:52PM +0200, Karel Zak wrote:\n> > > Is there some common convention of names?\n> > \n> > No, there isn't (for example, pg_stat_backend_id() versus\n> \n> I know -- for this I asked. IMHO for large project like PostgreSQL\n> it's important. It's not good if there is possible speculate about\n> name of new function. It must be unmistakable -- for this is needful\n> make some convension. If somebody add new function and it's released,\n> it's in the PostgreSQL almost forever.\n> \n> > current_schema() -- or pg_get_viewdef() versus obj_description() ).\n> > Now that we have table functions, we might be using more built-in\n> > functions to provide information to the user -- so there will be\n> > an increasing need for some kind of naming convention for built-in\n> > functions. However, establishing a naming convention without\n> > breaking backwards compatibility might be tricky.\n> \n> Yes, but we can try be clean for new stuff.\n> \n> Karel\n> \n> -- \n> Karel Zak <zakkr@zf.jcu.cz>\n> http://home.zf.jcu.cz/~zakkr/\n> \n> C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 13:42:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "On Thu, 2002-08-01 at 19:41, Bruce Momjian wrote:\n> \n> Added to TODO:\n> \n> \t* Consistently name server-side internal functions\n\nI'd suggest:\n\n * Make up rules for consistently naming server-side internal functions\n\n * Consistently name _new_ server-side internal functions\n\n * make a plan for moving existing server-side internal functions\n to consistent naming\n\n---------------\nHannu\n",
"msg_date": "01 Aug 2002 20:50:36 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "On Thu, Aug 01, 2002 at 05:09:52PM +0200, Karel Zak wrote:\n> I know -- for this I asked. IMHO for large project like PostgreSQL\n> it's important. It's not good if there is possible speculate about\n> name of new function. It must be unmistakable -- for this is needful\n> make some convension. If somebody add new function and it's released,\n> it's in the PostgreSQL almost forever.\n\nI agree that a naming convention would be useful in some circumstances,\nbut for commonly-used functions, I think it would do more harm than\ngood. 'pg_nextval()' is awfully ugly, for example.\n\nAnd if we're going to have a naming convention for builtin functions,\nwhat about builtin types? 'pg_int4', anyone? :-)\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Thu, 1 Aug 2002 15:09:25 -0400",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "Neil Conway writes:\n\n> On Thu, Aug 01, 2002 at 12:01:52PM +0200, Karel Zak wrote:\n> > Is there some common convention of names?\n>\n> No, there isn't (for example, pg_stat_backend_id() versus\n> current_schema() -- or pg_get_viewdef() versus obj_description() ).\n\nThe \"pg_\" naming scheme is obsolete because system and user namespaces are\nnow isolated. Anything involving \"get\" is also redundant, IMHO, because\nwe aren't dealing with object-oriented things. Besides that, the\nconvention in SQL seems to be to use full noun phrases with words\nseparated by underscores.\n\nSo if \"pg_get_viewdef\" where reinvented today, by me, it would be called\n\"view_definition\".\n\nA whole 'nother issue is to use the right terms for the right things. For\nexample, the term \"backend\" is rather ambiguous and PostgreSQL uses it\ndifferently from everyone else. Instead I would use \"server process\" when\nreferring to the PID.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 1 Aug 2002 22:05:11 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Neil Conway writes:\n> \n> > On Thu, Aug 01, 2002 at 12:01:52PM +0200, Karel Zak wrote:\n> > > Is there some common convention of names?\n> >\n> > No, there isn't (for example, pg_stat_backend_id() versus\n> > current_schema() -- or pg_get_viewdef() versus obj_description() ).\n> \n> The \"pg_\" naming scheme is obsolete because system and user namespaces are\n> now isolated. Anything involving \"get\" is also redundant, IMHO, because\n> we aren't dealing with object-oriented things. Besides that, the\n> convention in SQL seems to be to use full noun phrases with words\n> separated by underscores.\n> \n> So if \"pg_get_viewdef\" where reinvented today, by me, it would be called\n> \"view_definition\".\n> \n> A whole 'nother issue is to use the right terms for the right things. For\n> example, the term \"backend\" is rather ambiguous and PostgreSQL uses it\n> differently from everyone else. Instead I would use \"server process\" when\n> referring to the PID.\n\nYes, I wanted to match libpq's function, which is the way people used to\nget the pid.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 17:02:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "> No, there isn't (for example, pg_stat_backend_id() versus\n> current_schema() -- or pg_get_viewdef() versus obj_description() ).\n> Now that we have table functions, we might be using more built-in\n> functions to provide information to the user -- so there will be\n> an increasing need for some kind of naming convention for built-in\n> functions. However, establishing a naming convention without\n> breaking backwards compatibility might be tricky.\n\nI personally think that as many functions as possible should be prefixed\npg_*... People are still used to avoiding pg_ as a prefix.\n\nChris\n\n",
"msg_date": "Fri, 2 Aug 2002 09:13:06 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I can rename backend_pid if people want. I just made it consistent\n> with the other functions in that docs area. Comments?\n\nI'd go for pg_backend_pid, I think. It's not an SQL standard function\nand certainly never will be, so some sort of prefix seems appropriate.\n\nPerhaps a more relevant question is why are we cluttering the namespace\nwith any such function at all? What's the use case for it? We've\ngotten along fine without one so far, and I don't really think that we\n*ought* to be exposing random bits of internal implementation details\nat the SQL level.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Aug 2002 00:23:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: getpid() function "
},
{
"msg_contents": "...\n> Perhaps a more relevant question is why are we cluttering the namespace\n> with any such function at all? What's the use case for it? We've\n> gotten along fine without one so far, and I don't really think that we\n> *ought* to be exposing random bits of internal implementation details\n> at the SQL level.\n\nActually, I was wondering the same thing, maybe for a different reason.\nExposing the backend internals could have security implications (though\ndon't make me concoct a scenario to prove it ;)\n\nAlthough it might have some usefulness for debugging, I think it should\nnot be an \"installed by default\" feature, so istm would be a great\ncandidate for a contrib/ function or library. If someone needs it, it is\nalmost immediately available.\n\n - Thomas\n",
"msg_date": "Thu, 01 Aug 2002 21:48:34 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "On Thu, Aug 01, 2002 at 01:41:49PM -0400, Bruce Momjian wrote:\n> \n> Added to TODO:\n> \n> \t* Consistently name server-side internal functions\n\n OK, good start of discussion is define groups of the PostgreSQL \n functions:\n\n 1/ Extern compatible functions\n\n The functions compatible with standards or customs\n or others SQL servers. For example trim, to_char, ...\n \n 2/ PostgreSQL specific functions used in standard SQL operations\n \n (the function works with standard data and not load it from \n internal PostgreSQL stuff).\n\n For example convert(), all datetype function like int(). The name \n convenition must be like names in group 1/\n\n 3/ PostgreSQL specific system functions\n\n For example pg_backend_pid(). IMHO clean solution is\n use \"pg_\" prefix.\n\n 4/ The calls without '( )'\n\n For example \"SELECT current_user;\" IMHO right is not use\n \"pg_\" prefix _if_ you call it without braces. _But_ if you call\n it with '()' and function can be member of group 3/ is right use\n \"pg_\" prefix.\n\n For example:\n SELECT current_user;\n SELECT pg_current_user();\n\n 5/ Deprecated functions\n\n In docs marked as \"deprecated\" and will removed in some major\n release (for example in 8.0).\n\n\n 6/ ???\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Fri, 2 Aug 2002 10:26:36 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "> 2/ PostgreSQL specific functions used in standard SQL operations\n> \n> (the function works with standard data and not load it from \n> internal PostgreSQL stuff).\n> \n> For example convert(), all datetype function like int(). The name \n> convenition must be like names in group 1/\n\nFYI, I have been proposing SQL99 compatible convert(). I would like to\nadd it if no one objects.\n--\nTatsuo Ishii\n\n",
"msg_date": "Fri, 02 Aug 2002 17:38:37 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "> > For example convert(), all datetype function like int(). The name\n> > convenition must be like names in group 1/\n>\n> FYI, I have been proposing SQL99 compatible convert(). I would like to\n> add it if no one objects.\n\nNo objection, but what does it do out of interest? Will it cause a\nbackwards compatibility problem at all?\n\nChris\n\n",
"msg_date": "Fri, 2 Aug 2002 16:50:13 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "Hello,\n\nSorry if it's wrong list for the question. Could you suggest some tweaks \nto the PostgreSQL 7.2.1 to handle the following types of tables faster?\n\nHere we have table \"stats\" with something over one millon records. \nObvious \"SELECT COUNT(*) FROM stats \" takes over 40 seconds to execute, \nand this amount of time does not shorten considerably in subsequent \nsimilar requests. All the databases are vacuumed nightly.\n\nCREATE TABLE \"stats\" (\n \"url\" varchar(50),\n \"src_port\" varchar(10),\n \"ip\" varchar(16),\n \"dst_port\" varchar(10),\n \"proto\" varchar(10),\n \"size\" int8,\n \"login\" varchar(20),\n \"start_date\" timestamptz,\n \"end_date\" timestamptz,\n \"aggregated\" int4\n);\nCREATE INDEX \"aggregated_stats_key\" ON \"stats\" (\"aggregated\");\nCREATE INDEX \"ip_stats_key\" ON \"stats\" (\"ip\");\n\nstats=> explain select count(*) from stats;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=113331.10..113331.10 rows=1 width=0)\n -> Seq Scan on stats (cost=0.00..110085.28 rows=1298328 width=0)\n\nEXPLAIN\nstats=> select count(*) from stats;\n count \n---------\n 1298328\n(1 row)\n\nThe system is FreeBSD-4.6-stable, softupdates on, Athlon XP 1500+, 512 Mb DDR, ATA 100 HDD.\n\nThanks in advance,\nYar\n\n\n",
"msg_date": "Fri, 02 Aug 2002 12:59:19 +0400",
"msg_from": "Yaroslav Dmitriev <yar@warlock.ru>",
"msg_from_op": false,
"msg_subject": "[]performance issues"
},
{
"msg_contents": "> Here we have table \"stats\" with something over one millon records.\n> Obvious \"SELECT COUNT(*) FROM stats \" takes over 40 seconds to execute,\n> and this amount of time does not shorten considerably in subsequent\n> similar requests. All the databases are vacuumed nightly.\n\nDoing a row count requires a sequential scan in Postgres.\n\nTry creating another summary table that just has one row and one column and\nis an integer.\n\nThen, create a trigger on your stats table that fires whenever a new row is\nadded or deleted and updates the tally of rows in the summary table.\n\nThen, just select from the summary table to get an instantaneous count. Of\ncourse, insert and deletes will be marginally slowed down.\n\nRefer to the docs for CREATE TRIGGER, CREATE FUNCTION and PL/PGSQL for more\ninfo on how to do this.\n\nRegards,\n\nChris\n\n",
"msg_date": "Fri, 2 Aug 2002 17:15:38 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: []performance issues"
},
{
"msg_contents": "On Fri, Aug 02, 2002 at 05:38:37PM +0900, Tatsuo Ishii wrote:\n> > 2/ PostgreSQL specific functions used in standard SQL operations\n> > \n> > (the function works with standard data and not load it from \n> > internal PostgreSQL stuff).\n> > \n> > For example convert(), all datetype function like int(). The name \n> > convenition must be like names in group 1/\n> \n> FYI, I have been proposing SQL99 compatible convert(). I would like to\n> add it if no one objects.\n\n I use convert() as example only. I think there is more function\n for group 2/.\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Fri, 2 Aug 2002 11:51:39 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n\n>Doing a row count requires a sequential scan in Postgres.\n>\n>Try creating another summary table that just has one row and one column and\n>is an integer.\n> \n>\n\nI have THREE summary tables derived from \"stats\" with different levels \nof aggregation. They work quite fast, But:\n\n1) Summary tables grow too\n2) There are requests which cannot be predicted, so they involve the \n\"stats\" table itself.\n\nSo I am still interested in PostgreSQL's ability to deal with \nmultimillon records tables.\n\nBest regards,\nYar.\n\n",
"msg_date": "Fri, 02 Aug 2002 15:48:39 +0400",
"msg_from": "Yaroslav Dmitriev <yar@warlock.ru>",
"msg_from_op": false,
"msg_subject": "Re: []performance issues"
},
{
"msg_contents": "\ntimes change if you do\n\"SELECT COUNT(1) FROM stats\" ?\n\n--\n:: Sergio A. Kessler ::\nLinux user #64005 - http://counter.li.org\n\n\"Yaroslav Dmitriev\" <yar@warlock.ru> escribi� en el mensaje\nnews:3D4A49E7.6090405@warlock.ru...\n> Hello,\n>\n> Sorry if it's wrong list for the question. Could you suggest some tweaks\n> to the PostgreSQL 7.2.1 to handle the following types of tables faster?\n>\n> Here we have table \"stats\" with something over one millon records.\n> Obvious \"SELECT COUNT(*) FROM stats \" takes over 40 seconds to execute,\n> and this amount of time does not shorten considerably in subsequent\n> similar requests. All the databases are vacuumed nightly.\n>\n> CREATE TABLE \"stats\" (\n> \"url\" varchar(50),\n> \"src_port\" varchar(10),\n> \"ip\" varchar(16),\n> \"dst_port\" varchar(10),\n> \"proto\" varchar(10),\n> \"size\" int8,\n> \"login\" varchar(20),\n> \"start_date\" timestamptz,\n> \"end_date\" timestamptz,\n> \"aggregated\" int4\n> );\n> CREATE INDEX \"aggregated_stats_key\" ON \"stats\" (\"aggregated\");\n> CREATE INDEX \"ip_stats_key\" ON \"stats\" (\"ip\");\n>\n> stats=> explain select count(*) from stats;\n> NOTICE: QUERY PLAN:\n>\n> Aggregate (cost=113331.10..113331.10 rows=1 width=0)\n> -> Seq Scan on stats (cost=0.00..110085.28 rows=1298328 width=0)\n>\n> EXPLAIN\n> stats=> select count(*) from stats;\n> count\n> ---------\n> 1298328\n> (1 row)\n>\n> The system is FreeBSD-4.6-stable, softupdates on, Athlon XP 1500+, 512 Mb\nDDR, ATA 100 HDD.\n>\n> Thanks in advance,\n> Yar\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n\n",
"msg_date": "Fri, 2 Aug 2002 10:38:03 -0300",
"msg_from": "\"Sergio A. Kessler\" <sak@ksb.com.ar>",
"msg_from_op": false,
"msg_subject": "Re: []performance issues"
},
{
"msg_contents": "On Fri, Aug 02, 2002 at 03:48:39PM +0400, Yaroslav Dmitriev wrote:\n> \n> So I am still interested in PostgreSQL's ability to deal with \n> multimillon records tables.\n\n[x-posted and Reply-To: to -general; this isn't a development\nproblem.]\n\nWe have tables with multimillion records, and they are fast. But not\nfast to count(). The MVCC design of PostgreSQL will give you very\nfew concurerncy problems, but you pay for that in the response time\nof certain kinds of aggregates, which cannot use an index.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Fri, 2 Aug 2002 11:39:32 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] []performance issues"
},
{
"msg_contents": "\nCount() is slow even on your Sun server with 16gb ram? How big is the\ndatabase?\n\nDavid Blood\n-----Original Message-----\nFrom: pgsql-general-owner@postgresql.org\n[mailto:pgsql-general-owner@postgresql.org] On Behalf Of Andrew Sullivan\nSent: Friday, August 02, 2002 9:40 AM\nTo: PostgreSQL-development\nCc: PostgreSQL general list\nSubject: Re: [GENERAL] [HACKERS] []performance issues\n\nOn Fri, Aug 02, 2002 at 03:48:39PM +0400, Yaroslav Dmitriev wrote:\n> \n> So I am still interested in PostgreSQL's ability to deal with \n> multimillon records tables.\n\n[x-posted and Reply-To: to -general; this isn't a development\nproblem.]\n\nWe have tables with multimillion records, and they are fast. But not\nfast to count(). The MVCC design of PostgreSQL will give you very\nfew concurerncy problems, but you pay for that in the response time\nof certain kinds of aggregates, which cannot use an index.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\nsubscribe-nomail command to majordomo@postgresql.org so that your\nmessage can get through to the mailing list cleanly\n\n\n\n",
"msg_date": "Fri, 2 Aug 2002 09:57:16 -0600",
"msg_from": "\"David Blood\" <david@matraex.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] []performance issues"
},
{
"msg_contents": "On Fri, 2002-08-02 at 11:39, Andrew Sullivan wrote:\n> On Fri, Aug 02, 2002 at 03:48:39PM +0400, Yaroslav Dmitriev wrote:\n> > \n> > So I am still interested in PostgreSQL's ability to deal with \n> > multimillon records tables.\n> \n> [x-posted and Reply-To: to -general; this isn't a development\n> problem.]\n> \n> We have tables with multimillion records, and they are fast. But not\n> fast to count(). The MVCC design of PostgreSQL will give you very\n> few concurerncy problems, but you pay for that in the response time\n> of certain kinds of aggregates, which cannot use an index.\n\nOf course, as suggested this is easily overcome by keeping your own c\ncounter.\n\nbegin;\ninsert into bigtable values ();\nupdate into counttable set count=count+1;\ncommit;\n\nNow you get all the fun concurrency issues -- but fetching the\ninformation will be quick. What happens more, the counts, or the\ninserts :)\n\n",
"msg_date": "02 Aug 2002 14:08:02 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] []performance issues"
},
{
"msg_contents": "On Fri, Aug 02, 2002 at 09:57:16AM -0600, David Blood wrote:\n> \n> Count() is slow even on your Sun server with 16gb ram? How big is the\n> database?\n\nWell, just relatively slow! It's always going to be relatively slow\nto seqscan a few million records. We have some tables which have\nmaybe 4 or 4.5 million records in them. (I don't spend a lot of time\ncount()ing them ;-)\n\nA\n\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Fri, 2 Aug 2002 14:11:09 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] []performance issues"
},
{
"msg_contents": "> Hello,\n> \n> Sorry if it's wrong list for the question. Could you suggest some\n> tweaks to the PostgreSQL 7.2.1 to handle the following types of tables\n> faster? \n> \n> Here we have table \"stats\" with something over one millon records. \n> Obvious \"SELECT COUNT(*) FROM stats \" takes over 40 seconds to\n> execute, and this amount of time does not shorten considerably in\n> subsequent similar requests. All the databases are vacuumed nightly.\n> \n> CREATE TABLE \"stats\" (\n> \"url\" varchar(50),\n> \"src_port\" varchar(10),\n> \"ip\" varchar(16),\n> \"dst_port\" varchar(10),\n> \"proto\" varchar(10),\n> \"size\" int8,\n> \"login\" varchar(20),\n> \"start_date\" timestamptz,\n> \"end_date\" timestamptz,\n> \"aggregated\" int4\n> );\n> CREATE INDEX \"aggregated_stats_key\" ON \"stats\" (\"aggregated\");\n> CREATE INDEX \"ip_stats_key\" ON \"stats\" (\"ip\");\n> \n> stats=> explain select count(*) from stats;\n> NOTICE: QUERY PLAN:\n> \n> Aggregate (cost=113331.10..113331.10 rows=1 width=0)\n> -> Seq Scan on stats (cost=0.00..110085.28 rows=1298328 width=0)\n> \n> EXPLAIN\n> stats=> select count(*) from stats;\n> count \n> ---------\n> 1298328\n> (1 row)\n> \n> The system is FreeBSD-4.6-stable, softupdates on, Athlon XP 1500+, 512\n> Mb DDR, ATA 100 HDD. \n> \n> Thanks in advance,\n> Yar\n> \n\nI have been dealing with a similar problem.. First I switched to scsi, \nsecond I installed enough memory and increased shared memory (in both \nfreebsd kernel and pg.conf) so that the entire database could fit into \nram; this combined with the summary table idea keeps me out of most \ntrouble\n\n",
"msg_date": "Fri, 2 Aug 2002 18:27:44 +0000 (UTC)",
"msg_from": "ngpg@grymmjack.com",
"msg_from_op": false,
"msg_subject": "Re: []performance issues"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> ...\n> > Perhaps a more relevant question is why are we cluttering the namespace\n> > with any such function at all? What's the use case for it? We've\n> > gotten along fine without one so far, and I don't really think that we\n> > *ought* to be exposing random bits of internal implementation details\n> > at the SQL level.\n> \n> Actually, I was wondering the same thing, maybe for a different reason.\n> Exposing the backend internals could have security implications (though\n> don't make me concoct a scenario to prove it ;)\n> \n> Although it might have some usefulness for debugging, I think it should\n> not be an \"installed by default\" feature, so istm would be a great\n> candidate for a contrib/ function or library. If someone needs it, it is\n> almost immediately available.\n\nIt was requested because it is exposed in libpq and people need it to\ngenerate unique names and stuff like that from within psql and\nfunctions. Seems like a valid use for the pid.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 2 Aug 2002 16:09:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Perhaps a more relevant question is why are we cluttering the namespace\n> > with any such function at all? What's the use case for it?\n\n> It was requested because it is exposed in libpq and people need it to\n> generate unique names and stuff like that from within psql and\n> functions. Seems like a valid use for the pid.\n\nThe sole reason libpq exposes it is so that you can tell a self-notify\nfrom an incoming notify. (ie, given you are LISTENing on a condition\nthat both you and other clients send NOTIFYs for, is this particular\nmessage one that you sent yourself, or not? Compare the originator PID\nin the NOTIFY message to your backend_pid to find out.) I put that\nfeature in back around 6.4, because it allowed some important\noptimizations in an app I had that used LISTEN/NOTIFY a lot.\n\nSince NOTIFY messages aren't even visible at the SQL level, the above is\nnot a reason for making PIDs visible at the SQL level.\n\nI'm really dubious about using backend PID for the sort of purpose you\nsuggest. Unique names would be *much* more safely handled with, say,\na sequence generator. If you are not using libpq or another client\nlibrary that can give you a backend-PID API call, then very likely you\ndon't have a lot of control over the backend connection either, and\nshouldn't assume that backend PID is going to be stable for you.\n(Think about pooled connections in a webserver, etc.)\n\nFinally, the most legitimate uses of PID (like discovering a backend PID\nto send SIGINT to, when some client query is running wild) are not\nsupported at all by a function that can only return your own backend's\nPID, because that's seldom the PID you need to know. The\npg_stat_activity view handles this much better.\n\nSo I'm still unconvinced that we need or want this ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Aug 2002 16:25:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: getpid() function "
},
{
"msg_contents": "> So I am still interested in PostgreSQL's ability to deal with\n> multimillon records tables.\n\nPostgres has no problem with multimillion row tables - many people on this\nlist run them - just don't do sequential scans on them if you can't afford\nthe time it takes.\n\nChris\n\n\n",
"msg_date": "Sat, 3 Aug 2002 18:07:50 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: []performance issues"
},
{
"msg_contents": "On Sat, 2002-08-03 at 01:25, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Perhaps a more relevant question is why are we cluttering the namespace\n> > > with any such function at all? What's the use case for it?\n> \n> > It was requested because it is exposed in libpq and people need it to\n> > generate unique names and stuff like that from within psql and\n> > functions. Seems like a valid use for the pid.\n> \n> The sole reason libpq exposes it is so that you can tell a self-notify\n> from an incoming notify. (ie, given you are LISTENing on a condition\n> that both you and other clients send NOTIFYs for, is this particular\n> message one that you sent yourself, or not? Compare the originator PID\n> in the NOTIFY message to your backend_pid to find out.) I put that\n> feature in back around 6.4, because it allowed some important\n> optimizations in an app I had that used LISTEN/NOTIFY a lot.\n> \n> Since NOTIFY messages aren't even visible at the SQL level, the above is\n> not a reason for making PIDs visible at the SQL level.\n\nWhen I last time showed how backend_pid function can be trivially\ndefined as \n\nhannu=# create function getpid() returns int\nhannu-# as '/lib/libc.so.6','getpid' language 'C';\nCREATE\nhannu=# select getpid();\n getpid \n--------\n 2832\n(1 row)\n\nYou claimed that NOTIFY uses some _other_ backend id (i.e. not process\nid).\n\nBut when I now tested it it seems that this is not the case, notify does\nuse the actual process id.\n\nhannu=# listen a;\nLISTEN\nhannu=# notify a;\nNOTIFY\nAsynchronous NOTIFY 'a' from backend with pid 2832 received.\n\n> \n> So I'm still unconvinced that we need or want this ...\n> \n\nAnd you can do it trivially as long as we support old-style C functions\nanyway.\n\n------------\nHannu\n\n",
"msg_date": "03 Aug 2002 17:55:36 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> You claimed that NOTIFY uses some _other_ backend id (i.e. not process\n> id).\n\nI did? Must have been momentary brain fade on my part. It's always\nbeen process ID.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 03 Aug 2002 12:30:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: getpid() function "
},
{
"msg_contents": "\nAs I remember, most cases where people have recently been asking for\nbackend pid were related to temp tables because they were named by pid. \nI don't think they are anymore. (?)\n\nWe can do two things. We can either rename it to pg_backend_pid and\nmove it to the statistics section in the docs, where the backend pids of\nall active backends are available, or remove my code additions and see\nif anyone asks for it in 7.3.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Perhaps a more relevant question is why are we cluttering the namespace\n> > > with any such function at all? What's the use case for it?\n> \n> > It was requested because it is exposed in libpq and people need it to\n> > generate unique names and stuff like that from within psql and\n> > functions. Seems like a valid use for the pid.\n> \n> The sole reason libpq exposes it is so that you can tell a self-notify\n> from an incoming notify. (ie, given you are LISTENing on a condition\n> that both you and other clients send NOTIFYs for, is this particular\n> message one that you sent yourself, or not? Compare the originator PID\n> in the NOTIFY message to your backend_pid to find out.) I put that\n> feature in back around 6.4, because it allowed some important\n> optimizations in an app I had that used LISTEN/NOTIFY a lot.\n> \n> Since NOTIFY messages aren't even visible at the SQL level, the above is\n> not a reason for making PIDs visible at the SQL level.\n> \n> I'm really dubious about using backend PID for the sort of purpose you\n> suggest. Unique names would be *much* more safely handled with, say,\n> a sequence generator. 
If you are not using libpq or another client\n> library that can give you a backend-PID API call, then very likely you\n> don't have a lot of control over the backend connection either, and\n> shouldn't assume that backend PID is going to be stable for you.\n> (Think about pooled connections in a webserver, etc.)\n> \n> Finally, the most legitimate uses of PID (like discovering a backend PID\n> to send SIGINT to, when some client query is running wild) are not\n> supported at all by a function that can only return your own backend's\n> PID, because that's seldom the PID you need to know. The\n> pg_stat_activity view handles this much better.\n> \n> So I'm still unconvinced that we need or want this ...\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 3 Aug 2002 21:11:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> As I remember, most cases where people have recently been asking for\n> backend pid were related to temp tables because they were named by pid. \n\nAh, good point.\n\n> I don't think they are anymore. (?)\n\nCheck.\n\n> We can do two things. We can either rename it to pg_backend_pid and\n> move it to the statistics section in the docs, where the backend pids of\n> all active backends are available, or remove my code additions and see\n> if anyone asks for it in 7.3.\n\nLet's take it out and wait to see if anyone really still wants it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 03 Aug 2002 22:19:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: getpid() function "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > As I remember, most cases where people have recently been asking for\n> > backend pid were related to temp tables because they were named by pid. \n> \n> Ah, good point.\n> \n> > I don't think they are anymore. (?)\n> \n> Check.\n> \n> > We can do two things. We can either rename it to pg_backend_pid and\n> > move it to the statistics section in the docs, where the backend pids of\n> > all active backends are available, or remove my code additions and see\n> > if anyone asks for it in 7.3.\n> \n> Let's take it out and wait to see if anyone really still wants it.\n\nJust when I am ready to throw it away, I come up with a use for the\nfunction:\n\n\ttest=> select * from pg_stat_activity where procpid != backend_pid();\n\nThis shows all activity _except_ my session, which pgmonitor or others\nmay want to use, and I can think of no other way to do it.\n\nComments? Maybe this is why it should be called pg_backend_id and put\nin the stat section.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 3 Aug 2002 23:03:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Let's take it out and wait to see if anyone really still wants it.\n\n> Just when I am ready to throw it away, I come up with a use for the\n> function:\n\n> \ttest=> select * from pg_stat_activity where procpid != backend_pid();\n\n> This shows all activity _except_ my session, which pgmonitor or others\n> may want to use, and I can think of no other way to do it.\n\nHm. Actually this seems like an argument for exposing MyBackendId, since\nwhat pg_stat_activity really depends on is BackendId. But as that view\nis presently defined, you'd not be able to write\n\tWHERE backendid = my_backend_id()\nbecause the view doesn't expose backendid.\n\n> Comments? Maybe this is why it should be called pg_backend_id and put\n> in the stat section.\n\n*Please* don't call it pg_backend_id --- that invites confusion with\nBackendId which is a different thing.\n\nI'd suggest pg_backend_pid.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 03 Aug 2002 23:32:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: getpid() function "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Let's take it out and wait to see if anyone really still wants it.\n> \n> > Just when I am ready to throw it away, I come up with a use for the\n> > function:\n> \n> > \ttest=> select * from pg_stat_activity where procpid != backend_pid();\n> \n> > This shows all activity _except_ my session, which pgmonitor or others\n> > may want to use, and I can think of no other way to do it.\n> \n> Hm. Actually this seems like an argument for exposing MyBackendId, since\n> what pg_stat_activity really depends on is BackendId. But as that view\n> is presently defined, you'd not be able to write\n> \tWHERE backendid = my_backend_id()\n> because the view doesn't expose backendid.\n\nYes.\n\n> > Comments? Maybe this is why it should be called pg_backend_id and put\n> > in the stat section.\n> \n> *Please* don't call it pg_backend_id --- that invites confusion with\n> BackendId which is a different thing.\n> \n> I'd suggest pg_backend_pid.\n\nSorry, I mean pg_backend_pid. I could expose backend_id but it may\nconfuse people so pid is probably better. If you had the id, you could\nuse pg_stat_get_backend_pid() to get the pid.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 3 Aug 2002 23:38:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: getpid() function"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Sorry, I mean pg_backend_pid.\n\nOkay, I was unsure if that was a typo or not.\n\n> I could expose backend_id but it may\n> confuse people so pid is probably better. If you had the id, you could\n> use pg_stat_get_backend_pid() to get the pid.\n\nYeah, I thought of suggesting pg_backend_id() to return MyBackendId and\nthen pg_stat_get_backend_pid() to get the PID, but was stopped by the\nthought that this breaks down if the stats collector isn't running.\nWhile I'm not convinced that there's any need for backend PID that's not\nconnected to looking at stats-collector results, it's probably foolish\nto set up a mechanism that doesn't work outside that context. Let's go\nwith pg_backend_pid().\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 03 Aug 2002 23:44:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: getpid() function "
},
{
"msg_contents": "We have tables of over 3.1 million records. Performance is fine for most \nthings as long as access hits an index. As already stated, count(*) \ntakes a long time. Just took over a minute for me to check the record \ncount. Our DB is primarily a data warehouse role. Creating an index on a \nchar(43) field on that table from scratch takes a while, but I think \nthat's expected. Under normal loads we have well under 1 second \"LIKE\" \nqueries on that the indexed char(43) field in the table with a join on a \ntable of 1.1 million records using a char(12) primary key.\n\nServer is a Dell PowerEdge 2400, Dual PIII 667's with a gig of memory, \n800 something megs allocated to postgres shared buffers.\n\n-Pete\n\nAndrew Sullivan wrote:\n\n>On Fri, Aug 02, 2002 at 03:48:39PM +0400, Yaroslav Dmitriev wrote:\n> \n>\n>>So I am still interested in PostgreSQL's ability to deal with \n>>multimillon records tables.\n>> \n>>\n>\n>[x-posted and Reply-To: to -general; this isn't a development\n>problem.]\n>\n>We have tables with multimillion records, and they are fast. But not\n>fast to count(). The MVCC design of PostgreSQL will give you very\n>few concurerncy problems, but you pay for that in the response time\n>of certain kinds of aggregates, which cannot use an index.\n>\n>A\n>\n> \n>\n\n",
"msg_date": "Mon, 05 Aug 2002 13:22:20 -0400",
"msg_from": "\"Peter A. Daly\" <petedaly@ix.netcom.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] []performance issues"
},
{
"msg_contents": "On Fri, Aug 02, 2002 at 02:08:02PM -0400, Rod Taylor wrote:\n> \n> Of course, as suggested this is easily overcome by keeping your own c\n> counter.\n> \n> begin;\n> insert into bigtable values ();\n> update into counttable set count=count+1;\n> commit;\n> \n> Now you get all the fun concurrency issues -- but fetching the\n> information will be quick. What happens more, the counts, or the\n> inserts :)\n\nYou could get around this with a trigger that just inserts 1 into one\ntable (call it counter_unposted), and then using an external process\nto take those units, add them to the value in counter_posted, and\ndelete them from counter_unposted. You'd always be a few minutes\nbehind, but you'd get a counter that's pretty close without too much\noverhead. Of course, this raises the obvious question: why use\ncount() at all?\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Tue, 6 Aug 2002 17:45:00 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] []performance issues"
}
]
[
{
"msg_contents": "\nI'm having a weird problem on my \" PostgreSQL 7.2.1 on i386--netbsdelf,\ncompiled by GCC 2.95.3\" system. Executing these commands:\n\nCREATE TABLE test_one (id int PRIMARY KEY, value_one text);\nCREATE TABLE test_two (id int PRIMARY KEY, value_two text);\nCREATE VIEW test AS\n SELECT test_one.id, value_one, value_two\n FROM test_one\n JOIN test_two USING (id);\nCREATE RULE test_insert AS\n ON INSERT TO test\n DO (\n\tINSERT INTO test_one (id, value_one) VALUES (NEW.id, NEW.value_one);\n\tINSERT INTO test_two (id, value_two) VALUES (NEW.id, NEW.value_two);\n\t);\nINSERT INTO test VALUES (1, 'one', 'onemore');\n\nreturns \"ERROR: Cannot insert into a view without an appropriate rule\"\nfor that last statement. The rule does show up in pg_rules, though.\n\nWhat am I doing wrong here? Is there a bug?\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 31 Jul 2002 11:34:20 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": true,
"msg_subject": "Rules and Views"
},
{
"msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> CREATE VIEW test AS ...\n> CREATE RULE test_insert AS\n> ON INSERT TO test\n> DO ...\n> INSERT INTO test VALUES (1, 'one', 'onemore');\n> ERROR: Cannot insert into a view without an appropriate rule\n\n> What am I doing wrong here? Is there a bug?\n\nMake that \"ON INSERT DO INSTEAD\". As coded, the rule leaves the\noriginal insertion into the view still active.\n\nPerhaps the error message could be phrased better --- any thoughts?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 30 Jul 2002 23:32:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views "
},
{
"msg_contents": "On Wed, 2002-07-31 at 10:22, Tom Lane wrote:\n> Curt Sampson <cjs@cynic.net> writes:\n> > On Wed, 31 Jul 2002, Tom Lane wrote:\n> >> Well, to my mind that's what the error message says now. The reason\n> >> it didn't help you was that you *did* have a rule ... but it didn't\n> >> completely override the view insertion.\n> \n> > Right, like I said, my model was wrong. I didn't think of the error\n> > message as being an \"insert behaviour\" that had to be overridden; I\n> > thought of it as a \"there is no behaviour right now\" message.\n> \n> Hm. How about\n> \n> ERROR: Cannot insert into a view\n> \tYou need an unconditional ON INSERT DO INSTEAD rule\n\nSeems more accurate, but actually you may also have two or more\nconditional rules that cover all possibilities if taken together.\n\nMaybe\n\nERROR: Cannot insert into a view\n You need an ON INSERT DO INSTEAD rule that matches your INSERT\n\nWhich covers both cases.\n\n-----------------\nHannu\n\n",
"msg_date": "31 Jul 2002 09:16:49 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views"
},
{
"msg_contents": "On Tue, 30 Jul 2002, Tom Lane wrote:\n\n> Curt Sampson <cjs@cynic.net> writes:\n> > CREATE VIEW test AS ...\n> > CREATE RULE test_insert AS\n> > ON INSERT TO test\n> > DO ...\n> > INSERT INTO test VALUES (1, 'one', 'onemore');\n> > ERROR: Cannot insert into a view without an appropriate rule\n>\n> > What am I doing wrong here? Is there a bug?\n>\n> Make that \"ON INSERT DO INSTEAD\". As coded, the rule leaves the\n> original insertion into the view still active.\n\nAh, I see! My model of how this was working was wrong.\n\n> Perhaps the error message could be phrased better --- any thoughts?\n\nMaybe a message that says something along the lines of \"cannot insert\ninto views; you need to override this behaviour with a rule\"? Also, some\nexamples in the manual would be helpful.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 31 Jul 2002 13:40:21 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": true,
"msg_subject": "Re: Rules and Views "
},
{
"msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> ERROR: Cannot insert into a view without an appropriate rule\n\n>> Perhaps the error message could be phrased better --- any thoughts?\n\n> Maybe a message that says something along the lines of \"cannot insert\n> into views; you need to override this behaviour with a rule\"?\n\nWell, to my mind that's what the error message says now. The reason\nit didn't help you was that you *did* have a rule ... but it didn't\ncompletely override the view insertion.\n\nI'm not sure how to phrase a more useful message. Note that the place\nwhere the error can be detected doesn't have any good way to know that\na non-INSTEAD rule was in fact processed, so we can't say anything quite\nas obvious as \"You needed to use INSTEAD in your rule, luser\". Can we\ncover both the no-rule-at-all case and the had-a-rule-but-it-wasn't-\nINSTEAD case in a single, reasonably phrased error message? (Just\nto make life interesting, there's also the case where you made an\nINSTEAD rule but it's conditional.)\n\n> Also, some examples in the manual would be helpful.\n\nAren't there several already? But feel free to contribute more...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Jul 2002 01:03:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views "
},
{
"msg_contents": "On Wed, 31 Jul 2002, Tom Lane wrote:\n\n> Well, to my mind that's what the error message says now. The reason\n> it didn't help you was that you *did* have a rule ... but it didn't\n> completely override the view insertion.\n\nRight, like I said, my model was wrong. I didn't think of the error\nmessage as being an \"insert behaviour\" that had to be overridden; I\nthought of it as a \"there is no behaviour right now\" message.\n\nMaybe it's just me not reading the docs all that well; I wouldn't worry\nabout this if it's not been a problem for others.\n\n> > Also, some examples in the manual would be helpful.\n>\n> Aren't there several already? But feel free to contribute more...\n\nYeah, but nothing showing these rules on a view across two tables.\nI'll try to work it out and send it here for comments.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 31 Jul 2002 14:15:16 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": true,
"msg_subject": "Re: Rules and Views "
},
{
"msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> On Wed, 31 Jul 2002, Tom Lane wrote:\n>> Well, to my mind that's what the error message says now. The reason\n>> it didn't help you was that you *did* have a rule ... but it didn't\n>> completely override the view insertion.\n\n> Right, like I said, my model was wrong. I didn't think of the error\n> message as being an \"insert behaviour\" that had to be overridden; I\n> thought of it as a \"there is no behaviour right now\" message.\n\nHm. How about\n\nERROR: Cannot insert into a view\n\tYou need an unconditional ON INSERT DO INSTEAD rule\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Jul 2002 01:22:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views "
},
{
"msg_contents": "On Wed, 31 Jul 2002, Tom Lane wrote:\n\n> ERROR: Cannot insert into a view\n> \tYou need an unconditional ON INSERT DO INSTEAD rule\n\nSounds great to me!\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 31 Jul 2002 14:37:20 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": true,
"msg_subject": "Re: Rules and Views "
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> On Wed, 2002-07-31 at 10:22, Tom Lane wrote:\n>> Hm. How about\n>> \n>> ERROR: Cannot insert into a view\n>> You need an unconditional ON INSERT DO INSTEAD rule\n\n> Seems more accurate, but actually you may also have two or more\n> conditional rules that cover all possibilities if taken together.\n> Maybe\n> ERROR: Cannot insert into a view\n> You need an ON INSERT DO INSTEAD rule that matches your INSERT\n> Which covers both cases.\n\nActually not: the system insists that you provide an unconditional\nDO INSTEAD rule. The other would require trying to prove (during\nrule expansion) a theorem that the conditions of the available\nconditional rules cover all possible cases.\n\nAlternatively we could move the test for insertion-into-a-view out of\nthe rewriter and into a low level of the executor, producing an error\nmessage only if some inserted tuple actually gets past the rule\nconditions. I don't much care for that answer because (a) it turns a\nonce-per-query overhead check into once-per-tuple overhead, and\n(b) if you fail to span the full space of possibilities in your rule\nconditions, you might not find out about it until your application goes\nbelly-up in production. There's some version of Murphy's Law that says\nrare conditions arise with very low probability during testing, and very\nhigh probability as soon as you go live...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Jul 2002 02:32:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views "
}
] |
[
{
"msg_contents": "I have completed this TODO item:\n\n\t* Remove LockMethodTable.prio field, not used (Bruce)\n\nApplied.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/storage/lmgr/lmgr.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/storage/lmgr/lmgr.c,v\nretrieving revision 1.53\ndiff -c -r1.53 lmgr.c\n*** src/backend/storage/lmgr/lmgr.c\t20 Jun 2002 20:29:35 -0000\t1.53\n--- src/backend/storage/lmgr/lmgr.c\t21 Jul 2002 04:48:47 -0000\n***************\n*** 65,95 ****\n \n };\n \n- static int\tLockPrios[] = {\n- \t0,\n- \t/* AccessShareLock */\n- \t1,\n- \t/* RowShareLock */\n- \t2,\n- \t/* RowExclusiveLock */\n- \t3,\n- \t/* ShareUpdateExclusiveLock */\n- \t4,\n- \t/* ShareLock */\n- \t5,\n- \t/* ShareRowExclusiveLock */\n- \t6,\n- \t/* ExclusiveLock */\n- \t7,\n- \t/* AccessExclusiveLock */\n- \t8\n- };\n- \n LOCKMETHOD\tLockTableId = (LOCKMETHOD) NULL;\n LOCKMETHOD\tLongTermTableId = (LOCKMETHOD) NULL;\n \n /*\n! * Create the lock table described by LockConflicts and LockPrios.\n */\n LOCKMETHOD\n InitLockTable(int maxBackends)\n--- 65,75 ----\n \n };\n \n LOCKMETHOD\tLockTableId = (LOCKMETHOD) NULL;\n LOCKMETHOD\tLongTermTableId = (LOCKMETHOD) NULL;\n \n /*\n! * Create the lock table described by LockConflicts\n */\n LOCKMETHOD\n InitLockTable(int maxBackends)\n***************\n*** 97,104 ****\n \tint\t\t\tlockmethod;\n \n \tlockmethod = LockMethodTableInit(\"LockTable\",\n! \t\t\t\t\t\t\t\t\t LockConflicts, LockPrios,\n! \t\t\t\t\t\t\t\t\t MAX_LOCKMODES - 1, maxBackends);\n \tLockTableId = lockmethod;\n \n \tif (!(LockTableId))\n--- 77,84 ----\n \tint\t\t\tlockmethod;\n \n \tlockmethod = LockMethodTableInit(\"LockTable\",\n! \t\t\t\t\t\t\t\t\t LockConflicts, MAX_LOCKMODES - 1,\n! 
\t\t\t\t\t\t\t\t\t maxBackends);\n \tLockTableId = lockmethod;\n \n \tif (!(LockTableId))\nIndex: src/backend/storage/lmgr/lock.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/storage/lmgr/lock.c,v\nretrieving revision 1.110\ndiff -c -r1.110 lock.c\n*** src/backend/storage/lmgr/lock.c\t19 Jul 2002 00:17:40 -0000\t1.110\n--- src/backend/storage/lmgr/lock.c\t21 Jul 2002 04:48:51 -0000\n***************\n*** 208,225 ****\n static void\n LockMethodInit(LOCKMETHODTABLE *lockMethodTable,\n \t\t\t LOCKMASK *conflictsP,\n- \t\t\t int *prioP,\n \t\t\t int numModes)\n {\n \tint\t\t\ti;\n \n \tlockMethodTable->numLockModes = numModes;\n \tnumModes++;\n! \tfor (i = 0; i < numModes; i++, prioP++, conflictsP++)\n! \t{\n \t\tlockMethodTable->conflictTab[i] = *conflictsP;\n- \t\tlockMethodTable->prio[i] = *prioP;\n- \t}\n }\n \n /*\n--- 208,221 ----\n static void\n LockMethodInit(LOCKMETHODTABLE *lockMethodTable,\n \t\t\t LOCKMASK *conflictsP,\n \t\t\t int numModes)\n {\n \tint\t\t\ti;\n \n \tlockMethodTable->numLockModes = numModes;\n \tnumModes++;\n! \tfor (i = 0; i < numModes; i++, conflictsP++)\n \t\tlockMethodTable->conflictTab[i] = *conflictsP;\n }\n \n /*\n***************\n*** 234,240 ****\n LOCKMETHOD\n LockMethodTableInit(char *tabName,\n \t\t\t\t\tLOCKMASK *conflictsP,\n- \t\t\t\t\tint *prioP,\n \t\t\t\t\tint numModes,\n \t\t\t\t\tint maxBackends)\n {\n--- 230,235 ----\n***************\n*** 335,341 ****\n \t\telog(FATAL, \"LockMethodTableInit: couldn't initialize %s\", tabName);\n \n \t/* init data structures */\n! \tLockMethodInit(lockMethodTable, conflictsP, prioP, numModes);\n \n \tLWLockRelease(LockMgrLock);\n \n--- 330,336 ----\n \t\telog(FATAL, \"LockMethodTableInit: couldn't initialize %s\", tabName);\n \n \t/* init data structures */\n! 
\tLockMethodInit(lockMethodTable, conflictsP, numModes);\n \n \tLWLockRelease(LockMgrLock);\n \nIndex: src/include/storage/lock.h\n===================================================================\nRCS file: /cvsroot/pgsql/src/include/storage/lock.h,v\nretrieving revision 1.63\ndiff -c -r1.63 lock.h\n*** src/include/storage/lock.h\t19 Jul 2002 00:17:40 -0000\t1.63\n--- src/include/storage/lock.h\t21 Jul 2002 04:48:52 -0000\n***************\n*** 80,89 ****\n *\t\ttype conflicts. conflictTab[i] is a mask with the j-th bit\n *\t\tturned on if lock types i and j conflict.\n *\n- * prio -- each lockmode has a priority, so, for example, waiting\n- *\t\twriters can be given priority over readers (to avoid\n- *\t\tstarvation). XXX this field is not actually used at present!\n- *\n * masterLock -- synchronizes access to the table\n *\n */\n--- 80,85 ----\n***************\n*** 94,100 ****\n \tLOCKMETHOD\tlockmethod;\n \tint\t\t\tnumLockModes;\n \tint\t\t\tconflictTab[MAX_LOCKMODES];\n- \tint\t\t\tprio[MAX_LOCKMODES];\n \tLWLockId\tmasterLock;\n } LOCKMETHODTABLE;\n \n--- 90,95 ----\n***************\n*** 215,221 ****\n extern void InitLocks(void);\n extern LOCKMETHODTABLE *GetLocksMethodTable(LOCK *lock);\n extern LOCKMETHOD LockMethodTableInit(char *tabName, LOCKMASK *conflictsP,\n! \t\t\t\t\tint *prioP, int numModes, int maxBackends);\n extern LOCKMETHOD LockMethodTableRename(LOCKMETHOD lockmethod);\n extern bool LockAcquire(LOCKMETHOD lockmethod, LOCKTAG *locktag,\n \t\t\tTransactionId xid, LOCKMODE lockmode, bool dontWait);\n--- 210,216 ----\n extern void InitLocks(void);\n extern LOCKMETHODTABLE *GetLocksMethodTable(LOCK *lock);\n extern LOCKMETHOD LockMethodTableInit(char *tabName, LOCKMASK *conflictsP,\n! \t\t\t\t\tint numModes, int maxBackends);\n extern LOCKMETHOD LockMethodTableRename(LOCKMETHOD lockmethod);\n extern bool LockAcquire(LOCKMETHOD lockmethod, LOCKTAG *locktag,\n \t\t\tTransactionId xid, LOCKMODE lockmode, bool dontWait);",
"msg_date": "Tue, 30 Jul 2002 23:12:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Remove LockMethodTable.prio"
}
] |
[
{
"msg_contents": "\nHere are the open items for 7.3. We have one more month to address them\nbefore beta.\n\n---------------------------------------------------------------------------\n\n P O S T G R E S Q L\n\n 7 . 3 O P E N I T E M S\n\n\nCurrent at ftp://candle.pha.pa.us/pub/postgresql/open_items.\n\nSource Code Changes\n-------------------\nSocket permissions - only install user can access db by default\n\tunix_socket_permissions in postgresql.conf\nNAMEDATALEN - disk/performance penalty for increase, 64, 128?\nFUNC_MAX_ARGS - disk/performance penalty for increase, 24, 32?\nPoint-in-time recovery - ready for 7.3?\nAllow easy display of usernames in a group (pg_hba.conf uses groups now)\nReindex/btree shrinkage - does reindex need work, can btree be shrunk?\nDROP COLUMN - ready?\nCLUSTER - ready?\ndisplay locks - ready?\nWin32 - timeframe?\nPrepared statements - ready?\nSchema handling - ready? interfaces? client apps?\nDependency - pg_dump auto-create dependencies for 7.2.X data? \nglibc and mktime() - fix?\nFunctions Returning Sets - done?\necpg and bison issues - solved?\n\n\nDocumentation Changes\n---------------------\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 30 Jul 2002 23:50:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Open 7.3 items"
},
{
"msg_contents": "> Schema handling - ready? interfaces? client apps?\n\nWith schemas, how about settings for automatically creating a schema for a\nuser when you create the user, etc.\n\nChris\n\n",
"msg_date": "Wed, 31 Jul 2002 12:00:29 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Tuesday 30 July 2002 11:50 pm, Bruce Momjian wrote:\n> Here are the open items for 7.3. We have one more month to address them\n> before beta.\n\n> Source Code Changes\n> -------------------\n\nBruce, is the config file location stuff not being addressed? I remember Mark \n(mlw) had worked up the patch for a '-C' switch, there was discussion, etc. \nDid it not make the todo? \n\nIf Peter or someone else doesn't beat me to it I might try my hand at that \none, as I would dearly love to be able to decouple the config files from \nPGDATA. It has been discussed; consensus was not reached as I recall.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 31 Jul 2002 00:05:21 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Lamar Owen wrote:\n> On Tuesday 30 July 2002 11:50 pm, Bruce Momjian wrote:\n> > Here are the open items for 7.3. We have one more month to address them\n> > before beta.\n> \n> > Source Code Changes\n> > -------------------\n> \n> Bruce, is the config file location stuff not being addressed? I remember Mark \n> (mlw) had worked up the patch for a '-C' switch, there was discussion, etc. \n> Did it not make the todo? \n> \n> If Peter or someone else doesn't beat me to it I might try my hand at that \n> one, as I would dearly love to be able to decouple the config files from \n> PGDATA. It has been discussed; consensus was not reached as I recall.\n\nThe issue was never resolved, so it did not make any lists.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 00:19:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Bruce, is the config file location stuff not being addressed? ...\n> If Peter or someone else doesn't beat me to it I might try my hand at that \n> one, as I would dearly love to be able to decouple the config files from \n> PGDATA. It has been discussed; consensus was not reached as I recall.\n\nI believe that last part was the sticking point. If you can find some\nconsensus on how it ought to work, then go for it. My own opinion is\nthat there is nothing broken there; certainly nothing so broken that\nwe need to force a change under schedule pressure.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Jul 2002 00:19:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Bruce, is the config file location stuff not being addressed? ...\n> > If Peter or someone else doesn't beat me to it I might try my hand at that \n> > one, as I would dearly love to be able to decouple the config files from \n> > PGDATA. It has been discussed; consensus was not reached as I recall.\n> \n> I believe that last part was the sticking point. If you can find some\n> consensus on how it ought to work, then go for it. My own opinion is\n> that there is nothing broken there; certainly nothing so broken that\n> we need to force a change under schedule pressure.\n\nI don't feel we are under pressure. We have time to discuss and address\nit.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 00:22:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> ... My own opinion is\n>> that there is nothing broken there; certainly nothing so broken that\n>> we need to force a change under schedule pressure.\n\n> I don't feel we are under pressure. We have time to discuss and address\n> it.\n\nWell, it's all a matter of how you look at it. Isn't the point of your\npost that began this thread to start giving people a sense of time\npressure?\n\nI agree that if we could quickly come to a resolution about how this\nought to work, there's plenty of time to go off and implement it. But\n(1) we failed to come to a consensus before, so I'm not optimistic\nthan one will suddenly emerge now; (2) we've got a ton of other issues\nthat we *need* to deal with before beta. This one does not strike me\nas a must-fix, and so I'm loathe to spend much development time on it\nwhen there are so many open issues.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Jul 2002 00:29:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> ... My own opinion is\n> >> that there is nothing broken there; certainly nothing so broken that\n> >> we need to force a change under schedule pressure.\n> \n> > I don't feel we are under pressure. We have time to discuss and address\n> > it.\n> \n> Well, it's all a matter of how you look at it. Isn't the point of your\n> post that began this thread to start giving people a sense of time\n> pressure?\n> \n> I agree that if we could quickly come to a resolution about how this\n> ought to work, there's plenty of time to go off and implement it. But\n> (1) we failed to come to a consensus before, so I'm not optimistic\n> than one will suddenly emerge now; (2) we've got a ton of other issues\n> that we *need* to deal with before beta. This one does not strike me\n> as a must-fix, and so I'm loathe to spend much development time on it\n> when there are so many open issues.\n\nWell, it is up to the individuals involved. If someone wants to deal\nwith it, and gets a consensus, and comes up with a patch, let them.\n\nThe list is for time pressure on those items. It does not affect new\nitems.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 00:35:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Here are the open items for 7.3.\n\nSome comments ...\n\n> Socket permissions - only install user can access db by default\n\nI do not agree with this goal.\n\n> NAMEDATALEN - disk/performance penalty for increase, 64, 128?\n> FUNC_MAX_ARGS - disk/performance penalty for increase, 24, 32?\n\nAt the moment I don't see a lot of solid evidence that increasing\nNAMEDATALEN has any performance penalty. Someone reported about\na 10% slowdown on pgbench with NAMEDATALEN=128 ... but Neil Conway\ntried to reproduce the result, and got about a 10% *speedup*.\nPersonally I think 10% is well within the noise spectrum for\npgbench, and so it's difficult to claim that we have established\nany performance difference at all. I have not tried to measure\nFUNC_MAX_ARGS differences.\n\n> Point-in-time recovery - ready for 7.3?\n\nAt the moment, it doesn't exist at all. If patches appear, we can\nreview 'em, but right now there is nothing to debate.\n\n> DROP COLUMN - ready?\n\nI'm on it.\n\n> Win32 - timefame?\n\nI've seen nothing to make me think this will be ready for 7.3.\n\n> Prepared statements - ready?\n\nI think we're close there; the patch seems okay, we're just debating\nminor syntax issues.\n\n> Schema handling - ready? interfaces? client apps?\n\nThe backend will be ready (it's not quite yet). pg_dump is ready.\npsql is very definitely not ready, nor is pgaccess. I don't know the\nstatus for JDBC or ODBC; any comments? The other interface libraries\nprobably don't care.\n\n> Dependency - pg_dump auto-create dependencies for 7.2.X data?\n\nHuh?\n\n> glibc and mktime() - fix?\n\nWe need a fix for this. Dunno what to do about it.\n\n> ecpg and bison issues - solved?\n\nNot yet :-(. 
Anyone have a line into the bison project?\n\n\nOther things on my radar screen:\n\n* I have about zero confidence in the recent tuple-header-size-reduction\npatches.\n\n* pg_conversion stuff --- do we understand this thing's behavior under\nfailure conditions? Does it work properly with namespaces and\ndependencies?\n\n* pg_dumpall probably ought to dump database privilege settings, also\nper-user and per-database GUC settings.\n\n* BeOS and QNX4 ports are busted.\n\n* The whole area of which implicit coercions should be allowed is a\nserious problem that we have not spent enough time on. There are\na number of cases in which CVS tip is clearly broken compared to\nprior releases.\n\n* Bear Giles' SSL patches seem to be causing unhappiness in some\nquarters.\n\n* libpqxx is not integrated into build process nor docs. It should\nbe integrated or reversed out before beta.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Jul 2002 00:50:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "> * pg_conversion stuff --- do we understand this thing's behavior under\n> failure conditions?\n\nAs far as I know, automatic encoding conversion behaves well under\nfailure conditions.\n\n> Does it work properly with namespaces and\n> dependencies?\n\nYes.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 31 Jul 2002 14:00:16 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "\nadd in 'fix pg_hba.conf / password issues' to that too :)\n\nOn Tue, 30 Jul 2002, Bruce Momjian wrote:\n\n>\n> Here are the open items for 7.3. We have one more month to address them\n> before beta.\n>\n> ---------------------------------------------------------------------------\n>\n> P O S T G R E S Q L\n>\n> 7 . 3 O P E N I T E M S\n>\n>\n> Current at ftp://candle.pha.pa.us/pub/postgresql/open_items.\n>\n> Source Code Changes\n> -------------------\n> Socket permissions - only install user can access db by default\n> \tunix_socket_permissions in postgresql.conf\n> NAMEDATALEN - disk/performance penalty for increase, 64, 128?\n> FUNC_MAX_ARGS - disk/performance penalty for increase, 24, 32?\n> Point-in-time recovery - ready for 7.3?\n> Allow easy display of usernames in a group (pg_hba.conf uses groups now)\n> Reindex/btree shrinkage - does reindex need work, can btree be shrunk?\n> DROP COLUMN - ready?\n> CLUSTER - ready?\n> display locks - ready?\n> Win32 - timefame?\n> Prepared statements - ready?\n> Schema handling - ready? interfaces? client apps?\n> Dependency - pg_dump auto-create dependencies for 7.2.X data?\n> glibc and mktime() - fix?\n> Functions Returning Sets - done?\n> ecpg and bison issues - solved?\n>\n>\n> Documentation Changes\n> ---------------------\n>\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Wed, 31 Jul 2002 02:01:43 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, 31 Jul 2002, Tom Lane wrote:\n\n> * libpqxx is not integrated into build process nor docs. It should\n> be integrated or reversed out before beta.\n\nI've requested that Jeorgen(sp?) move this over to GBorg ... it's\nsomething that can, and should be, built separately from the base\ndistribution, along with at least a dozen other things we have bloating\nthe distribution now :( but at least that one hasn't been integrated yet\n...\n\nWe got into 'creeping featurisms' in the beginning because we had no other\nreally central location to put stuff ... we do now, and with better\nmechanisms in place for dealing with 'multiple maintainers' ... it's\nabout time we start to trim the fat, before we can't fit through the\nproverbial door anymore ...\n\nPersonally, I find it quite annoying to have to download the complete\nserver distribution just to get libpq installed so that I can install\nmod_php4 ... and with all the talk about 'why mysql vs pgsql' that has\nbeen going on lately, it's time to start looking at how to make it easier to\n'add a new interface' without having to download the whole distribution\n'yet again' ...\n\n\n\n",
"msg_date": "Wed, 31 Jul 2002 02:11:06 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "> > Dependency - pg_dump auto-create dependencies for 7.2.X data?\n>\n> Huh?\n\nTaking a bunch of CREATE CONSTRAINT TRIGGERS and turning them into the\nproper pg_constraint entries...\n\nChris\n\n",
"msg_date": "Wed, 31 Jul 2002 13:19:16 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "...\n> I agree that if we could quickly come to a resolution about how this\n> ought to work, there's plenty of time to go off and implement it. But\n> (1) we failed to come to a consensus before, so I'm not optimistic\n> than one will suddenly emerge now; (2) we've got a ton of other issues\n> that we *need* to deal with before beta. This one does not strike me\n> as a must-fix, and so I'm loathe to spend much development time on it\n> when there are so many open issues.\n\nafaict someone else volunteered to do the work. There is no lack of\nconsensus that this is a useful feature, at least among those who take\nresponsibility to package PostgreSQL for particular platforms. How about\nletting them specify the requirements and if an acceptable solution\nemerges soon, we'll have it for 7.3...\n\n - Thomas\n",
"msg_date": "Tue, 30 Jul 2002 23:24:33 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": ">> Schema handling - ready? interfaces? client apps?\n> status for JDBC or ODBC; any comments? The other interface libraries\n> probably don't care.\n\nWhat about DBD::Pg? \n\n --\nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 Åben 12.00-18.00 Web: www.suse.dk\n2000 Frederiksberg Lørdag 12.00-16.00 Email:kar@kakidata.dk \n",
"msg_date": "Wed, 31 Jul 2002 07:04:55 GMT",
"msg_from": "\"Kaare Rasmussen\" <kar@kakidata.dk>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wednesday 31 July 2002 at 05:50, Bruce Momjian wrote:\n> Source Code Changes\nWhat about CREATE OR REPLACE VIEW which would be great for pgAdmin2. Thanks to \nall of you./Jean-Michel POURE\n",
"msg_date": "Wed, 31 Jul 2002 14:53:14 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "I too do not like a lot of bloat in the distribution, but I also agree\nwith what Andrew is saying.\n\nCurrently, at the FTP site, you can download the whole tar file, or in 4\nseparate tarballs. How hard would it be to create a separate tarball for\nclient related packages? I am not sure if this would be a *great*\nsolution, but it might be a good compromise.\n\n --brett\n\n\nOn Wed, 2002-07-31 at 10:22, Andrew Sullivan wrote:\n> On Wed, Jul 31, 2002 at 02:08:33PM -0300, Marc G. Fournier wrote:\n> > On Wed, 31 Jul 2002, Tom Lane wrote:\n> \n> > > One reason for wanting to integrate libpqxx is that I don't think we'll\n> > > find out anything about its portability until we get a lot of people\n> > > trying to build it. If it's a separate distro that won't happen quickly.\n> > \n> > Who cares? Those that need a C++ interface will know where to find it,\n> > and will report bugs that they have ... why should it be tested on every\n> > platform when we *might* only have those on the Linux platform using it?\n> \n> This seems a bad argument. You can't say \"we support interface xyz\"\n> and never test it on anything except i80x86 Linux. Somebody comes\n> along and tries to make it go on Solaris, and it doesn't work: poof,\n> the cross-platform reputation that you and others have worked hard to\n> burnish goes away. Never mind that it's only a client library.\n> \n> Besides, more generally, Postgres already has a reputation as being\n> difficult to install. The proposal to separate out all the\n> \"non-basics\" (I'm not even sure how one would draw that line: maybe a\n> server-only package and a client-library package run through GBorg?)\n> would mean that anyone wanting to do something moderately complicated\n> would have a yet higher hurdle. Isn't that a problem?\n> \n> A \n> \n> -- \n> ----\n> Andrew Sullivan 87 Mowat Avenue \n> Liberty RMS Toronto, Ontario Canada\n> <andrew@libertyrms.info> M6K 3E3\n> +1 416 646 3304 x110\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n-- \nBrett Schwarz\nbrett_schwarz AT yahoo.com\n\n",
"msg_date": "31 Jul 2002 06:10:59 -0700",
"msg_from": "Brett Schwarz <brett_schwarz@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "On Wed, Jul 31, 2002 at 02:11:06AM -0300, Marc G. Fournier wrote:\n> On Wed, 31 Jul 2002, Tom Lane wrote:\n> > * libpqxx is not integrated into build process nor docs. It should\n> > be integrated or reversed out before beta.\n> \n> I've requestsed that Jeorgen(sp?) move this over to GBorg ... its\n> something that can, and should be, built seperately from the base\n> distribution, along with at least a dozen other things we have bloating\n> the distribution now :( but at least that one hasn't been integrated yet\n> ...\n\nMentioning that on -hackers would have been nice -- I've spent a while\nthis week hacking autoconf / Makefiles to integrate libpqxx...\n\nThe problem I have with removing libpqxx is that libpq++ is a far\ninferior C++ interface. If we leave libpq++ as the only C++ interface\ndistributed with PostgreSQL, there will be a tendency for people\nusing PostgreSQL & C++ to use the C++ support included with the\ndistribution. Similarly, the Perl interface included with\nPostgreSQL is widely regarded as inferior to DBD::Pg.\n\nIf we're going to start removing interfaces, I'd vote for the removal of\nperl5 & libpq++ as well as libpqxx. We can add a section to the\ndocumentation listing the available language interfaces (for languages\nlike Python, where several interfaces exist, this would probably be a\ngood idea anyway).\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Wed, 31 Jul 2002 10:55:00 -0400",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n> Mentioning that on -hackers would have been nice -- I've spent a while\n> this week hacking autoconf / Makefiles to integrate libpqxx...\n\nMarc's opinion is not the same thing as a done deal ;-) --- we still\nhave to discuss this, and if someone's already doing the integration\nwork I think that's an important factor.\n\n> If we're going to start removing interfaces, I'd vote for the removal of\n> perl5 & libpq++ as well as libpqxx.\n\nAgreed on that point. We shouldn't be promoting old, crufty interface\nlibraries when there are better ones available.\n\nI would personally prefer to see libpqxx integrated now, and then we\ncould plan to remove libpq++ in a release or two (after giving people\na reasonable opportunity to switch over). If anyone still cares about\nlibpq++ at that point, it could be given a home on gborg.\n\nOne reason for wanting to integrate libpqxx is that I don't think we'll\nfind out anything about its portability until we get a lot of people\ntrying to build it. If it's a separate distro that won't happen quickly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Jul 2002 11:05:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items ) "
},
{
"msg_contents": "On Tue, Jul 30, 2002 at 11:50:38PM -0400, Bruce Momjian wrote:\n> NAMEDATALEN - disk/performance penalty for increase, 64, 128?\n\nIn my personal testing, I've been unable to observe a significant\nperformance impact (as Tom mentioned, I tried getting some profiling\ndata with gprof + pgbench, and found that increasing NAMEDATALEN made\nthings *faster*). Whether that is enough of an endorsement to make\nthe change for 7.3, I'm not sure...\n\n> FUNC_MAX_ARGS - disk/performance penalty for increase, 24, 32?\n\nUntil someone takes the time to determine what the performance\nimplications of this change will be, I don't think we should\nchange this. Given that no one has done any testing, I'm not\nconvinced that there's a lot of demand for this anyway.\n\n> Point-in-time recovery - ready for 7.3?\n> Reindex/btree shrinkage - does reindex need work, can btree be shrunk?\n\nI think both of these should probably wait for 7.4\n\n> display locks - ready?\n> Prepared statements - ready?\n\nBoth of these are ready, only trivial changes are required.\n\n> Schema handling - ready? interfaces? client apps?\n\nDo we want all client interfaces / admin apps to be aware of schemas in\ntime for beta 1, or is the plan to fix these during the beta cycle?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Wed, 31 Jul 2002 11:19:21 -0400",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, Jul 31, 2002 at 02:01:43AM -0300, Marc G. Fournier wrote:\n> add in 'fix pg_hba.conf / password issues' to that too :)\n\nI doubt that will make 7.3 -- the proposals I've seen on this topic\nrequire some reasonably complex additions to the authentication\nsystem. We also still need to hash out which design we're going\nto implement. Given that it's pretty esoteric, I'd prefer this\nwait for 7.4\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Wed, 31 Jul 2002 11:26:17 -0400",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n>> FUNC_MAX_ARGS - disk/performance penalty for increase, 24, 32?\n\n> Until someone takes the time to determine what the performance\n> implications of this change will be, I don't think we should\n> change this. Given that no one has done any testing, I'm not\n> convinced that there's a lot of demand for this anyway.\n\nThe OpenACS guys really really wanted larger FUNC_MAX_ARGS (I think\nthey had some 25-arg functions). And we do see questions about\nincreasing the limit fairly often on the lists. I suspect we could\nbump it up to 32 at little cost --- but someone should run some\nexperiments to verify.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Jul 2002 11:31:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "On Wed, 31 Jul 2002, Neil Conway wrote:\n\n> On Wed, Jul 31, 2002 at 02:11:06AM -0300, Marc G. Fournier wrote:\n> > On Wed, 31 Jul 2002, Tom Lane wrote:\n> > > * libpqxx is not integrated into build process nor docs. It should\n> > > be integrated or reversed out before beta.\n> >\n> > I've requestsed that Jeorgen(sp?) move this over to GBorg ... its\n> > something that can, and should be, built seperately from the base\n> > distribution, along with at least a dozen other things we have bloating\n> > the distribution now :( but at least that one hasn't been integrated yet\n> > ...\n>\n> Mentioning that on -hackers would have been nice -- I've spent a while\n> this week hacking autoconf / Makefiles to integrate libpqxx...\n>\n> The problem I have with removing libpqxx is that libpq++ is a far\n> inferior C++ interface. If we leave libpq++ as the only C++ interface\n> distributed with PostgreSQL, there will be a tendancy for people\n> using PostgreSQL & C++ to use the C++ support included with the\n> distribution. Similarly, the Perl interface included with\n> PostgreSQL is widely regarded as inferior to DBD::Pg.\n\nExactly what I mean ... we have *alot* of fat in the distribution ...\nstuff that almost nobody uses ... or stuff that is generally inferior to\nsomething else out there ...\n\nWhat I *want* to see is a *base* server distribution vs the client side\ncode ... I *want* to be able to download libpq on a client machine to\ninstall mod_php4, for example, without having to download everything else\nthat I'll never need ...\n\n> If we're going to start removing interfaces, I'd vote for the removal of\n> perl5 & libpq++ as well as libpqxx. We can add a section to the\n> documentation listing the available language interfaces (for languages\n> like Python, where several interfaces exist, this would probably be a\n> good idea anyway).\n\nDefinitely ... python, tcl, pgaccess ... all of that needs to go ... 
they\nare \"specialty\" stuff that, in some cases, have nothing to do with the\nserver itself ..\n\nAnything that can build *from* libpq already being installed (except for\nthose that are required to admin the server (ie. psql, pg_dump, etc)\nshould be yanked and maintained outside of the distribution ... if nobody\nis maintaining it, then obviously nobody is using it, so why carry it?\n\nWe have the environment now to keep the development 'centralized' through\nGBorg, which also means that others can be provided with CVS access as\nmaintainers, if, for instance, Joergen(sp?) wishes to bring someone else\non board to help ...\n\n\n\n",
"msg_date": "Wed, 31 Jul 2002 13:45:00 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "On Wed, 31 Jul 2002, Tom Lane wrote:\n\n> nconway@klamath.dyndns.org (Neil Conway) writes:\n> > Mentioning that on -hackers would have been nice -- I've spent a while\n> > this week hacking autoconf / Makefiles to integrate libpqxx...\n>\n> Marc's opinion is not the same thing as a done deal ;-) --- we still\n> have to discuss this, and if someone's already doing the integration\n> work I think that's an important factor.\n>\n> > If we're going to start removing interfaces, I'd vote for the removal of\n> > perl5 & libpq++ as well as libpqxx.\n>\n> Agreed on that point. We shouldn't be promoting old, crufty interface\n> libraries when there are better ones available.\n>\n> I would personally prefer to see libpqxx integrated now, and then we\n> could plan to remove libpq++ in a release or two (after giving people\n> a reasonable opportunity to switch over). If anyone still cares about\n> libpq++ at that point, it could be given a home on gborg.\n>\n> One reason for wanting to integrate libpqxx is that I don't think we'll\n> find out anything about its portability until we get a lot of people\n> trying to build it. If it's a separate distro that won't happen quickly.\n\nWho cares? Those that need a C++ interface will know where to find it,\nand will report bugs that they have ... why should it be tested on every\nplatform when we *might* only have those on the Linux platform using it?\n\nWhat happens if/when libpqxx becomes the 'old, crufty interface' and\nsomething newer and shinier comes along? Where do we draw the line at\nwhat is in the distribution? For instance, why pgaccess vs a platform\nindependent version of PgAdmin vs PHPPgAdmin? Hell, IMHO, PHPPgAdmin\nwould most likely be more useful as more sites out there are running PHP\nthen likely have TCL installed ... but someone that is using TCL/AolServer\nwould definitely think otherwise ...\n\nBy branching out the fat, we make it *easier* for someone to take on\ndevelopment of it ... 
would libpqxx ever have been developed if Joergen\ncould have just worked on libpq++ in the first place, without having to\nsubmit patches?\n\nI really do not want to keep adding more users onto postgresql.org's\nservers just because \"hey, their interface is cool and useful so let's add\nit into the main CVS repository and give them CVS access to save them\nhaving to submit patches\" when we have a fully functioning collaborative\ndevelopment environment that gives them *more* than what we can give them\nnow ...\n\n1. the ability to pull in their own group of developers / committers on a\n per project basis\n2. the ability to make releases *as required*, instead of having to wait\n for us to do the next release\n\nThe benefit to us ... a much much smaller package of programs that have to\nbe maintained and tested and debugged before a release ...\n\nHell, how many packages do we currently have integrated with the whole\ndistribution that rely *nothing* on the server other than to be able to\nuse libpq to compile, that would benefit from being able to do releases?\nIf, for instance, the libpq++ interface gets patched up to fix a race\ncondition, or a vulnerability, the way things are right now, ppl have two\nchoices: wait for v7.3 to be released sometime in October, or upgrade to\nthe latest code via anon-cvs/cvsup ... and for package maintainers (rpm,\ndeb, pkg), they pretty much have no choice but to wait ...\n\nMove libpq++ out as its own, independent project, and that patch would\nforce a quick packaging and release of the new code, which those same\npackage maintainers could jump on quickly and get out to *their* users ...\n\nHow many packages/interfaces do we have 'integrated' right now that rely\nin no way on our release cycle *except* for the fact that they are\nintegrated?\n\nThe way we do things now made sense 7 years ago when we started out trying\nto get it as visible to the masses as possible ... 
and when we *didn't*\nhave a clean/easy way to manage them externally ... it doesn't make any\nsense to do anymore, and hasn't for a fair time now ...\n\n\n\n",
"msg_date": "Wed, 31 Jul 2002 14:08:33 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items ) "
},
{
"msg_contents": "On Wed, 31 Jul 2002, Neil Conway wrote:\n\n> On Wed, Jul 31, 2002 at 02:01:43AM -0300, Marc G. Fournier wrote:\n> > add in 'fix pg_hba.conf / password issues' to that too :)\n>\n> I doubt that will make 7.3 -- the proposals I've seen on this topic\n> require some reasonably complex additions to the authentication\n> system. We also still need to hash out which design we're going\n> to implement. Given that it's pretty esoteric, I'd prefer this\n> wait for 7.4\n\nThen, the current changes *should* be removed, as we have no idea how many\nsites out there we are going to break without that functionality ... I\nknow I personally have 200+ servers that will all break as soon as I move\nto v7.3 with it as is :(\n\n\n",
"msg_date": "Wed, 31 Jul 2002 14:10:17 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, Jul 31, 2002 at 02:08:33PM -0300, Marc G. Fournier wrote:\n> On Wed, 31 Jul 2002, Tom Lane wrote:\n\n> > One reason for wanting to integrate libpqxx is that I don't think we'll\n> > find out anything about its portability until we get a lot of people\n> > trying to build it. If it's a separate distro that won't happen quickly.\n> \n> Who cares? Those that need a C++ interface will know where to find it,\n> and will report bugs that they have ... why should it be tested on every\n> platform when we *might* only have those on the Linux platform using it?\n\nThis seems a bad argument. You can't say \"we support interface xyz\"\nand never test it on anything except i80x86 Linux. Somebady comes\nalong and tries to make it go on Solaris, and it doesn't work: poof,\nthe cross-platform reputation that you and other have worked hard to\nburnish goes away. Never mind that it's only a client library.\n\nBesides, more generally, Postgres already has a reputation as being\ndifficult to install. The proposal to separate out all the\n\"non-basics\" (I'm not even sure how one would draw that line: maybe a\nserver-only package and a client-library package run through GBorg?)\nwould mean that anyone wanting to do something moderately complicated\nwould have a yet higher hurdle. Isn't that a problem?\n\nA \n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Wed, 31 Jul 2002 13:22:20 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "> Besides, more generally, Postgres already has a reputation as being\n> difficult to install. The proposal to separate out all the\n> \"non-basics\" (I'm not even sure how one would draw that line: maybe a\n> server-only package and a client-library package run through GBorg?)\n> would mean that anyone wanting to do something moderately complicated\n> would have a yet higher hurdle. Isn't that a problem?\n\nWhen you install freebsd or linux, is it a problem that all the perl modules\nyou need have to fetched from cpan ? why can't they call just be part of the\nOS ?'\nlikewise with dns servers, samba, apache etc.. this is a bit of a stretched\nexample\nbut the point is the same.\n\nPersonall, I'd live to just be able to download the server with pg_dump psql\netc..\n\nBut that's just me for what it's worth.\n\njeff.\n\n",
"msg_date": "Wed, 31 Jul 2002 14:36:43 -0300",
"msg_from": "\"Jeff MacDonald\" <jeff@tsunamicreek.com>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "On Wed, 31 Jul 2002, Andrew Sullivan wrote:\n\n> On Wed, Jul 31, 2002 at 02:08:33PM -0300, Marc G. Fournier wrote:\n> > On Wed, 31 Jul 2002, Tom Lane wrote:\n>\n> > > One reason for wanting to integrate libpqxx is that I don't think we'll\n> > > find out anything about its portability until we get a lot of people\n> > > trying to build it. If it's a separate distro that won't happen quickly.\n> >\n> > Who cares? Those that need a C++ interface will know where to find it,\n> > and will report bugs that they have ... why should it be tested on every\n> > platform when we *might* only have those on the Linux platform using it?\n>\n> This seems a bad argument. You can't say \"we support interface xyz\"\n> and never test it on anything except i80x86 Linux. Somebady comes\n> along and tries to make it go on Solaris, and it doesn't work: poof,\n> the cross-platform reputation that you and other have worked hard to\n> burnish goes away. Never mind that it's only a client library.\n\nThis is my point, actually ... there are *two* things we should be\nguarantee'ng to work cross-platform: the server and libpq ... (note that\nwith 'the server', I'm including the administrative commands and scripts,\nlike psql and initdb) ...\n\nTake a look at libpq++ as a perfect example ... we've been distributing\nthat forever, but Tom himself states its 'old and crufty' ... but its also\nthe \"officially supported version\", so its what ppl generally use ...\n\nWe should be focusing on \"the server\", not the \"clients\" ...\n\nAnother point ... we have a load of stuff in contrib ... originally,\ncontrib was meant basically as a temporary storage while we decide if we\nput it into the mainstream, and its grown into \"if we have nowhere else to\nput it, shove it in here and forget it\" ... how many ppl know if all of\nthose that are in there even *work*? 
We know they compile, but do they\nactually work?\n\n> Besides, more generally, Postgres already has a reputation as being\n> difficult to install. The proposal to separate out all the \"non-basics\"\n> (I'm not even sure how one would draw that line: maybe a server-only\n> package and a client-library package run through GBorg?) would mean that\n> anyone wanting to do something moderately complicated would have a yet\n> higher hurdle. Isn't that a problem?\n\nLike what? I work at a local University and am slowly getting PgSQL used\nfor more and more things ... I have one server that is the database\nserver, but everything else connects to that ...\n\nAs it is now, I have to download the whole distribution, configure the\nwhole distribution, compile it and then install .. which, of course,\ninstalls a bunch of stuff that I just don't need (initdb, psql, libpq++,\netc, etc) ... all I need is libpq.a ...\n\nHow many thousands of web sites out there don't offer PgSQL due to the\nhassle? Everyone is arguing 'why mysql vs pgsql?' ... if we had a simple\n'libpq.tar.gz' that could be downloaded, nice and small, then we've just\nmade enabling PgSQL by default in mod_php4 brain dead ...\n\n",
"msg_date": "Wed, 31 Jul 2002 15:11:40 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "On Wed, Jul 31, 2002 at 02:36:43PM -0300, Jeff MacDonald wrote:\n> \n> When you install freebsd or linux, is it a problem that all the\n> perl modules you need have to fetched from cpan ? why can't they\n> call just be part of the OS ?'\n\nWell, not just part of the OS, but part of Perl. And after all, Perl\n_does_ include a fabulous variety of built-in modules.\n\n> likewise with dns servers, samba, apache etc.. this is a bit of a stretched\n> example\n> but the point is the same.\n\nActually, the comparison is apt. There's a reason people suggest\nusing your distribution's PHP or Zope or what-have-you packages,\nrather than installing from source: an inexperienced user with these\npackages could easily spend several days trying to figure out all the\nbits to install. Obviously, such people are new users, and a\nlearning curve is expected. But given recent hand-wringing about the\nrelative \"mind-share\" of Postgres &c., it seems perverse to make a\nnew user have to find out (probably by asking on a mailing list) that\nbasic stuff like client libraries are a whole separate package, which\nneeds to be dealt with separately. It _would_ be nice, though, to be\nable to get just the client stuff for sure. And maybe the separation\nis worth it; I just want to be sure that people know the effect on\nusers.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Wed, 31 Jul 2002 14:12:31 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "On Wed, Jul 31, 2002 at 03:11:40PM -0300, Marc G. Fournier wrote:\n\n> hassle? Everyone is arguing 'why mysql vs pgsql?' ... if we had a simple\n> 'libpq.tar.gz' that could be downloaded, nice and small, then we've just\n> made enabling PgSQL by default in mod_php4 brain dead ...\n\nSorry, I think I wasn't making myself clear. I think that's a\nsplendid idea. But I'm not sure it's worth paying for it by making\nusers who want the whole thing download multiple packages. Maybe I'm\nalone in thinking that, however, and it's not like I feel terribly\nstrongly about it.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Wed, 31 Jul 2002 14:18:45 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "\n\nAndrew Sullivan wrote:\n\n> Sorry, I think I wasn't making myself clear. I think that's a\n> splendid idea. But I'm not sure it's worth paying for it by making\n> users who want the whole thing download multiple packages. Maybe I'm\n> alone in thinking that, however, and it's not like I feel terribly\n> strongly about it.\n\nHow much work is to make two packages - 'core' and 'complete'. Plus\nadditional package called 'util' = 'complete' minus 'core'.\n\n",
"msg_date": "Wed, 31 Jul 2002 20:26:24 +0200",
"msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "> How many thousands of web sites out there don't offer PgSQL due to teh\n> hassle? Everyone is arguing 'why mysql vs pgsql?' ... if we had a simple\n> 'libpq.tar.gz' that could be downloaded, nice and small, then we've just\n> made enabling PgSQL by default in mod_php4 brain dead ...\n\nCase in point, I just installed FreeBSD 4.6 on a machine, i chose to install\nmod_php from /stand/sysinstall.\n\nIt ofcourse installed php, with mysql as a dependency, i was annoyed, but\nwhen\ni looked at what was actually installed, it was just \"mysql client\".\n\nThe actually server did not get installed at all.\n\nJeff.\n\n",
"msg_date": "Wed, 31 Jul 2002 15:58:38 -0300",
"msg_from": "\"Jeff MacDonald\" <jeff@tsunamicreek.com>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "Tom Lane wrote:\n> > Socket permissions - only install user can access db by default\n> \n> I do not agree with this goal.\n\nOK, this is TODO item:\n\n* Make single-user local access permissions the default by limiting\n permissions on the socket file (Peter E)\n\nRight now, we effectively install initdb as though we are creating a\nworld-writeable directory on the machine. (Sure, the directory is\nlocked down, but by setting PGUSER you can connect to the database as\nanyone.) I don't know any other software that does this, and I can't\nsee how we can justify the current behavior.\n\nAnother idea is to change pg_hba.conf to not default to 'trust' but then\nthe installing user is going to have to choose a password.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 15:28:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Tom Lane wrote:\n> > NAMEDATALEN - disk/performance penalty for increase, 64, 128?\n> > FUNC_MAX_ARGS - disk/performance penalty for increase, 24, 32?\n> \n> At the moment I don't see a lot of solid evidence that increasing\n> NAMEDATALEN has any performance penalty. Someone reported about\n> a 10% slowdown on pgbench with NAMEDATALEN=128 ... but Neil Conway\n> tried to reproduce the result, and got about a 10% *speedup*.\n> Personally I think 10% is well within the noise spectrum for\n> pgbench, and so it's difficult to claim that we have established\n> any performance difference at all. I have not tried to measure\n> FUNC_MAX_ARGS differences.\n\nYes, we need someone to benchmark both the NAMEDATALEN and FUNC_MAX_ARGS\nto prove we are not causing performance problems. Once that is done,\nthe default limits can be easily increased. I was thinking 64 for\nNAMEDATALEN and 32 for FUNC_MAX_ARGS, effectively doubling both.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 15:30:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Yes, we need someone to benchmark both the NAMEDATALEN and FUNC_MAX_ARGS\n> to prove we are not causing performance problems. Once that is done,\n> the default limits can be easily increased. I was thinking 64 for\n> NAMEDATALEN and 32 for FUNC_MAX_ARGS, effectively doubling both.\n\nThe SQL spec says NAMEDATALEN shall be 128 (or at least 128, too lazy\nto look). If we're gonna change it then I think we should really try\nto go to 128.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Jul 2002 15:37:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "On Wed, Jul 31, 2002 at 03:30:30PM -0400, Bruce Momjian wrote:\n> Yes, we need someone to benchmark both the NAMEDATALEN and FUNC_MAX_ARGS\n> to prove we are not causing performance problems. Once that is done,\n> the default limits can be easily increased. I was thinking 64 for\n> NAMEDATALEN and 32 for FUNC_MAX_ARGS, effectively doubling both.\n\nIf we're going to change NAMEDATALEN, we should probably set it to 128,\nas that's what SQL99 requires.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Wed, 31 Jul 2002 15:38:38 -0400",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n> Socket permissions - only install user can access db by default\n>> \n>> I do not agree with this goal.\n\n> OK, this is TODO item:\n\n> * Make single-user local access permissions the default by limiting\n> permissions on the socket file (Peter E)\n\nYes, I know what the TODO item says, and I disagree with it.\n\nIf we make the default permissions 700, then it's impossible to access\nthe database unless you run as the database owner. This is not a\nsecurity improvement --- it's more like claiming that a Linux system\nwould be more secure if you got rid of ordinary users and did all your\nwork as root. We should *not* encourage people to operate that way.\n(It's certainly unworkable for RPM distributions anyway; only a user\nwho is hand-building a test installation under his own account would\npossibly think that this is a useful default.)\n\nI could see a default setup that made the permissions 770, allowing\naccess to anyone in the postgres group; that would at least bear some\nslight resemblance to a workable production setup. However, this\nassumes that the DBA has root privileges, else he'll not be able to\nadd/remove users from the postgres group. Also, on systems where users\nall belong to the same \"users\" group, 770 isn't really better than 777.\n\nThe bottom line here is that there isn't any default protection setup\nthat is really widely useful. Everyone's got to adjust the thing to\nfit their own circumstances. I'd rather see us spend more documentation\neffort on pointing this out and explaining the alternatives, and not\nthink that we can solve the problem by making the default installation\nso tight as to be useless.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Jul 2002 15:48:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "> Another idea is to change pg_hba.conf to not default to 'trust' but then\n> the installing user is going to have to choose a password.\n\nI like this approach. Set it to password (or md5) on local, and force\nthe request of a password during initdb.\n\nIf for some reason they forget their password, they simply bump it to\ntrust on local for the 1 minute it takes to change it back.\n\n\n",
"msg_date": "31 Jul 2002 15:54:36 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": ">> Another idea is to change pg_hba.conf to not default to 'trust' but then\n>> the installing user is going to have to choose a password.\n\nWell, initdb already has an option to request a password. It would\nperhaps make sense for initdb to alter the installed pg_hba.conf file\nto select local md5 mode instead of local trust mode when this option is\nspecified.\n\n> I like this approach. Set it to password (or md5) on local, and force\n> the request of a password during initdb.\n\nI don't like \"forcing\" people to do anything, especially not things that\naren't necessarily useful to them. On a single-user machine there is\nno advantage to using database passwords.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Jul 2002 16:10:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Tom Lane wrote:\n> > Point-in-time recovery - ready for 7.3?\n> \n> At the moment, it doesn't exist at all. If patches appear, we can\n> review 'em, but right now there is nothing to debate.\n\nYes, I listed it just to keep it on the radar.\n\n> > Win32 - timefame?\n> \n> I've seen nothing to make me think this will be ready for 7.3.\n\nSame.\n\n> > Schema handling - ready? interfaces? client apps?\n> \n> The backend will be ready (it's not quite yet). pg_dump is ready.\n> psql is very definitely not ready, nor is pgaccess. I don't know the\n> status for JDBC or ODBC; any comments? The other interface libraries\n> probably don't care.\n\nWe should generate a list of subitems here.\n\n> > Dependency - pg_dump auto-create dependencies for 7.2.X data?\n> \n> Huh?\n\nCan we create table/sequence dependency linking on loads; same with\nother dependencies? If not, we are going to have trouble with the\ndependency code only working some times. This could be a serious\nconfusion for users. We coded some of our stuff assuming the linkage\nwill always be present, but a load from 7.2 may not have it. What are\nthe ramifications?\n\n> > glibc and mktime() - fix?\n> \n> We need a fix for this. Dunno what to do about it.\n\nI have proposed a fix of placing a fixed mktime earlier in the link line\nbut no one has supplied a fixed mktime for me.\n\n> Other things on my radar screen:\n> \n> * I have about zero confidence in the recent tuple-header-size-reduction\n> patches.\n\nI have great confidence Manfred Koizar and his work. I know you want\nsome checks added to the code and he will do that when he returns. I\nwill mention this in the open items list.\n\n\timprove macros in new tuple header code\n\n> \n> * pg_conversion stuff --- do we understand this thing's behavior under\n> failure conditions? 
Does it work properly with namespaces and\n> dependencies?\n\nSeems Tatsuo says it is OK.\n\n> * pg_dumpall probably ought to dump database privilege settings, also\n> per-user and per-database GUC settings.\n\nAdded:\n\n\thave pg_dumpall dump out db privilege and per-user/db settings\n\n> \n> * BeOS and QNX4 ports are busted.\n\nAdded:\n\n\tfix BeOS and QNX4 ports\n> * The whole area of which implicit coercions should be allowed is a\n> serious problem that we have not spent enough time on. There are\n> a number of cases in which CVS tip is clearly broken compared to\n> prior releases.\n\nOh. I didn't know that. Added:\n\n\tfix implicit type coercions that are worse\n> \n> * Bear Giles' SSL patches seem to be causing unhappiness in some\n> quarters.\n\nI believe it is only the interfaces/ssl directory I created from his\npatch when I didn't know what to do with it. I wanted to remove it but\nsomeone said it was good stuff and we should give the author until beta\nto address it. Added to TODO:\n\n\tremove interfaces/ssl if not improved\n\n> \n> * libpqxx is not integrated into build process nor docs. It should\n> be integrated or reversed out before beta.\n\nAdded:\n\n\tintegrate or remove new libpqxx\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 16:43:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Neil Conway wrote:\n> On Wed, Jul 31, 2002 at 03:30:30PM -0400, Bruce Momjian wrote:\n> > Yes, we need someone to benchmark both the NAMEDATALEN and FUNC_MAX_ARGS\n> > to prove we are not causing performance problems. Once that is done,\n> > the default limits can be easily increased. I was thinking 64 for\n> > NAMEDATALEN and 32 for FUNC_MAX_ARGS, effectively doubling both.\n> \n> If we're going to change NAMEDATALEN, we should probably set it to 128,\n> as that's what SQL99 requires.\n\nOK, updated to show only 128. Do we need more performance testing? I\nam unclear on that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 16:53:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
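The disk-cost side of the NAMEDATALEN question above is simple arithmetic, because name is a fixed-width type: every name column in every catalog row grows by the full difference. A minimal sketch (the 32 and 128 byte widths are from the thread; the row and column counts are illustrative assumptions):

```python
# NAMEDATALEN sets the fixed width of the "name" type, so raising it
# from 32 to 128 adds (128 - 32) bytes per name column per catalog row.
def name_overhead(rows, name_cols, old=32, new=128):
    """Extra bytes consumed by name columns after raising NAMEDATALEN."""
    return rows * name_cols * (new - old)

# e.g. 10,000 catalog rows with one name column each: 960,000 extra bytes
extra_bytes = name_overhead(10_000, 1)
```

On this rough accounting the on-disk growth is modest, which is consistent with the benchmarking reports in the thread; the open question is cache and comparison cost, not raw disk space.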
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> > > Dependency - pg_dump auto-create dependencies for 7.2.X data?\n> >\n> > Huh?\n> \n> Taking a bunch of CREATE CONSTRAINT TRIGGERS and turning them into the\n> proper pg_constraint entries...\n\nDescription updated:\n\n\tDependency - have pg_dump auto-create dependencies when loading\n\t7.2.X data?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 16:54:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> ...\n> > I agree that if we could quickly come to a resolution about how this\n> > ought to work, there's plenty of time to go off and implement it. But\n> > (1) we failed to come to a consensus before, so I'm not optimistic\n> > than one will suddenly emerge now; (2) we've got a ton of other issues\n> > that we *need* to deal with before beta. This one does not strike me\n> > as a must-fix, and so I'm loathe to spend much development time on it\n> > when there are so many open issues.\n> \n> afaict someone else volunteered to do the work. There is no lack of\n> consensus that this is a useful feature, at least among those who take\n> responsibility to package PostgreSQL for particular platforms. How about\n> letting them specify the requirements and if an acceptable solution\n> emerges soon, we'll have it for 7.3...\n\nAdded to open items:\n\n\tallow specification of configuration files in a different directory\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 16:56:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Neil Conway wrote:\n> On Tue, Jul 30, 2002 at 11:50:38PM -0400, Bruce Momjian wrote:\n> > NAMEDATALEN - disk/performance penalty for increase, 64, 128?\n> \n> In my personal testing, I've been unable to observe a significant\n> performance impact (as Tom mentioned, I tried getting some profiling\n> data with gprof + pgbench, and found that increasing NAMEDATALEN made\n> things *faster*). Whether that is enough of an endorsement to make\n> the change for 7.3, I'm not sure...\n\nOK, do we need to test further or just bump it to 128?\n\n> > FUNC_MAX_ARGS - disk/performance penalty for increase, 24, 32?\n> \n> Until someone takes the time to determine what the performance\n> implications of this change will be, I don't think we should\n> change this. Given that no one has done any testing, I'm not\n> convinced that there's a lot of demand for this anyway.\n\nI am going to list only 32 as an option. I think it needs to be at\nleast that high for OpenACS.\n\n> > Point-in-time recovery - ready for 7.3?\n> > Reindex/btree shrinkage - does reindex need work, can btree be shrunk?\n> \n> I think both of these should probably wait for 7.4\n\nProbably. We will just keep them on the radar.\n\n> > Schema handling - ready? interfaces? client apps?\n> \n> Do we want all client interfaces / admin apps to be aware of schemas in\n> time for beta 1, or is the plan to fix these during the beta cycle?\n\nIdeally, before beta or people can't really test them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 16:58:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> On Wed, 31 Jul 2002, Neil Conway wrote:\n> \n> > On Wed, Jul 31, 2002 at 02:01:43AM -0300, Marc G. Fournier wrote:\n> > > add in 'fix pg_hba.conf / password issues' to that too :)\n> >\n> > I doubt that will make 7.3 -- the proposals I've seen on this topic\n> > require some reasonably complex additions to the authentication\n> > system. We also still need to hash out which design we're going\n> > to implement. Given that it's pretty esoteric, I'd prefer this\n> > wait for 7.4\n> \n> Then, the current changes *should* be removed, as we have no idea how many\n> sites out there we are going to break without that functionality ... I\n> know I personally have 200+ servers that will all break as soon as I move\n> to v7.3 with it as is :(\n\nOK, I have thought about this. First, a possible solution would be to\nhave a GUC variable that prepends the dbname to all username\nspecifications, so the username becomes dbname.username. When you\nCREATE USER \"test\", it actually does CREATE USER \"dbname.test\". Same\nwith ALTER/DROP user and lookups in pg_hba.conf and authentication. \nBasically it gives us a per-db user namespace. Only the superuser has a\nnon-db qualified name. (Actually, createuser script would fail because\nit connects only to template1. You would have to use psql and CREATE\nUSER. Probably other things would fail too.)\n\nAs for 7.3, maybe we can get that done in time of everyone likes it. If\nwe can't, what do we do? Do we re-add the secondary password file stuff\nthat most people don't like? My big question is how many other\nPostgreSQL users figured out they could use the secondary password file\nfor username/db restrictions? I never thought of it myself. Maybe I\nshould ask on general.\n\nMarc, you do have a workaround for 7.3 using your IP's, right, or is\nthere a problem with the password having to be the same for different\nhosts with the same username? 
If Marc is the only one, and he has a\nworkaround, we may just go ahead and leave it for 7.4.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 17:05:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
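The GUC-driven name mangling Bruce proposes can be sketched as a pure mapping from the client-supplied name to the name that authentication actually looks up. The dbname.username form and the superuser exemption are from the message; the function name and signature are illustrative assumptions, not PostgreSQL code:

```python
# Sketch of the proposed per-database user namespace: when the (assumed)
# GUC is on, the client-supplied name is qualified with the database name
# before any lookup in pg_shadow or pg_hba.conf. Superusers keep an
# unqualified name, per the message.
def effective_username(client_name, dbname, db_user_namespace, superusers):
    """Return the name that authentication would actually look up."""
    if not db_user_namespace or client_name in superusers:
        return client_name
    return "%s.%s" % (dbname, client_name)
```

So with the GUC on, a client connecting to database sales as user test would be authenticated as sales.test, which is why later messages note the admin must create users with the dbname already prepended.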
{
"msg_contents": "\nAdded to open items list:\n\n\thandle lack of secondary passwords\n\n---------------------------------------------------------------------------\n\nMarc G. Fournier wrote:\n> \n> add in 'fix pg_hba.conf / password issues' to that too :)\n> \n> On Tue, 30 Jul 2002, Bruce Momjian wrote:\n> \n> >\n> > Here are the open items for 7.3. We have one more month to address them\n> > before beta.\n> >\n> > ---------------------------------------------------------------------------\n> >\n> > P O S T G R E S Q L\n> >\n> > 7 . 3 O P E N I T E M S\n> >\n> >\n> > Current at ftp://candle.pha.pa.us/pub/postgresql/open_items.\n> >\n> > Source Code Changes\n> > -------------------\n> > Socket permissions - only install user can access db by default\n> > \tunix_socket_permissions in postgresql.conf\n> > NAMEDATALEN - disk/performance penalty for increase, 64, 128?\n> > FUNC_MAX_ARGS - disk/performance penalty for increase, 24, 32?\n> > Point-in-time recovery - ready for 7.3?\n> > Allow easy display of usernames in a group (pg_hba.conf uses groups now)\n> > Reindex/btree shrinkage - does reindex need work, can btree be shrunk?\n> > DROP COLUMN - ready?\n> > CLUSTER - ready?\n> > display locks - ready?\n> > Win32 - timefame?\n> > Prepared statements - ready?\n> > Schema handling - ready? interfaces? client apps?\n> > Dependency - pg_dump auto-create dependencies for 7.2.X data?\n> > glibc and mktime() - fix?\n> > Functions Returning Sets - done?\n> > ecpg and bison issues - solved?\n> >\n> >\n> > Documentation Changes\n> > ---------------------\n> >\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 17:05:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "\n> > psql is very definitely not ready, nor is pgaccess.\n\nI could not really trace who said this.\n\nTo my understanding nobody is currently testing how pgaccess is dealing\nwith 7.3 Am I wrong?\n\nMost problems we try to address now are related to pgaccess working on\nmost platforms (Brett fights with the dlls, there are some Mac OS X\nefforts) and improve the usability (help upon connection failure, etc.)\n\nThere are many new features people write, but this has not much to do\nwith 7.3 in a direct way.\n\nNow in addition to the bugzilla and the developers@pgaccess.org mailing\nlist, there is also a wiki\n\nhttp://www.pgaccess.org/wiki/\n\nas a channel for filing ideas and wishes.\n\nPlease, feel free to use it.\n\nIavor\n\n",
"msg_date": "Wed, 31 Jul 2002 23:11:48 +0200",
"msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> > Socket permissions - only install user can access db by default\n> >> \n> >> I do not agree with this goal.\n> \n> > OK, this is TODO item:\n> \n> > * Make single-user local access permissions the default by limiting\n> > permissions on the socket file (Peter E)\n> \n> Yes, I know what the TODO item says, and I disagree with it.\n> \n> If we make the default permissions 700, then it's impossible to access\n> the database unless you run as the database owner. This is not a\n> security improvement --- it's more like claiming that a Linux system\n> would be more secure if you got rid of ordinary users and did all your\n> work as root. We should *not* encourage people to operate that way.\n> (It's certainly unworkable for RPM distributions anyway; only a user\n> who is hand-building a test installation under his own account would\n> possibly think that this is a useful default.)\n\nI hope they would loosen the default in postgresql.conf rather than\nhaving everyone come in as the same user. By the time you create new\nuser accounts, it is trivial to modify postgresql.conf.\n\n> I could see a default setup that made the permissions 770, allowing\n> access to anyone in the postgres group; that would at least bear some\n> slight resemblance to a workable production setup. However, this\n> assumes that the DBA has root privileges, else he'll not be able to\n> add/remove users from the postgres group. Also, on systems where users\n> all belong to the same \"users\" group, 770 isn't really better than 777.\n\nYes, groups are nice, but in most cases with a group 'users', it is the\nsame as world-writable.\n\n> The bottom line here is that there isn't any default protection setup\n> that is really widely useful. Everyone's got to adjust the thing to\n> fit their own circumstances. 
I'd rather see us spend more documentation\n> effort on pointing this out and explaining the alternatives, and not\n> think that we can solve the problem by making the default installation\n> so tight as to be useless.\n\nI think we are much safer shipping secure and asking people to loosen\nit if they want wider access. I can imagine a Bugtraq item for\nPostgreSQL where they report we ship wide-open for local users. They\nhave already reported we don't encrypt our passwords, and we are dealing\nwith that. You can say that we tell people to change the default, but\nif we install that way, they have a legitimate gripe, and PostgreSQL has a\nperception problem.\n\nThe default unix permissions are world-readable, owner-writable. We\nship with world-read/write. I know of _no_ other software that does\nthat and I can't see how we get away with it. I will also add that I am\nthe biggest proponent of tightening things up and no one else seems to\nbe as concerned about it. I am not sure why.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 17:18:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Tom Lane wrote:\n> >> Another idea is to change pg_hba.conf to not default to 'trust' but then\n> >> the installing user is going to have to choose a password.\n> \n> Well, initdb already has an option to request a password. It would\n> perhaps make sense for initdb to alter the installed pg_hba.conf file\n> to select local md5 mode instead of local trust mode when this option is\n> specified.\n> \n> > I like this approach. Set it to password (or md5) on local, and force\n> > the request of a password during initdb.\n> \n> I don't like \"forcing\" people to do anything, especially not things that\n> aren't necessarily useful to them. On a single-user machine there is\n> no advantage to using database passwords.\n\nYes, on a single-user machine, socket permissions are a better approach.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 17:22:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Andrew Sullivan wrote:\n> Actually, the comparison is apt. There's a reason people suggest\n> using your distribution's PHP or Zope or what-have-you packages,\n> rather than installing from source: an inexperienced user with these\n> packages could easily spend several days trying to figure out all the\n> bits to install. Obviously, such people are new users, and a\n> learning curve is expected. But given recent hand-wringing about the\n> relative \"mind-share\" of Postgres &c., it seems perverse to make a\n> new user have to find out (probably by asking on a mailing list) that\n> basic stuff like client libraries are a whole separate package, which\n> needs to be dealt with separately. It _would_ be nice, though, to be\n> able to get just the client stuff for sure. And maybe the separation\n> is worth it; I just want to be sure that people know the effect on\n> users.\n\nI have already provide Marc with a script needed to make an\ninterfaces-only tarball. I was about 1/10th the size of the full\ntarball.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 17:30:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "Tom Lane writes:\n\n> > Socket permissions - only install user can access db by default\n>\n> I do not agree with this goal.\n\nIt is my understanding that there is currently a lot of criticism that the\ndefault setup is open to all local users. This is nearly the same as\nhaving the data area files themselves world-writable by default.\n\nMaybe changing the default socket permissions isn't the appropriate\nmeasure, but I feel something ought to be done.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 1 Aug 2002 00:04:13 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "> Mentioning that on -hackers would have been nice -- I've spent a while\n> this week hacking autoconf / Makefiles to integrate libpqxx...\n>\n> The problem I have with removing libpqxx is that libpq++ is a far\n> inferior C++ interface. If we leave libpq++ as the only C++ interface\n> distributed with PostgreSQL, there will be a tendancy for people\n> using PostgreSQL & C++ to use the C++ support included with the\n> distribution. Similarly, the Perl interface included with\n> PostgreSQL is widely regarded as inferior to DBD::Pg.\n\nI think that if someone is actually working on libpqxx integration - then\nyeah, leave it in...\n\nChris\n\n",
"msg_date": "Thu, 1 Aug 2002 09:36:54 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "On Wed, 31 Jul 2002, Bruce Momjian wrote:\n\n> OK, I have thought about this. First, a possible solution would be to\n> have a GUC variable that prepends the dbname to all username\n> specifications, so the username becomes dbname.username. When you\n> CREATE USER \"test\", it actually does CREATE USER \"dbname.test\". Same\n> with ALTER/DROP user and lookups in pg_hba.conf and authentication.\n> Basically it gives us a per-db user namespace. Only the superuser has a\n> non-db qualified name. (Actually, createuser script would fail because\n> it connects only to template1. You would have to use psql and CREATE\n> USER. Probably other things would fail too.)\n\nSounds like a good solution that eliminates Tom's idea of going with\n'local to database' pg_shadow files ... I like it ...\n\n> As for 7.3, maybe we can get that done in time of everyone likes it.\n> If we can't, what do we do? Do we re-add the secondary password file\n> stuff that most people don't like? My big question is how many other\n> PostgreSQL users figured out they could use the secondary password file\n> for username/db restrictions? I never thought of it myself. Maybe I\n> should ask on general.\n\nHow many ppl that aren't subscribed to any of the lists are using this and\nare going to get burned royal when they upgrade to v7.3 without\nunderstanding it is gone?\n\n> Marc, you do have a workaround for 7.3 using your IP's, right, or is\n> there a problem with the password having to be the same for different\n> hosts with the same username? If Marc is the only one, and he has a\n> workaround, we may just go ahead and leave it for 7.4.\n\nNo, unfortunately, I have at least one specific application that needs\nthis :( IPs help reduce the impact of losing this, but with the growing\ndifficultly and cost of acquiring new IPs, the IPs don't help much either\n:(\n\n\n",
"msg_date": "Wed, 31 Jul 2002 22:46:07 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "> > As for 7.3, maybe we can get that done in time of everyone likes it.\n> > If we can't, what do we do? Do we re-add the secondary password file\n> > stuff that most people don't like? My big question is how many other\n> > PostgreSQL users figured out they could use the secondary password file\n> > for username/db restrictions? I never thought of it myself. Maybe I\n> > should ask on general.\n>\n> How many ppl that aren't subscribed to any of the lists are using this and\n> are going to get burned royal when they upgrade to v7.3 without\n> understanding it is gone?\n\nI agree with Marc here - compatibility in this area might just be very\nimportant (compared with some other lesser areas of incompatibility...)\n\nChris\n\n",
"msg_date": "Thu, 1 Aug 2002 10:07:10 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, Jul 31, 2002 at 05:05:35PM -0400, Bruce Momjian wrote:\n> OK, I have thought about this. First, a possible solution would be to\n> have a GUC variable that prepends the dbname to all username\n> specifications, so the username becomes dbname.username. When you\n> CREATE USER \"test\", it actually does CREATE USER \"dbname.test\". Same\n> with ALTER/DROP user and lookups in pg_hba.conf and authentication. \n> Basically it gives us a per-db user namespace. Only the superuser has a\n> non-db qualified name.\n\nWhat about the following situation:\n\n - 3 databases: 'devel', 'staging', and 'production'\n\n - one user, 'httpd', which needs access to all 3 databases but\n doesn't own any of them\n\n - I create the 'httpd' user when I'm connected to, say, template1\n\n - I issue a command that changes the httpd user in some way (e.g.\n drops the user, alters the user, etc.) -- what happens?\n\nAlso, what happens if I enable the GUC var, create a bunch of different\nusers/databases, and then disable it again?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Wed, 31 Jul 2002 23:38:34 -0400",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Neil Conway wrote:\n> On Wed, Jul 31, 2002 at 05:05:35PM -0400, Bruce Momjian wrote:\n> > OK, I have thought about this. First, a possible solution would be to\n> > have a GUC variable that prepends the dbname to all username\n> > specifications, so the username becomes dbname.username. When you\n> > CREATE USER \"test\", it actually does CREATE USER \"dbname.test\". Same\n> > with ALTER/DROP user and lookups in pg_hba.conf and authentication. \n> > Basically it gives us a per-db user namespace. Only the superuser has a\n> > non-db qualified name.\n> \n> What about the following situation:\n> \n> - 3 databases: 'devel', 'staging', and 'production'\n> \n> - one user, 'httpd', which needs access to all 3 databases but\n> doesn't own any of them\n> \n> - I create the 'httpd' user when I'm connected to, say, template1\n> \n> - I issue a command that changes the httpd user in some way (e.g.\n> drops the user, alters the user, etc.) -- what happens?\n\nI am going to require the admin to prepend the dbname. GUC controls\nwhether authentication/username map from just the client-supplied\nusername, or the client username prepended with the dbname.\n\n> \n> Also, what happens if I enable the GUC var, create a bunch of different\n> users/databases, and then disable it again?\n\nYou swap back and forth between users with prepended dbnames and those\nwithouth.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 23:40:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, Jul 31, 2002 at 11:40:43PM -0400, Bruce Momjian wrote:\n> > Also, what happens if I enable the GUC var, create a bunch of different\n> > users/databases, and then disable it again?\n> \n> You swap back and forth between users with prepended dbnames and those\n> withouth.\n\nAnd if I've created the user before I enabled the GUC var, how does\nauthentication work?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Thu, 1 Aug 2002 00:06:03 -0400",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Neil Conway wrote:\n> On Wed, Jul 31, 2002 at 11:40:43PM -0400, Bruce Momjian wrote:\n> > > Also, what happens if I enable the GUC var, create a bunch of different\n> > > users/databases, and then disable it again?\n> > \n> > You swap back and forth between users with prepended dbnames and those\n> > withouth.\n> \n> And if I've created the user before I enabled the GUC var, how does\n> authentication work?\n\nUser creation will not be effected. You have to prepend the dbname\nyourself. This will _now_ only effect modification of the user name as\npassed from the client.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 00:08:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Thu, 2002-08-01 at 02:05, Bruce Momjian wrote:\n> Marc G. Fournier wrote:\n> > On Wed, 31 Jul 2002, Neil Conway wrote:\n> > \n> > > On Wed, Jul 31, 2002 at 02:01:43AM -0300, Marc G. Fournier wrote:\n> > > > add in 'fix pg_hba.conf / password issues' to that too :)\n> > >\n> > > I doubt that will make 7.3 -- the proposals I've seen on this topic\n> > > require some reasonably complex additions to the authentication\n> > > system. We also still need to hash out which design we're going\n> > > to implement. Given that it's pretty esoteric, I'd prefer this\n> > > wait for 7.4\n> > \n> > Then, the current changes *should* be removed, as we have no idea how many\n> > sites out there we are going to break without that functionality ... I\n> > know I personally have 200+ servers that will all break as soon as I move\n> > to v7.3 with it as is :(\n> \n> OK, I have thought about this. First, a possible solution would be to\n> have a GUC variable that prepends the dbname to all username\n> specifications, so the username becomes dbname.username.\n\nWhen I first read Marc's post about this I also thought that the users\nwere partitioned by database, but further reading revealed that tis was\nnot the case - actually they were partitioned by _a_group_of_databases_,\nas each of his virtual hosts accesses on _at_least_ one but possibly\nmore databases using the same user (bruce ;).\n\nSo we would need some sort of database groups that share the same users.\n\nWe have to do something like this:\n\n real_user_name = mk_real_user_name(username,dbname) \n\nwhich uses some mapping table to find the real user that is trying to\nconnect to the database.\n\nThis name mangling should be done at connect time and kept out of\ndatabase, where each users name should always be fully resolved\n(bob@accounting.acme.com). \n\nThis may require raising the length of NAME type to be backwards\ncompatible. Or we migth just add USEDOMAIN column to uniquely identify\nthe user. 
So the above user would still have usename=bob but also\nusedomain=\"accounting.acme.com\".\n\n-----------\nHannu\n",
"msg_date": "01 Aug 2002 10:26:49 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
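Hannu's variant can be sketched as a lookup through a database-to-group table, so one user entry serves a whole group of databases. The function name mk_real_user_name is from the message; the table contents and the user@group spelling are illustrative assumptions:

```python
# Map each database to a "database group"; users are resolved per group,
# so Marc's httpd user can span devel/staging/production with one entry.
DB_GROUP = {
    "devel": "acme.com",
    "staging": "acme.com",
    "production": "acme.com",
}

def mk_real_user_name(username, dbname):
    """Resolve the fully qualified user a connection attempt maps to."""
    # Unknown databases fall back to a per-database namespace, which
    # degenerates to Bruce's dbname-qualified scheme.
    group = DB_GROUP.get(dbname, dbname)
    return "%s@%s" % (username, group)
```

The key difference from the dbname-prepending GUC is the extra indirection: the mangled name depends on the group a database belongs to, not on the database itself.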
{
"msg_contents": "> > * libpqxx is not integrated into build process nor docs. It should\n> > be integrated or reversed out before beta.\n> I've requestsed that Jeorgen(sp?) move this over to GBorg ... its\n> something that can, and should be, built seperately from the base\n> distribution, along with at least a dozen other things we have bloating\n> the distribution now :( but at least that one hasn't been integrated yet\n> ...\n\nActually, I'm not sure we should target one particular feature to be\nleft out, unless we have folks who are willing to do the planning,\ndesign, and implementation of a \"sliced and diced PostgreSQL\" in a\nconsistant and solid way.\n\nUntil we have folks who are excited enough about it to plan it out and\ndo the work, piecemeal rejection of components is not leading to a more\nsolid product.\n\nThe developers have made the commitment to have consistant and\nfunctional builds across all packages in the main distro. We have no\nmechanisms to do the same if the sources are coming from a bunch of\ndifferent areas.\n\nFrankly, a 9MB tarball is not in the bloat category in my book. Ask me\nabout the CORBA package I use that takes 3.5GB to build!!\n\n - Thomas\n",
"msg_date": "Wed, 31 Jul 2002 22:40:25 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Until we have folks who are excited enough about it to plan it out and\n> do the work, piecemeal rejection of components is not leading to a more\n> solid product.\n\nI'm lukewarm about whether to actually do the split or not ... but for\nsure I agree with Thomas' point here. We need a plan and careful\nimplementation, or a split-up will just make life worse.\n\nStuff that is in the tree tends to get maintained in passing. For\nexample, I've got some changes to contrib/dblink/ in my in-progress\nversion of Chris' DROP COLUMN patch, because a grep for references\nto rel->rd_att turned it up. If dblink weren't in our CVS it'd have\nbeen broken by DROP COLUMN, and who knows whether we'd catch that\nduring beta? I realize that Marc wasn't proposing splitting off any\nserver-side code, but I still want to tread carefully about breaking\nup the codebase.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Aug 2002 01:52:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items ) "
},
{
"msg_contents": "On Thu, 1 Aug 2002, Tom Lane wrote:\n\n> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> > Until we have folks who are excited enough about it to plan it out and\n> > do the work, piecemeal rejection of components is not leading to a more\n> > solid product.\n>\n> I'm lukewarm about whether to actually do the split or not ... but for\n> sure I agree with Thomas' point here. We need a plan and careful\n> implementation, or a split-up will just make life worse.\n>\n> Stuff that is in the tree tends to get maintained in passing. For\n> example, I've got some changes to contrib/dblink/ in my in-progress\n> version of Chris' DROP COLUMN patch, because a grep for references\n> to rel->rd_att turned it up. If dblink weren't in our CVS it'd have\n> been broken by DROP COLUMN, and who knows whether we'd catch that\n> during beta? I realize that Marc wasn't proposing splitting off any\n> server-side code, but I still want to tread carefully about breaking\n> up the codebase.\n\nOkay, well, the way I'm working it through right now, I'm doing it in such\na way that unless you go mucking in the repository directly, it will be\ntransparent to the coders, as well as to the distribution as a whole ...\n\nIn fact, based on a comment that Thomas made in another email, I'll even\nfix up the whole 'cvs checkout pgsql' thing so that that goes back to its\nprevious incarnation of pulling everything, instead of needing to do\npgsql-all ...\n\nSo, from the 'client side', y'all will still see \"everything as one big\npackage\", while from the 'server side', I'll have the seperate modules\ntaht can be packaged independently ...\n\nNext, unless Peter knows how to do it already, I've gotta learn to make\nconfigure more intelligent, so that for all intents, the \"pieces\" look\nlike one package when building, not just when coding ...\n\n",
"msg_date": "Thu, 1 Aug 2002 04:15:06 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items ) "
},
{
"msg_contents": "Bruce Momjian wrote: \n\n> Here are the open items for 7.3. We have one more month to address them\n> before beta.\n\n> CLUSTER - ready?\n\nI'm just back. I'll have a look at the problem with the patch and\nresubmit.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"A philosopher is one who enjoys enigmas\" (G. Coli)\n\n",
"msg_date": "Thu, 1 Aug 2002 03:55:46 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wednesday 31 July 2002 at 05:50, Bruce Momjian wrote:\n> Here are the open items for 7.3. We have one more month to address them\n> before beta.\n\nIs CREATE OR REPLACE VIEW on the list?\n",
"msg_date": "Thu, 1 Aug 2002 10:43:18 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n>> I realize that Marc wasn't proposing splitting off any\n>> server-side code, but I still want to tread carefully about breaking\n>> up the codebase.\n\n> Okay, well, the way I'm working it through right now, I'm doing it in such\n> a way that unless you go mucking in the repository directly, it will be\n> transparent to the coders, as well as to the distribution as a whole ...\n\n> In fact, based on a comment that Thomas made in another email, I'll even\n> fix up the whole 'cvs checkout pgsql' thing so that that goes back to its\n> previous incarnation of pulling everything, instead of needing to do\n> pgsql-all ...\n\nOkay, that works for me --- that makes it just a packaging issue, and\nnot something that will hide stuff from people who want to look through\nthe whole tree.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Aug 2002 09:52:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items ) "
},
{
"msg_contents": "On Thu, 1 Aug 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> >> I realize that Marc wasn't proposing splitting off any\n> >> server-side code, but I still want to tread carefully about breaking\n> >> up the codebase.\n>\n> > Okay, well, the way I'm working it through right now, I'm doing it in such\n> > a way that unless you go mucking in the repository directly, it will be\n> > transparent to the coders, as well as to the distribution as a whole ...\n>\n> > In fact, based on a comment that Thomas made in another email, I'll even\n> > fix up the whole 'cvs checkout pgsql' thing so that that goes back to its\n> > previous incarnation of pulling everything, instead of needing to do\n> > pgsql-all ...\n>\n> Okay, that works for me --- that makes it just a packaging issue, and\n> not something that will hide stuff from people who want to look through\n> the whole tree.\n\nActually, it makes it a 'storage' issue on the CVS server itself, but\nmakes creating various packages easier ... I've pop'd off an email to the\nlibpqxx configure guys to get their standalone configure issues fixed (try\nrunning autogen.sh), after which I want to look into 'calling' the\nstandalone configure from the global one if --enable-libpqxx is called\n(which we can later default to 'on' if that should become the default) ...\n\n\n",
"msg_date": "Thu, 1 Aug 2002 11:05:54 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items ) "
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> This name mangling should be done at connect time and kept out of\n> database, where each users name should always be fully resolved\n> (bob@accounting.acme.com). \n\nI really like Hannu's approach to this. It seems to solve Marc's\nproblem with a very simple, easily understood, easily implemented\nfeature. All we need is a postmaster configuration parameter that\n(when TRUE) causes the postmaster to convert the passed username\ninto 'username@databasename' before looking it up in pg_shadow.\n\n(Actually, what I'd prefer it do is try first for username, and\nthen username@databasename if plain username isn't found.)\n\nWith this approach, we have an underlying mechanism that supports\ninstallation-wide usernames, same as before, but with the flip of\na switch you can configure the system to support per-database\nusernames. It's not fancy, maybe, but it will get the job done\nwith an appropriate amount of effort.\n\nWe've had several proposals in this thread for complicated extensions\nto the user naming mechanism. I think that's overdesigning the feature,\nbecause we have *no* examples of real-world need for such things except\nfor Marc's situation. Let's keep it simple until we see real use cases\nthat can drive the design of something fancy.\n\n> This may require raising the length of NAME type to be backwards\n> compatible.\n\nRight, but we're planning to do that anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Aug 2002 10:17:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Jean-Michel POURE wrote:\n> On Wednesday 31 July 2002 at 05:50, Bruce Momjian wrote:\n> > Here are the open items for 7.3. We have one more month to address them\n> > before beta.\n> \n> Is CREATE OR REPLACE VIEW on the list?\n\nNo. It can still be done, but no one is working on it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 12:34:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wednesday 31 July 2002 04:56 pm, Bruce Momjian wrote:\n> Thomas Lockhart wrote:\n>> Tom Lane wrote:\n> > > I agree that if we could quickly come to a resolution about how this\n> > > ought to work, there's plenty of time to go off and implement it.\n\n> > afaict someone else volunteered to do the work. There is no lack of\n> > consensus that this is a useful feature, at least among those who take\n> > responsibility to package PostgreSQL for particular platforms. How about\n> > letting them specify the requirements and if an acceptable solution\n> > emerges soon, we'll have it for 7.3...\n\n> Added to open items:\n\n> \tallow specification of configuration files in a different directory\n\nThanks Bruce.\n\nI am going to review the previous thread and attempt to distill what can be \ndone. I will then post a summary of what I found, with potential commentary. \nIf a consensus is reached, I'd like to see the feature in 7.3.\n\nPeter had mentioned it as well; might want to see if he has done anything as \nyet with it.\n\nThat being said, a patch exists for 7.2beta to effect a version of this \nchange. I will also review this patch as I can and see what will be required \nto make this work in CURRENT.\n\nIMO, the key is that if the switch is not specified the current behavior is \ndefault. If specified, it will do its thing.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 1 Aug 2002 12:49:19 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > This name mangling should be done at connect time and kept out of\n> > database, where each users name should always be fully resolved\n> > (bob@accounting.acme.com). \n> \n> I really like Hannu's approach to this. It seems to solve Marc's\n> problem with a very simple, easily understood, easily implemented\n> feature. All we need is a postmaster configuration parameter that\n> (when TRUE) causes the postmaster to convert the passed username\n> into 'username@databasename' before looking it up in pg_shadow.\n\nYes, that is how the patch I submitted last night does it.\n\n> (Actually, what I'd prefer it do is try first for username, and\n> then username@databasename if plain username isn't found.)\n\nYes, that would be very easy to do _except_ for pg_hba.conf which does a\nfirst-match for username. We could get into trouble there by trying two\nversions of the same name. Comments?\n\n> With this approach, we have an underlying mechanism that supports\n> installation-wide usernames, same as before, but with the flip of\n> a switch you can configure the system to support per-database\n> usernames. It's not fancy, maybe, but it will get the job done\n> with an appropriate amount of effort.\n> \n> We've had several proposals in this thread for complicated extensions\n> to the user naming mechanism. I think that's overdesigning the feature,\n> because we have *no* examples of real-world need for such things except\n> for Marc's situation. Let's keep it simple until we see real use cases\n> that can drive the design of something fancy.\n\nAgreed.\n\n> \n> > This may require raising the length of NAME type to be backwards\n> > compatible.\n> \n> Right, but we're planning to do that anyway.\n\nYes, but that requires a protocol change, which we don't want to do for\n7.3. 
My fix is to just extend the username on the server side and\nappend the dbname if the switch is on.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 13:23:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> So, from the 'client side', y'all will still see \"everything as one big\n> package\", while from the 'server side', I'll have the seperate modules\n> taht can be packaged independently ...\n\nMarc, how are you dealing with libpq's access to the server include\nfiles?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 13:38:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "On Thu, 1 Aug 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> > So, from the 'client side', y'all will still see \"everything as one big\n> > package\", while from the 'server side', I'll have the seperate modules\n> > taht can be packaged independently ...\n>\n> Marc, how are you dealing with libpq's access to the server include\n> files?\n\nI haven't touched libpq yet ... I'm talking with the libpqxx guys right\nnow concerning getting the standalone libpqxx to work, and will be working\non figuring out how to get the 'master configure' to make use of the\nstandalone configure once that is fixed ... I want to get one module to\nwork cleanly before looking at any others ...\n\n\n\n",
"msg_date": "Thu, 1 Aug 2002 14:49:45 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> On Thu, 1 Aug 2002, Bruce Momjian wrote:\n> \n> > Marc G. Fournier wrote:\n> > > So, from the 'client side', y'all will still see \"everything as one big\n> > > package\", while from the 'server side', I'll have the seperate modules\n> > > taht can be packaged independently ...\n> >\n> > Marc, how are you dealing with libpq's access to the server include\n> > files?\n> \n> I haven't touched libpq yet ... I'm talking with the libpqxx guys right\n> now concerning getting the standalone libpqxx to work, and will be working\n> on figuring out how to get the 'master configure' to make use of the\n> standalone configure once that is fixed ... I want to get one module to\n> work cleanly before looking at any others ...\n\nBut isn't libpq++ just going to be part of interfaces?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 14:19:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "On Thu, 2002-08-01 at 16:17, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > This name mangling should be done at connect time and kept out of\n> > database, where each users name should always be fully resolved\n> > (bob@accounting.acme.com). \n> \n> I really like Hannu's approach to this. It seems to solve Marc's\n> problem with a very simple, easily understood, easily implemented\n> feature. All we need is a postmaster configuration parameter that\n> (when TRUE) causes the postmaster to convert the passed username\n> into 'username@databasename' before looking it up in pg_shadow.\n> \n> (Actually, what I'd prefer it do is try first for username, and\n> then username@databasename if plain username isn't found.)\n\nThis should not really be @databasename, but rather a @domainname, as\nMarc does in fact want to use the same user from some virtual host\n(==domain) for more than one database sometimes.\n\nUsing databasename as a domainname is just the quickest way to resolve\nthe domainname if no more info about it is given.\n\nThinking of the @xxx part as a domainname and not tying it to\ndatabasename would be beneficial in case we later want to use other\nkinds of domains (like NT, DNS/mail, YP or Kerberos domains for example)\n\nIf need arises we could later split out the @xxx part to a \"usedomain\"\nfield and perhaps also add a \"usedomainkind\" field in order to manage that\ninfo in the database instead of pg_hba.conf.\n\n-----------------\nHannu\n\n",
"msg_date": "01 Aug 2002 20:46:35 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Thu, 1 Aug 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> > On Thu, 1 Aug 2002, Bruce Momjian wrote:\n> >\n> > > Marc G. Fournier wrote:\n> > > > So, from the 'client side', y'all will still see \"everything as one big\n> > > > package\", while from the 'server side', I'll have the seperate modules\n> > > > taht can be packaged independently ...\n> > >\n> > > Marc, how are you dealing with libpq's access to the server include\n> > > files?\n> >\n> > I haven't touched libpq yet ... I'm talking with the libpqxx guys right\n> > now concerning getting the standalone libpqxx to work, and will be working\n> > on figuring out how to get the 'master configure' to make use of the\n> > standalone configure once that is fixed ... I want to get one module to\n> > work cleanly before looking at any others ...\n>\n> But isn't libpq++ just going to be part of interfaces?\n\nHuh? Each module is to be designed as a standalone project/distribution\n... in order to appease those that don't feel that the change is worth it,\nI'm making, essentially, a 'meta-module' that will pull in the separate\nmodules into what you've all gotten used to from a development standpoint\n...\n\nIf you checkout pgsql, you will see what you are used to seeing, locally\nstored in pgsql\n\nIf you checkout libpqxx, you will just get libpqxx, locally stored in\nlibpqxx\n\nIf you checkout interfaces, you will get all of the interfaces listed in a\n'meta module' consisting of the various interfaces, locally stored in a\ndirectory of pgsql/src/interfaces/*\n\nFor those that are used to checking out pgsql, continue to do so ... 
for\nppl like Jergeon(sp?), he will checkout libpqxx itself and work on it as\nif it were a standalone project, but when we package up pgsql, it will get\npulled in along with everything else, so that for those that have nothing\nbetter to do than download everything and the kitchen sink, they can ...\n\nAt the same time as the distribution is made, a libpqxx.tar.gz will be\ncreated, that will be a self-contained source tree for just that, so that\nthose doing ports on the *BSDs or rpms/etc on Linux have pretty much\npre-made distros that they don't have to slice and dice (ie. for FreeBSD,\nyou'd do something like go into /usr/ports/database/pgsql-libpqxx, type\n'make' and it would automatically go out, download libpq.tar.gz, install\nit as a dependency and then grab and install the libpqxx file ... if you\nhad already installed libpq previously, for mod_php4, for example, then it\nwould just download and install the libpqxx tar file) ...\n\nUnless I break something along the way, as far as you are concerned,\nnothing has changed ... just keep checking out pgsql as you always have\n... but for those of us that pay for our bandwidth by the byte, we'll now\nbe able to download *only* what we require, saving us both time and money\n...\n\n\n\n",
"msg_date": "Thu, 1 Aug 2002 16:09:26 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> (Actually, what I'd prefer it do is try first for username, and\n>> then username@databasename if plain username isn't found.)\n\n> Yes, that would be very easy to do _except_ for pg_hba.conf which does a\n> first-match for username. We could get into trouble there by trying two\n> versions of the same name. Comments?\n\nHm. I think we'd have to switch around the order of stuff so that we\nlook at the flat-file copy of pg_shadow first. Then we'd know which\nflavor of name we have, and we can proceed with the pg_hba match.\n\nThe reason it's worth doing this is that 'postgres', for example, should\nbe an installation-wide username even when you're using db-local names\nfor ordinary users.\n\n> This may require raising the length of NAME type to be backwards\n> compatible.\n>> \n>> Right, but we're planning to do that anyway.\n\n> Yes, but that requires a protocol change, which we don't want to do for\n> 7.3.\n\nWhat? We've been discussing raising NAMEDATALEN for months, and no\none's claimed that it qualifies as a protocol version change.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Aug 2002 16:01:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Marc G. Fournier writes:\n\n> Next, unless Peter knows how to do it already, I've gotta learn to make\n> configure more intelligent, so that for all intents, the \"pieces\" look\n> like one package when building, not just when coding ...\n\nIt is possible, but it's not going to work.\n\nThere is a lot of interdependent and shared C code that needs to be put\nsomewhere. The build infrastructure is not ready to handle missing sub-\nor superdirectories at all. Where is all the documentation going to go?\nHow are the installation instructions going to cope with the fact that no\none knows where everything is?\n\nThis whole thing is a worthwhile idea, to some extent, but a lot more\nplanning needs to be done. In the meantime I humbly suggest you look for\na better package manager rather than letting it all out on us.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 1 Aug 2002 22:06:26 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items ) "
},
{
"msg_contents": "Tom Lane writes:\n\n> We've had several proposals in this thread for complicated extensions\n> to the user naming mechanism. I think that's overdesigning the feature,\n> because we have *no* examples of real-world need for such things except\n> for Marc's situation. Let's keep it simple until we see real use cases\n> that can drive the design of something fancy.\n\nI don't buy this argument. The reason we have no examples is that people\nare happily using the feature and don't have any reason to tell the world\nabout it.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 1 Aug 2002 22:06:42 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Lamar Owen writes:\n\n> > \tallow specification of configuration files in a different directory\n\n> I am going to review the previous thread and attempt to distill what can be\n> done. I will then post a summary of what I found, with potential commentary.\n> If a consensus is reached, I'd like to see the feature in 7.3.\n\nThe end result of the discussion was that no one could come up with a\nbright idea to secure the configuration files without doing anything bogus\nduring installation.\n\nAnother issue, which becomes even more problematic if you factor in the\nWAL file location discussion, is that if we drive the location of the data\nfrom the configuration file instead of vice versa, we need to have initdb\nsmart enough to read those files.\n\nFinally, I recall that a major reason to have these files in a separate\nplace is to be able to share them. But that won't work because those\nfiles contain port numbers, data directory locations, etc. which can't be\nshared. That needs a better plan than possibly \"use command-line options\nto override\".\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 1 Aug 2002 22:06:52 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Marc G. Fournier writes:\n\n> I haven't touched libpq yet ... I'm talking with the libpqxx guys right\n> now concerning getting the standalone libpqxx to work, and will be working\n> on figuring out how to get the 'master configure' to make use of the\n> standalone configure once that is fixed ... I want to get one module to\n> work cleanly before looking at any others ...\n\nI fail to understand how this mess is going to achieve anything. If you\nlike, you can assemble or split the modules into trees or tarballs after\nyou have them checked out, or even after you have downloaded and unpacked\nthem. But a source code repository is not a package manager.\n\nMoreover, I would really like it if there was *some* discussion before we\nare presented with done deals.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 1 Aug 2002 22:36:37 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "On Thursday 01 August 2002 04:06 pm, Peter Eisentraut wrote:\n> Another issue, which becomes even more problematic if you factor in the\n> WAL file location discussion, is that if we drive the location of the data\n> from the configuration file instead of vice versa, we need to have initdb\n> smart enough to read those files.\n\nHmm, I hadn't thought about that -- but I like that idea. Not exclusive of \nthe existing way, either. But alongside it. More thought required.\n\n> Finally, I recall that a major reason to have these files in a separate\n> place is to be able to share them. But that won't work because those\n> files contain port numbers, data directory locations, etc. which can't be\n> shared. That needs a better plan than possibly \"use command-line options\n> to override\".\n\nNo, the major reason was to allow the config files to live in a different area \nthan the data files without symlink kludges. The reasons why an admin might \nwant this are manifold. The reason I want it is to simplify multiple \npostmasters in an RPM installation. \n\nYou can then blow away the whole PGDATA tree and start from scratch without \nlosing your config files.\n\nYou had an idea along these lines, and I was quite OK with the majority of it.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 1 Aug 2002 16:49:10 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> (Actually, what I'd prefer it do is try first for username, and\n> >> then username@databasename if plain username isn't found.)\n> \n> > Yes, that would be very easy to do _except_ for pg_hba.conf which does a\n> > first-match for username. We could get into trouble there by trying two\n> > versions of the same name. Comments?\n> \n> Hm. I think we'd have to switch around the order of stuff so that we\n> look at the flat-file copy of pg_shadow first. Then we'd know which\n> flavor of name we have, and we can proceed with the pg_hba match.\n> \n> The reason it's worth doing this is that 'postgres', for example, should\n> be an installation-wide username even when you're using db-local names\n> for ordinary users.\n\nYes, that's why my code had a special case for 'postgres' or whatever\nsuper-user name it was installed with. I think it is cleaner to just\nread the install username. Also, right now, pg_pwd only contains\nusernames that have passwords, not all of them.\n\n> > This may require raising the length of NAME type to be backwards\n> > compatible.\n> >> \n> >> Right, but we're planning to do that anyway.\n> \n> > Yes, but that requires a protocol change, which we don't want to do for\n> > 7.3.\n> \n> What? We've been discussing raising NAMEDATALEN for months, and no\n> one's claimed that it qualifies as a protocol version change.\n\nI thought they were talking about increasing the length of the user NAME\nthat comes off the wire. That is currently 32. I see now he was just\ntalking about NAMEDATALEN. Good thing we are prepending the database\nname after receiving the name.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 17:07:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > We've had several proposals in this thread for complicated extensions\n> > to the user naming mechanism. I think that's overdesigning the feature,\n> > because we have *no* examples of real-world need for such things except\n> > for Marc's situation. Let's keep it simple until we see real use cases\n> > that can drive the design of something fancy.\n> \n> I don't buy this argument. The reason we have no examples is that people\n> are happily using the feature and don't have any reason to tell the world\n> about it.\n\nWell, we are going to find out in 7.3 when the secondary password file\nis no longer supported.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 17:12:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Functions Returning Sets - done?\n\nThe basic capability is done, but a number of supporting capabilities \nremain. These are the ones I hope to have done for 7.3:\n\n- PL/pgSQL table function support: not started, but I may get help with\n this.\n- anonymous composite types: patch submitted\n- stand-alone composite types: proposal submitted\n- implicit stand-alone composite types on CREATE FUNCTION: proposal\n submitted\n- Move show_all_settings() from contrib/tablefunc to the backend and\n create a system view using the same method as Neil's pg_locks view.\n\nAdditional refinements (streaming vs tuplestore, rescan pushed from \nplanner to executor, etc) will be 7.4 items.\n\nAdditionally on my personal TODO for 7.3 are:\n- modify contrib/dblink to take advantage of table function and new\n composite type capabilities\n- submit string manipulation functions discussed with Thomas a few\n weeks ago --> replace(), to_hex(), extract_tok()\n\nJoe\n\n\n\n\n",
"msg_date": "Thu, 01 Aug 2002 16:27:06 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, Jul 31, 2002 at 02:08:33PM -0300, Marc G. Fournier wrote:\n> \n> Who cares? Those that need a C++ interface will know where to find it,\n> and will report bugs that they have ... why should it be tested on every\n> platform when we *might* only have those on the Linux platform using it?\n \nWell, portability's actually a lot better than that OS-wise. The thing\nto worry about is compilers. I've got some changes from Clinton James\nwaiting in the wings until libpqxx's status and development home are\nsettled. Those changes will make it compile on the latest version of\nVisual C++ (and I'll take some changes out again once they're no longer\nneeded for that purpose), and most of it seems to work.\n\n\n> What happens if/when libpqxx becomes the 'old, crufty interface' and\n> something newer and shinier comes along? Where do we draw the line at\n> what is in the distribution? For instance, why pgaccess vs a platform\n> independent version of PgAdmin vs PHPPgAdmin? Hell, IMHO, PHPPgAdmin\n> would most likely be more useful as more sites out there are running PHP\n> then likely have TCL installed ... but someone that is using TCL/AolServer\n> would definitely think otherwise ...\n \nLooking at it that way, it seems to me that the proper approach is to\ncut out all interfaces that don't talk to the backend themselves--e.g.\nthe ones that build on top of libpq, like libpq++ and libpqxx do.\n\nOf course my humble but thoroughly biased opinion is that libpq++ be\nmarked \"legacy.\"\n\n\n> By branching out the fat, we make it *easier* for someone to take on\n> development of it ... would libpqxx ever have been developed if Joergen\n> could have just worked on libpq++ in the first place, without having to\n> submit patches?\n \nYes. Now STOP BRUTALIZING MY NAME!\n\n\n...\n\nPhew. I feel better now. What was I saying?\n\nAh, yes. 
What you say pretty much describes how libpqxx came to be: get\na local copy of libpq++, try to fix it on a carte-blanche basis, find \nnothing salvageable, write from scratch building on libpq++'s experience.\n\nThat said, I do support the idea of separately administered projects for\nthe reasons you give. Looking at specifics first, the problem I'm stuck\nwith for now is that I can't really work on the thing until this point is\ndecided. Well I could, but not until the doctor lets me get back to it.\nWhich requires, among other things, that it not give me headaches. :-) \n\nFor the more general case, there's the problem of release management: who's\ngoing to be taking care of synchronizing releases? This may require some\nnew infrastructure, such as a mailing list dedicated to the process, or one\nrestricted to subproject maintainers. Or something.\n\nPerhaps the unbundling of subprojects justifies that version bump to 8.0 \nafter all...\n\n\nJeroen\n\n\n(Juliet Echo Romeo Oscar Echo November. Jeroen. No G. Note intricate \norder of vowels. Phonetic spelling in English would be Yeroon. Try to\nroll the \"r\" a little.)\n\n",
"msg_date": "Fri, 2 Aug 2002 01:45:00 +0200",
"msg_from": "jtv <jtv@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "On Fri, 2 Aug 2002, jtv wrote:\n\n> Looking at it that way, it seems to me that the proper approach is to\n> cut out all interfaces that don't talk to the backend themselves--e.g.\n> the ones that build on top of libpq, like libpq++ and libpqxx do.\n\nThis is what my opinion is ... what I'm setting up right now with CVS is\nmeant to be a middle ground ...\n\n> Of course my humble but thoroughly biased opinion is that libpq++ be\n> marked \"legacy.\"\n\nNo doubt, but, if we didn't \"push\" one interface over another, then it\nwould be up to the end-users themselves to decide which one to install ...\n\n> > By branching out the fat, we make it *easier* for someone to take on\n> > development of it ... would libpqxx ever have been developed if Joergen\n> > could have just worked on libpq++ in the first place, without having to\n> > submit patches?\n>\n> Yes.\n\nOkay, then let's word it another way ... if libpq++ *wasn't* in the\nrepository and part of the distribution, would you have a) started working\non it sooner? b) would you have made it public earlier? c) would ppl\nhave started to use it and stop'd using libpq++?\n\nBasically, with libpq++ in the distribution, we are endorsing its use, so\nif we didn't put libpqxx into the repository, would ppl switch from the\n'endorsed' to the 'unendorsed' version?\n\nBy having libpq++ in the repository, we are adding weight to it that it\nwouldn't have if it were outside of the repository, making it more\ndifficult for 'alternatives' to pop in ...\n\n> For the more general case, there's the problem of release management: who's\n> going to be taking care of synchronizing releases? This may require some\n> new infrastructure, such as a mailing list dedicated to the process, or one\n> restricted to subproject maintainers. Or something.\n\nWell, for now, I'd say keep the status quo of just using -hackers ...\n\n\n",
"msg_date": "Thu, 1 Aug 2002 21:07:46 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "On Thu, Aug 01, 2002 at 09:07:46PM -0300, Marc G. Fournier wrote:\n> \n> > Of course my humble but thoroughly biased opinion is that libpq++ be\n> > marked \"legacy.\"\n> \n> No doubt, but, if we didn't \"push\" one interface over another, then it\n> would be up to the end-users themselves to decide which one to install ...\n \nIn theory, yes. In this case, however, I see two arguments for making\nthe distinction anyway:\n\n1. Some people won't want to go to the trouble of comparing available \ninterfaces, so they may default to libpq++ because it's what they found\nfirst, or because they find mentions of it as the official C++ interface.\nI think it would be a shame to have new users default to libpq++ \"by \naccident.\" I think many users would prefer to rely on the PostgreSQL \nteam's judgment--as they do by choosing Postgres in the first place.\n\n2. I get the impression that libpq++ sort of got stuck before it was\ncompleted. For the time being libpqxx appears to have better maintenance\nprospects. Users will want to be aware of this before making a choice.\n\n\n> Okay, then let's word it another way ... if libpq++ *wasn't* in the\n> repository and part of the distribution, would you have a) started working\n> on it sooner? b) would you have made it public earlier? c) would ppl\n> have started to use it and stop'd using libpq++?\n \nI'm not sure there's much point to going into a single example in detail,\nbut for completeness' sake I'll just answer these:\n\na) Yes.\nb) No, because in my case I was encouraged by team members' endorsement of\nfirst my suggestions for libpq++, and later a full replacement. 
Without\nthat, and without an active libpq++ maintainer around, libpqxx might never \nhave gotten off the ground.\nc) I'd like to think so, yes--but exposure would have been harder.\n\n\n> Basically, with libpq++ in the distribution, we are endorsing its use, so\n> if we didn't put libpqxx into the repository, would ppl switch from the\n> 'endorsed' to the 'unendorsed' version?\n> \n> By having libpq++ in the repository, we are adding weight to it that it\n> wouldn't have if it were outside of the repository, making it more\n> difficult for 'alternatives' to pop in ...\n \nI definitely agree here. See above.\n\n\n> > For the more general case, there's the problem of release management: who's\n> > going to be taking care of synchronizing releases? This may require some\n> > new infrastructure, such as a mailing list dedicated to the process, or one\n> > restricted to subproject maintainers. Or something.\n\nThis reminds me of another potential complication: how would unbundling\naffect the licensing situation? Mixing and matching components is good\nin many ways, but it could complicate the situation for end-users--who\nprobably like to be able to rely on the team's judgment on this issue as \nwell.\n\nI feel compelled at this point to admit that I prefer the GPL. This is\na personal preference, which I set aside because I wanted and expected \nlibpqxx to become the standard C++ interface. Had these interfaces not\nbeen bundled, I would have had less incentive to conform to Postgres'\nlicensing conditions. I think having a different license would have\nmade everyone's life a little harder.\n\n\nJeroen\n\n(and yes, I'm trying to repair my From: lines!)\n\n",
"msg_date": "Fri, 2 Aug 2002 19:53:48 +0200",
"msg_from": "\"Jeroen T. Vermeulen\" <jtv@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: Trim the Fat (Was: Re: Open 7.3 items )"
},
{
"msg_contents": "On Tue, Jul 30, 2002 at 11:50:38PM -0400, Bruce Momjian wrote:\n> ecpg and bison issues - solved?\n\nNot solved yet. And it's just a matter of time until we run into it with\nthe main parser grammar file as well. Bison upstream is working on\nremoving all those short ints, but I have yet to receive a version that\ncompiles ecpg grammar correctly.\n\nNo idea if this will be fixed in the next month.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Sun, 11 Aug 2002 12:36:34 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> On Tue, Jul 30, 2002 at 11:50:38PM -0400, Bruce Momjian wrote:\n>> ecpg and bison issues - solved?\n\n> Not solved yet. And it's just a matter of time until we run into it with\n> the main parser grammar file as well.\n\nYeah, I've been worrying about that too. Any idea how close we are to\ntrouble in the main grammar?\n\n> Bison upstream is working on\n> removing all those short ints, but I have yet to receive a version that\n> compiles ecpg grammar correctly.\n\nIf no solution is forthcoming, we might have to adopt plan B: find\nanother parser-generator tool. Whilst googling for bison info I came\nacross \"Why Bison is Becoming Extinct\"\n\thttp://www.acm.org/crossroads/xrds7-5/bison.html\nwhich is a tad amusing at least. Now, it's anyone's guess whether any\nof the tools he suggests are actually ready for prime time; they might\nhave practical limits much worse than bison's. But I got awfully\nfrustrated yesterday trying (once again) to get bison to allow a\nschema-qualified type name in the syntax <typename> <literal string>.\nI'm just about ready to consider alternatives.\n\nPlan C would be to devote some work to minimizing the number of states\nin the main grammar (I assume it's number of states that's the problem).\nI doubt anyone's ever tried, so there might be enough low-hanging fruit\nto get ecpg off the hook for awhile.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 11 Aug 2002 11:12:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "On Sun, Aug 11, 2002 at 11:12:57AM -0400, Tom Lane wrote:\n> > Not solved yet. And it's just a matter of time until we run into it with\n> > the main parser grammar file as well.\n> \n> Yeah, I've been worrying about that too. Any idea how close we are to\n> trouble in the main grammar?\n\nNo idea. The ecpg grammar in the main tree has about 530 rules, while my\nactual version is at nearly 550. The main grammar should be at about\n440. So there's some room left.\n\n> Plan C would be to devote some work to minimizing the number of states\n> in the main grammar (I assume it's number of states that's the problem).\n> I doubt anyone's ever tried, so there might be enough low-hanging fruit\n> to get ecpg off the hook for awhile.\n\nActually I already ate the really low-hanging fruit. :-)\n\nI did spend some time to reduce the states, albeit surely not to the\nextent possible, but still it will mean quite some work I'm afraid.\n\nMichael\n\nP.S.: No response from bison upstream yet, but I think he's on vacation\nthis week.\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Thu, 15 Aug 2002 16:28:29 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
}
] |
[
{
"msg_contents": "Hi,\n\nI see different results in Oracle and postgres for same outer join queries.\nHere are the details.\n\nI have the following tables in our pg db\n\ntable: yuva_test1\nyt1_id\t\tyt1_name\tyt1_descr\n1\t\t1-name1\t1-desc1\n2\t\t1-name2\t1-desc2\n3\t\t1-name3\t1-desc3\n4\t\t1-name4\t1-desc4\n5\t\t1-name5\t1-desc5\n6\t\t1-name6\t1-desc6\n\ntable: yuva_test2\nyt2_id\t\tyt2_name\tyt2_descr\n2\t\t2-name2\t2-desc2\n3\t\t2-name3\t2-desc3\n4\t\t2-name4\t2-desc4\n\nWhen I run the query \"select yt1_name, yt1_descr, yt2_name, yt2_descr from\nyuva_test1 left outer join yuva_test2 on yt1_id=yt2_id and yt2_name =\n'2-name2'\" on postgres database I get the following results\n\nyt1_name\tyt1_descr\tyt2_name\tyt2_descr\n1-name1\t1-descr1\n1-name2\t1-descr2\t2-name2\t2-descr2\n1-name3\t1-descr3\n1-name4\t1-descr4\n1-name5\t1-descr5\n1-name6\t1-descr6\n\nBut when I tried the same on Oracle(8.1.7) (the query is \"select yt1_name,\nyt1_descr, yt2_name, yt2_descr from yuva_test1, yuva_test2 where\nyt1_id=yt2_id(+) and yt2_name = '2-name2'') I get the following results\n\nyt1_name\tyt1_descr\tyt2_name\tyt2_descr\n1-name2\t1-descr2\t2-name2\t2-descr2\n\nWhy postgres is giving? which is standard? is it a bug? or is it the way\npostgres is implemented? Could some one help me?\n\nNote: at the end of my mail is script to create tables and data in postgres.\n\nThanks\nYuva\nSr. 
Java Developer\nwww.ebates.com\n\n============================================================\nScripts:\nCREATE TABLE \"yuva_test1\" (\n \"yt1_id\" numeric(16, 0), \n \"yt1_name\" varchar(16) NOT NULL, \n \"yt1_descr\" varchar(32)\n) WITH OIDS;\n\nCREATE TABLE \"yuva_test2\" (\n \"yt2_id\" numeric(16, 0), \n \"yt2_name\" varchar(16) NOT NULL, \n \"yt2_descr\" varchar(32)\n) WITH OIDS;\n\ninsert into yuva_test1 (yt1_id, yt1_name, yt1_descr) values (1, '1-name1',\n'1-descr1');\ninsert into yuva_test1 (yt1_id, yt1_name, yt1_descr) values (2, '1-name2',\n'1-descr2');\ninsert into yuva_test1 (yt1_id, yt1_name, yt1_descr) values (3, '1-name3',\n'1-descr3');\ninsert into yuva_test1 (yt1_id, yt1_name, yt1_descr) values (4, '1-name4',\n'1-descr4');\ninsert into yuva_test1 (yt1_id, yt1_name, yt1_descr) values (5, '1-name5',\n'1-descr5');\ninsert into yuva_test1 (yt1_id, yt1_name, yt1_descr) values (6, '1-name6',\n'1-descr6');\n\ninsert into yuva_test2 (yt2_id, yt2_name, yt2_descr) values (2, '2-name2',\n'2-descr2');\ninsert into yuva_test2 (yt2_id, yt2_name, yt2_descr) values (3, '2-name3',\n'2-descr3');\ninsert into yuva_test2 (yt2_id, yt2_name, yt2_descr) values (4, '2-name4',\n'2-descr4');\n============================================================\n",
"msg_date": "Tue, 30 Jul 2002 20:53:06 -0700",
"msg_from": "Yuva Chandolu <ychandolu@ebates.com>",
"msg_from_op": true,
"msg_subject": "Outer join differences"
},
{
"msg_contents": "Yuva Chandolu <ychandolu@ebates.com> writes:\n> I see different results in Oracle and postgres for same outer join queries.\n\nI believe you are sending your bug report to the wrong database.\n\n> When I run the query \"select yt1_name, yt1_descr, yt2_name, yt2_descr from\n> yuva_test1 left outer join yuva_test2 on yt1_id=yt2_id and yt2_name =\n> '2-name2'\" on postgres database I get the following results\n\n> yt1_name\tyt1_descr\tyt2_name\tyt2_descr\n> 1-name1\t1-descr1\n> 1-name2\t1-descr2\t2-name2\t2-descr2\n> 1-name3\t1-descr3\n> 1-name4\t1-descr4\n> 1-name5\t1-descr5\n> 1-name6\t1-descr6\n\n> But when I tried the same on Oracle(8.1.7) (the query is \"select yt1_name,\n> yt1_descr, yt2_name, yt2_descr from yuva_test1, yuva_test2 where\n> yt1_id=yt2_id(+) and yt2_name = '2-name2'') I get the following results\n\n> yt1_name\tyt1_descr\tyt2_name\tyt2_descr\n> 1-name2\t1-descr2\t2-name2\t2-descr2\n\nAccording to the SQL spec, the output of a LEFT JOIN consists of those\njoined rows where the join condition is true, plus those rows of the\nleft table for which no right-table row produced a true join condition\n(substituting nulls for the right-table columns). Our output clearly\nconforms to the spec.\n\nI do not know what Oracle thinks is the correct output when one\ncondition is marked with (+) and the other is not --- it's not very\nobvious what that corresponds to in the spec's terminology. But I\nsuggest you take it up with them, not us.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Jul 2002 00:14:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Outer join differences "
},
{
"msg_contents": "\nOn Tue, 30 Jul 2002, Yuva Chandolu wrote:\n\n> Hi,\n>\n> I see different results in Oracle and postgres for same outer join queries.\n> Here are the details.\n\nThose probably aren't the same outer join queries.\n\n> When I run the query \"select yt1_name, yt1_descr, yt2_name, yt2_descr from\n> yuva_test1 left outer join yuva_test2 on yt1_id=yt2_id and yt2_name =\n> '2-name2'\" on postgres database I get the following results\n>\n\nBoth conditions are part of the join condition for the outer join.\n\n> But when I tried the same on Oracle(8.1.7) (the query is \"select yt1_name,\n> yt1_descr, yt2_name, yt2_descr from yuva_test1, yuva_test2 where\n> yt1_id=yt2_id(+) and yt2_name = '2-name2'') I get the following result\n\nOne condition is the join condition and one is a general where condition I\nwould guess since only one has the (+)\n\nI think the equivalent query is\nselect yt1_name, yt1_descr, yt2_name, yt2_descr from yuva_test1 left outer\njoin yuva_test2 on yt1_id=yt2_id where yt2_name='2-name2'.\n\nNote of course that you're destroying the outer joinness by doing\nthat yt2_name='2-name2' since the rows with no matching yuva_test2\nwill not match that condition.\n\n",
"msg_date": "Tue, 30 Jul 2002 21:31:26 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Outer join differences"
},
{
"msg_contents": "> > When I run the query \"select yt1_name, yt1_descr, yt2_name,\n> yt2_descr from\n> > yuva_test1 left outer join yuva_test2 on yt1_id=yt2_id and yt2_name =\n> > '2-name2'\" on postgres database I get the following results\n\nProbably if you change your postgres query to this, it will give the same\nanswer as Oracle:\n\nselect yt1_name, yt1_descr, yt2_name,\nyt2_descr from\nyuva_test1 left outer join yuva_test2 on yt1_id=yt2_id where yt2_name =\n'2-name2';\n\n??\n\nChris\n\n\n> > But when I tried the same on Oracle(8.1.7) (the query is\n> \"select yt1_name,\n> > yt1_descr, yt2_name, yt2_descr from yuva_test1, yuva_test2 where\n> > yt1_id=yt2_id(+) and yt2_name = '2-name2'') I get the following result\n\nAnd maybe if you change the oracle query to this, it will give the same\nanswer as postgres:\n\nselect yt1_name,\nyt1_descr, yt2_name, yt2_descr from yuva_test1, yuva_test2 where\nyt1_id=yt2_id(+) and yt2_name = '2-name2'(+);\n\nJust guessing tho.\n\nChris\n\n",
"msg_date": "Wed, 31 Jul 2002 12:35:13 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Outer join differences"
}
] |
[
{
"msg_contents": "Hi Tom,\n\nThanks for your prompt reply, after second thought(before receiving your\nreply) I realized that postgres is doing more logically - i.e if the outer\njoin condition returns false then replace by nulls for right table columns.\nWe may change our code accordingly :-(.\n\nThanks\nYuva\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\nSent: Tuesday, July 30, 2002 9:15 PM\nTo: Yuva Chandolu\nCc: 'pgsql-hackers@postgresql.org'\nSubject: Re: [HACKERS] Outer join differences \n\n\nYuva Chandolu <ychandolu@ebates.com> writes:\n> I see different results in Oracle and postgres for same outer join\nqueries.\n\nI believe you are sending your bug report to the wrong database.\n\n> When I run the query \"select yt1_name, yt1_descr, yt2_name, yt2_descr from\n> yuva_test1 left outer join yuva_test2 on yt1_id=yt2_id and yt2_name =\n> '2-name2'\" on postgres database I get the following results\n\n> yt1_name\tyt1_descr\tyt2_name\tyt2_descr\n> 1-name1\t1-descr1\n> 1-name2\t1-descr2\t2-name2\t2-descr2\n> 1-name3\t1-descr3\n> 1-name4\t1-descr4\n> 1-name5\t1-descr5\n> 1-name6\t1-descr6\n\n> But when I tried the same on Oracle(8.1.7) (the query is \"select yt1_name,\n> yt1_descr, yt2_name, yt2_descr from yuva_test1, yuva_test2 where\n> yt1_id=yt2_id(+) and yt2_name = '2-name2'') I get the following results\n\n> yt1_name\tyt1_descr\tyt2_name\tyt2_descr\n> 1-name2\t1-descr2\t2-name2\t2-descr2\n\nAccording to the SQL spec, the output of a LEFT JOIN consists of those\njoined rows where the join condition is true, plus those rows of the\nleft table for which no right-table row produced a true join condition\n(substituting nulls for the right-table columns). Our output clearly\nconforms to the spec.\n\nI do not know what Oracle thinks is the correct output when one\ncondition is marked with (+) and the other is not --- it's not very\nobvious what that corresponds to in the spec's terminology. 
But I\nsuggest you take it up with them, not us.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 30 Jul 2002 21:29:48 -0700",
"msg_from": "Yuva Chandolu <ychandolu@ebates.com>",
"msg_from_op": true,
"msg_subject": "Re: Outer join differences "
}
] |
[
{
"msg_contents": "This is great, we thought we may go for code changes, we will go with this\nsolution instead.\n\nThanks\nYuva\n\n-----Original Message-----\nFrom: Stephan Szabo [mailto:sszabo@megazone23.bigpanda.com]\nSent: Tuesday, July 30, 2002 9:31 PM\nTo: Yuva Chandolu\nCc: 'pgsql-hackers@postgresql.org'\nSubject: Re: [HACKERS] Outer join differences\n\n\n\nOn Tue, 30 Jul 2002, Yuva Chandolu wrote:\n\n> Hi,\n>\n> I see different results in Oracle and postgres for same outer join\nqueries.\n> Here are the details.\n\nThose probably aren't the same outer join queries.\n\n> When I run the query \"select yt1_name, yt1_descr, yt2_name, yt2_descr from\n> yuva_test1 left outer join yuva_test2 on yt1_id=yt2_id and yt2_name =\n> '2-name2'\" on postgres database I get the following results\n>\n\nBoth conditions are part of the join condition for the outer join.\n\n> But when I tried the same on Oracle(8.1.7) (the query is \"select yt1_name,\n> yt1_descr, yt2_name, yt2_descr from yuva_test1, yuva_test2 where\n> yt1_id=yt2_id(+) and yt2_name = '2-name2'') I get the following result\n\nOne condition is the join condition and one is a general where condition I\nwould guess since only one has the (+)\n\nI think the equivalent query is\nselect yt1_name, yt1_descr, yt2_name, yt2_descr from yuva_test1 left outer\njoin yuva_test2 on yt1_id=yt2_id where yt2_name='2-name2'.\n\nNote of course that you're destroying the outer joinness by doing\nthat yt2_name='2-name2' since the rows with no matching yuva_test2\nwill not match that condition.\n",
"msg_date": "Tue, 30 Jul 2002 21:46:57 -0700",
"msg_from": "Yuva Chandolu <ychandolu@ebates.com>",
"msg_from_op": true,
"msg_subject": "Re: Outer join differences"
},
{
"msg_contents": "> This is great, we thought we may go for code changes, we will go with this\n> solution instead.\n\nBut you did catch Stephan's point that an outer join is not required to\nproduce the result you apparently want? The equivalent inner join will\nbe at worst just as fast, and possibly faster, both for PostgreSQL and\nfor Oracle...\n\n - Thomas\n",
"msg_date": "Tue, 30 Jul 2002 23:18:41 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Outer join differences"
},
{
"msg_contents": "> > Here are the details.\n>\n> Those probably aren't the same outer join queries.\n\nI think you're right, these aren't the same, see below:\n\n>\n> > When I run the query \"select yt1_name, yt1_descr, yt2_name, yt2_descr\n> > from yuva_test1 left outer join yuva_test2 on yt1_id=yt2_id and yt2_name\n> > = '2-name2'\" on postgres database I get the following results\n>\n> Both conditions are part of the join condition for the outer join.\n>\n> > But when I tried the same on Oracle(8.1.7) (the query is \"select\n> > yt1_name, yt1_descr, yt2_name, yt2_descr from yuva_test1, yuva_test2\n> > where yt1_id=yt2_id(+) and yt2_name = '2-name2'') I get the following\n> > result\n\nI think for Oracle the equivalent is:\nselect yt1_name,\n yt1_descr, \n yt2_name, \n yt2_descr \n from yuva_test1, \n yuva_test2 \n where yt2_id (+)= yt1_id\n and yt2_name (+)= '2-name2'\n\n",
"msg_date": "Wed, 31 Jul 2002 09:28:02 +0200",
"msg_from": "Mario Weilguni <mweilguni@sime.com>",
"msg_from_op": false,
"msg_subject": "Re: Outer join differences"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Tuesday, July 30, 2002 9:50 PM\n> To: Bruce Momjian\n> Cc: PostgreSQL-development\n> Subject: Re: [HACKERS] Open 7.3 items \n[snip]\n\n> > Win32 - timeframe?\n\nI may be able to contribute the Win32 stuff we have done here. (Not\nsure of it, but they do seem more open to the idea now). It's only for\n7.1.3, and so I am not sure how helpful it would be. There is also a\nbunch of stuff that looks like this in the code:\n\n#ifdef ICKY_WIN32_KLUDGE\n/* Bletcherous hack to make it work in Win32 goes here... */\n#else\n/* Normal code goes here... */\n#endif\n\nLet me know if you are interested.\n",
"msg_date": "Tue, 30 Jul 2002 21:56:28 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "\nIf you can contribute it, I think it would be valuable to the two other\nWin32 projects that are working on porting the 7.3 code to Win32.\n\nI don't think they will have any code ready for 7.3 but they may have a\nfew pieces they want to get in to make their 7.3 patching job easier,\nlike renaming macros or variables or something.\n\n\n---------------------------------------------------------------------------\n\nDann Corbit wrote:\n> > -----Original Message-----\n> > From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> > Sent: Tuesday, July 30, 2002 9:50 PM\n> > To: Bruce Momjian\n> > Cc: PostgreSQL-development\n> > Subject: Re: [HACKERS] Open 7.3 items \n> [snip]\n> \n> > > Win32 - timeframe?\n> \n> I may be able to contribute the Win32 stuff we have done here. (Not\n> sure of it, but they do seem more open to the idea now). It's only for\n> 7.1.3, and so I am not sure how helpful it would be. There is also a\n> bunch of stuff that looks like this in the code:\n> \n> #ifdef ICKY_WIN32_KLUDGE\n> /* Bletcherous hack to make it work in Win32 goes here... */\n> #else\n> /* Normal code goes here... */\n> #endif\n> \n> Let me know if you are interested.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 16:47:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
}
] |
[
{
"msg_contents": "wow=# update \\d dmoz\n Table \"dmoz\"\n Column | Type | Modifiers\n--------+---------+-----------\n id | integer |\n name | text |\n path | ltree |\nIndexes: dmoz_id_idx unique btree (id),\n dmoz_path_idx gist (\"path\")\n\nwow-#\nwow-# ;\nERROR: parser: parse error at or near \"\"\n\nIs it normal behaviour? It seems to me that it isn't..\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n",
"msg_date": "Wed, 31 Jul 2002 16:54:27 +0400",
"msg_from": "Teodor Sigaev <teodor@stack.net>",
"msg_from_op": true,
"msg_subject": "Query parser?"
},
{
"msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n> wow=# update \\d dmoz\n> Table \"dmoz\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> id | integer |\n> name | text |\n> path | ltree |\n> Indexes: dmoz_id_idx unique btree (id),\n> dmoz_path_idx gist (\"path\")\n\n> wow-#\n> wow-# ;\n> ERROR: parser: parse error at or near \"\"\n\n> Is it normal behaviour? Its seems to me that isn't..\n\nThis is the same as\n\t\\d dmoz\n\tupdate ;\n\nThe behavior seems reasonable to me. If psql's backslash commands\nflushed the query input buffer, we couldn't have any commands for\nquery-buffer editing.\n\nOne thing that does seem a little odd is:\n\nregression=# update;\nERROR: parser: parse error at or near \";\"\nregression=# update\nregression-# ;\nERROR: parser: parse error at or near \"\"\n\nInvestigation shows that psql includes the ';' in what it sends to\nthe backend in the first case, but not in the second. I'm not sure that\nthat rises to the level of a bug, but it seems odd.\n\nIt'd probably also be nice if the error message said\nERROR: parser: parse error at or near end of input\nrather than quoting a useless empty token. I will see if I can make\nthat happen.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Jul 2002 10:31:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Query parser? "
}
] |
[
{
"msg_contents": "\n> > Seems more accurate, but actually you may also have two or more\n> > conditional rules that cover all possibilities if taken together.\n> > Maybe\n> > ERROR: Cannot insert into a view\n> > You need an ON INSERT DO INSTEAD rule that matches your INSERT\n> > Which covers both cases.\n> \n> Actually not: the system insists that you provide an unconditional\n> DO INSTEAD rule. The other would require trying to prove (during\n> rule expansion) a theorem that the conditions of the available\n> conditional rules cover all possible cases.\n> \n> Alternatively we could move the test for insertion-into-a-view out of\n> the rewriter and into a low level of the executor, producing an error\n> message only if some inserted tuple actually gets past the rule\n> conditions. I don't much care for that answer because (a) it turns a\n> once-per-query overhead check into once-per-tuple overhead, and\n\nSince I see a huge benefit in allowing conditional rules for a view,\nI think it is worth finding a solution.\n\nThe current rewriter test could still catch the case where no instead rule \nexists at all.\n\nThe utility is \"Table Partitioning by expression\".\n\nBasically you have a union view like:\ncreate view history as\nselect * from history2000 where yearcol=2000\nunion all\nselect * from history2001 where yearcol=2001\n\nYou get the idea. \nNow you need conditional insert and update rules to act on the\ncorrect table.\n\nMaybe we would also need additional intelligence in the planner \nto eliminate the history2000 table in a select * from history where \nyearcol=2001.\n\nBut that is all you need for a really useful feature for large databases.\n\n> (b) if you fail to span the full space of possibilities in your rule\n> conditions, you might not find out about it until your application goes\n> belly-up in production. 
There's some version of Murphy's Law that says\n> rare conditions arise with very low probability during testing, and very\n> high probability as soon as you go live...\n\nThis is true for other db's table partitioning capabilities as well, and they\nstill implement the feature.\n\nAndreas\n",
"msg_date": "Wed, 31 Jul 2002 16:13:28 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Rules and Views "
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Since I see a huge benefit in allowing conditional rules for a view,\n> I think it is worth finding a solution.\n\nWe do allow conditional rules for a view. You just have to write an\nunconditional one too (which can be merely DO INSTEAD NOTHING).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Jul 2002 10:37:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views "
},
{
"msg_contents": "On Wed, 31 Jul 2002, Zeugswetter Andreas SB SD wrote:\n\n> The utility is \"Table Partitioning by expression\".\n>\n> Basically you have a union view like:\n> create view history as\n> select * from history2000 where yearcol=2000\n> union all\n> select * from history2001 where yearcol=2001\n\nYou want to be careful with this sort of stuff, since the query planner\nsometimes won't do the view as efficiently as it would do the fully\nspecified equivalent query. I've posted about this here before.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Thu, 1 Aug 2002 10:27:51 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views "
},
{
"msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> You want to be careful with this sort of stuff, since the query planner\n> sometimes won't do the view as efficiently as it would do the fully\n> specified equivalant query. I've posted about this here before.\n\nPlease provide an example. AFAIK a view is a query macro, and nothing\nelse.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Aug 2002 00:00:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views "
},
{
"msg_contents": "On Thu, 1 Aug 2002, Tom Lane wrote:\n\n> Curt Sampson <cjs@cynic.net> writes:\n> > You want to be careful with this sort of stuff, since the query planner\n> > sometimes won't do the view as efficiently as it would do the fully\n> > specified equivalant query. I've posted about this here before.\n>\n> Please provide an example. AFAIK a view is a query macro, and nothing\n> else.\n\nI already did provide an example, and you even replied to it. :-)\nSee the appended message.\n\nBTW, this page\n\n http://archives.postgresql.org/pgsql-general/2002-06/threads.php\n\ndoes not display in Navigator 4.78. Otherwise I would have provided a\nreference to the thread in the archive.\n\nMaybe we need a web based form for reporting problem pages in the archives.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Thu, 1 Aug 2002 13:17:07 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views "
},
{
"msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> On Thu, 1 Aug 2002, Tom Lane wrote:\n>> Curt Sampson <cjs@cynic.net> writes:\n> You want to be careful with this sort of stuff, since the query planner\n> sometimes won't do the view as efficiently as it would do the fully\n> specified equivalant query. I've posted about this here before.\n>> \n>> Please provide an example. AFAIK a view is a query macro, and nothing\n>> else.\n\n> I already did provide an example, and you even replied to it. :-)\n\nBut that isn't an \"equivalent query\". You've manually transformed\n SELECT * FROM (SELECT something UNION SELECT somethingelse) WHERE foo;\ninto\n (SELECT something WHERE foo) UNION (SELECT somethingelse WHERE foo);\nAs has been pointed out repeatedly, it's not entirely obvious whether\nthis is a valid transformation in the general case. (The knee-jerk\nreaction that it's obviously right should be held in check, since SQL's\nthree-valued notion of boolean logic tends to trip up the intuition.)\nIf you can provide a proof that it's always safe, or that it's safe\nunder such-and-such conditions, I'll see what I can do about making it\nhappen.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Aug 2002 00:44:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views "
},
{
"msg_contents": "On Thu, 1 Aug 2002, Tom Lane wrote:\n\n> But that isn't an \"equivalent query\". You've manually transformed\n> SELECT * FROM (SELECT something UNION SELECT somethingelse) WHERE foo;\n> into\n> (SELECT something WHERE foo) UNION (SELECT somethingelse WHERE foo);\n\nRight.\n\n> As has been pointed out repeatedly, it's not entirely obvious whether\n> this is a valid transformation in the general case.\n\nRight. And I agreed that it as soon as you first pointed it out.\nAnd still do.\n\nBut the message I was replying to was a similar union query, and I was\nthinking that that person might be having a similar initial intuitive\nreaction, \"well, it looks kinda the same.\" I just wanted to note that\nyou need to check this stuff with explain, rather than blindly assuming\nyou know what's going on.\n\n> If you can provide a proof that it's always safe, or that it's safe\n> under such-and-such conditions, I'll see what I can do about making it\n> happen.\n\nIt's on my list of things to do, but not high enough that it's\nlikely I'll ever get to it. :-)\n\nBTW, if anybody can think of a way to make a view that really does\nrepresent my original query, I'd appreciate a hint.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Thu, 1 Aug 2002 13:56:55 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views "
}
] |
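Aside: the rewrite discussed in the thread above, pushing a WHERE qualification down into the arms of a UNION, can be illustrated with a toy case. This is a sketch in Python's sqlite3, not the PostgreSQL planner, using a simple non-NULL integer predicate for which the transformation happens to be safe; whether it is safe in general is exactly the question the thread leaves open.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a(x INTEGER);
    CREATE TABLE b(x INTEGER);
    INSERT INTO a VALUES (1), (2), (3);
    INSERT INTO b VALUES (3), (4), (5);
""")

# View-style form: qualify the UNION as a whole.
outer = con.execute(
    "SELECT x FROM (SELECT x FROM a UNION SELECT x FROM b) WHERE x > 2 ORDER BY x"
).fetchall()

# Manually transformed form: push the qualification into each arm.
pushed = con.execute(
    "SELECT x FROM a WHERE x > 2 UNION SELECT x FROM b WHERE x > 2 ORDER BY x"
).fetchall()

print(outer)   # [(3,), (4,), (5,)]
print(pushed)  # [(3,), (4,), (5,)]
```

The two forms agree here, but as the thread stresses, agreement on one example is not a proof; NULLs and SQL's three-valued logic are where the intuition can fail.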
[
{
"msg_contents": "\n> > Since I see a huge benefit in allowing conditional rules for a view,\n> > I think it is worth finding a solution.\n> \n> We do allow conditional rules for a view. You just have to write an\n> unconditional one too (which can be merely DO INSTEAD NOTHING).\n\nHmm, but you cannot then trow an error, but that is prbbly a minor issue.\nGood that we can do Table Partitioning :-)\n\nAndreas\n",
"msg_date": "Wed, 31 Jul 2002 18:16:05 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Rules and Views "
}
] |
[
{
"msg_contents": "> > Schema handling - ready? interfaces? client apps?\n> \n> The backend will be ready (it's not quite yet). pg_dump is ready.\n> psql is very definitely not ready, nor is pgaccess. I don't know the\n> status for JDBC or ODBC; any comments? The other interface libraries\n> probably don't care.\n> \n> > Dependency - pg_dump auto-create dependencies for 7.2.X data?\n> \n> Huh?\n\nThere's still a problem with restoring blobs in a certain circumstance-- the\nattached script (run as the pg superuser) shows that there's either an\ninconsistency or a misunderstanding (on my part), resulting in a failed\npg_restore. It seems that pg_restore isn't necessarily reconnecting as\nsuperuser after restoring a user owned table and before trying to restore\npg_largeobject.\n\nThis came to light specifically because I was trying to restore from a 7.2.1\ndump file into the 7.3dev server, but this script is using all 7.3dev tools,\nrefreshed from cvs this morning.\n\n-ron",
"msg_date": "Wed, 31 Jul 2002 09:55:16 -0700",
"msg_from": "Ron Snyder <snyder@roguewave.com>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items "
}
] |
[
{
"msg_contents": "Yuva,\n\nThe results make sense to me. The left outer join functionality in Postgres\nis explained as follows:\n\nLEFT OUTER JOIN returns all rows in the qualified Cartesian product (i.e.,\nall combined rows that pass its ON condition), plus one copy of each row in\nthe left-hand table for which there was no right-hand row that passed the ON\ncondition. This left-hand row is extended to the full width of the joined\ntable by inserting NULLs for the right-hand columns. Note that only the\nJOIN's own ON or USING condition is considered while deciding which rows\nhave matches. Outer ON or WHERE conditions are applied afterwards. \nSo, in your postgres statement, you are retrieving all rows from yuva_test1,\nand the one row from yuva_test2 that satisfied the \"where\" criteria that\nyt2_name = '2-name2'.\nIn Oracle, though, since your outer join is on yuva_test2, you would need\nto specify an outer join on the criterion \"yt2_name = '2-name2''\" by saying\n\"yt2_name (+) = '2-name2''\" to limit the resultset.\nHope this helps\nJill\n\n> -----Original Message-----\n> From: \tYuva Chandolu \n> Sent:\tTuesday, July 30, 2002 8:53 PM\n> To:\t'pgsql-hackers@postgresql.org'\n> Subject:\tOuter join differences\n> \n> Hi,\n> \n> I see different results in Oracle and postgres for same outer join\n> queries. 
Here are the details.\n> \n> I have the following tables in our pg db\n> \n> table: yuva_test1\n> yt1_id\t\tyt1_name\tyt1_descr\n> 1\t\t1-name1\t1-desc1\n> 2\t\t1-name2\t1-desc2\n> 3\t\t1-name3\t1-desc3\n> 4\t\t1-name4\t1-desc4\n> 5\t\t1-name5\t1-desc5\n> 6\t\t1-name6\t1-desc6\n> \n> table: yuva_test2\n> yt2_id\t\tyt2_name\tyt2_descr\n> 2\t\t2-name2\t2-desc2\n> 3\t\t2-name3\t2-desc3\n> 4\t\t2-name4\t2-desc4\n> \n> When I run the query \"select yt1_name, yt1_descr, yt2_name, yt2_descr from\n> yuva_test1 left outer join yuva_test2 on yt1_id=yt2_id and yt2_name =\n> '2-name2'\" on postgres database I get the following results\n> \n> yt1_name\tyt1_descr\tyt2_name\tyt2_descr\n> 1-name1\t1-descr1\n> 1-name2\t1-descr2\t2-name2\t2-descr2\n> 1-name3\t1-descr3\n> 1-name4\t1-descr4\n> 1-name5\t1-descr5\n> 1-name6\t1-descr6\n> \n> But when I tried the same on Oracle(8.1.7) (the query is \"select yt1_name,\n> yt1_descr, yt2_name, yt2_descr from yuva_test1, yuva_test2 where\n> yt1_id=yt2_id(+) and yt2_name = '2-name2'') I get the following results\n> \n> yt1_name\tyt1_descr\tyt2_name\tyt2_descr\n> 1-name2\t1-descr2\t2-name2\t2-descr2\n> \n> Why postgres is giving? which is standard? is it a bug? or is it the way\n> postgres is implemented? Could some one help me?\n> \n> Note: at the end of my mail is script to create tables and data in\n> postgres.\n> \n> Thanks\n> Yuva\n> Sr. 
Java Developer\n> www.ebates.com\n> \n> ============================================================\n> Scripts:\n> CREATE TABLE \"yuva_test1\" (\n> \"yt1_id\" numeric(16, 0), \n> \"yt1_name\" varchar(16) NOT NULL, \n> \"yt1_descr\" varchar(32)\n> ) WITH OIDS;\n> \n> CREATE TABLE \"yuva_test2\" (\n> \"yt2_id\" numeric(16, 0), \n> \"yt2_name\" varchar(16) NOT NULL, \n> \"yt2_descr\" varchar(32)\n> ) WITH OIDS;\n> \n> insert into yuva_test1 (yt1_id, yt1_name, yt1_descr) values (1, '1-name1',\n> '1-descr1');\n> insert into yuva_test1 (yt1_id, yt1_name, yt1_descr) values (2, '1-name2',\n> '1-descr2');\n> insert into yuva_test1 (yt1_id, yt1_name, yt1_descr) values (3, '1-name3',\n> '1-descr3');\n> insert into yuva_test1 (yt1_id, yt1_name, yt1_descr) values (4, '1-name4',\n> '1-descr4');\n> insert into yuva_test1 (yt1_id, yt1_name, yt1_descr) values (5, '1-name5',\n> '1-descr5');\n> insert into yuva_test1 (yt1_id, yt1_name, yt1_descr) values (6, '1-name6',\n> '1-descr6');\n> \n> insert into yuva_test2 (yt2_id, yt2_name, yt2_descr) values (2, '2-name2',\n> '2-descr2');\n> insert into yuva_test2 (yt2_id, yt2_name, yt2_descr) values (3, '2-name3',\n> '2-descr3');\n> insert into yuva_test2 (yt2_id, yt2_name, yt2_descr) values (4, '2-name4',\n> '2-descr4');\n> ============================================================\n",
"msg_date": "Wed, 31 Jul 2002 10:39:44 -0700",
"msg_from": "Jill Rabinowitz <jrabinowitz@ebates.com>",
"msg_from_op": true,
"msg_subject": "Re: Outer join differences"
}
] |
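Aside: the difference Jill describes above, a restriction placed in the JOIN's ON clause versus one applied in WHERE after the join, can be reproduced in miniature. This is a sketch in Python's sqlite3 with a cut-down version of the thread's tables (three rows instead of six, descr columns dropped); the table and column names follow the original post.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE yuva_test1(yt1_id INTEGER, yt1_name TEXT);
    CREATE TABLE yuva_test2(yt2_id INTEGER, yt2_name TEXT);
    INSERT INTO yuva_test1 VALUES (1, '1-name1'), (2, '1-name2'), (3, '1-name3');
    INSERT INTO yuva_test2 VALUES (2, '2-name2'), (3, '2-name3');
""")

# Condition inside ON: it only decides which right-hand rows can match,
# so every left-hand row survives (unmatched ones are NULL-extended).
on_clause = con.execute("""
    SELECT yt1_name, yt2_name
    FROM yuva_test1 LEFT OUTER JOIN yuva_test2
        ON yt1_id = yt2_id AND yt2_name = '2-name2'
    ORDER BY yt1_name
""").fetchall()

# Condition in WHERE: applied after the join, so the NULL-extended rows
# fail the test and disappear -- the behavior the thread saw from Oracle.
where_clause = con.execute("""
    SELECT yt1_name, yt2_name
    FROM yuva_test1 LEFT OUTER JOIN yuva_test2
        ON yt1_id = yt2_id
    WHERE yt2_name = '2-name2'
""").fetchall()

print(on_clause)     # [('1-name1', None), ('1-name2', '2-name2'), ('1-name3', None)]
print(where_clause)  # [('1-name2', '2-name2')]
```

The ON-clause form keeps every left-hand row; the WHERE-clause form filters the NULL-extended rows out afterwards, matching the one-row Oracle `(+)` result in the thread.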
[
{
"msg_contents": "Bruce,\n\nplease find attached patch to current CVS ( contrib/ltree )\n\nChanges:\n\nJuly 31, 2002\n Now works on 64-bit platforms.\n Added function lca - lowest common ancestor\n Version for 7.2 is distributed as separate package -\n http://www.sai.msu.su/~megera/postgres/gist/ltree/ltree-7.2.tar.gz\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83",
"msg_date": "Wed, 31 Jul 2002 20:47:48 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Please, apply ltree patch"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nOleg Bartunov wrote:\n> Bruce,\n> \n> please find attached patch to current CVS ( contrib/ltree )\n> \n> Changes:\n> \n> July 31, 2002\n> Now works on 64-bit platforms.\n> Added function lca - lowest common ancestor\n> Version for 7.2 is distributed as separate package -\n> http://www.sai.msu.su/~megera/postgres/gist/ltree/ltree-7.2.tar.gz\n> \n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 17:55:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Please, apply ltree patch"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n\nOleg Bartunov wrote:\n> Bruce,\n> \n> please find attached patch to current CVS ( contrib/ltree )\n> \n> Changes:\n> \n> July 31, 2002\n> Now works on 64-bit platforms.\n> Added function lca - lowest common ancestor\n> Version for 7.2 is distributed as separate package -\n> http://www.sai.msu.su/~megera/postgres/gist/ltree/ltree-7.2.tar.gz\n> \n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 4 Aug 2002 01:02:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Please, apply ltree patch"
}
] |
[
{
"msg_contents": "> As for 7.3, maybe we can get that done in time of everyone \n> likes it. If\n> we can't, what do we do? Do we re-add the secondary password \n> file stuff\n> that most people don't like? My big question is how many other\n> PostgreSQL users figured out they could use the secondary \n> password file\n> for username/db restrictions? I never thought of it myself. Maybe I\n> should ask on general.\n\nUnless I'm misunderstanding you, we use it and like it. We have several\nservers on one machine that all access the same password file (we have it\nsoftlinked). If we need to create a user that accesses only one cluster,\nthen they get added to the file and created in the specific cluster. If\nthat user then needs access to a different cluster, they just need to be\nadded to the new cluster.\n\nThe reason this is beneficial for us is because we then have the ability to\nhave postgres only user accounts, as well as accounts from YP. When the YP\nuser changes their unix password in YP, their postgres db account password\nchanges as well (via cronjob).\n\nThere are fewer passwords for them to manage in this way, but we still get\nthe benefit of greater separation between clusters.\n\nLet me know if you want more information about how we use it (or if I\nmisunderstood). What is it that people _don't_ like?\n\n-ron\n\n\n\n",
"msg_date": "Wed, 31 Jul 2002 14:29:32 -0700",
"msg_from": "Ron Snyder <snyder@roguewave.com>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Ron Snyder wrote:\n> > As for 7.3, maybe we can get that done in time of everyone \n> > likes it. If\n> > we can't, what do we do? Do we re-add the secondary password \n> > file stuff\n> > that most people don't like? My big question is how many other\n> > PostgreSQL users figured out they could use the secondary \n> > password file\n> > for username/db restrictions? I never thought of it myself. Maybe I\n> > should ask on general.\n> \n> Unless I'm misunderstanding you, we use it and like it. We have several\n> servers on one machine that all access the same password file (we have it\n> softlinked). If we need to create a user that accesses only one cluster,\n> then they get added to the file and created in the specific cluster. If\n> that user then needs access to a different cluster, they just need to be\n> added to the new cluster.\n> \n> The reason this is beneficial for us is because we then have the ability to\n> have postgres only user accounts, as well as accounts from YP. When the YP\n> user changes their unix password in YP, their postgres db account password\n> changes as well (via cronjob).\n> \n> There are fewer passwords for them to manage in this way, but we still get\n> the benefit of greater separation between clusters.\n> \n> Let me know if you want more information about how we use it (or if I\n> misunderstood). What is it that people _don't_ like?\n\nOK, how do secondary passwords work in pg_hba.conf. It requires\nclear-text 'password', right, because the password is already crypt-ed\nin the file.\n\nHere you are using it for something different, where one file is used\nfor multiple clusters. Interesting.\n\nThe current code allows you to point to a file for a list of users,\nwhich could be symlinked, so that is handled. The only part not handled\nis the password part.\n\nOne idea I had was to look for a colon in the username, and if I see\none, I assume everything after the colon is a password. 
Would that work\nfor you?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 17:40:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, 31 Jul 2002, Bruce Momjian wrote:\n\n> One idea I had was to look for a colon in the username, and if I see\n> one, I assume everything after the colon is a password. Would that work\n> for you?\n\nThat would definitely work ... but I *really* like your GUC idea ... it\nwould allow ppl to change passwords using simple SQL statements remotely,\nwhich the \"old\" password stuff didn't allow for ...\n\n\n",
"msg_date": "Wed, 31 Jul 2002 22:48:56 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Thu, 2002-08-01 at 06:48, Marc G. Fournier wrote:\n> On Wed, 31 Jul 2002, Bruce Momjian wrote:\n> \n> > One idea I had was to look for a colon in the username, and if I see\n> > one, I assume everything after the colon is a password. Would that work\n> > for you?\n> \n> That would definitely work ... but I *really* like your GUC idea ... it\n> would allow ppl to change passwords using simple SQL statements remotely,\n> which the \"old\" password stuff didn't allow for ...\n\nI think that the users domain should be kept separate from username if\nat all possible. This is how all modern authentication systems work.\n\n-------------\nHannu\n\n",
"msg_date": "01 Aug 2002 10:35:13 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
}
] |
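Aside: the "colon in the username" idea floated in the thread above can be sketched in a few lines. This is purely illustrative Python; the function name, entry format, and sample data are invented, and the stored password would be whatever encrypted form the pg_hba.conf auth method expects.

```python
# Hypothetical sketch: split each secondary-file entry into a username
# and an optional stored password, with the colon as the separator.

def parse_user_entry(line: str):
    """Return (username, password) for 'user' or 'user:storedpassword'."""
    user, sep, password = line.strip().partition(":")
    # No colon at all means no password field, which is distinct from an
    # empty password after a trailing colon.
    return (user, password if sep else None)

entries = ["alice", "bob:A6xnQhbz4Vx2", "carol:"]
parsed = [parse_user_entry(e) for e in entries]
print(parsed)  # [('alice', None), ('bob', 'A6xnQhbz4Vx2'), ('carol', '')]
```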
[
{
"msg_contents": "> OK, how do secondary passwords work in pg_hba.conf. It requires\n> clear-text 'password', right, because the password is already crypt-ed\n> in the file.\n\nI presume that you're referring to passwords being transmitted clear text? \n\n> One idea I had was to look for a colon in the username, and if I see\n> one, I assume everything after the colon is a password. \n> Would that work\n> for you?\n\nIt would as long as there was an assumption (or method to specify) that the\nstuff after the colon is a crypt()ed password. Our method to generate the\npassword file is to 'ypcat passwd > /db/etc/password; cat\n/db/etc/pg-only-passwords >> /db/etc/password'. We could very easily only\npull only the fields we care about from our yp passwd file.\n\nI suppose I should also mention that we're not wedded to this method-- we've\njust found it convenient. If we needed to script something else up to\nconnect to the databases and set passwords, we could do that too, it would\njust be a bit more work.\n\n-ron\n\n",
"msg_date": "Wed, 31 Jul 2002 15:11:08 -0700",
"msg_from": "Ron Snyder <snyder@roguewave.com>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Ron Snyder wrote:\n> > OK, how do secondary passwords work in pg_hba.conf. It requires\n> > clear-text 'password', right, because the password is already crypt-ed\n> > in the file.\n> \n> I presume that you're referring to passwords being transmitted clear text? \n\nYes, is that your pg_hba.conf line? 'password' is insecure over\nnetworks you don't trust.\n\n> > One idea I had was to look for a colon in the username, and if I see\n> > one, I assume everything after the colon is a password. \n> > Would that work\n> > for you?\n> \n> It would as long as there was an assumption (or method to specify) that the\n> stuff after the colon is a crypt()ed password. Our method to generate the\n\nIt would be whatever password is specified on the pg_hba.conf line,\n'password', 'crypt', or 'md5'.\n\n> password file is to 'ypcat passwd > /db/etc/password; cat\n> /db/etc/pg-only-passwords >> /db/etc/password'. We could very easily only\n> pull only the fields we care about from our yp passwd file.\n> \n> I suppose I should also mention that we're not wedded to this method-- we've\n> just found it convenient. If we needed to script something else up to\n> connect to the databases and set passwords, we could do that too, it would\n> just be a bit more work.\n\nOK.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 18:46:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
}
] |
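Aside: of the pg_hba.conf methods mentioned above ('password', 'crypt', 'md5'), 'md5' is the one that interacts badly with an externally generated password file, because, as a later message in this archive notes, PostgreSQL uses the username as the salt: the stored value is 'md5' followed by the hex MD5 of the password concatenated with the username. A minimal Python sketch of that derivation; the helper name is made up, and MD5 appears for historical accuracy, not as a recommendation.

```python
import hashlib

def pg_md5_password(password: str, username: str) -> str:
    """Hypothetical helper: PostgreSQL-style stored md5 password,
    with the username serving as the salt."""
    return "md5" + hashlib.md5((password + username).encode()).hexdigest()

stored = pg_md5_password("secret", "snyder")
print(stored)  # 'md5' followed by 32 hex digits
```

Because the salt is the username rather than being stored alongside the hash (as crypt does), a flat file of crypt()ed entries cannot simply be rehashed into md5 form.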
[
{
"msg_contents": "Hi,\n\nWe had seen the following exception when we tried for a heavy query(around\n10000 to 20000 in result is possible)\n\nAn I/O error occured while reading from backend - Exception:\njava.net.SocketException: socket closed: Bad file number\nStack Trace:\n\njava.net.SocketException: socket closed: Bad file number\n at java.net.SocketInputStream.socketRead(Native Method)\n at java.net.SocketInputStream.read(SocketInputStream.java:90)\n at java.io.BufferedInputStream.fill(BufferedInputStream.java:186)\n at java.io.BufferedInputStream.read(BufferedInputStream.java:204)\n at org.postgresql.PG_Stream.ReceiveChar(PG_Stream.java:141)\n at org.postgresql.core.QueryExecutor.execute(QueryExecutor.java:68)\n at org.postgresql.Connection.ExecSQL(Connection.java:398)\n at org.postgresql.jdbc2.Statement.execute(Statement.java:130)\n at org.postgresql.jdbc2.Statement.executeQuery(Statement.java:54)\n at\norg.postgresql.jdbc2.PreparedStatement.executeQuery(PreparedStatement\n.java:99)\n at\ncom.ebates.payments.PaymentsDatabaseHandler.getPendingUserRebates(Pay\nmentsDatabaseHandler.java:751)\n\nAny idea what level it is happenning? database? or the jdbc driver? We do\nNOT have this problem when we were on Oracle.\n\nThanks\nYuva\n\n",
"msg_date": "Wed, 31 Jul 2002 15:47:28 -0700",
"msg_from": "Yuva Chandolu <ychandolu@ebates.com>",
"msg_from_op": true,
"msg_subject": "IO error - please help"
}
] |
[
{
"msg_contents": "> \n> Yes, is that your pg_hba.conf line? 'password' is insecure over\n> networks you don't trust.\n\nYes, we're using 'password password' in our pg_hba.conf file. I trust my\nnetwork (so far).\n\n-ron\n\n",
"msg_date": "Wed, 31 Jul 2002 16:06:42 -0700",
"msg_from": "Ron Snyder <snyder@roguewave.com>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Ron Snyder wrote:\n> > \n> > Yes, is that your pg_hba.conf line? 'password' is insecure over\n> > networks you don't trust.\n> \n> Yes, we're using 'password password' in our pg_hba.conf file. I trust my\n> network (so far).\n\nThat is another major limitation to secondary password files. In fact,\nmd5 will not even work because we assume the username is used as the\nsalt for the md5 encryption. We don't store the salt as part of the\nencrypted password like crypt does. \n\nThis was another reason secondary password files were discouraged.\n\nLet me look at adding the colon password capability and see what it\nlooks like.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 21:05:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, 31 Jul 2002, Bruce Momjian wrote:\n\n> Ron Snyder wrote:\n> > >\n> > > Yes, is that your pg_hba.conf line? 'password' is insecure over\n> > > networks you don't trust.\n> >\n> > Yes, we're using 'password password' in our pg_hba.conf file. I trust my\n> > network (so far).\n>\n> That is another major limitation to secondary password files. In fact,\n> md5 will not even work because we assume the username is used as the\n> salt for the md5 encryption. We don't store the salt as part of the\n> encrypted password like crypt does.\n>\n> This was another reason secondary password files were discouraged.\n\ndiscouraged?? where? :)\n\n\n",
"msg_date": "Wed, 31 Jul 2002 22:50:34 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> On Wed, 31 Jul 2002, Bruce Momjian wrote:\n> \n> > Ron Snyder wrote:\n> > > >\n> > > > Yes, is that your pg_hba.conf line? 'password' is insecure over\n> > > > networks you don't trust.\n> > >\n> > > Yes, we're using 'password password' in our pg_hba.conf file. I trust my\n> > > network (so far).\n> >\n> > That is another major limitation to secondary password files. In fact,\n> > md5 will not even work because we assume the username is used as the\n> > salt for the md5 encryption. We don't store the salt as part of the\n> > encrypted password like crypt does.\n> >\n> > This was another reason secondary password files were discouraged.\n> \n> discouraged?? where? :)\n\nWell. I meant that they had very limited usefulness. You had to trust\nyour network.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 22:37:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, 31 Jul 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> > On Wed, 31 Jul 2002, Bruce Momjian wrote:\n> >\n> > > Ron Snyder wrote:\n> > > > >\n> > > > > Yes, is that your pg_hba.conf line? 'password' is insecure over\n> > > > > networks you don't trust.\n> > > >\n> > > > Yes, we're using 'password password' in our pg_hba.conf file. I trust my\n> > > > network (so far).\n> > >\n> > > That is another major limitation to secondary password files. In fact,\n> > > md5 will not even work because we assume the username is used as the\n> > > salt for the md5 encryption. We don't store the salt as part of the\n> > > encrypted password like crypt does.\n> > >\n> > > This was another reason secondary password files were discouraged.\n> >\n> > discouraged?? where? :)\n>\n> Well. I meant that they had very limited usefulness. You had to trust\n> your network.\n\nthat is the case for alot of software, and alot of networks nowadays are\nmoving towards encrypted at the switch level, so the local network itself\nis considered to be 'secure' ...\n\nBut, personally, you sooooooo sold me on that GUC thing that if we could\nimplement that in time for v7.3, I think alot of ppl would find that\n*quite* valuable ...\n\n\n",
"msg_date": "Wed, 31 Jul 2002 23:44:33 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> On Wed, 31 Jul 2002, Bruce Momjian wrote:\n> \n> > Marc G. Fournier wrote:\n> > > On Wed, 31 Jul 2002, Bruce Momjian wrote:\n> > >\n> > > > Ron Snyder wrote:\n> > > > > >\n> > > > > > Yes, is that your pg_hba.conf line? 'password' is insecure over\n> > > > > > networks you don't trust.\n> > > > >\n> > > > > Yes, we're using 'password password' in our pg_hba.conf file. I trust my\n> > > > > network (so far).\n> > > >\n> > > > That is another major limitation to secondary password files. In fact,\n> > > > md5 will not even work because we assume the username is used as the\n> > > > salt for the md5 encryption. We don't store the salt as part of the\n> > > > encrypted password like crypt does.\n> > > >\n> > > > This was another reason secondary password files were discouraged.\n> > >\n> > > discouraged?? where? :)\n> >\n> > Well. I meant that they had very limited usefulness. You had to trust\n> > your network.\n> \n> that is the case for alot of software, and alot of networks nowadays are\n> moving towards encrypted at the switch level, so the local network itself\n> is considered to be 'secure' ...\n> \n> But, personally, you sooooooo sold me on that GUC thing that if we could\n> implement that in time for v7.3, I think alot of ppl would find that\n> *quite* valuable ...\n> \n\nI am working on it now. I decided against doing any kind of database\nprepending at the user level. You create the user as 'dbname.username'.\nThat is clearer, rather than prepending based on the db you are\nconnected to. The only code change is in the postmaster authentication\nlookup and ownership setting from the backend connection.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 22:48:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, 31 Jul 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> > On Wed, 31 Jul 2002, Bruce Momjian wrote:\n> >\n> > > Marc G. Fournier wrote:\n> > > > On Wed, 31 Jul 2002, Bruce Momjian wrote:\n> > > >\n> > > > > Ron Snyder wrote:\n> > > > > > >\n> > > > > > > Yes, is that your pg_hba.conf line? 'password' is insecure over\n> > > > > > > networks you don't trust.\n> > > > > >\n> > > > > > Yes, we're using 'password password' in our pg_hba.conf file. I trust my\n> > > > > > network (so far).\n> > > > >\n> > > > > That is another major limitation to secondary password files. In fact,\n> > > > > md5 will not even work because we assume the username is used as the\n> > > > > salt for the md5 encryption. We don't store the salt as part of the\n> > > > > encrypted password like crypt does.\n> > > > >\n> > > > > This was another reason secondary password files were discouraged.\n> > > >\n> > > > discouraged?? where? :)\n> > >\n> > > Well. I meant that they had very limited usefulness. You had to trust\n> > > your network.\n> >\n> > that is the case for alot of software, and alot of networks nowadays are\n> > moving towards encrypted at the switch level, so the local network itself\n> > is considered to be 'secure' ...\n> >\n> > But, personally, you sooooooo sold me on that GUC thing that if we could\n> > implement that in time for v7.3, I think alot of ppl would find that\n> > *quite* valuable ...\n> >\n>\n> I am working on it now. I decided against doing any kind of database\n> prepending at the user level. You create the user as 'dbname.username'.\n> That is clearer, rather than prepending based on the db you are\n> connected to. The only code change is in the postmaster authentication\n> lookup and ownership setting from the backend connection.\n\nOkay, just a couple of questions ... if there any way of provide\n'superuse' access a user of the database for creating new users? 
Say one\ncreates a dbname.pgsql account, could it be given 'create user' privileges\nfor other users with a prefix of dbname.*?\n\nand, what happens if one doesn't specify dbname.*? does that user become\n'global', or have access to nothing?\n\n",
"msg_date": "Thu, 1 Aug 2002 00:07:55 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> > I am working on it now. I decided against doing any kind of database\n> > prepending at the user level. You create the user as 'dbname.username'.\n> > That is clearer, rather than prepending based on the db you are\n> > connected to. The only code change is in the postmaster authentication\n> > lookup and ownership setting from the backend connection.\n> \n> Okay, just a couple of questions ... if there any way of provide\n> 'superuse' access a user of the database for creating new users? Say one\n> creates a dbname.pgsql account, could it be given 'create user' privileges\n> for other users with a prefix of dbname.*?\n\nUh, that will be tough.\n\nSuper-user account will not be qualified by dbname for simplicity. \n\n> and, what happens if one doesn't specify dbname.*? does that user become\n> 'global', or have access to nothing?\n\nAccess to nothing. I could actually try to quality by dbname.username,\nthen fall back to just username, but that seems insecure.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 23:17:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, 31 Jul 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> > > I am working on it now. I decided against doing any kind of database\n> > > prepending at the user level. You create the user as 'dbname.username'.\n> > > That is clearer, rather than prepending based on the db you are\n> > > connected to. The only code change is in the postmaster authentication\n> > > lookup and ownership setting from the backend connection.\n> >\n> > Okay, just a couple of questions ... if there any way of provide\n> > 'superuse' access a user of the database for creating new users? Say one\n> > creates a dbname.pgsql account, could it be given 'create user' privileges\n> > for other users with a prefix of dbname.*?\n>\n> Uh, that will be tough.\n>\n> Super-user account will not be qualified by dbname for simplicity.\n>\n> > and, what happens if one doesn't specify dbname.*? does that user become\n> > 'global', or have access to nothing?\n>\n> Access to nothing. I could actually try to quality by dbname.username,\n> then fall back to just username, but that seems insecure.\n\nNo, that's cool ... just questions I thought of ...\n\nOkay ... hmmm ... just making sure that I understand ... I setup a server,\nwhen does this dbname.* come into play? Only if I enable password/md5 in\npg_hba.conf for a specific database? all others would still use a plain\n'username' still works? or are you getting rid of the 'global usernames'\naltogether (which is cool too, just want to clarify) ...\n\n\n\n",
"msg_date": "Thu, 1 Aug 2002 00:27:43 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> > Access to nothing. I could actually try to quality by dbname.username,\n> > then fall back to just username, but that seems insecure.\n> \n> No, that's cool ... just questions I thought of ...\n\nOK.\n\n> Okay ... hmmm ... just making sure that I understand ... I setup a server,\n> when does this dbname.* come into play? Only if I enable password/md5 in\n> pg_hba.conf for a specific database? all others would still use a plain\n> 'username' still works? or are you getting rid of the 'global usernames'\n> altogether (which is cool too, just want to clarify) ...\n\nThere will be a GUC param db_user_namespace which will turn it on/off\nfor all access to the cluster _except_ for the super-user.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 23:31:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, 31 Jul 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> > > Access to nothing. I could actually try to quality by dbname.username,\n> > > then fall back to just username, but that seems insecure.\n> >\n> > No, that's cool ... just questions I thought of ...\n>\n> OK.\n>\n> > Okay ... hmmm ... just making sure that I understand ... I setup a server,\n> > when does this dbname.* come into play? Only if I enable password/md5 in\n> > pg_hba.conf for a specific database? all others would still use a plain\n> > 'username' still works? or are you getting rid of the 'global usernames'\n> > altogether (which is cool too, just want to clarify) ...\n>\n> There will be a GUC param db_user_namespace which will turn it on/off\n> for all access to the cluster _except_ for the super-user.\n\nOkay ... cluster == database server, or a subset of databases within the\nserver? I know what I think of as a cluster, and somehow I suspect this\nhas to do with the new schema stuff, which means I *really* have to find\ntime to do some catch-up reading ;) need more hours in day, days in week\n;(\n\n",
"msg_date": "Thu, 1 Aug 2002 00:58:07 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> On Wed, 31 Jul 2002, Bruce Momjian wrote:\n> \n> > Marc G. Fournier wrote:\n> > > > Access to nothing. I could actually try to quality by dbname.username,\n> > > > then fall back to just username, but that seems insecure.\n> > >\n> > > No, that's cool ... just questions I thought of ...\n> >\n> > OK.\n> >\n> > > Okay ... hmmm ... just making sure that I understand ... I setup a server,\n> > > when does this dbname.* come into play? Only if I enable password/md5 in\n> > > pg_hba.conf for a specific database? all others would still use a plain\n> > > 'username' still works? or are you getting rid of the 'global usernames'\n> > > altogether (which is cool too, just want to clarify) ...\n> >\n> > There will be a GUC param db_user_namespace which will turn it on/off\n> > for all access to the cluster _except_ for the super-user.\n> \n> Okay ... cluster == database server, or a subset of databases within the\n> server? I know what I think of as a cluster, and somehow I suspect this\n> has to do with the new schema stuff, which means I *really* have to find\n> time to do some catch-up reading ;) need more hours in day, days in week\n\nCluster is db server in this case.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 31 Jul 2002 23:59:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, 31 Jul 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> > On Wed, 31 Jul 2002, Bruce Momjian wrote:\n> >\n> > > Marc G. Fournier wrote:\n> > > > > Access to nothing. I could actually try to quality by dbname.username,\n> > > > > then fall back to just username, but that seems insecure.\n> > > >\n> > > > No, that's cool ... just questions I thought of ...\n> > >\n> > > OK.\n> > >\n> > > > Okay ... hmmm ... just making sure that I understand ... I setup a server,\n> > > > when does this dbname.* come into play? Only if I enable password/md5 in\n> > > > pg_hba.conf for a specific database? all others would still use a plain\n> > > > 'username' still works? or are you getting rid of the 'global usernames'\n> > > > altogether (which is cool too, just want to clarify) ...\n> > >\n> > > There will be a GUC param db_user_namespace which will turn it on/off\n> > > for all access to the cluster _except_ for the super-user.\n> >\n> > Okay ... cluster == database server, or a subset of databases within the\n> > server? I know what I think of as a cluster, and somehow I suspect this\n> > has to do with the new schema stuff, which means I *really* have to find\n> > time to do some catch-up reading ;) need more hours in day, days in week\n>\n> Cluster is db server in this case.\n\n'K, cool, thanks :)\n\nOkay, final request .. how hard would it be to pre-pend the current\ndatabase name if GUC value is on? ie. if I'm in db1 and run CREATE USER,\nit will add db1. to the username if I hadn't already? Sounds to me it\nwould be simple to do, and it would \"fix\" the point I made about being\nable to have a db \"owner\" account with create user privileges (ie. if I'm\nin db1 and run CREATE USER db2.bruce, it should reject that unless I've\ngot create database prileges *and* create user) ...\n\nOther then that, most elegant solution, IMHO :)\n\n",
"msg_date": "Thu, 1 Aug 2002 01:08:10 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> Okay, final request .. how hard would it be to pre-pend the current\n> database name if GUC value is on? ie. if I'm in db1 and run CREATE USER,\n> it will add db1. to the username if I hadn't already? Sounds to me it\n> would be simple to do, and it would \"fix\" the point I made about being\n> able to have a db \"owner\" account with create user privileges (ie. if I'm\n> in db1 and run CREATE USER db2.bruce, it should reject that unless I've\n> got create database prileges *and* create user) ...\n\nOK, let me get the easy part working first.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 00:09:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "OK, I have attached a patch for testing. Sample output is:\n\n\t$ sql -U guest test\n\tpsql: FATAL: user \"test.guest\" does not exist\n\t$ createuser test.guest\n\tShall the new user be allowed to create databases? (y/n) n\n\tShall the new user be allowed to create more new users? (y/n) n\n\tCREATE USER\n\t#$ sql -U guest test\n\tWelcome to psql, the PostgreSQL interactive terminal.\n\t\n\tType: \\copyright for distribution terms\n\t \\h for help with SQL commands\n\t \\? for help on internal slash commands\n\t \\g or terminate with semicolon to execute query\n\t \\q to quit\n\t\n\ttest=> \n\nThe patch is quite small. All it does is prepend the database name to\nthe user name supplied with the connection request when\ndb_user_namespace is true.\n\nThis is not ready for application. I can find no way from the\npostmaster to determine if the user is the super-user and hence bypass\nthe database prepending. I was going to do that _only_ for the username\nwho created the installation for initdb. Maybe I have to dump that name\nout to a file and read it in from the postmaster. Other ideas?\n\nIt also needs documentation.\n\nI am unsure about auto-prepending the dbname for CREATE USER and other\ncases. That could get confusing, especially because createuser accesses\ntemplate1, and we would have to handle all other username mentions, like\nin GRANT. We may be better just leaving it along and telling admins\nthey have to quality the username in those cases.\n\n---------------------------------------------------------------------------\n\nMarc G. Fournier wrote:\n> On Wed, 31 Jul 2002, Bruce Momjian wrote:\n> \n> > Marc G. Fournier wrote:\n> > > On Wed, 31 Jul 2002, Bruce Momjian wrote:\n> > >\n> > > > Marc G. Fournier wrote:\n> > > > > > Access to nothing. I could actually try to quality by dbname.username,\n> > > > > > then fall back to just username, but that seems insecure.\n> > > > >\n> > > > > No, that's cool ... 
just questions I thought of ...\n> > > >\n> > > > OK.\n> > > >\n> > > > > Okay ... hmmm ... just making sure that I understand ... I setup a server,\n> > > > > when does this dbname.* come into play? Only if I enable password/md5 in\n> > > > > pg_hba.conf for a specific database? all others would still use a plain\n> > > > > 'username' still works? or are you getting rid of the 'global usernames'\n> > > > > altogether (which is cool too, just want to clarify) ...\n> > > >\n> > > > There will be a GUC param db_user_namespace which will turn it on/off\n> > > > for all access to the cluster _except_ for the super-user.\n> > >\n> > > Okay ... cluster == database server, or a subset of databases within the\n> > > server? I know what I think of as a cluster, and somehow I suspect this\n> > > has to do with the new schema stuff, which means I *really* have to find\n> > > time to do some catch-up reading ;) need more hours in day, days in week\n> >\n> > Cluster is db server in this case.\n> \n> 'K, cool, thanks :)\n> \n> Okay, final request .. how hard would it be to pre-pend the current\n> database name if GUC value is on? ie. if I'm in db1 and run CREATE USER,\n> it will add db1. to the username if I hadn't already? Sounds to me it\n> would be simple to do, and it would \"fix\" the point I made about being\n> able to have a db \"owner\" account with create user privileges (ie. if I'm\n> in db1 and run CREATE USER db2.bruce, it should reject that unless I've\n> got create database prileges *and* create user) ...\n> \n> Other then that, most elegant solution, IMHO :)\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/libpq/auth.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/libpq/auth.c,v\nretrieving revision 1.82\ndiff -c -r1.82 auth.c\n*** src/backend/libpq/auth.c\t20 Jun 2002 20:29:28 -0000\t1.82\n--- src/backend/libpq/auth.c\t1 Aug 2002 05:13:35 -0000\n***************\n*** 117,123 ****\n \t\t\t version, PG_KRB4_VERSION);\n \t\treturn STATUS_ERROR;\n \t}\n! \tif (strncmp(port->user, auth_data.pname, SM_USER) != 0)\n \t{\n \t\telog(LOG, \"pg_krb4_recvauth: name \\\"%s\\\" != \\\"%s\\\"\",\n \t\t\t port->user, auth_data.pname);\n--- 117,123 ----\n \t\t\t version, PG_KRB4_VERSION);\n \t\treturn STATUS_ERROR;\n \t}\n! \tif (strncmp(port->user, auth_data.pname, SM_DATABASE_USER) != 0)\n \t{\n \t\telog(LOG, \"pg_krb4_recvauth: name \\\"%s\\\" != \\\"%s\\\"\",\n \t\t\t port->user, auth_data.pname);\n***************\n*** 290,296 ****\n \t}\n \n \tkusername = pg_an_to_ln(kusername);\n! \tif (strncmp(port->user, kusername, SM_USER))\n \t{\n \t\telog(LOG, \"pg_krb5_recvauth: user name \\\"%s\\\" != krb5 name \\\"%s\\\"\",\n \t\t\t port->user, kusername);\n--- 290,296 ----\n \t}\n \n \tkusername = pg_an_to_ln(kusername);\n! 
\tif (strncmp(port->user, kusername, SM_DATABASE_USER))\n \t{\n \t\telog(LOG, \"pg_krb5_recvauth: user name \\\"%s\\\" != krb5 name \\\"%s\\\"\",\n \t\t\t port->user, kusername);\nIndex: src/backend/postmaster/postmaster.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/postmaster/postmaster.c,v\nretrieving revision 1.281\ndiff -c -r1.281 postmaster.c\n*** src/backend/postmaster/postmaster.c\t13 Jul 2002 01:02:14 -0000\t1.281\n--- src/backend/postmaster/postmaster.c\t1 Aug 2002 05:13:37 -0000\n***************\n*** 192,197 ****\n--- 192,199 ----\n bool\t\tHostnameLookup;\t\t/* for ps display */\n bool\t\tShowPortNumber;\n bool\t\tLog_connections = false;\n+ bool\t\tDb_user_namespace = false;\n+ \n \n /* Startup/shutdown state */\n static pid_t StartupPID = 0,\n***************\n*** 1156,1161 ****\n--- 1158,1173 ----\n \t/* Check a user name was given. */\n \tif (port->user[0] == '\\0')\n \t\telog(FATAL, \"no PostgreSQL user name specified in startup packet\");\n+ \n+ \t/* Prefix database name for per-db user namespace */\n+ \t/* XXX look up super-user name from postmaster */\n+ \tif (Db_user_namespace && strcmp(port->user, \"postgres\"))\n+ \t{\n+ \t\tchar hold_user[SM_DATABASE_USER];\n+ \t\tsnprintf(hold_user, SM_DATABASE_USER, \"%s.%s\", port->database,\n+ \t\t\t\t port->user);\n+ \t\tstrcpy(port->user, hold_user);\n+ \t}\n \n \t/*\n \t * If we're going to reject the connection due to database state, say\nIndex: src/backend/utils/misc/guc.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/utils/misc/guc.c,v\nretrieving revision 1.76\ndiff -c -r1.76 guc.c\n*** src/backend/utils/misc/guc.c\t30 Jul 2002 16:20:03 -0000\t1.76\n--- src/backend/utils/misc/guc.c\t1 Aug 2002 05:13:40 -0000\n***************\n*** 481,486 ****\n--- 481,490 ----\n \t\t{ \"transform_null_equals\", PGC_USERSET }, &Transform_null_equals,\n \t\tfalse, NULL, NULL\n \t},\n+ \t{\n+ 
\t\t{ \"db_user_namespace\", PGC_SIGHUP }, &Db_user_namespace,\n+ \t\tfalse, NULL, NULL\n+ \t},\n \n \t{\n \t\t{ NULL, 0 }, NULL, false, NULL, NULL\nIndex: src/backend/utils/misc/postgresql.conf.sample\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/utils/misc/postgresql.conf.sample,v\nretrieving revision 1.42\ndiff -c -r1.42 postgresql.conf.sample\n*** src/backend/utils/misc/postgresql.conf.sample\t30 Jul 2002 04:24:54 -0000\t1.42\n--- src/backend/utils/misc/postgresql.conf.sample\t1 Aug 2002 05:13:40 -0000\n***************\n*** 112,118 ****\n #\n #\tMessage display\n #\n- \n #server_min_messages = notice\t# Values, in order of decreasing detail:\n \t\t\t\t# debug5, debug4, debug3, debug2, debug1,\n \t\t\t\t# info, notice, warning, error, log, fatal,\n--- 112,117 ----\n***************\n*** 200,202 ****\n--- 199,202 ----\n #sql_inheritance = true\n #transform_null_equals = false\n #statement_timeout = 0\t\t\t\t# 0 is disabled\n+ #db_user_namespace = false\nIndex: src/include/libpq/libpq-be.h\n===================================================================\nRCS file: /cvsroot/pgsql/src/include/libpq/libpq-be.h,v\nretrieving revision 1.32\ndiff -c -r1.32 libpq-be.h\n*** src/include/libpq/libpq-be.h\t20 Jun 2002 20:29:49 -0000\t1.32\n--- src/include/libpq/libpq-be.h\t1 Aug 2002 05:13:40 -0000\n***************\n*** 59,65 ****\n \n \tProtocolVersion proto;\n \tchar\t\tdatabase[SM_DATABASE + 1];\n! \tchar\t\tuser[SM_USER + 1];\n \tchar\t\toptions[SM_OPTIONS + 1];\n \tchar\t\ttty[SM_TTY + 1];\n \tchar\t\tauth_arg[MAX_AUTH_ARG];\n--- 59,65 ----\n \n \tProtocolVersion proto;\n \tchar\t\tdatabase[SM_DATABASE + 1];\n! \tchar\t\tuser[SM_DATABASE_USER + 1];\n \tchar\t\toptions[SM_OPTIONS + 1];\n \tchar\t\ttty[SM_TTY + 1];\n \tchar\t\tauth_arg[MAX_AUTH_ARG];\n***************\n*** 72,78 ****\n \tSSL\t\t *ssl;\n \tX509\t *peer;\n \tchar\t\tpeer_dn[128 + 1];\n! 
\tchar\t\tpeer_cn[SM_USER + 1];\n \tunsigned long count;\n #endif\n } Port;\n--- 72,78 ----\n \tSSL\t\t *ssl;\n \tX509\t *peer;\n \tchar\t\tpeer_dn[128 + 1];\n! \tchar\t\tpeer_cn[SM_DATABASE_USER + 1];\n \tunsigned long count;\n #endif\n } Port;\nIndex: src/include/libpq/pqcomm.h\n===================================================================\nRCS file: /cvsroot/pgsql/src/include/libpq/pqcomm.h,v\nretrieving revision 1.64\ndiff -c -r1.64 pqcomm.h\n*** src/include/libpq/pqcomm.h\t20 Jun 2002 20:29:49 -0000\t1.64\n--- src/include/libpq/pqcomm.h\t1 Aug 2002 05:13:40 -0000\n***************\n*** 114,119 ****\n--- 114,121 ----\n #define SM_DATABASE\t\t64\n /* SM_USER should be the same size as the others. bjm 2002-06-02 */\n #define SM_USER\t\t\t32\n+ /* We prepend database name if db_user_namespace true. */\n+ #define SM_DATABASE_USER (SM_DATABASE+SM_USER)\n #define SM_OPTIONS\t\t64\n #define SM_UNUSED\t\t64\n #define SM_TTY\t\t\t64\n***************\n*** 124,135 ****\n--- 126,139 ----\n {\n \tProtocolVersion protoVersion;\t\t/* Protocol version */\n \tchar\t\tdatabase[SM_DATABASE];\t/* Database name */\n+ \t\t\t\t/* Db_user_namespace prepends dbname */\n \tchar\t\tuser[SM_USER];\t/* User name */\n \tchar\t\toptions[SM_OPTIONS];\t/* Optional additional args */\n \tchar\t\tunused[SM_UNUSED];\t\t/* Unused */\n \tchar\t\ttty[SM_TTY];\t/* Tty for debug output */\n } StartupPacket;\n \n+ extern bool Db_user_namespace;\n \n /* These are the authentication requests sent by the backend. */",
"msg_date": "Thu, 1 Aug 2002 01:25:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> OK, I have attached a patch for testing. Sample output is:\n>\n> \t$ sql -U guest test\n> \tpsql: FATAL: user \"test.guest\" does not exist\n> \t$ createuser test.guest\n\nI will object to any scheme that makes any characters in the user name\nmagic. Two reasons: First, do it right, make a separate column.\nSecond, several tools use URI syntax to specify data sources. This will\nbreak any feature that relies on being able to put special characters into\nthe user name.\n\nThe right solution to having database-local user names is putting extra\ninformation into pg_shadow regarding which database this user applies to.\nIt could be an array or some separate \"authentication domain\" thing.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 1 Aug 2002 22:05:53 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > OK, I have attached a patch for testing. Sample output is:\n> >\n> > \t$ sql -U guest test\n> > \tpsql: FATAL: user \"test.guest\" does not exist\n> > \t$ createuser test.guest\n> \n> I will object to any scheme that makes any characters in the user name\n> magic. Two reasons: First, do it right, make a separate column.\n> Second, several tools use URI syntax to specify data sources. This will\n> break any feature that relies on being able to put special characters into\n> the user name.\n> \n> The right solution to having database-local user names is putting extra\n> information into pg_shadow regarding which database this user applies to.\n> It could be an array or some separate \"authentication domain\" thing.\n\nOK, if you object, you can say goodbye to this feature for 7.3. I can\nsupply the patch to Marc and anyone else who wants it but I am not\ninclined nor convinced we need that level of work for this feature.\n\nSo we end up with nothing.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 17:11:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Thu, 1 Aug 2002, Bruce Momjian wrote:\n\n> Peter Eisentraut wrote:\n> > Bruce Momjian writes:\n> >\n> > > OK, I have attached a patch for testing. Sample output is:\n> > >\n> > > \t$ sql -U guest test\n> > > \tpsql: FATAL: user \"test.guest\" does not exist\n> > > \t$ createuser test.guest\n> >\n> > I will object to any scheme that makes any characters in the user name\n> > magic. Two reasons: First, do it right, make a separate column.\n> > Second, several tools use URI syntax to specify data sources. This will\n> > break any feature that relies on being able to put special characters into\n> > the user name.\n> >\n> > The right solution to having database-local user names is putting extra\n> > information into pg_shadow regarding which database this user applies to.\n> > It could be an array or some separate \"authentication domain\" thing.\n>\n> OK, if you object, you can say goodbye to this feature for 7.3. I can\n> supply the patch to Marc and anyone else who wants it but I am not\n> inclined nor convinced we need that level of work for this feature.\n>\n> So we end up with nothing.\n\nStupid qustion .. but why can't you just add a 'domain' column to\npg_passwd/pg_shadow so that its stored as two fields instead of one?\nWhich I believe is what Pter is/was suggesting ...\n\n\n",
"msg_date": "Thu, 1 Aug 2002 18:13:52 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> > > I will object to any scheme that makes any characters in the user name\n> > > magic. Two reasons: First, do it right, make a separate column.\n> > > Second, several tools use URI syntax to specify data sources. This will\n> > > break any feature that relies on being able to put special characters into\n> > > the user name.\n> > >\n> > > The right solution to having database-local user names is putting extra\n> > > information into pg_shadow regarding which database this user applies to.\n> > > It could be an array or some separate \"authentication domain\" thing.\n> >\n> > OK, if you object, you can say goodbye to this feature for 7.3. I can\n> > supply the patch to Marc and anyone else who wants it but I am not\n> > inclined nor convinced we need that level of work for this feature.\n> >\n> > So we end up with nothing.\n> \n> Stupid qustion .. but why can't you just add a 'domain' column to\n> pg_passwd/pg_shadow so that its stored as two fields instead of one?\n> Which I believe is what Pter is/was suggesting ...\n\nRight now, pg_pwd only dumps users with passwords, and as I remember, it\nis only accessed when the protocol needs to lookup a password. It\nwasn't designed for anything more advanced. If you want separate\ncolumns, you have to dump out everyone, and modify CREATE USER,\ncreateuser, ALTER USER, ... to handle those new domain names, and you\nhave to make this API visible to everyone even if they are not using\ndomains. That's where things really get ugly.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 17:20:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Thu, 2002-08-01 at 23:20, Bruce Momjian wrote:\n> Marc G. Fournier wrote:\n> > > > I will object to any scheme that makes any characters in the user name\n> > > > magic. Two reasons: First, do it right, make a separate column.\n> > > > Second, several tools use URI syntax to specify data sources. This will\n> > > > break any feature that relies on being able to put special characters into\n> > > > the user name.\n\nThis should be settable using a GUC variable (in postgresql.conf as it\nmakes no sense once you are connected).\n\n> > > > The right solution to having database-local user names is putting extra\n> > > > information into pg_shadow regarding which database this user applies to.\n> > > > It could be an array or some separate \"authentication domain\" thing.\n> > >\n> > > OK, if you object, you can say goodbye to this feature for 7.3. I can\n> > > supply the patch to Marc and anyone else who wants it but I am not\n> > > inclined nor convinced we need that level of work for this feature.\n> > >\n> > > So we end up with nothing.\n> > \n> > Stupid qustion .. but why can't you just add a 'domain' column to\n> > pg_passwd/pg_shadow so that its stored as two fields instead of one?\n> > Which I believe is what Pter is/was suggesting ...\n> \n> Right now, pg_pwd only dumps users with passwords, and as I remember, it\n> is only accessed when the protocol needs to lookup a password. It\n> wasn't designed for anything more advanced. If you want separate\n> columns, you have to dump out everyone, and modify CREATE USER,\n> createuser, ALTER USER, ... to handle those new domain names, and you\n> have to make this API visible to everyone even if they are not using\n> domains. That's where things really get ugly.\n\nActually _not_ modifying the commands (and thus leaving the\npg_shadow.usedomain column empty) will give us exactly the old\nbehaviour. 
For advanced uses it should be an acceptable interim solution\nto have the superuser update pg_shadow manually.\n\nBut if no one has time to work on it more than just mangling usernames at\nconnect time, that should also be OK for 7.3. We just have to document\nit and warn of a new change to real domain users in 7.4 (or later).\n\n--------------\nHannu\n\n",
"msg_date": "02 Aug 2002 12:56:12 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> OK, if you object, you can say goodbye to this feature for 7.3. I can\n> supply the patch to Marc and anyone else who wants it but I am not\n> inclined nor convinced we need that level of work for this feature.\n\nThe right solution, IMO, is to resurrect the feature we had and think\nabout a fully-featured solution for the next release. Or try to sell the\nproposed solutions as fully-featured . . .\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 6 Aug 2002 23:17:50 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "\nIt had such limited usefulness ('password' only, only crypted-hashed\npasswords in the file) that it doesn't make much sense to resurect it.\n\nTo directly address your point, I don't think this new feature will be\nused enough to add the capability to the user admin commands.\n\nI know you object, so I am going to ask for a vote.\n\nOK, here is the request for vote. Do we want:\n\n\t1) the old secondary passwords re-added\n\t2) the new prefixing of the database name to the username when enabled\n\t3) do nothing\n\n\n---------------------------------------------------------------------------\n\nPeter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > OK, if you object, you can say goodbye to this feature for 7.3. I can\n> > supply the patch to Marc and anyone else who wants it but I am not\n> > inclined nor convinced we need that level of work for this feature.\n> \n> The right solution, IMO, is to resurrect the feature we had and think\n> about a fully-featured solution for the next release. Or try to sell the\n> proposed solutions as fully-featured . . .\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Aug 2002 21:09:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "> OK, here is the request for vote. Do we want:\n\n> \t2) the new prefixing of the database name to the username when enabled\n\nI vote 2.\n\n",
"msg_date": "06 Aug 2002 21:21:55 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Tue, 6 Aug 2002, Bruce Momjian wrote:\n\n>\n> It had such limited usefulness ('password' only, only crypted-hashed\n> passwords in the file) that it doesn't make much sense to resurect it.\n\nIt had limited usefulness to you ... but how many sites out there are\ngoing to break when they try to upgraded without it there? I do agree\nthat it needs to improved / replaced, but without a suitable replacement\nin place, the old should be resurrected until such a suitable one is in\nplace ...\n\n> I know you object, so I am going to ask for a vote.\n\nHow can you request a vote of such a limited audience? *Adding*\nfunctionality is easy ... removing functionality with at least a release\nfor-warning is easy ... removing a feature without any forewarning is akin\nto cutting our own throats ...\n\n> OK, here is the request for vote. Do we want:\n>\n> \t1) the old secondary passwords re-added\n> \t2) the new prefixing of the database name to the username when enabled\n> \t3) do nothing\n\nIf 2 can be done in such a way to be transparent, as well as to allow a\ndatabase owner to be able to create users for his/her database, then I\nthink it would be great ... and would far exceed what we have now ...\n\nIf you can't do 2 as a complete solution, which, IMHO, includes a db owner\nbeing able to create db.users for his own database, then my vote is for 1\n... if 2 can be done completely, then I vote for 2, as it would definitely\nbe much more useful ...\n\nHrmmm ... I was just thinking of another scenario where such a feature\nwould be great ... educational. The ability to setup a database server,\nbut to give a professor a database for a course that he could create\n'accounts' for each of the students ...\n\n",
"msg_date": "Tue, 6 Aug 2002 22:24:16 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "I would personally like to see 2, however, Marc is correct IMHO. I cast\nmy vote using the qualifiers that Marc laid out below.\n\nGreg\n\n\nOn Tue, 2002-08-06 at 20:24, Marc G. Fournier wrote:\n> On Tue, 6 Aug 2002, Bruce Momjian wrote:\n> \n> >\n> > It had such limited usefulness ('password' only, only crypted-hashed\n> > passwords in the file) that it doesn't make much sense to resurect it.\n> \n> It had limited usefulness to you ... but how many sites out there are\n> going to break when they try to upgraded without it there? I do agree\n> that it needs to improved / replaced, but without a suitable replacement\n> in place, the old should be resurrected until such a suitable one is in\n> place ...\n> \n> > I know you object, so I am going to ask for a vote.\n> \n> How can you request a vote of such a limited audience? *Adding*\n> functionality is easy ... removing functionality with at least a release\n> for-warning is easy ... removing a feature without any forewarning is akin\n> to cutting our own throats ...\n> \n> > OK, here is the request for vote. Do we want:\n> >\n> > \t1) the old secondary passwords re-added\n> > \t2) the new prefixing of the database name to the username when enabled\n> > \t3) do nothing\n> \n> If 2 can be done in such a way to be transparent, as well as to allow a\n> database owner to be able to create users for his/her database, then I\n> think it would be great ... and would far exceed what we have now ...\n> \n> If you can't do 2 as a complete solution, which, IMHO, includes a db owner\n> being able to create db.users for his own database, then my vote is for 1\n> ... if 2 can be done completely, then I vote for 2, as it would definitely\n> be much more useful ...\n> \n> Hrmmm ... I was just thinking of another scenario where such a feature\n> would be great ... educational. 
The ability to setup a database server,\n> but to give a professor a database for a course that he could create\n> 'accounts' for each of the students ...\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster",
"msg_date": "06 Aug 2002 20:30:32 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "> How can you request a vote of such a limited audience? *Adding*\n> functionality is easy ... removing functionality with at least a release\n> for-warning is easy ... removing a feature without any forewarning is akin\n> to cutting our own throats ...\n\n\nYea, but it was such an ugly feature and I honestly thought no one was\nusing it. In fact, you aren't even using it in the intended way of\nsharing /etc/passwd. You are using it to implement a different\ncapability that I never even imagined. :-)\n\n> \n> > OK, here is the request for vote. Do we want:\n> >\n> > \t1) the old secondary passwords re-added\n> > \t2) the new prefixing of the database name to the username when enabled\n> > \t3) do nothing\n> \n> If 2 can be done in such a way to be transparent, as well as to allow a\n> database owner to be able to create users for his/her database, then I\n> think it would be great ... and would far exceed what we have now ...\n> \n> If you can't do 2 as a complete solution, which, IMHO, includes a db owner\n> being able to create db.users for his own database, then my vote is for 1\n> ... if 2 can be done completely, then I vote for 2, as it would definitely\n> be much more useful ...\n\nWell, as it currently stands in the patch, a db owner can create any\nuser they want, including users for just their dbs. However, remember\nthat once someone can create a user, they can create a superuser, so\nsecurity for those folks is impossible. The patch does not prevent them\nfrom creating users for other databases, if that is what you wanted, but\ndid your previous solution allow this?\n\n\n> \n> Hrmmm ... I was just thinking of another scenario where such a feature\n> would be great ... educational. 
The ability to setup a database server,\n> but to give a professor a database for a course that he could create\n> 'accounts' for each of the students ...\n\nYep, with no conflicting names.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Aug 2002 21:50:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, here is the request for vote. Do we want:\n> \n> \t1) the old secondary passwords re-added\n> \t2) the new prefixing of the database name to the username when enabled\n> \t3) do nothing\n\nI'd vote #3, for the following reasons:\n\n - The functionality that Marc is worried about (in effect,\n\t\t allowing multiple database users with the same name) is\n\t\t pretty obscure, and the implementation is even more so. I\n\t\t doubt whether there is *anyone* other than Marc actually\n\t\t using it (if that's not the case, please speak up).\n\n Given that it was completely undocumented and a pretty clear\n abuse of the existing code, I don't think it's unreasonable\n for us to break backward compatibility on this issue.\n\n - The old way of doing things is broken, for reasons Bruce has\n elaborated on. Unless there's a compelling reason why we\n *need* this feature in the standard distribution, I'd rather\n we not go back to the old way of doing things.\n\n - I'm not perfectly happy with the scheme Bruce suggested as\n an interim fix (#2). If we're going to implement this\n feature, let's do it properly. In particular, I'm not\n convinced that this feature is urgently needed enough to\n justify a short-term kludge, and I dislike using a GUC\n variable to toggle between two quite different\n authentication processes.\n\nSo I'd say leave things as they are. One thing I'd like to see anyway\nis a more prominent listing of the client-visible incompatibilities in\nthe release notes -- I'd be content to add an entry to that list for\nthe 7.3 release and talk about a more elaborate scheme during the 7.4\ndevelopment cycle.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "06 Aug 2002 22:25:38 -0400",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Tue, 6 Aug 2002, Bruce Momjian wrote:\n\n> > How can you request a vote of such a limited audience? *Adding*\n> > functionality is easy ... removing functionality with at least a release\n> > for-warning is easy ... removing a feature without any forewarning is akin\n> > to cutting our own throats ...\n>\n>\n> Yea, but it was such an ugly feature and I honestly thought no one was\n> using it. In fact, you aren't even using it in the indended way of\n> sharing /etc/passwd. You are using it to implement a different\n> capability that I never even imagined. :-)\n\nCan you point me to where this documentation is on its intended use?\n*raised eyebrow* Just because you couldn't imagine it being used the way I\nam, doesn't mean that wasn't what it was intended for :)\n\n> Well, as it currently stands in the patch, a db owner can create any\n> user they want, including users for just their dbs. However, remember\n> that Once someone can create a user, they can create a superuser, so\n> security for those folks is impossible. The patch does not prevent them\n> from creating user for other databases, if that is what you wanted, but\n> did your previous solution allow this?\n\nBut, the patch should ... how hard is it to add code in that says \"if\nconnected to db1 *and* have create user privs, then allow create of\ndb1.<username>\"?\n\nPersonally, from using cyrus-imapd for much much too long, I think what\nwe're looking at is 'realms' ... if 'enable_realms' is enabled in\npostmaster.conf, then a user created while connected to db1 should have db1\nappended automagically ...\n\nthen again, I do think it's \"a Bad Thing\" to have this enable/disableable,\nsince it will cause some serious confusion ... it's kinda like everyone's\nargument against Thomas' recent patch about XLOG ... 
what if you forget?\n\nit should be an initdb option (--enable-realms) so that it's a\none-time-only decision when you create the database instance, not\nsomething that you can flip on/off ... default would be disabled, to\nreflect current behaviour (minus the password file) ...\n\nor, another option would be 'CREATE DATABASE <DB> WITH REALMS', so that\nyou could have some with, some without ... so, if a DATABASE was created\nwith REALMS, a flag would be set in pg_database stating that only those\nusers with db. prefix have access to that database ...\n\nthen again, another neat thing would be the ability to 'group' databases\n... CREATE DATABASE <DB> IN GROUP <dbgroup>, so that users would be named\ndbgroup.* and would be able to login to any database within that group ...\n\nbut those are just ideas thrown out ... IMHO, critical for v7.3, if we\ndon't revert the patch, is to have *either* '--enable-realms' to set an\ninstance in that mode, *or* have it on a per database basis ... I think\nhaving it as an on/off setting in postmaster.conf is just asking for\ntrouble ...\n\n",
"msg_date": "Wed, 7 Aug 2002 01:27:05 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, here is the request for vote. Do we want:\n\n> \t1) the old secondary passwords re-added\n> \t2) the new prefixing of the database name to the username when enabled\n> \t3) do nothing\n\nI vote for 2b), username@database ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Aug 2002 00:43:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n>>OK, here is the request for vote. Do we want:\n> \n> \n>>\t1) the old secondary passwords re-added\n>>\t2) the new prefixing of the database name to the username when enabled\n>>\t3) do nothing\n> \n> \n> I vote for 2b), username@database ...\n> \n\nI like that too -- and it has the added benefit of being similar to \nOracle (username@tns_servicename; tns_servicename is really just a \npointer to the IP/port of a specific Oracle database).\n\nJoe\n\n",
"msg_date": "Tue, 06 Aug 2002 21:49:38 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "> - The functionality that Marc is worried about (in effect,\n> \t\t allowing multiple database users with the same name) is\n> \t\t pretty obscure, and the implementation is even more so. I\n> \t\t doubt whether there is *anyone* other than Marc actually\n> \t\t using it (if that's not the case, please speak up).\n\nI would use database specific users for a similar area -- shared\nhosting. But, could live with a longer (128 byte) namedatalen to allow\na unique user%domain.\n\n",
"msg_date": "07 Aug 2002 08:16:06 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n\n> > - The functionality that Marc is worried about (in effect,\n> > \t\t allowing multiple database users with the same name) is\n> > \t\t pretty obscure, and the implementation is even more so. I\n> > \t\t doubt whether there is *anyone* other than Marc actually\n> > \t\t using it (if that's not the case, please speak up).\n> \n> I would use database specific users for a similar area -- shared\n> hosting.\n\nI agree that the functionality Marc is looking for is useful -- I'm\njust saying that I would bet that *no one* is using the current\nimplementation of it in PostgreSQL (i.e. so I don't see the need to\nkeep backward compatibility, or the harm in removing the feature for\nthe next release until a better solution is designed & implemented).\n\n> But, could live with a longer (128 byte) namedatalen to allow\n> a unique user%domain.\n\nThat seems like a serviceable solution to me -- it seems quite easy to\nimplement this functionality outside the database proper (at least\nuntil a proper solution is devised). Keep in mind that the current\nFE/BE protocol limits database and user names to 64 characters.\nThat's another thing I'd like to fix in 7.4.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "07 Aug 2002 09:29:46 -0400",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "\n> > But, could live with a longer (128 byte) namedatalen to allow\n> > a unique user%domain.\n> \n> That seems like a serviceable solution to me -- it seems quite easy to\n> implement this functionality outside the database proper (at least\n> until a proper solution is devised). Keep in mind that the current\n> FE/BE protocol limits database and user names to 64 characters.\n> That's another thing I'd like to fix in 7.4.\n\nAw shoot. 64 characters isn't enough to hold a good chunk of our\nclients' domain names, let alone usernames in front. I'm not looking\nforward to trimming domains either.\n\nI hope that a protocol change for 7.4 is warranted. Looks like\nthere are a fair number of things in that area.\n\n",
"msg_date": "07 Aug 2002 09:37:03 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> Keep in mind that the current\n> FE/BE protocol limits database and user names to 64 characters.\n\nThat seems to be a good reason to combine the two on the postmaster\nside, a la Bruce's proposed patch. If the client side does it then\nthe \"user@database\" has to all fit in 64 characters.\n\n> That's another thing I'd like to fix in 7.4.\n\nYup. Do we have a list going of the things we want to fix in the next\nprotocol change? Offhand I remember\n\n* redesign startup packet to eliminate fixed field widths\n* fix COPY protocol to allow resync after errors, support binary data\n* less brain-dead protocol for fast-path function calls\n* allow NOTIFY to include parameters\n* richer ERROR reports (error codes, other stuff)\n\nand I'm sure there's more. None of this stuff seems to be in the TODO\nlist though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Aug 2002 10:14:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "On Tuesday 06 August 2002 09:24 pm, Marc G. Fournier wrote:\n> On Tue, 6 Aug 2002, Bruce Momjian wrote:\n> > It had such limited usefulness ('password' only, only crypted-hashed\n> > passwords in the file) that it doesn't make much sense to resurect it.\n\n> It had limited usefulness to you ... but how many sites out there are\n> going to break when they try to upgraded without it there? I do agree\n> that it needs to improved / replaced, but without a suitable replacement\n> in place, the old should be resurrected until such a suitable one is in\n> place ...\n\nWhile it appears I'll be outvoted on this issue, and even though I agree that \nthe existing functionality is broken, and even though I am not using the \nfunctionality, I am reminded of the overall policy that we have historically \nhad about removing even broken features. Fair Warning must be given. If that \npolicy is going to be changed, then it needs to be applied with equal vigor \nto all affected cases.\n\nEven if Marc is the only one using this feature, we should follow established \npolicy -- that is, after all, what policy is for. To me it seems it is being \nyanked gratuitously without fair warning. If every question is answered on a \ncase-by-case basis like this, we will descend to anarchy, I'm afraid. And, \nBruce, I even agree with your reasons -- I just disagree with the method.\n\nIs it going to cause a major problem for it to remain one release cycle while \nsomeone works on a suitable replacement, with the warning in the release \nnotes that while this feature is there for backwards compatibility that it \nwill be yanked at the next release? 
And I'm not talking about a minor \nproblem like 'more people will start using it' -- I'm talking 'if it stays we \nwill be in danger of massive data corruption or exposure' -- of course, \ndocumenting that there is a degree of exposure of data if not set up in an \nexacting method, as Marc seems to have done.\n\nSome may say Marc has fair warning now -- but does anyone know for sure that \nNO ONE ELSE in the whole world is using this feature? Marc is more in the \nknow than most, granted -- but if he found this use for the feature others \nmay have as well that we don't even know about.\n\nBut if the feature is not going to remain it needs to be prominently \ndocumented as being removed in the release notes.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 7 Aug 2002 10:43:20 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Do we have a list going of the things we want to fix in the next\n> protocol change? Offhand I remember\n> \n> * redesign startup packet to eliminate fixed field widths\n> * fix COPY protocol to allow resync after errors, support binary data\n> * less brain-dead protocol for fast-path function calls\n> * allow NOTIFY to include parameters\n> * richer ERROR reports (error codes, other stuff)\n\nSome kind of parameter binding or improved support for prepareable\nstatements would require changes to the FE/BE protocol -- being able\nto accept parameters without passing them through the parser, for\nexample.\n\nAllowing clients to cleanly determine the current transaction state\nwill require FE/BE protocol changes, I think. (Or at least, that's my\nvague recollection of the discussion on -hackers from a couple months ago).\n\nThat's all I can think of -- there's probably more stuff...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "07 Aug 2002 11:20:19 -0400",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> Some kind of parameter binding or improved support for prepareable\n> statements would require changes to the FE/BE protocol -- being able\n> to accept parameters without passing them through the parser, for\n> example.\n\nRight. This is nearly the same, perhaps could be made actually the\nsame, as a fast-path function call.\n\nThe existing FPF call mechanism only supports binary data, but I think\nit would be useful to allow either binary data or ASCII data in both FPF\nand prepared-statement cases. The ASCII path would require invoking a\ndatatype's conversion function on the backend side, but you'd still get\nto skip the SQL statement parsing/planning overhead.\n\n(Wanders away wondering whether COPY might not be made to fit into this\nsame mold...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Aug 2002 11:29:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "On Thu, 2002-08-08 at 03:27, Marc G. Fournier wrote:\n> On Wed, 7 Aug 2002, Bruce Momjian wrote:\n> \n> > Tom Lane wrote:\n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > OK, here is the request for vote. Do we want:\n> > >\n> > > > \t1) the old secondary passwords re-added\n> > > > \t2) the new prefixing of the database name to the username when enabled\n> > > > \t3) do nothing\n> > >\n> > > I vote for 2b), username@database ...\n> >\n> > Yes, the format was going to be my second vote, dbname.username or\n> > username@dbname. Guess I will not need that vote. ;-)\n> \n> Actually, I kinda like dbname.username myself ... it means that wne you do\n> a SELECT of the pg_shadow file, it can be sorted in a useful manner (ie.\n> grouped by database)\n\nuse a view :\n\ncreate view pg_shadow_with_domain as\n select\n usename as fullname,\n case when (strpos(usename,'@') > 0)\n then substr(usename,1,strpos(usename,'@')-1)\n else usename\n end as usename,\n case when (strpos(usename,'@') > 0)\n then substr(usename,strpos(usename,'@')+1)\n else ''\n end as usedomain,\n usesysid,\n usecreatedb,\n usetrace,\n usesuper,\n usecatupd,\n passwd,\n valuntil\n from pg_shadow;\n\nand sort as you wish ;)\n\nFor example, to get all bruces in all domains starting with an 'acc'\njust do\n\nselect *\n from pg_shadow_with_domain \n where usename = 'bruce'\n and usedomain like 'acc%' ;\n\n------------------\nHannu\n\n",
"msg_date": "08 Aug 2002 02:12:02 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, here is the request for vote. Do we want:\n> \n> > \t1) the old secondary passwords re-added\n> > \t2) the new prefixing of the database name to the username when enabled\n> > \t3) do nothing\n> \n> I vote for 2b), username@database ...\n\nYes, the format was going to be my second vote, dbname.username or\nusername@dbname. Guess I will not need that vote. ;-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Aug 2002 18:02:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, 7 Aug 2002, Bruce Momjian wrote:\n\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > OK, here is the request for vote. Do we want:\n> >\n> > > \t1) the old secondary passwords re-added\n> > > \t2) the new prefixing of the database name to the username when enabled\n> > > \t3) do nothing\n> >\n> > I vote for 2b), username@database ...\n>\n> Yes, the format was going to be my second vote, dbname.username or\n> username@dbname. Guess I will not need that vote. ;-)\n\nActually, I kinda like dbname.username myself ... it means that when you do\na SELECT of the pg_shadow file, it can be sorted in a useful manner (ie.\ngrouped by database)\n\n\n",
"msg_date": "Wed, 7 Aug 2002 19:27:19 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "> Some may say Marc has fair warning now -- but does anyone know\n> for sure that\n> NO ONE ELSE in the whole world isn't using this feature? Marc is\n> more in the\n> know than most, granted -- but if he found this use for the\n> feature others\n> may have as well that we don't even know about.\n>\n> But if the feature is not going to remain it needs to be prominently\n> documented as being removed in the release notes.\n\nAnd just remember all those reasons why people find MySQL easier to use than\nPostgres - the upgrade process...\n\nChris\n\n",
"msg_date": "Thu, 8 Aug 2002 10:26:31 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> On Thu, 2002-08-08 at 03:27, Marc G. Fournier wrote:\n>> Actually, I kinda like dbname.username myself ... it means that wne you do\n>> a SELECT of the pg_shadow file, it can be sorted in a useful manner (ie.\n>> grouped by database)\n\nHmm, Marc's got a point there...\n\n> use a view :\n\nYeah, but it's painful to do that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Aug 2002 22:31:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Ssshhhh....don't tell Curt that! ;)\n\nGreg\n\nOn Wed, 2002-08-07 at 21:31, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > On Thu, 2002-08-08 at 03:27, Marc G. Fournier wrote:\n> >> Actually, I kinda like dbname.username myself ... it means that wne you do\n> >> a SELECT of the pg_shadow file, it can be sorted in a useful manner (ie.\n> >> grouped by database)\n> \n> Hmm, Marc's got a point there...\n> \n> > use a view :\n> \n> Yeah, but it's painful to do that.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster",
"msg_date": "07 Aug 2002 21:58:10 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Thu, 2002-08-08 at 07:31, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > On Thu, 2002-08-08 at 03:27, Marc G. Fournier wrote:\n> >> Actually, I kinda like dbname.username myself ... it means that wne you do\n> >> a SELECT of the pg_shadow file, it can be sorted in a useful manner (ie.\n> >> grouped by database)\n> \n> Hmm, Marc's got a point there...\n> \n> > use a view :\n> \n> Yeah, but it's painful to do that.\n\nNot if the view is installed with the system.\n\nSo the plan could be:\n\n1. give the new functionality in a \"light\" version - ie just checking at\nconnect time, full name must be used when creating user.\n\n2. modify pg_user to show usename and usedomain as two separate fields\nfor ease of use (join pg_user and pg_shadow on usesysid if you need to\nsee passwords)\n\n3. in version 7.4 modify CREATE USER and ALTER USER to save the domain\ninfo in pg_shadow.usedomain.\n\n-----------------\nHannu\n\n",
"msg_date": "08 Aug 2002 08:30:37 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> 2. modify pg_user to show it usename usedomain as two separate fields\n> for eas of use (join pg_user and pg_shadow on usesysid if you need to\n> see passwords)\n\nThis is already more mechanism than I wanted to buy into, and less\nforethought than I think we need. For example, is it a good idea if\npg_user shows usernames that cannot be identified with those shown by\nACL lists? If not, how will you modify ACL I/O formats? What about\nthe has_table_privilege functions?\n\nWhat I'm envisioning is an extremely limited facility that just maps\nconnection parameters into an internal username that is of the form\nusername@dbname or dbname.username. Trying to hide that internal\nusername for inside-the-database activities does not strike me as a\ngood plan.\n\nThis may prove to be just a stopgap measure that we'll replace down the\nroad (as indeed the secondary-passwords thing was just a stopgap, IMO).\nLet's not add features that will create extra compatibility problems\nif we abandon the whole approach later.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Aug 2002 01:42:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "\nI would like to address this email. \n\nLamar is mentioning that it is unfair to remove a feature without\nwarning.\n\nLet me give a little history. The secondary password file was created\nat a time when we didn't encrypt with random salt over the wire, and we\nhad people who wanted to share their /etc/passwd file with PostgreSQL.\n\nLater, people wanted to use the secondary password file for just\nusernames, so you could list usernames in the file and limit db access\nby user. This is the current usage for 99% of secondary password users.\nThis capability is better served in 7.3 with the new USER column in\npg_shadow and the ability to specify filenames or groups in that file. \nKeeping the secondary password file to specify a user list while a new\nUSER column exists in 7.3 is just confusing to administrators. Our\npg_hba.conf system is pretty complex, so anything we can do to simplify\nhelps.\n\nNow, on to Marc's case, where he does use the file for usernames _and_\npasswords. However, he is using it only so he can have more than one\nperson with the same username and restrict access based on the password\nin the secondary password file. While this does work, my submitted\npatch makes this much easier and cleaner.\n\nMarc had mentioned that this should be an initdb flag. However, our\nstandard procedure is to put stuff in initdb only when it can't be\nchanged after initdb. While strange, this feature can be\nenabled and disabled after initdb. A quick update of pg_shadow can change\nusernames and you can go in and out of this mode.\n\nSomeone talked about pushing this capability into a separate pg_shadow\ncolumn, and making CREATE/ALTER user and createuser aware of this. \nWhile this can be done, and it sort of becomes user schemas, there aren't\nenough people wanting this to add complexity to those commands. 
A GUC\nflag will meet most people's needs at this point.\n\nSome mentioned using user@dbname, though the idea of sorting made\nseveral recant their votes.\n\nSo, based on the voting, I think dbname.username is an agreed-upon\nfeature addition for 7.3. I will work on a final patch with\ndocumentation and post it to the patches list for more comment.\n\n---------------------------------------------------------------------------\n\nLamar Owen wrote:\n> On Tuesday 06 August 2002 09:24 pm, Marc G. Fournier wrote:\n> > On Tue, 6 Aug 2002, Bruce Momjian wrote:\n> > > It had such limited usefulness ('password' only, only crypted-hashed\n> > > passwords in the file) that it doesn't make much sense to resurect it.\n> \n> > It had limited usefulness to you ... but how many sites out there are\n> > going to break when they try to upgraded without it there? I do agree\n> > that it needs to improved / replaced, but without a suitable replacement\n> > in place, the old should be resurrected until such a suitable one is in\n> > place ...\n> \n> While it appears I'll be outvoted on this issue, and even though I agree that \n> the existing functionality is broken, and even though I am not using the \n> functionality, I am reminded of the overall policy that we have historically \n> had about removing even broken features. Fair Warning must be given. If that \n> policy is going to be changed, then it needs to be applied with equal vigor \n> to all affected cases.\n> \n> Even if Marc is the only one using this feature, we should follow established \n> policy -- that is, after all, what policy is for. To me it seems it is being \n> yanked gratuitously without fair warning. If every question is answered on a \n> case-by-case basis like this, we will descend to anarchy, I'm afraid. 
And, \n> Bruce, I even agree with your reasons -- I just disagree with the method.\n> \n> Is it going to cause a major problem for it to remain one release cycle while \n> someone works on a suitable replacement, with the warning in the release \n> notes that while this feature is there for backwards compatibility that it \n> will be yanked at the next release? And I'm not talking about a minor \n> problem like 'more people will start using it' -- I'm talking 'if it stays we \n> will be in danger of massive data corruption or exposure' -- of course, \n> documenting that there is a degree of exposure of data if not set up in an \n> exacting method, as Marc seems to have done.\n> \n> Some may say Marc has fair warning now -- but does anyone know for sure that \n> NO ONE ELSE in the whole world isn't using this feature? Marc is more in the \n> know than most, granted -- but if he found this use for the feature others \n> may have as well that we don't even know about.\n> \n> But if the feature is not going to remain it needs to be prominently \n> documented as being removed in the release notes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 10 Aug 2002 22:41:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Saturday 10 August 2002 10:41 pm, Bruce Momjian wrote:\n> Let me give a little history. The secondary password file was created\n> at a time when we didn't encrypt with random salt over the wire, and we\n> had people who wanted to share their /etc/passwd file with PostgreSQL.\n[snip]\n\n> So, based on the voting, I think dbname.username is an agreed-upon\n> feature addition for 7.3. I will work on a final patch with\n> documentation and post it to the patches list for more comment.\n\nI can live with this, if the documentation is prominently referred to in the \nchangelog.\n\nAs to the feature itself, I believe Bruce's proposed solution is the best, and \nbelieved that from the beginning -- I just wanted to deal with the 'fair \nwarning' issue alone.\n\nAs to fair warning: watch for the next RPM release. Fair Warning is being \ngiven that upgrades within the RPM context will not be supported in any form \nfor the final release of PostgreSQL 7.3. \n\nI had a 'd'oh' moment (and I don't watch the Simpsons....) when I realized \nthat I could quite easily prevent anyone from even attempting an RPM upgrade, \nunless they take matters into their own grubby little hands with special \nswitches to the rpm command line. \n\nIt will not be yanked this next set, but the following set will be \nunupgradable. Sorry, but the packaged kludge isn't reliable enough for the \nstate of PostgreSQL reliability, and I don't want the RPMset's shortcomings \n(due to the whole RPM mechanism forcing the issue) causing bad blood towards \nPostgreSQL in general. The Debian packages don't have many of the limitations \nand restrictions I have to deal with, and until a good upgrade utility is \navailable I'm just going to have to do this.\n\nI have been so swamped with Fortran work for work that I've not even looked at \nthe python code Hannu so kindly sent me, nor have I played any more with \npg_fsck. 
Groundwave propagation modeling in Fortran has been more \nimportant...\n\nLikewise, my focus as RPM maintainer is changing with this next release. \nSince the distributions, such as Red Hat, are doing a great job keeping up to \ndate, I'm going to not bother much with building RPMs that are, frankly, \nredundant at this point. Three years ago it wasn't this nice. Trond has \ndone a good job on the Red Hat bleeding edge front, Reinhard Max has done \nsimilarly for SuSE. There are PLD, Connectiva, TurboLinux, Caldera, and \nMandrake maintainers as well -- and they seem to be doing fine.\n\nI'm going to now go to the lagging plane -- building newer PostgreSQL for \nolder Red Hat (and maybe others, if I can get enough hard drives available). \nThe source RPM will still be useful to the newer distribution's maintainers \n-- but the requests I see more of on the lists is newer PostgreSQL on older \nlinux. So I'm going to try to rise to that occasion, and take this \nopportunity to apologize for not seeing it sooner.\n\nI welcome comments on this change of focus.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 12 Aug 2002 21:28:41 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Mon, 2002-08-12 at 21:28, Lamar Owen wrote:\n\n> I'm going to now go to the lagging plane -- building newer PostgreSQL for \n> older Red Hat (and maybe others, if I can get enough hard drives available). \n> The source RPM will still be useful to the newer distribution's maintainers \n> -- but the requests I see more of on the lists is newer PostgreSQL on older \n> linux. So I'm going to try to rise to that occassion, and take this \n> opportunity to apologize for not seeing it sooner.\n> \n> I welcome comments on this change of focus.\n\nEven though we run redhat on our systems, as close to stock as we can, I\nhave found that your RPMs build more reliably than Trond's.\n\nMy bad for being unable to diagnose the build problems with the RedHat\nSRPM, my double-bad for letting that failure prevent my reporting the\nissue to him. \n\nBut I for one will miss your lead on the bleeding edge of RPM\ndevelopment.\n\n--\nKarl DeBisschop\n\n\n",
"msg_date": "12 Aug 2002 21:51:11 -0400",
"msg_from": "Karl DeBisschop <kdebisschop@alert.infoplease.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Monday 12 August 2002 09:51 pm, Karl DeBisschop wrote:\n> On Mon, 2002-08-12 at 21:28, Lamar Owen wrote:\n> > I'm going to now go to the lagging plane -- building newer PostgreSQL for\n> > older Red Hat (and maybe others, if I can get enough hard drives\n> > available). The source RPM will still be useful to the newer\n> > distribution's maintainers -- but the requests I see more of on the lists\n> > is newer PostgreSQL on older linux. So I'm going to try to rise to that\n> > occassion, and take this opportunity to apologize for not seeing it\n> > sooner.\n\n> But I for one will miss your lead on the bleeding edge of RPM\n> development.\n\nOh, I've misstated, apparently. I'll continue on the 'bleeding edge' as far \nas versions of PostgreSQL are concerned -- I'm just shifting focus to \nproviding prebuilt binaries on older dists. As I do some other bleeding edge \nwork, I typically will make sure my source RPM's build on the latest and \ngreatest -- they just won't be optimized for it.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 12 Aug 2002 22:49:37 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "> Some mentioned using user@dbname, though the idea of sorting made\n> several recant their votes.\n>\n> So, based on the voting, I think dbname.username is an agreed-upon\n> feature addition for 7.3. I will work on a final patch with\n> documentation and post it to the patches list for more comment.\n\nThe nice thing about using an @ sign, besides being more consistent\nwith kerberos and email, is that it doesn't preclude the use of .'s in\na database name. For simplicity's sake, I'd really like to be able to\ncontinue issuing database names that are identical to the domain that\nthey serve, and worry that relying on a \".\" will make the use of a dot\nin either the username or database name impossible. An @ sign, on the other\nhand, is the ubiquitously agreed upon username/host separator and\nmakes it all that much more consistent for users and administrators.\n\nUsername: john.doe\nDatabase: foo.com\npossible pg_shadow entry #1: john.doe.foo.com\npossible pg_shadow entry #2: john.doe@foo.com\n\nIf people are worried about the sorting, ORDER BY domain, username.\nMy $0.02. -sc\n\n-- \nSean Chittenden\n",
"msg_date": "Tue, 13 Aug 2002 14:25:53 -0700",
"msg_from": "Sean Chittenden <sean@chittenden.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Sean Chittenden wrote:\n> > Some mentioned using user@dbname, though the idea of sorting made\n> > several recant their votes.\n> >\n> > So, based on the voting, I think dbname.username is an agreed-upon\n> > feature addition for 7.3. I will work on a final patch with\n> > documentation and post it to the patches list for more comment.\n> \n> The nice thing about using an @ sign, amongst being more consistent\n> with kerberos and email, is that it doesn't preclude the use of .'s in\n> a database name. For simplicity's sake, I'd really like to be able to\n> continue issuing database names that are identical to the domain that\n> they serve and worry that relying on a \".\" will either make the use of\n> a dot in the username or database impossible. An @ sign, on the other\n> hand, is the ubiquitously agreed upon username/host separator and\n> makes it all that much more consistent for users and administrators.\n> \n> Username: john.doe\n> Database: foo.com\n> possible pg_shadow entry #1: john.doe.foo.com\n> possible pg_shadow entry #2: john.doe@foo.com\n> \n> If people are worried about the sorting, ORDER BY domain, username.\n> My $0.02. -sc\n\nWell, they aren't separate fields so you can't ORDER BY domain. The dot\nwas used so it looks like a schema based on dbname.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 13 Aug 2002 18:22:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "> > > Some mentioned using user@dbname, though the idea of sorting made\n> > > several recant their votes.\n> > >\n> > > So, based on the voting, I think dbname.username is an agreed-upon\n> > > feature addition for 7.3. I will work on a final patch with\n> > > documentation and post it to the patches list for more comment.\n> > \n> > The nice thing about using an @ sign, amongst being more consistent\n> > with kerberos and email, is that it doesn't preclude the use of .'s in\n> > a database name. For simplicity's sake, I'd really like to be able to\n> > continue issuing database names that are identical to the domain that\n> > they serve and worry that relying on a \".\" will either make the use of\n> > a dot in the username or database impossible. An @ sign, on the other\n> > hand, is the ubiquitously agreed upon username/host separator and\n> > makes it all that much more consistent for users and administrators.\n> > \n> > Username: john.doe\n> > Database: foo.com\n> > possible pg_shadow entry #1: john.doe.foo.com\n> > possible pg_shadow entry #2: john.doe@foo.com\n> > \n> > If people are worried about the sorting, ORDER BY domain, username.\n> > My $0.02. -sc\n> \n> Well, they aren't separate fields so you can't ORDER BY domain. The dot\n> was used so it looks like a schema based on dbname.\n\nSorry, I know it's a single field and that there is no split()\nfunction (that I'm aware of), but that seems like such a small and\neasy to fix problem that I personally place a higher value on the more\nstandard nomenclature and use of an @ sign. I understand the value of\n. for schemas and whatnot, but isn't a user going to be in their own\nschema to begin with? As for the order by, I've got a list of users\nper \"account\" (sales account), so doing the order by is on two columns\nand the pg_shadow table is generated periodically from our inhouse\ntables. -sc\n\n-- \nSean Chittenden\n",
"msg_date": "Tue, 13 Aug 2002 17:09:34 -0700",
"msg_from": "Sean Chittenden <sean@chittenden.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Sean Chittenden wrote:\n> > Well, they aren't separate fields so you can't ORDER BY domain. The dot\n> > was used so it looks like a schema based on dbname.\n> \n> Sorry, I know it's a single field and that there is no split()\n> function (that I'm aware of), but that seems like such a small and\n> easy to fix problem that I personally place a higher value on the more\n> standard nomeclature and use of an @ sign. I understand the value of\n> . for schemas and whatnot, but isn't a user going to be in their own\n> schema to begin with? As for the order by, I've got a list of users\n> per \"account\" (sales account), so doing the order by is on two columns\n> and the pg_shadow table is generated periodically from our inhouse\n> tables. -sc\n\nI have no personal preference between period and @ or whatever. See if\nyou can get some other votes for @ because most left @ when the ORDER BY\nidea came up from Marc.\n\nAs for it being a special character, it really isn't because the code\nprepends the database name and a period. It doesn't look to see if\nthere is a period in the name already or anything.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 13 Aug 2002 21:00:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "OK, here is the patch to implement db_user_namespace. It includes\ndocumentation.\n\nI had to add to initdb to create a file /data/PG_INSTALLER and have the\npostmaster read that on startup to determine the installing user.\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> \n> I would like to address this email. \n> \n> Lamar is mentioning that it is unfair to remove a feature without\n> warning.\n> \n> Let me give a little history. The secondary password file was created\n> at a time when we didn't encrypt with random salt over the wire, and we\n> had people who wanted to share their /etc/passwd file with PostgreSQL.\n> \n> Later, people wanted to use the secondary password file for just\n> usernames, so you could list usernames in the file and limit db access\n> by user. This is the current usage for 99% of secondary password users.\n> This capability is better served in 7.3 with the new USER column in\n> pg_shadow and the ability to specify filenames or groups in that file. \n> Keeping the secondary password file to specify a user list while a new\n> USER column exists in 7.3 is just confusing to administrators. Our\n> pg_hba.conf system is pretty complex, so anything we can do to simplify\n> helps.\n> \n> Now, on to Marc's case, where he does use the file for usernames _and_\n> passwords. However, he is using it only so he can have more than one\n> person with the same username and restrict access based on the password\n> in the secondary password file. While this does work, my submitted\n> patch makes this much easier and cleaner.\n> \n> Marc had mentioned that this should be an initdb flag. However, our\n> standard procedure is to put stuff in initdb only when it can't be\n> changed after initdb. While strange, this feature can be\n> enabled-disabled after initdb. 
A quick update of pg_shadow can change\n> usernames and you can go in and out of this mode.\n> \n> Someone talked about pushing this capability into a separate pg_shadow\n> column, and making CREATE/ALTER user and createuser aware of this. \n> While this can be done, and it sort of becomes user schemas, there isn't\n> enough people wanting this to add complexity to those commands. A GUC\n> flag will meet most peoples needs at this point.\n> \n> Some mentioned using user@dbname, though the idea of sorting made\n> several recant their votes.\n> \n> So, based on the voting, I think dbname.username is an agreed-upon\n> feature addition for 7.3. I will work on a final patch with\n> documentation and post it to the patches list for more comment.\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: doc/src/sgml/runtime.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/runtime.sgml,v\nretrieving revision 1.124\ndiff -c -r1.124 runtime.sgml\n*** doc/src/sgml/runtime.sgml\t12 Aug 2002 00:36:11 -0000\t1.124\n--- doc/src/sgml/runtime.sgml\t14 Aug 2002 01:30:15 -0000\n***************\n*** 1191,1196 ****\n--- 1191,1208 ----\n </varlistentry>\n \n <varlistentry>\n+ <term><varname>DB_USER_NAMESPACE</varname> (<type>boolean</type>)</term>\n+ <listitem>\n+ <para>\n+ Prepends the database name and a period to the username when \n+ connecting to the database. This allows per-database users. 
\n+ The user who ran <command>initdb</> is excluded from this\n+ handling.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n+ <varlistentry>\n <indexterm>\n <primary>deadlock</primary>\n <secondary>timeout</secondary>\nIndex: src/backend/libpq/auth.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/libpq/auth.c,v\nretrieving revision 1.82\ndiff -c -r1.82 auth.c\n*** src/backend/libpq/auth.c\t20 Jun 2002 20:29:28 -0000\t1.82\n--- src/backend/libpq/auth.c\t14 Aug 2002 01:30:15 -0000\n***************\n*** 117,123 ****\n \t\t\t version, PG_KRB4_VERSION);\n \t\treturn STATUS_ERROR;\n \t}\n! \tif (strncmp(port->user, auth_data.pname, SM_USER) != 0)\n \t{\n \t\telog(LOG, \"pg_krb4_recvauth: name \\\"%s\\\" != \\\"%s\\\"\",\n \t\t\t port->user, auth_data.pname);\n--- 117,123 ----\n \t\t\t version, PG_KRB4_VERSION);\n \t\treturn STATUS_ERROR;\n \t}\n! \tif (strncmp(port->user, auth_data.pname, SM_DATABASE_USER) != 0)\n \t{\n \t\telog(LOG, \"pg_krb4_recvauth: name \\\"%s\\\" != \\\"%s\\\"\",\n \t\t\t port->user, auth_data.pname);\n***************\n*** 290,296 ****\n \t}\n \n \tkusername = pg_an_to_ln(kusername);\n! \tif (strncmp(port->user, kusername, SM_USER))\n \t{\n \t\telog(LOG, \"pg_krb5_recvauth: user name \\\"%s\\\" != krb5 name \\\"%s\\\"\",\n \t\t\t port->user, kusername);\n--- 290,296 ----\n \t}\n \n \tkusername = pg_an_to_ln(kusername);\n! 
\tif (strncmp(port->user, kusername, SM_DATABASE_USER))\n \t{\n \t\telog(LOG, \"pg_krb5_recvauth: user name \\\"%s\\\" != krb5 name \\\"%s\\\"\",\n \t\t\t port->user, kusername);\nIndex: src/backend/postmaster/postmaster.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/postmaster/postmaster.c,v\nretrieving revision 1.283\ndiff -c -r1.283 postmaster.c\n*** src/backend/postmaster/postmaster.c\t10 Aug 2002 20:29:18 -0000\t1.283\n--- src/backend/postmaster/postmaster.c\t14 Aug 2002 01:30:17 -0000\n***************\n*** 116,122 ****\n sigset_t\tUnBlockSig,\n \t\t\tBlockSig,\n \t\t\tAuthBlockSig;\n- \n #else\n int\t\t\tUnBlockSig,\n \t\t\tBlockSig,\n--- 116,121 ----\n***************\n*** 191,196 ****\n--- 190,197 ----\n bool\t\tHostnameLookup;\t\t/* for ps display */\n bool\t\tShowPortNumber;\n bool\t\tLog_connections = false;\n+ bool\t\tDb_user_namespace = false;\n+ \n \n /* Startup/shutdown state */\n static pid_t StartupPID = 0,\n***************\n*** 208,213 ****\n--- 209,216 ----\n \n bool ClientAuthInProgress = false;\t/* T during new-client authentication */\n \n+ static char InstallUser[SM_USER+1];\n+ \n /*\n * State for assigning random salts and cancel keys.\n * Also, the global MyCancelKey passes the cancel key assigned to a given\n***************\n*** 258,263 ****\n--- 261,267 ----\n static void SignalChildren(int signal);\n static int\tCountChildren(void);\n static bool CreateOptsFile(int argc, char *argv[]);\n+ static bool GetInstallUser(void);\n static pid_t SSDataBase(int xlop);\n void\n postmaster_error(const char *fmt,...)\n***************\n*** 690,695 ****\n--- 694,702 ----\n \tif (!CreateOptsFile(argc, argv))\n \t\tExitPostmaster(1);\n \n+ \tif (!GetInstallUser())\n+ \t\tExitPostmaster(1);\n+ \n \t/*\n \t * Set up signal handlers for the postmaster process.\n \t *\n***************\n*** 1161,1166 ****\n--- 1168,1182 ----\n \tif (port->user[0] == '\\0')\n \t\telog(FATAL, \"no PostgreSQL 
user name specified in startup packet\");\n \n+ \t/* Prefix database name for per-db user namespace */\n+ \tif (Db_user_namespace && strcmp(port->user, InstallUser))\n+ \t{\n+ \t\tchar hold_user[SM_DATABASE_USER];\n+ \t\tsnprintf(hold_user, SM_DATABASE_USER, \"%s.%s\", port->database,\n+ \t\t\t\t port->user);\n+ \t\tstrcpy(port->user, hold_user);\n+ \t}\n+ \n \t/*\n \t * If we're going to reject the connection due to database state, say\n \t * so now instead of wasting cycles on an authentication exchange.\n***************\n*** 2587,2597 ****\n \tif (FindExec(fullprogname, argv[0], \"postmaster\") < 0)\n \t\treturn false;\n \n! \tfilename = palloc(strlen(DataDir) + 20);\n \tsprintf(filename, \"%s/postmaster.opts\", DataDir);\n \n! \tfp = fopen(filename, \"w\");\n! \tif (fp == NULL)\n \t{\n \t\tpostmaster_error(\"cannot create file %s: %s\",\n \t\t\t\t\t\t filename, strerror(errno));\n--- 2603,2612 ----\n \tif (FindExec(fullprogname, argv[0], \"postmaster\") < 0)\n \t\treturn false;\n \n! \tfilename = palloc(strlen(DataDir) + 17);\n \tsprintf(filename, \"%s/postmaster.opts\", DataDir);\n \n! 
\tif ((fp = fopen(filename, \"w\")) == NULL)\n \t{\n \t\tpostmaster_error(\"cannot create file %s: %s\",\n \t\t\t\t\t\t filename, strerror(errno));\n***************\n*** 2614,2619 ****\n--- 2629,2669 ----\n \treturn true;\n }\n \n+ /*\n+ * Load install user so db_user_namespace can skip it.\n+ */\n+ static bool\n+ GetInstallUser(void)\n+ {\n+ \tchar\t *filename;\n+ \tFILE\t *fp;\n+ \n+ \tfilename = palloc(strlen(DataDir) + 14);\n+ \tsprintf(filename, \"%s/PG_INSTALLER\", DataDir);\n+ \n+ \tif ((fp = fopen(filename, \"r\")) == NULL)\n+ \t{\n+ \t\tpostmaster_error(\"cannot open file %s: %s\",\n+ \t\t\t\t\t\t filename, strerror(errno));\n+ \t\treturn false;\n+ \t}\n+ \n+ \tif (fgets(InstallUser, SM_USER+1, fp) == NULL)\n+ \t{\n+ \t\tpostmaster_error(\"cannot read file %s: %s\",\n+ \t\t\t\t\t\t filename, strerror(errno));\n+ \t\treturn false;\n+ \t}\n+ \n+ \t/* Trim off trailing newline */\n+ \tif (strchr(InstallUser, '\\n') != NULL)\n+ \t\t*strchr(InstallUser, '\\n') = '\\0';\n+ \t\n+ \tfclose(fp);\n+ \treturn true;\n+ }\n+ \n+ \t\n /*\n * This should be used only for reporting \"interactive\" errors (ie, errors\n * during startup. 
Once the postmaster is launched, use elog.\nIndex: src/backend/utils/misc/guc.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/utils/misc/guc.c,v\nretrieving revision 1.79\ndiff -c -r1.79 guc.c\n*** src/backend/utils/misc/guc.c\t12 Aug 2002 00:36:11 -0000\t1.79\n--- src/backend/utils/misc/guc.c\t14 Aug 2002 01:30:20 -0000\n***************\n*** 482,487 ****\n--- 482,491 ----\n \t\t{ \"transform_null_equals\", PGC_USERSET }, &Transform_null_equals,\n \t\tfalse, NULL, NULL\n \t},\n+ \t{\n+ \t\t{ \"db_user_namespace\", PGC_SIGHUP }, &Db_user_namespace,\n+ \t\tfalse, NULL, NULL\n+ \t},\n \n \t{\n \t\t{ NULL, 0 }, NULL, false, NULL, NULL\nIndex: src/backend/utils/misc/postgresql.conf.sample\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/utils/misc/postgresql.conf.sample,v\nretrieving revision 1.44\ndiff -c -r1.44 postgresql.conf.sample\n*** src/backend/utils/misc/postgresql.conf.sample\t12 Aug 2002 00:36:12 -0000\t1.44\n--- src/backend/utils/misc/postgresql.conf.sample\t14 Aug 2002 01:30:20 -0000\n***************\n*** 113,119 ****\n #\n #\tMessage display\n #\n- \n #server_min_messages = notice\t# Values, in order of decreasing detail:\n \t\t\t\t# debug5, debug4, debug3, debug2, debug1,\n \t\t\t\t# info, notice, warning, error, log, fatal,\n--- 113,118 ----\n***************\n*** 201,203 ****\n--- 200,203 ----\n #sql_inheritance = true\n #transform_null_equals = false\n #statement_timeout = 0\t\t\t\t# 0 is disabled\n+ #db_user_namespace = false\nIndex: src/bin/initdb/initdb.sh\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/bin/initdb/initdb.sh,v\nretrieving revision 1.165\ndiff -c -r1.165 initdb.sh\n*** src/bin/initdb/initdb.sh\t8 Aug 2002 19:39:05 -0000\t1.165\n--- src/bin/initdb/initdb.sh\t14 Aug 2002 01:30:20 -0000\n***************\n*** 603,608 ****\n--- 603,613 ----\n # Top 
level PG_VERSION is checked by bootstrapper, so make it first\n echo \"$short_version\" > \"$PGDATA/PG_VERSION\" || exit_nicely\n \n+ # Top level PG_INSTALLER is used by db_user_namespace to prevent username \n+ # mapping just for the install user.\n+ echo \"$POSTGRES_SUPERUSERNAME\" > \"$PGDATA/PG_INSTALLER\" || exit_nicely\n+ \n+ \n cat \"$POSTGRES_BKI\" \\\n | sed -e \"s/POSTGRES/$POSTGRES_SUPERUSERNAME/g\" \\\n -e \"s/ENCODING/$MULTIBYTEID/g\" \\\nIndex: src/include/libpq/libpq-be.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/libpq/libpq-be.h,v\nretrieving revision 1.32\ndiff -c -r1.32 libpq-be.h\n*** src/include/libpq/libpq-be.h\t20 Jun 2002 20:29:49 -0000\t1.32\n--- src/include/libpq/libpq-be.h\t14 Aug 2002 01:30:20 -0000\n***************\n*** 59,65 ****\n \n \tProtocolVersion proto;\n \tchar\t\tdatabase[SM_DATABASE + 1];\n! \tchar\t\tuser[SM_USER + 1];\n \tchar\t\toptions[SM_OPTIONS + 1];\n \tchar\t\ttty[SM_TTY + 1];\n \tchar\t\tauth_arg[MAX_AUTH_ARG];\n--- 59,65 ----\n \n \tProtocolVersion proto;\n \tchar\t\tdatabase[SM_DATABASE + 1];\n! \tchar\t\tuser[SM_DATABASE_USER + 1];\n \tchar\t\toptions[SM_OPTIONS + 1];\n \tchar\t\ttty[SM_TTY + 1];\n \tchar\t\tauth_arg[MAX_AUTH_ARG];\n***************\n*** 72,78 ****\n \tSSL\t\t *ssl;\n \tX509\t *peer;\n \tchar\t\tpeer_dn[128 + 1];\n! \tchar\t\tpeer_cn[SM_USER + 1];\n \tunsigned long count;\n #endif\n } Port;\n--- 72,78 ----\n \tSSL\t\t *ssl;\n \tX509\t *peer;\n \tchar\t\tpeer_dn[128 + 1];\n! 
\tchar\t\tpeer_cn[SM_DATABASE_USER + 1];\n \tunsigned long count;\n #endif\n } Port;\nIndex: src/include/libpq/pqcomm.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/libpq/pqcomm.h,v\nretrieving revision 1.65\ndiff -c -r1.65 pqcomm.h\n*** src/include/libpq/pqcomm.h\t12 Aug 2002 14:35:26 -0000\t1.65\n--- src/include/libpq/pqcomm.h\t14 Aug 2002 01:30:20 -0000\n***************\n*** 114,119 ****\n--- 114,121 ----\n #define SM_DATABASE\t\t64\n /* SM_USER should be the same size as the others. bjm 2002-06-02 */\n #define SM_USER\t\t\t32\n+ /* We prepend database name if db_user_namespace true. */\n+ #define SM_DATABASE_USER (SM_DATABASE+SM_USER)\n #define SM_OPTIONS\t\t64\n #define SM_UNUSED\t\t64\n #define SM_TTY\t\t\t64\n***************\n*** 124,135 ****\n--- 126,139 ----\n {\n \tProtocolVersion protoVersion;\t\t/* Protocol version */\n \tchar\t\tdatabase[SM_DATABASE];\t/* Database name */\n+ \t\t\t\t/* Db_user_namespace prepends dbname */\n \tchar\t\tuser[SM_USER];\t/* User name */\n \tchar\t\toptions[SM_OPTIONS];\t/* Optional additional args */\n \tchar\t\tunused[SM_UNUSED];\t\t/* Unused */\n \tchar\t\ttty[SM_TTY];\t/* Tty for debug output */\n } StartupPacket;\n \n+ extern bool Db_user_namespace;\n \n /* These are the authentication requests sent by the backend. */",
"msg_date": "Tue, 13 Aug 2002 21:36:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
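The mapping in Bruce's patch above comes down to a few lines of C in the postmaster: if `db_user_namespace` is on and the connecting user is not the install user read from `$PGDATA/PG_INSTALLER`, the startup-packet username is rewritten to `dbname.username`, truncated the way `snprintf` with an `SM_DATABASE_USER`-sized buffer truncates. A rough Python model of that behavior (function and variable names are mine, not from the patch) may help readers follow the thread:

```python
# Sketch of the db_user_namespace mapping from the patch above.
# Buffer sizes come from src/include/libpq/pqcomm.h in the diff.
SM_DATABASE = 64
SM_USER = 32
SM_DATABASE_USER = SM_DATABASE + SM_USER  # combined "dbname.user" buffer

def map_user(database: str, user: str, install_user: str,
             db_user_namespace: bool = True) -> str:
    """Return the effective pg_shadow username for a connection."""
    if not db_user_namespace or user == install_user:
        # Feature off, or the user who ran initdb: name passes through.
        return user
    # snprintf(hold_user, SM_DATABASE_USER, "%s.%s", database, user)
    # writes at most SM_DATABASE_USER - 1 characters, as modeled here.
    return ("%s.%s" % (database, user))[:SM_DATABASE_USER - 1]
```

With the flag on, a user `john` connecting to database `foo` authenticates against the pg_shadow entry `foo.john`, i.e. exactly the `dbname.username` form being voted on in this thread.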
{
"msg_contents": "On Wed, 2002-08-14 at 06:00, Bruce Momjian wrote:\n> Sean Chittenden wrote:\n> > > Well, they aren't separate fields so you can't ORDER BY domain. The dot\n> > > was used so it looks like a schema based on dbname.\n\nIMHO it should look like a user in domain ;)\n \n> > Sorry, I know it's a single field and that there is no split()\n> > function (that I'm aware of), but that seems like such a small and\n> > easy to fix problem that I personally place a higher value on the more\n> > standard nomeclature and use of an @ sign. I understand the value of\n> > . for schemas and whatnot, but isn't a user going to be in their own\n> > schema to begin with? As for the order by, I've got a list of users\n> > per \"account\" (sales account), so doing the order by is on two columns\n> > and the pg_shadow table is generated periodically from our inhouse\n> > tables. -sc\n> \n> I have no personal preference between period and @ or whatever. See if\n> you can get some other votes for @ because most left @ when the ORDER BY\n> idea came up from Marc.\n\nI still like @ . And I posted code that could be put in the pg_user view\nto split out domain you could ORDER BY.\n \n> As for it being a special character, it really isn't because the code\n> prepends the database name and a period. It doesn't look to see if\n> there is a period in the already or anything.\n-----------\nHannu\n\n",
"msg_date": "14 Aug 2002 07:42:49 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I have no personal preference between period and @ or whatever. See if\n> you can get some other votes for @ because most left @ when the ORDER BY\n> idea came up from Marc.\n\nFWIW, I still lean to username@database, so I think we're roughly at a\ntie. It would be good to get more votes ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Aug 2002 00:11:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I have no personal preference between period and @ or whatever. See if\n> > you can get some other votes for @ because most left @ when the ORDER BY\n> > idea came up from Marc.\n>\n> FWIW, I still lean to username@database, so I think we're roughly at a\n> tie. It would be good to get more votes ...\n\nSorry guys, I'm staying out of this one as my vote would be entirely\narbitrary...\n\nChris\n\n",
"msg_date": "Wed, 14 Aug 2002 12:15:26 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n>>I have no personal preference between period and @ or whatever. See if\n>>you can get some other votes for @ because most left @ when the ORDER BY\n>>idea came up from Marc.\n> \n> \n> FWIW, I still lean to username@database, so I think we're roughly at a\n> tie. It would be good to get more votes ...\n> \n\nI'm in favor of username@database too.\n\nJoe\n\n\n\n",
"msg_date": "Tue, 13 Aug 2002 21:45:37 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "\nOK, the vote is now shifting from '.' to '@'. Is that how we want to\ngo? I like the pg_user enhancement. Marc, comments? This was your baby.\n\n---------------------------------------------------------------------------\n\nHannu Krosing wrote:\n> On Wed, 2002-08-14 at 06:00, Bruce Momjian wrote:\n> > Sean Chittenden wrote:\n> > > > Well, they aren't separate fields so you can't ORDER BY domain. The dot\n> > > > was used so it looks like a schema based on dbname.\n> \n> IMHO it should look like an user in domain ;)\n> \n> > > Sorry, I know it's a single field and that there is no split()\n> > > function (that I'm aware of), but that seems like such a small and\n> > > easy to fix problem that I personally place a higher value on the more\n> > > standard nomeclature and use of an @ sign. I understand the value of\n> > > . for schemas and whatnot, but isn't a user going to be in their own\n> > > schema to begin with? As for the order by, I've got a list of users\n> > > per \"account\" (sales account), so doing the order by is on two columns\n> > > and the pg_shadow table is generated periodically from our inhouse\n> > > tables. -sc\n> > \n> > I have no personal preference between period and @ or whatever. See if\n> > you can get some other votes for @ because most left @ when the ORDER BY\n> > idea came up from Marc.\n> \n> I still like @ . And I posted code that could be put in the pg_user view\n> to split out domain you could ORDER BY.\n> \n> > As for it being a special character, it really isn't because the code\n> > prepends the database name and a period. It doesn't look to see if\n> > there is a period in the already or anything.\n> -----------\n> Hannu\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 00:45:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, 2002-08-14 at 12:45, Sean Chittenden wrote:\n> > > > > Well, they aren't separate fields so you can't ORDER BY domain. The dot\n> > > > > was used so it looks like a schema based on dbname.\n> > \n> > IMHO it should look like an user in domain ;)\n> \n> Agreed, but there is something to be said for doing a sort of users\n> per domain. This wouldn't be an issue, I don't think, if there was a\n> split_before() and split_after() like functions.\n> \n> # SELECT split_before('user@domain.com','@'), split_after('user@domain.com', '@');\n> ?column? | ?column?\n> ----------+------------\n> user | domain.com\n> \n> What would you guys say to submissions for a patch that would add the\n> function listed above? \n\ncreate function split_before(text,text) returns text as '\n select case when (strpos($1,$2) > 0)\n then substr($1,1,strpos($1,$2)-1)\n else $1\n end as usename\n' language 'SQL';\n\ncreate function split_after(text,text) returns text as '\n select case when (strpos($1,$2) > 0)\n then substr($1,strpos($1,$2)+1)\n else ''''\n end as usedomain\n' language 'SQL' ;\n\nhannu=# select split_before('me@somewhere','@'),\nsplit_after('me@somewhere','@');\n split_before | split_after \n--------------+-------------\n me | somewhere\n(1 row)\n\n-------------\nHannu\n\n",
"msg_date": "14 Aug 2002 11:11:59 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
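Hannu's `split_before`/`split_after` definitions above are plain SQL; for readers following along outside psql, here is an equivalent sketch in Python (same names and the same fall-through behavior as the CASE branches; note Hannu's `strpos($1,$2)+1` assumes a single-character separator, while this version uses the separator's length):

```python
def split_before(s: str, sep: str) -> str:
    """Everything before the first separator; the whole string if absent."""
    pos = s.find(sep)
    return s if pos < 0 else s[:pos]

def split_after(s: str, sep: str) -> str:
    """Everything after the first separator; empty string if absent."""
    pos = s.find(sep)
    return "" if pos < 0 else s[pos + len(sep):]
```

This matches the example in the message: `split_before('me@somewhere', '@')` yields `me` and `split_after` yields `somewhere`; when the separator is absent, `split_before` returns the input unchanged and `split_after` returns an empty string, just as the SQL CASE expressions do.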
{
"msg_contents": "> > > > Well, they aren't separate fields so you can't ORDER BY domain. The dot\n> > > > was used so it looks like a schema based on dbname.\n> \n> IMHO it should look like an user in domain ;)\n\nAgreed, but there is something to be said for doing a sort of users\nper domain. This wouldn't be an issue, I don't think, if there was a\nsplit_before() and split_after() like functions.\n\n# SELECT split_before('user@domain.com','@'), split_after('user@domain.com', '@');\n ?column? | ?column?\n----------+------------\n user | domain.com\n\nWhat would you guys say to submissions for a patch that would add the\nfunction listed above? Maybe just a function called get_user(text)\nand get_domain(text)? ::shrug:: Just some thoughts since there is\nvalidity to being able to parse/operate on this data efficiently. If\nthose functions existed, then I think everyone would be able to have\ntheir pie as they want it. -sc\n\n-- \nSean Chittenden\n",
"msg_date": "Wed, 14 Aug 2002 00:45:47 -0700",
"msg_from": "Sean Chittenden <sean@chittenden.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "> > > > > > Well, they aren't separate fields so you can't ORDER BY domain. The dot\n> > > > > > was used so it looks like a schema based on dbname.\n> > > \n> > > IMHO it should look like an user in domain ;)\n> > \n> > Agreed, but there is something to be said for doing a sort of users\n> > per domain. This wouldn't be an issue, I don't think, if there was a\n> > split_before() and split_after() like functions.\n> > \n> > # SELECT split_before('user@domain.com','@'), split_after('user@domain.com', '@');\n> > ?column? | ?column?\n> > ----------+------------\n> > user | domain.com\n> > \n> > What would you guys say to submissions for a patch that would add the\n> > function listed above? \n> \n> create function split_before(text,text) returns text as '\n> select case when (strpos($1,$2) > 0)\n> then substr($1,1,strpos($1,$2)-1)\n> else $1\n> end as usename\n> ' language 'SQL';\n> \n> create function split_after(text,text) returns text as '\n> select case when (strpos($1,$2) > 0)\n> then substr($1,strpos($1,$2)+1)\n> else ''''\n> end as usedomain\n> ' language 'SQL' ;\n> \n> hannu=# select split_before('me@somewhere','@'),\n> split_after('me@somewhere','@');\n> split_before | split_after \n> --------------+-------------\n> me | somewhere\n> (1 row)\n\nOh that was handy and fast! I didn't know of strpos(). Cool, who\nsays 'ya can't learn something every day? :~) Now with an alias or\nsubselect, it should be very easy to order users in a domain in any\nway that SQL allows. :~) Thanks Hannu. -sc\n\n-- \nSean Chittenden\n",
"msg_date": "Wed, 14 Aug 2002 02:03:42 -0700",
"msg_from": "Sean Chittenden <sean@chittenden.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "I'm going to vote for either @ or %.\n\nOn Wed, 2002-08-14 at 00:11, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I have no personal preference between period and @ or whatever. See if\n> > you can get some other votes for @ because most left @ when the ORDER BY\n> > idea came up from Marc.\n> \n> FWIW, I still lean to username@database, so I think we're roughly at a\n> tie. It would be good to get more votes ...\n\n\n",
"msg_date": "14 Aug 2002 08:08:46 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": " \n> OK, the vote is not shifting from '.' to '@'. Is that how we want to\n> go? I like the pg_user enhancement. Marc, comments? This was your\n> baby. \n> \n\nWould it be hard to setup an internal PG variable for the actual character \nto be used?\n",
"msg_date": "Wed, 14 Aug 2002 13:46:59 +0000 (UTC)",
"msg_from": "ngpg@grymmjack.com",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Sean Chittenden wrote:\n> Agreed, but there is something to be said for doing a sort of users\n> per domain. This wouldn't be an issue, I don't think, if there was a\n> split_before() and split_after() like functions.\n> \n> # SELECT split_before('user@domain.com','@'), split_after('user@domain.com', '@');\n> ?column? | ?column?\n> ----------+------------\n> user | domain.com\n> \n> What would you guys say to submissions for a patch that would add the\n> function listed above? Maybe just a function called get_user(text)\n> and get_domain(text)? ::shrug:: Just some thoughts since there is\n> validity to being able to parse/operate on this data efficiently. If\n> those functions existed, then I think everyone would be able to have\n> their pie as they want it. -sc\n> \n\nI already have a function in contrib/dblink, currently called \ndblink_strtok(), which I was going to turn into a builtin function per \nrecent discussion (renamed of course). It would work for this but is \nmore general:\n\ndblink_strtok(text inputstring, text delimiter, int posn) RETURNS text\n\nInputs\n inputstring\n any string you want to parse a token out of;\n e.g. 'f=1&g=3&h=4'\n delimiter\n a single character to use as the delimiter;\n e.g. '&' or '='\n posn\n the position of the token of interest, 0 based;\n e.g. 1\n\nShould it be called splitstr() (similar to substr())?\n\nJoe\n\n",
"msg_date": "Wed, 14 Aug 2002 07:08:12 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, Aug 14, 2002 at 12:11:10AM -0400, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I have no personal preference between period and @ or whatever. See if\n> > you can get some other votes for @ because most left @ when the ORDER BY\n> > idea came up from Marc.\n> \n> FWIW, I still lean to username@database, so I think we're roughly at a\n> tie. It would be good to get more votes ...\n\nMy non-coding vote goes to user@database, too.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Wed, 14 Aug 2002 10:23:24 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, 2002-08-14 at 16:08, Joe Conway wrote:\n> I already have a function in contrib/dblink, currently called \n> dblink_strtok(), which I was going to turn into a builtin function per \n> recent discussion (renamed of course). It would work for this but is \n> more general:\n> \n> dblink_strtok(text inputstring, text delimiter, int posn) RETURNS text\n> \n> Inputs\n> inputstring\n> any string you want to parse a token out of;\n> e.g. 'f=1&g=3&h=4'\n> delimiter\n> a single character to use as the delimiter;\n> e.g. '&' or '='\n> posn\n> the position of the token of interest, 0 based;\n> e.g. 1\n> \n> Should it be called splitstr() (similar to substr())?\n\nWhat about functions\n\n1. split(text,text,int) returns text\n\n2. split(text,text) returns text[]\n\nand why not\n\n3. split(text,text,text) returns text\n\nwhich returns text from $1 delimited by $2 and $3\n\n-------------\nHannu\n\n",
"msg_date": "14 Aug 2002 17:27:37 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Tuesday 13 August 2002 07:21 pm, Sander Steffann wrote:\n> I think choosing . as the delimiter is a dangerous choice... People have\n> not expected it to be special until now, so maybe another character can be\n> chosen? I would suggest a colon if possible, so you would get dbname:user.\n> I don't expect that a lot of people use a colon as the dbname or username,\n> but I could be very wrong here.\n\nThe choices have been enumerated as . and @. I personally vote for either:\nuser@db\nOR\ndb!user\n(sorry, having been a UUCP node admin shows at times...) To my eyes the bang \nnotation is more of a 'divider' than the @. Unless there is some _really_ \ngood reason to not use !, that is. :-)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 14 Aug 2002 12:12:23 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "ngpg@grymmjack.com dijo: \n\n> > OK, the vote is not shifting from '.' to '@'. Is that how we want to\n> > go? I like the pg_user enhancement. Marc, comments? This was your\n> > baby. \n> \n> Would it be hard to setup an internal PG variable for the actual character \n> to be used?\n\nThat'd be good, because almost any character people wants to use as\ndelimiter is actually valid in database and user names. So giving\npeople a choice is a good thing.\n\nFor example someone may want to use email address as usernames, and that\nmesses up the splitting on @.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Cuando miro a alguien, mas me atrae como cambia que quien es\" (J. Binoche)\n\n",
"msg_date": "Wed, 14 Aug 2002 12:25:08 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "\nWe are clearly going for user@db now.\n\n---------------------------------------------------------------------------\n\nLamar Owen wrote:\n> On Tuesday 13 August 2002 07:21 pm, Sander Steffann wrote:\n> > I think choosing . as the delimiter is a dangerous choice... People have\n> > not expected it to be special until now, so maybe another character can be\n> > chosen? I would suggest a colon if possible, so you would get dbname:user.\n> > I don't expect that a lot of people use a colon as the dbname or username,\n> > but I could be very wrong here.\n> \n> The choices have been enumerated as . and @. I personally vote for either:\n> user@db\n> OR\n> db!user\n> (sorry, having been a UUCP node admin shows at times...) To my eyes the bang \n> notation is more of a 'divider' than the @. Unless there is some _really_ \n> good reason to not use !, that is. :-)\n> -- \n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 13:04:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> I had to add to initdb to create a file /data/PG_INSTALLER and have the\n> postmaster read that on startup to determine the installing user.\n\nI object to treating one user specially. There should be a general\nmechanism, such as a separate column in pg_shadow.\n\nI also object to fixing the name during initdb. We just got rid of that\nrequirement.\n\nIf it mattered, I would also object to the choice of the file name.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 14 Aug 2002 19:16:46 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "\nOK, what I didn't want to do we to over-complexify something that is for\nonly a few users. In a way that user has to be special for this case\nbecause of the requirement that at least one person be able to connect\nwhen you flip that flag.\n\nAlso, I don't want to add a column to pg_shadow. Seems like overkill.\n\nPlease suggest another name for the file.\n\nBasically, I am not going to stop working on something when one person\nobjects or this will never get done, and I think we have had enough\nfeedback on this that people do want this done.\n\n---------------------------------------------------------------------------\n\nPeter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I had to add to initdb to create a file /data/PG_INSTALLER and have the\n> > postmaster read that on startup to determine the installing user.\n> \n> I object to treating one user specially. There should be a general\n> mechanism, such as a separate column in pg_shadow.\n> \n> I also object to fixing the name during initdb. We just got rid of that\n> requirement.\n> \n> If it mattered, I would also object to the choice of the file name.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 13:19:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "\nThis email brings up another issue I have seen recently. The use of the\nword \"object\", \"strongly object\", or \"*object*\" with stars is a very\nconfrontational way to express things. It does not foster discussion;\nit really puts your heal in the ground and presents a very unswerving\nattitude when it really isn't necessary nor valuable.\n\nIt is not just this email, but several people on this list who are doing\nthat now, and it is making for more negative discussions. Thomas has\nmentioned it too.\n\nAs I have said before, everyone gets one vote. It doesn't matter how\nhard to \"object\" to something. It is the force of your argument that\naffects the votes, not how strongly you express your dislike of\nsomething.\n\nOne effect of this environment is that you end up coding to avoid\n\"objections\" rather than coding to meet users needs. Certainly the\npeople who express objections are providing valuable feedback to help\nimprove patches/features, but it should be done in a way that doesn't\ngive the impression they are in a courtroom and when you post something\nincorrect, some lawyer is going to jump up and yell \"object\".\n\n---------------------------------------------------------------------------\n\nPeter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I had to add to initdb to create a file /data/PG_INSTALLER and have the\n> > postmaster read that on startup to determine the installing user.\n> \n> I object to treating one user specially. There should be a general\n> mechanism, such as a separate column in pg_shadow.\n> \n> I also object to fixing the name during initdb. 
We just got rid of that\n> requirement.\n> \n> If it mattered, I would also object to the choice of the file name.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 13:35:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> In a way that user has to be special for this case\n> because of the requirement that at least one person be able to connect\n> when you flip that flag.\n\nWhy does anyone need to be special? The behavior should be to try the\ngiven user name, and if that's not found then to try user@db. I see no\nneed to special-case any user.\n\n> Basically, I am not going to stop working on something when one person\n> objects or this will never get done,\n\nHe didn't say to stop working on it. He said to fix the misdesigned\nparts. And I quite agree that those parts are misdesigned.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Aug 2002 13:48:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "I believe the dictionary meaning of 'object' in this context would be 'a\ncause for concern or attention'. Each of Peters uses of the word is\nhighly appropriate, as he was concerned and I'd agree with the\nsentiments that those concepts needed attention.\n\nAnyway, object with stars and strongly object are definitely leaning\ntowards abuse of the word.\n\n\nOn Wed, 2002-08-14 at 13:35, Bruce Momjian wrote:\n> \n> This email brings up another issue I have seen recently. The use of the\n> word \"object\", \"strongly object\", or \"*object*\" with stars is a very\n\n> > > I had to add to initdb to create a file /data/PG_INSTALLER and have the\n> > > postmaster read that on startup to determine the installing user.\n> > \n> > I object to treating one user specially. There should be a general\n> > mechanism, such as a separate column in pg_shadow.\n> > \n> > I also object to fixing the name during initdb. We just got rid of that\n> > requirement.\n> > \n> > If it mattered, I would also object to the choice of the file name.\n\n\n\n",
"msg_date": "14 Aug 2002 14:03:25 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > In a way that user has to be special for this case\n> > because of the requirement that at least one person be able to connect\n> > when you flip that flag.\n> \n> Why does anyone need to be special? The behavior should be to try the\n> given user name, and if that's not found then to try user@db. I see no\n> need to special-case any user.\n\n\nOh, so try it with and without. I can do that, but it seems more of a\nsecurity problem where you were trying two names instead of one. Do\npeople like that? It is easy to do, except for the fact we have to\nmatch pg_hba.conf with a username, though we could do the double-test\nthere too, if that isn't too weird.\n\n> > Basically, I am not going to stop working on something when one person\n> > objects or this will never get done,\n> \n> He didn't say to stop working on it. He said to fix the misdesigned\n> parts. And I quite agree that those parts are misdesigned.\n\nI will fix them as long as the fixes don't generate new objections, like\nadding a new column to pg_shadow.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 14:24:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Oh, so try it with and without. I can do that, but it seems more of a\n> security problem where you were trying two names instead of one. Do\n> people like that?\n\nThe nice thing about it is you can have any combination of people with\ninstallation-wide access (create them as joeblow) and people with\none-database access (create them as joeblow@joesdatabase). A special\ncase for only the postgres user is much less flexible.\n\n> It is easy to do, except for the fact we have to\n> match pg_hba.conf with a username, though we could do the double-test\n> there too, if that isn't too weird.\n\nIt'd probably be better to first look at the flat-file copy of pg_shadow\nto determine whether user or user@database is the form to use, and then\nrun through pg_hba.conf only once using the correct form. Otherwise\nthere are going to be all sorts of weird corner cases: user might match\na different pg_hba row than user@database does.\n\nAlso, if you do it this way then the substitution only has to be done in\none place: you can pass down the correct form to the backend, which'd\notherwise have to repeat the test to see which username is found.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Aug 2002 14:34:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Oh, so try it with and without. I can do that, but it seems more of a\n> > security problem where you were trying two names instead of one. Do\n> > people like that?\n> \n> The nice thing about it is you can have any combination of people with\n> installation-wide access (create them as joeblow) and people with\n> one-database access (create them as joeblow@joesdatabase). A special\n> case for only the postgres user is much less flexible.\n\nOh, yes, clearly a nice addition, but see below.\n\n> > It is easy to do, except for the fact we have to\n> > match pg_hba.conf with a username, though we could do the double-test\n> > there too, if that isn't too weird.\n> \n> It'd probably be better to first look at the flat-file copy of pg_shadow\n> to determine whether user or user@database is the form to use, and then\n> run through pg_hba.conf only once using the correct form. Otherwise\n> there are going to be all sorts of weird corner cases: user might match\n> a different pg_hba row than user@database does.\n\nProblem is that pg_shadow flat file _only_ has users with passwords. I\ndo a btree search of that file, but I am not sure I want to add a dump\nof _all_ users just to allow this. Do we?\n\n> Also, if you do it this way then the substitution only has to be done in\n> one place: you can pass down the correct form to the backend, which'd\n> otherwise have to repeat the test to see which username is found.\n\nYes, certainly a big win. What we _could_ do is to allow connections to\ntemplate1 be unsuffixed by the dbname, but that makes everyone\nconnecting to template1 have problems, and just seemed too weird.\n\nIdeas?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 14:38:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, 2002-08-14 at 14:34, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Oh, so try it with and without. I can do that, but it seems more of a\n> > security problem where you were trying two names instead of one. Do\n> > people like that?\n> \n> The nice thing about it is you can have any combination of people with\n> installation-wide access (create them as joeblow) and people with\n> one-database access (create them as joeblow@joesdatabase). A special\n> case for only the postgres user is much less flexible.\n> \n> > It is easy to do, except for the fact we have to\n> > match pg_hba.conf with a username, though we could do the double-test\n> > there too, if that isn't too weird.\n> \n> It'd probably be better to first look at the flat-file copy of pg_shadow\n> to determine whether user or user@database is the form to use, and then\n> run through pg_hba.conf only once using the correct form. Otherwise\n> there are going to be all sorts of weird corner cases: user might match\n> a different pg_hba row than user@database does.\n> \n> Also, if you do it this way then the substitution only has to be done in\n> one place: you can pass down the correct form to the backend, which'd\n> otherwise have to repeat the test to see which username is found.\n\nIf there is a global 'user', then a database specific 'user@database'\nshould be rejected shouldn't it? Otherwise we wind up with two\npotential 'user@database' users (globals users are really user@<each\ndatabase>) but with a single ID.\n\n\n\n",
"msg_date": "14 Aug 2002 14:40:40 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wednesday 14 August 2002 02:38 pm, Bruce Momjian wrote:\n> Tom Lane wrote:\n> > The nice thing about it is you can have any combination of people with\n> > installation-wide access (create them as joeblow) and people with\n> > one-database access (create them as joeblow@joesdatabase). A special\n> > case for only the postgres user is much less flexible.\n\n> > Also, if you do it this way then the substitution only has to be done in\n> > one place: you can pass down the correct form to the backend, which'd\n> > otherwise have to repeat the test to see which username is found.\n\n> Yes, certainly a big win. What we _could_ do is to allow connections to\n> template1 be unsuffixed by the dbname, but that makes everyone\n> connecting to template1 have problems, and just seemed too weird.\n\n> Ideas?\n\nAppending '@template1' to unadorned usernames, and giving inherited rights \nacross the installation to users with template1 rights? Then you have the \nunadorned 'lowen' becomes 'lowen@template1' -- but lowen@pari wouldn't have \naccess to template1, right? Or am I misunderstanding the feature?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 14 Aug 2002 14:51:36 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Appending '@template1' to unadorned usernames, and giving inherited rights \n> across the installation to users with template1 rights? Then you have the \n> unadorned 'lowen' becomes 'lowen@template1' -- but lowen@pari wouldn't have \n> access to template1, right?\n\nIf not, standard things like \"psql -l\" won't work for lowen@pari. I don't\nthink we can get away with a scheme that depends on disallowing access\nto template1 for most people.\n\nIt should also be noted that the whole point of this little project was\nto do something *simple* ... checking access to some other database to\ndecide what we will allow is getting a bit far afield from simple.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Aug 2002 15:04:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Problem is that pg_shadow flat file _only_ has users with passwords. I\n> do a btree search of that file, but I am not sure I want to add a dump\n> of _all_ users just to allow this. Do we?\n\nWhy not? Doesn't seem like a big penalty ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Aug 2002 15:05:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "On Wed, 14 Aug 2002, Tom Lane wrote:\n\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Appending '@template1' to unadorned usernames, and giving inherited rights\n> > across the installation to users with template1 rights? Then you have the\n> > unadorned 'lowen' becomes 'lowen@template1' -- but lowen@pari wouldn't have\n> > access to template1, right?\n>\n> If not, standard things like \"psql -l\" won't work for lowen@pari. I don't\n> think we can get away with a scheme that depends on disallowing access\n> to template1 for most people.\n>\n> It should also be noted that the whole point of this little project was\n> to do something *simple* ... checking access to some other database to\n> decide what we will allow is getting a bit far afield from simple.\n\nHate to complicate things more, but back to a global username, say\nyou have user \"lowen\" that should have access to all databases. What\nhappens if there's already a lowen@somedb that's an unprivileged user.\nAssuming lowen is a db superuser, what happens in somedb? If there's\na global user \"lowen\" and you try to create a lowen@somedb later, will\nit be allowed?\n\nOne possible simplification would be to make the username the full\nusername \"lowen@somedb\", \"lowen\", ... Right now we can create a\n\"lowen@somedb\" and it's a different user than \"lowen\" and we can\nalready restrict a user to one database, can't we? Hmmm. Just\nchecked and I guess not - I thought we had a record type of \"user\".\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 14 Aug 2002 15:29:38 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "On Wednesday 14 August 2002 03:04 pm, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Appending '@template1' to unadorned usernames, and giving inherited\n> > rights across the installation to users with template1 rights? Then you\n> > have the unadorned 'lowen' becomes 'lowen@template1' -- but lowen@pari\n> > wouldn't have access to template1, right?\n\n> If not, standard things like \"psql -l\" won't work for lowen@pari. I don't\n> think we can get away with a scheme that depends on disallowing access\n> to template1 for most people.\n\nOk, maybe I'm really off base, but if I connect to database pari as \nlowen@pari, isn't pg_database present there? I just tried here:\ncreatedb pari\npsql pari\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\npari=# select datname from pg_database;\n datname\n------------\n acs-test\n maillabels\n testing2\n template1\n template0\n pari\n(6 rows)\n\nSo AFAICT if I were psql I would parse the unadorned lowen as \n'lowen@template1' and connect to template1 if not otherwise specified. If \nthe fully qualified database user (FQDU) is present, parse the database name \nout and connect to that database, then issue the SQL to do the -l or \nwhatever. The @pari would just override the normal default of template1, \nright? So a 'psql -U lowen@pari -l ' would connect to database pari \n(subject to permissions) and select datname from pg_database there.\n\nWhat else am I missing, Tom? ISTM I don't need access to template1 -- \nalthough I wasn't necessarily suggesting eliminating that. 
I was more \nsuggesting:\nlowen@pari has read access to those parts of template1 necessary for normal \nfunctioning, full access (subject ot GRANT/REVOKE) of pari, and no access to \nother databases;\nlowen@template1 has access across the install (subject to GRANT/REVOKE, of \ncourse). lowen@template1 = lowen (unadorned). That was the answer, I \nthought, to the question Bruce had. There would be NO unadorned usernames \nthen, and no special handling EXCEPT of the template1 database, which is \nalready a special case.\n\nNow, can we support the idea of 'postgres@pari' being a superuser for pari but \nnot for the rest of the install? Meaning no CREATE DATABASE right, as that \nwould require write access to template1? That's OK I believe, as I would \nassume a 'tied to a database' superuser shouldn't be allowed to create a new \ndatabase to which he isn't going to have access..... The full ramifications \nof this structure could prove interesting.\n\nThe supersuperuser 'postgres' becomes postgres@template1 -- template1 becoming \nthe consistent default database (for connecting as well as user membership). \nAs anything added to template1 becomes part of any subsequently added \ndatabases, being a user in template1 becomes an installation-wide user.\n\nAnd the user never really has to explicitly state @template1 -- they could \njust leave off the @template1 and everything works as it does now.\n\nYes, there are complications, but not great ones, no?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 14 Aug 2002 15:31:07 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Problem is that pg_shadow flat file _only_ has users with passwords. I\n> > do a btree search of that file, but I am not sure I want to add a dump\n> > of _all_ users just to allow this. Do we?\n> \n> Why not? Doesn't seem like a big penalty ...\n\nWell, in most cases pg_pwd doesn't even get created unless someone has a\npassword. We would be creating that file in all cases, or at least in\nall cases wher db_user_namespace is set, and again, that is a SIGHUP\nparam, so you would need to make sure pg_pwd has the right contents if\nit was enabled during a sighup. Frankly, I would recommend a new file\nthat just contains user names and is always created.\n\nWe are basically heading down the road to complexity here.\n\nIn fact, pg_hba.conf is just a microcosm of how we are going to handle\npg_shadow matching. If we create dave@db1, then when dave tries to\nconnect to db1, he comes in as dave@db1, but when he goes to connect to\ndb2, if there is a plain 'dave', he will connect as 'dave' to db2, if\npossible.\n\nIf people are OK with that, then I can easily push the double-testing\ndown into the authentication system. It merely means testing the new\npg_hba.conf USER column for two values, and pg_shadow for two values,\nbut I would test with @db first.\n\nThe double testing just seems strange to me because it splits the user\nnamespace into two parts one with @ and one without, and conflicting\nuser parts in the two namespaces do interact when @db does not match. \nThat seems strange, but hey, if no one else thinks it is strange, it is\neasy to code. It is basically the same as testing pg_pwd, just doing it\nlater in the code.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 15:32:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wednesday 14 August 2002 03:29 pm, Vince Vielhaber wrote:\n> Hate to complicate things more, but back to a global username, say\n> you have user \"lowen\" that should have access to all databases. What\n> happens if there's already a lowen@somedb that's an unprivileged user.\n> Assuming lowen is a db superuser, what happens in somedb? If there's\n> a global user \"lowen\" and you try to create a lowen@somedb later, will\n> it be allowed?\n\nIf the user 'lowen' is then expanded to 'lowen@template1' it would be stored \nthat way -- and lowen@template1 is different from lowen@pari, for instance. \nThe lowen@template1 user could be a superuser and lowen@pari might not -- but \nthey become distinct users. Although I do understand the difficulty if the \nFQDU isn't stored in full in the appropriate places. So I guess the solution \nis that wherever a user name is to be stored, the fully qualified form must \nbe used and checked against, with @template1 being a 'this user is \neverywhere' shorthand.\n\nBut maybe I'm just misunderstanding the implementation.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 14 Aug 2002 15:36:00 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Lamar Owen wrote:\n> On Wednesday 14 August 2002 03:29 pm, Vince Vielhaber wrote:\n> > Hate to complicate things more, but back to a global username, say\n> > you have user \"lowen\" that should have access to all databases. What\n> > happens if there's already a lowen@somedb that's an unprivileged user.\n> > Assuming lowen is a db superuser, what happens in somedb? If there's\n> > a global user \"lowen\" and you try to create a lowen@somedb later, will\n> > it be allowed?\n> \n> If the user 'lowen' is then expanded to 'lowen@template1' it would be stored \n> that way -- and lowen@template1 is different from lowen@pari, for instance. \n> The lowen@template1 user could be a superuser and lowen@pari might not -- but \n> they become distinct users. Although I do understand the difficulty if the \n> FQDU isn't stored in full in the appropriate places. So I guess the solution \n> is that wherever a user name is to be stored, the fully qualified form must \n> be used and checked against, with @template1 being a 'this user is \n> everywhere' shorthand.\n\nYes, Vince is on to something with his quote above.\n\nIf we have users with and without @, we get into the situation where\nusers without @ may become users with @ when their usernames collide\nwith existing user/db combinations already created. The point is that\nthose two namespaces do collide and will cause confusion.\n\nThen you start to get into the situation where you always add @ and make\n@template1 a special case. However, remember that this flag can be\nturned on and off after initdb, so you need to be able to get in to set\nthings up without great complexity _and_ the @template1 would not be\npassed in from the client, if for no other reason than that the username is\nonly 32 characters. It is the backend doing the flagging, and therefore\nthe user can't say 'I am dave@template1' vs 'I am dave@connectdb'.\n\nThis is how I got to the installuser hack in the first place. 
In fact,\neven the install user, typically 'postgres', has a problem because if you\ncreate 'postgres@db1', 'postgres' will have trouble connecting to db1 as\nthemselves. I think we can live with one user who is special/global, but\nnot more than one because of the confusion it would create.\n\nI can change the way this works, but we need a solution without holes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 15:49:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wed, 14 Aug 2002, Lamar Owen wrote:\n\n> On Wednesday 14 August 2002 03:29 pm, Vince Vielhaber wrote:\n> > Hate to complicate things more, but back to a global username, say\n> > you have user \"lowen\" that should have access to all databases. What\n> > happens if there's already a lowen@somedb that's an unprivileged user.\n> > Assuming lowen is a db superuser, what happens in somedb? If there's\n> > a global user \"lowen\" and you try to create a lowen@somedb later, will\n> > it be allowed?\n>\n> If the user 'lowen' is then expanded to 'lowen@template1' it would be stored\n> that way -- and lowen@template1 is different from lowen@pari, for instance.\n> The lowen@template1 user could be a superuser and lowen@pari might not -- but\n> they become distinct users. Although I do understand the difficulty if the\n> FQDU isn't stored in full in the appropriate places. So I guess the solution\n> is that wherever a user name is to be stored, the fully qualified form must\n> be used and checked against, with @template1 being a 'this user is\n> everywhere' shorthand.\n>\n> But maybe I'm just misunderstanding the implementation.\n\nI may be too, but what's wrong with just \"lowen\" being shorthand for\n'this user is everywhere'? Does it also mean that we'd have a user\npostgres@template1?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 14 Aug 2002 15:55:15 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wednesday 14 August 2002 03:55 pm, Vince Vielhaber wrote:\n> On Wed, 14 Aug 2002, Lamar Owen wrote:\n> > If the user 'lowen' is then expanded to 'lowen@template1' it would be\n> > stored that way -- and lowen@template1 is different from lowen@pari, for\n\n> > But maybe I'm just misunderstanding the implementation.\n>\n> I may be too, but what's wrong with just \"lowen\" being shorthand for\n> 'this user is everywhere'? Does it also mean that we'd have a user\n> postgres@template1?\n\nWe could still use the form without @template1, but the backend would assume \nthe @template1 user was being meant when the unqualified shorthand was used. \nSo the former plain 'postgres' user could still be such to us, to client \nprograms, etc, but the backend would assume that that meant \npostgres@template1 -- no namespace collision, and the special case is that \nanyone@template1 has the behavior the unadorned plain user now has.\n\nI do see Bruce's points, however.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 14 Aug 2002 16:34:13 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Wednesday 14 August 2002 03:49 pm, Bruce Momjian wrote:\n> Lamar Owen wrote:\n> > On Wednesday 14 August 2002 03:29 pm, Vince Vielhaber wrote:\n> > > Hate to complicate things more, but back to a global username, say\n> > > you have user \"lowen\" that should have access to all databases. What\n\n> > places. So I guess the solution is that wherever a user name is to be\n> > stored, the fully qualified form must be used and checked against, with\n> > @template1 being a 'this user is everywhere' shorthand.\n\n> Yes, Vince is on to something with his quote above.\n\n> If we have users with and without @, we get into the situation where\n> users without @ may become users with @ when their usernames collide\n> with existing user/db combinations already created. The point is that\n> those two namespaces do collide and will cause confusion.\n\nBut that's the exact problem I was trying to address -- as far as the backend \nis concerned, there isn't a user without @ -- the incoming connection from a \nuser without @ is translated into a connection coming from user@template1.\n\n> Then you start to get into the situation where you always add @ and make\n> @template1 a special case. However, remember that this flag can be\n> turned on and off after initdb, so you need to be able to get in to set\n> things up without great complexity _and_ the @template1 would not be\n> passed in from the client, if for no other reason that the username is\n> only 32 characters. It is the backend doing the flagging, and therefore\n> the user can't say 'I am dave@templatge1' vs 'I am dave@connectdb'.\n\nOk, how do I as a client specify the @dbname for the user? By the database \nI'm connecting to? That IS a wrinkle. But it does make sense, as lowen@pari \nwon't be able to connect to any other database, right? 
So, where's this new \nnotation going to get used, again?\n\nI must have misunderstood something.\n\nSo, if we have a namespace collision -- then we have to make the \nimplementation have the restriction that a global username can't exist as a \ndatabase-specific username -- but two or more database-specific usernames can \nbe the same. So, have a trigger on insertion of a user that checks for an \nexisting user attached to template1 (again, for consistency -- installation \nwide templates are in template1 -- installation-wide users should be too) -- \nand then aborts the CREATE USER if so.\n\n> This is how I got to the installuser hack in the first place. In fact,\n> even the install user, typically 'postgres' has a problem because if you\n> create 'postgres@db1', 'postgres' will have trouble connecing to db1 as\n> themselves. I think we can live with one user who is special/global, but\n> not more than one because of the confusion it would create.\n\nIf you say CREATE USER lowen@pari for the syntax, the create user trips the \ntrigger, which checks for lowen@template1 and aborts if so. CREATE USER \nlowen@template1 does the same, checking for ANY user lowen. Namespace \ncollision averted? CREATE USER lowen would be the same as CREATE USER \nlowen@connecteddb, so that the subsuperuser for connecteddb can just CREATE \nUSER without qualifying -- the command line createdb could take the @dbname \nargument, splitting it out and connecting to the proper database. This has \nramifications, I admit. And just saying that unqualified CREATE USER's \nshould create the user@template1 introduces its own problems.\n\n> I can change the way this works, but we need a solution without holes.\n\nTrigger on the holes. But if I can't (or shouldn't) be able to specify the \n@dbname from the client, there is GOING to be a namespace collision if \ninstallation-wide users of ANY name are allowed (which is what you've already \nsaid -- just repeating for emphasis). 
Or we will have to forbid the postgres \nuser from being reused -- trigger on CREATE USER and abort if user=postgres, \nI guess.\n\nNow as to the toggling of the feature -- what happens when you have lowen@pari \nand lowen@wgcr coexisting, and you turn off the feature? Which password \nbecomes valid for the resultant singular user lowen? IMHO, if two or more \nusers of the same name occur, then you shouldn't be able to turn the feature \noff.\n\nI know you've already put a lot of work into this, Bruce. But what if the \nfeature isn't toggled, but always there, just waiting to be exploited by \nCREATE USER user@db, with the default CREATE USER always putting the user \ninto association with the currently connected database? Is there bad \noverhead involved? Is it something that could break installations not using \nthe feature? Or should CREATE USER with an unqualified username default to \n@template1 (what I originally thought it should).\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 14 Aug 2002 17:01:06 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> So the former plain 'postgres' user could still be such to us, to client \n> programs, etc, but the backend would assume that that meant \n> postgres@template1 -- no namespace collision, and the special case is that \n> anyone@template1 has the behavior the unadorned plain user now has.\n\nThe trouble with that scheme is that there is zero interoperability\nbetween the plain-vanilla mode (postgres is postgres in pg_shadow) and\nthe @-mode (postgres is postgres@template1 in pg_shadow). Flip the\nconfiguration switch, in either direction, and you can't log in anymore.\nWe'd almost have to make it a frozen-at-initdb setting so that initdb\nwould know which form to put into pg_shadow for the superuser, and so\nthat entry wouldn't break thereafter.\n\nThe reason I like the \"lowen\" vs \"lowen@somedb\" pattern is that\ndatabase-global users can log in the same way whether the feature is\nturned on or not; this eliminates the getting-started problem, as well\nas the likelihood of shooting yourself in the foot.\n\nIt is true that if you have a global user lowen you'd want to avoid\ncreating any local users lowen@somedb, and that the existing code\nwouldn't be able to enforce that. We could possibly add a few lines\nto CREATE USER to warn about this mistake. (It should be a warning not\nan error, since if you have no intention of ever using the @-feature\nthen there's no reason to restrict your choice of usernames.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Aug 2002 17:44:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Bruce Momjian writes:\n\n> OK, what I didn't want to do we to over-complexify\n\nThat's reasonable, but not when you break other things along the way that\nwere themselves meant to decomplexify things.\n\n> something that is for only a few users.\n\nIf it's only for a few users, please send private patches to them. Face\nit, it's not going to happen. It's going to be in the release notes,\neveryone's going to see it, and there's going to be a Slashdot thread\nabout how \"they\" broke the password files. So let's design a feature for\neveryone.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 15 Aug 2002 00:02:58 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "\nOK, I have a new idea. Seems most don't like that 'postgres' is a\nspecial user in this context.\n\nHow about if we just document that they have to create a\npostgres@template1 user before flipping the switch. That way, there is\nno special user, no PG_INSTALLER file, and no double-tests for user\nnames.\n\nIt doesn't give us a global user, but frankly, it seems that such a\nsystem is never going to work reliably.\n\nTrying to prevent namespace conflicts by checking for users without @\nthat may match will make @ a special character in the user namespace,\nand people won't like that.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > So the former plain 'postgres' user could still be such to us, to client \n> > programs, etc, but the backend would assume that that meant \n> > postgres@template1 -- no namespace collision, and the special case is that \n> > anyone@template1 has the behavior the unadorned plain user now has.\n> \n> The trouble with that scheme is that there is zero interoperability\n> between the plain-vanilla mode (postgres is postgres in pg_shadow) and\n> the @-mode (postgres is postgres@template1 in pg_shadow). Flip the\n> configuration switch, in either direction, and you can't log in anymore.\n> We'd almost have to make it a frozen-at-initdb setting so that initdb\n> would know which form to put into pg_shadow for the superuser, and so\n> that entry wouldn't break thereafter.\n> \n> The reason I like the \"lowen\" vs \"lowen@somedb\" pattern is that\n> database-global users can log in the same way whether the feature is\n> turned on or not; this eliminates the getting-started problem, as well\n> as the likelihood of shooting yourself in the foot.\n> \n> It is true that if you have a global user lowen you'd want to avoid\n> creating any local users lowen@somedb, and that the existing code\n> wouldn't be able to enforce that. 
We could possibly add a few lines\n> to CREATE USER to warn about this mistake. (It should be a warning not\n> an error, since if you have no intention of ever using the @-feature\n> then there's no reason to restrict your choice of usernames.)\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 18:28:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> How about if we just document that they have to create a\n> postgres@template1 user before flipping the switch. That way, there is\n> no special user, no PG_INSTALLER file, and no double-tests for user\n> names.\n\n... and no useful superuser account; if you can't connect to anything\nexcept template1 then you ain't much of a superuser.\n\nTo get around that you'd have to create postgres@db1, postgres@db2,\npostgres@db3, etc etc. This would be a huge pain in the neck; I think\nit'd render the scheme impractical. (Keep in mind that anybody who'd be\ninterested in this feature at all has probably got quite a number of\ndatabases to contend with.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Aug 2002 18:56:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "On Wed, 14 Aug 2002, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I have no personal preference between period and @ or whatever. See if\n> > you can get some other votes for @ because most left @ when the ORDER BY\n> > idea came up from Marc.\n> \n> FWIW, I still lean to username@database, so I think we're roughly at a\n> tie. It would be good to get more votes ...\n\nSeeing as this is rumbling on I'll throw in my fraction of a vote.\n\nI too like the user@database form, partly because it 'reads'. On the other hand\nI can see the reasons to like database.user and it does match the style of\ndatabase.schema.object.\n\nUnfortunately for this second form, as '.' is a valid character in a database\nname then I can see this causing problems, especially with the behind the\nscenes combination of the two names. I don't see this problem with the '@' form\nbecause I can't see that character being used in an 'unqualified' user name.\nHmmm...not sure that makes a terribly good argument for my vote for 'user@db',\nis there a third choice for us confused folks to go for? A\ncompromise: database@username ?\n\n\n[BTW, I did check and '@' seems to be a valid character in database and user\nnames.]\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n",
"msg_date": "Thu, 15 Aug 2002 00:30:05 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > How about if we just document that they have to create a\n> > postgres@template1 user before flipping the switch. That way, there is\n> > no special user, no PG_INSTALLER file, and no double-tests for user\n> > names.\n> \n> ... and no useful superuser account; if you can't connect to anything\n> except template1 then you ain't much of a superuser.\n> \n> To get around that you'd have to create postgres@db1, postgres@db2,\n> postgres@db3, etc etc. This would be a huge pain in the neck; I think\n> it'd render the scheme impractical. (Keep in mind that anybody who'd be\n> interested in this feature at all has probably got quite a number of\n> databases to contend with.)\n\nYes, I hear you, but that brings us around full-circle to the original\npatch with one super-user who is the install user. \n\nI don't know where else to go with the patch at this point. I think\nincreasing the number of 'global' users is polluting the namespace too\nmuch, and having none seems to be unappealing. This is why I am back to\njust the install user.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 19:44:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I don't know where else to go with the patch at this point. I think\n> increasing the number of 'global' users is polluting the namespace too\n> much,\n\nWhy? If the installation needs N global users, then it needs N global\nusers; who are you to make that value judgment for them?\n\nIn practice I think an installation that's using this feature is going\nto have a pretty small number of global users, and so the issue of\ncollisions with local usernames isn't really as big as it's been painted\nin this thread. We could ignore that issue (except for documenting it)\nand have a perfectly serviceable feature.\n\nBut I don't think it's a wise idea to design the thing in a way that\nmakes it impossible to have more than one global user.\n\nIf you don't like including all the pg_shadow entries in the flat file\n(though I really don't see any problem with that), could we replace\nPG_INSTALL with a pg_global_users config file that lists the global user\nnames? I think it would be good enough to let this be hand-maintained,\nwith initdb initializing it to contain the install user's name.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Aug 2002 19:58:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "On Wed, 14 Aug 2002, Bruce Momjian wrote:\n\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > How about if we just document that they have to create a\n> > > postgres@template1 user before flipping the switch. That way, there is\n> > > no special user, no PG_INSTALLER file, and no double-tests for user\n> > > names.\n> >\n> > ... and no useful superuser account; if you can't connect to anything\n> > except template1 then you ain't much of a superuser.\n> >\n> > To get around that you'd have to create postgres@db1, postgres@db2,\n> > postgres@db3, etc etc. This would be a huge pain in the neck; I think\n> > it'd render the scheme impractical. (Keep in mind that anybody who'd be\n> > interested in this feature at all has probably got quite a number of\n> > databases to contend with.)\n>\n> Yes, I hear you, but that brings us around full-circle to the original\n> patch with one super-user who is the install user.\n>\n> I don't know where else to go with the patch at this point. I think\n> increasing the number of 'global' users is polluting the namespace too\n> much, and having none seems to be unappealing. This is why I am back to\n> just the install user.\n\nI wouldn't be in favor of that.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 14 Aug 2002 19:59:33 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I don't know where else to go with the patch at this point. I think\n> > increasing the number of 'global' users is polluting the namespace too\n> > much,\n> \n> Why? If the installation needs N global users, then it needs N global\n> users; who are you to make that value judgment for them?\n> \n> In practice I think an installation that's using this feature is going\n> to have a pretty small number of global users, and so the issue of\n> collisions with local usernames isn't really as big as it's been painted\n> in this thread. We could ignore that issue (except for documenting it)\n> and have a perfectly serviceable feature.\n\nThe original idea was that Marc wanted people who could create their own\nusers for their own databases. If we make the creation of global users\ntoo easy, all of a sudden people don't have control over their db\nusernames because they have to avoid all the global user names already\ndefined. By adding multiple global users, it is diluting the usefulness\nof the feature.\n\nI suppose a pg_global_users file would be a compromise because only the\nadmin could actually add people to that file. If it was more automatic,\nlike writing pg_shadow, someone could create a user without an @ and\nblock access for other users to other databases, which is bad.\n\nI still don't like the fact that people think they have control over\ntheir db namespace, when they really don't, but no one else seems to see\nthat as a problem. The namespace conflicts just yell of poor design.\n\nOK, I have another idea. What if we make global users end with an @, so\ndave@ is a global user. We can easily check for that in the postmaster\nand not append the dbname. I know it makes @ a special character, but\nconsidering the problem of namespace collision, it seems better than\nwhat we have now. 
We could add the install user too if we wish, or just\ntell them to make sure they add a user@ before turning on the feature.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 20:30:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "pgman@candle.pha.pa.us (Bruce Momjian) wrote:\n> Tom Lane wrote:\n>> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> > I don't know where else to go with the patch at this point. I\n>> > think increasing the number of 'global' users is polluting the\n>> > namespace too much,\n>> \n>> Why? If the installation needs N global users, then it needs N\n>> global users; who are you to make that value judgment for them?\n>> \n>> In practice I think an installation that's using this feature is\n>> going to have a pretty small number of global users, and so the issue\n>> of collisions with local usernames isn't really as big as it's been\n>> painted in this thread. We could ignore that issue (except for\n>> documenting it) and have a perfectly serviceable feature.\n> \n> The original idea was that Marc wanted people who could create their\n> own users for their own databases. If we make the creation of global\n> users too easy, all of a sudden people don't have control over their\n> db usernames because they have to avoid all the global user names\n> already defined. By adding multiple global users, it is diluting the\n> usefulness of the feature.\n> \n\nMaybe I am missing something here but shouldn't db access really be part\nof the privileges system? If all we are talking about is a quick hack\nuntil this can be implemented correctly, what is the concern with having\nso much functionality in the hack? Why does it matter what the actual\nusernames can or can't be? For example you could just make everyone with\na username NNNNNN@dbname (where N's are int) local accounts and then\nleave everything else alone. The only issue I could see with something\nlike this would be that someone trying to use this hack won't be able to\ngive their users names like pudgy@dbname, but who cares? I mean if you\nare giving access to a bunch of developers, how is it going to affect\nthem if you tell them to login with 123456@yourdb instead of\njsmith@yourdb? 
If they can't remember it or something maybe they can\nwrite it down? I dunno... \n",
"msg_date": "Thu, 15 Aug 2002 01:57:51 +0000 (UTC)",
"msg_from": "ngpg@grymmjack.com",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Hannu Krosing wrote:\n> What about functions\n> \n> 1. split(text,text,int) returns text\n> \n> 2. split(text,text) returns text[]\n> \n> and why not\n> \n> 3. split(text,text,text) returns text\n> \n> which returns text from $1 delimited by $2 and $3\n\nGiven the time remaining before beta, I'll be happy just to get #1 done.\n\nI can see the utility of #2 (or perhaps even a table function which \nbreaks the string into individual rows). I'm not sure I understand #3.\n\nI am concerned about the name though -- only in that there are usually \nobjections raised to function names that are too likely to conflict with \nuser created function names. But \"split\" is good from the standpoint \nthat it is used in other languages, so people should find it familiar.\n\nAnyone have comments on the name?\n\nJoe\n\n",
"msg_date": "Wed, 14 Aug 2002 23:12:12 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Joe Conway wrote:\n> \n> Hannu Krosing wrote:\n> > What about functions\n> >\n> > 1. split(text,text,int) returns text\n> >\n> > 2. split(text,text) returns text[]\n> >\n> > and why not\n> >\n> > 3. split(text,text,text) returns text\n> >\n> > which returns text from $1 delimited by $2 and $3\n> \n> Given the time remaining before beta, I'll be happy just to get #1 done.\n> \n> I can see the utility of #2 (or perhaps even a table function which\n> breaks the string into individual rows). I'm not sure I understand #3.\n> \n> I am concerned about the name though -- only in that there are usually\n> objections raised to function names that are too likely to conflict with\n> user created function names. But \"split\" is good from the standpoint\n> that it is used in other languages, so people should find it familiar.\n> \n> Anyone have comments on the name?\n\nActually, I've been wondering if it wouldn't be a good idea with schemas\ncoming to think now about how to divide up namespaces for all sorts of\nthings, including PostgreSQL's built in functions, the contrib code,\netc. I think a naming scheme with which both PostgreSQL and the\ncommunity would comply, a la Java's reverse DNS scheme for namespaces\nwould be neat. So when a database is installed, the following schemas\nare automatically created:\n\norg.postgresql.system <- System tables and core functions\norg.postgresql.text <- Text related functions\norg.postgresql.math <- Math related functions\norg.postgresql.fts <- Full-Text schema\n\nor perhaps:\n\norg.postgresql.contrib.fts <- Full-Text schema\n\netc.\n\nI don't even know if \".\" is allowed in the schema names, but you get the\nidea. Then, a user's search_path (or whatever it's called, I haven't used\nthe development version in a while), would be the equivalent of Java's\n\"import\" statement, or C++'s \"using\" statement. 
So \"split\" would be a\nfunction in the org.postgresql.text schema.\n\nHow about them apples?\n\nIf this is an insane idea, its 3:32 A.M. my time ;-)\n\nMike Mascari\nmascarm@mascari.com\n\n> \n> Joe\n",
"msg_date": "Thu, 15 Aug 2002 03:27:06 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Mike Mascari <mascarm@mascari.com> writes:\n> I don't even know if \".\" is allowed in the schema names,\n\nIt isn't, and we couldn't invent such a scheme without seriously\ndiverging from SQL compliance: the next naming level up from schemas is\nreserved for catalogs (think databases). I don't know that we'll ever\nsupport cross-database access, but we shouldn't foreclose the\npossibility in pursuit of a naming scheme that doesn't really add very\nmuch value.\n\nYou could possibly fake it with schema names like org_postgresql_foo,\nbut I can't get very excited about that ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Aug 2002 09:18:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "OK, no one complained/commented on my idea of having global users have a\ntrailing '@', so here is a patch that implements that. It has the\nadvantages of:\n\n\tno special install user (create global user before enabling feature)\n\tno /data/PG_INSTALLER file\n\tallows multiple global users to be easily added\n\tno namespace collisions because globals have a trailing @\n\teasy for postmaster to recognize global users\n\tno double-user lookups of pg_pwd changes\n\tvery small patch footprint\n\nThe only downside is that it treats '@' as a special character when it\nis enabled, but frankly, because we are appending @dbname anyway, having\n'@' as a special character in that case makes sense.\n\nComments?\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > I don't know where else to go with the patch at this point. I think\n> > > increasing the number of 'global' users is polluting the namespace too\n> > > much,\n> > \n> > Why? If the installation needs N global users, then it needs N global\n> > users; who are you to make that value judgment for them?\n> > \n> > In practice I think an installation that's using this feature is going\n> > to have a pretty small number of global users, and so the issue of\n> > collisions with local usernames isn't really as big as it's been painted\n> > in this thread. We could ignore that issue (except for documenting it)\n> > and have a perfectly serviceable feature.\n> \n> The original idea was that Marc wanted people who could create their own\n> users for their own databases. If we make the creation of global users\n> too easy, all of a sudden people don't have control over their db\n> usernames because they have to avoid all the global user names already\n> defined. 
By adding multiple global users, it is diluting the usefulness\n> of the feature.\n> \n> I suppose a pg_global_users file would be a compromise because only the\n> admin could actually add people to that file. If it was more automatic,\n> like writing pg_shadow, someone could create a user without an @ and\n> block access for other users to other database, which is bad.\n> \n> I still don't like the fact that people think they have control over\n> their db namespace, when they really don't, but no one else seems to see\n> that as a problem. The namespace conflicts just yell of poor design.\n> \n> OK, I have another idea. What if we make global users end with an @, so\n> dave@ is a global user. We can easily check for that in the postmaster\n> and not append the dbname. I know it makes @ a special character, but\n> considering the problem of namespace collision, it seems better than\n> what we have now. We could add the install user too if we wish, or just\n> tell them to make sure they add a user@ before turning on the feature.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n\nIndex: doc/src/sgml/runtime.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/runtime.sgml,v\nretrieving revision 1.125\ndiff -c -r1.125 runtime.sgml\n*** doc/src/sgml/runtime.sgml\t15 Aug 2002 14:26:15 -0000\t1.125\n--- doc/src/sgml/runtime.sgml\t15 Aug 2002 15:32:29 -0000\n***************\n*** 1191,1196 ****\n--- 1191,1211 ----\n </varlistentry>\n \n <varlistentry>\n+ <term><varname>DB_USER_NAMESPACE</varname> (<type>boolean</type>)</term>\n+ <listitem>\n+ <para>\n+ Appends <literal>@</> and the database name to the user name when\n+ connecting to the database. This allows per-database users. \n+ User names ending with <literal>@</> are considered global and may \n+ connect to any database. It is recommended you create at least one \n+ global user, e.g. <literal>postgres@</>, before enabling this feature. \n+ Also, when creating user names containing <literal>@</>, you will need \n+ to quote the user name.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n+ <varlistentry>\n <indexterm>\n <primary>deadlock</primary>\n <secondary>timeout</secondary>\nIndex: src/backend/libpq/auth.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/libpq/auth.c,v\nretrieving revision 1.82\ndiff -c -r1.82 auth.c\n*** src/backend/libpq/auth.c\t20 Jun 2002 20:29:28 -0000\t1.82\n--- src/backend/libpq/auth.c\t15 Aug 2002 15:32:30 -0000\n***************\n*** 117,123 ****\n \t\t\t version, PG_KRB4_VERSION);\n \t\treturn STATUS_ERROR;\n \t}\n! \tif (strncmp(port->user, auth_data.pname, SM_USER) != 0)\n \t{\n \t\telog(LOG, \"pg_krb4_recvauth: name \\\"%s\\\" != \\\"%s\\\"\",\n \t\t\t port->user, auth_data.pname);\n--- 117,123 ----\n \t\t\t version, PG_KRB4_VERSION);\n \t\treturn STATUS_ERROR;\n \t}\n! 
\tif (strncmp(port->user, auth_data.pname, SM_DATABASE_USER) != 0)\n \t{\n \t\telog(LOG, \"pg_krb4_recvauth: name \\\"%s\\\" != \\\"%s\\\"\",\n \t\t\t port->user, auth_data.pname);\n***************\n*** 290,296 ****\n \t}\n \n \tkusername = pg_an_to_ln(kusername);\n! \tif (strncmp(port->user, kusername, SM_USER))\n \t{\n \t\telog(LOG, \"pg_krb5_recvauth: user name \\\"%s\\\" != krb5 name \\\"%s\\\"\",\n \t\t\t port->user, kusername);\n--- 290,296 ----\n \t}\n \n \tkusername = pg_an_to_ln(kusername);\n! \tif (strncmp(port->user, kusername, SM_DATABASE_USER))\n \t{\n \t\telog(LOG, \"pg_krb5_recvauth: user name \\\"%s\\\" != krb5 name \\\"%s\\\"\",\n \t\t\t port->user, kusername);\nIndex: src/backend/postmaster/postmaster.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/postmaster/postmaster.c,v\nretrieving revision 1.283\ndiff -c -r1.283 postmaster.c\n*** src/backend/postmaster/postmaster.c\t10 Aug 2002 20:29:18 -0000\t1.283\n--- src/backend/postmaster/postmaster.c\t15 Aug 2002 15:32:34 -0000\n***************\n*** 116,122 ****\n sigset_t\tUnBlockSig,\n \t\t\tBlockSig,\n \t\t\tAuthBlockSig;\n- \n #else\n int\t\t\tUnBlockSig,\n \t\t\tBlockSig,\n--- 116,121 ----\n***************\n*** 191,196 ****\n--- 190,197 ----\n bool\t\tHostnameLookup;\t\t/* for ps display */\n bool\t\tShowPortNumber;\n bool\t\tLog_connections = false;\n+ bool\t\tDb_user_namespace = false;\n+ \n \n /* Startup/shutdown state */\n static pid_t StartupPID = 0,\n***************\n*** 1161,1166 ****\n--- 1162,1177 ----\n \tif (port->user[0] == '\\0')\n \t\telog(FATAL, \"no PostgreSQL user name specified in startup packet\");\n \n+ \t/* Append database name for per-db user namespace, exclude global users. 
*/\n+ \tif (Db_user_namespace && strlen(port->user) > 0 &&\n+ \t\tport->user[strlen(port->user)-1] != '@')\n+ \t{\n+ \t\tchar hold_user[SM_DATABASE_USER];\n+ \t\tsnprintf(hold_user, SM_DATABASE_USER, \"%s@%s\", port->user,\n+ \t\t\t\t port->database);\n+ \t\tstrcpy(port->user, hold_user);\n+ \t}\n+ \n \t/*\n \t * If we're going to reject the connection due to database state, say\n \t * so now instead of wasting cycles on an authentication exchange.\n***************\n*** 2587,2597 ****\n \tif (FindExec(fullprogname, argv[0], \"postmaster\") < 0)\n \t\treturn false;\n \n! \tfilename = palloc(strlen(DataDir) + 20);\n \tsprintf(filename, \"%s/postmaster.opts\", DataDir);\n \n! \tfp = fopen(filename, \"w\");\n! \tif (fp == NULL)\n \t{\n \t\tpostmaster_error(\"cannot create file %s: %s\",\n \t\t\t\t\t\t filename, strerror(errno));\n--- 2598,2607 ----\n \tif (FindExec(fullprogname, argv[0], \"postmaster\") < 0)\n \t\treturn false;\n \n! \tfilename = palloc(strlen(DataDir) + 17);\n \tsprintf(filename, \"%s/postmaster.opts\", DataDir);\n \n! 
\tif ((fp = fopen(filename, \"w\")) == NULL)\n \t{\n \t\tpostmaster_error(\"cannot create file %s: %s\",\n \t\t\t\t\t\t filename, strerror(errno));\nIndex: src/backend/utils/misc/guc.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/utils/misc/guc.c,v\nretrieving revision 1.82\ndiff -c -r1.82 guc.c\n*** src/backend/utils/misc/guc.c\t15 Aug 2002 02:51:26 -0000\t1.82\n--- src/backend/utils/misc/guc.c\t15 Aug 2002 15:32:42 -0000\n***************\n*** 483,488 ****\n--- 483,492 ----\n \t\t{ \"transform_null_equals\", PGC_USERSET }, &Transform_null_equals,\n \t\tfalse, NULL, NULL\n \t},\n+ \t{\n+ \t\t{ \"db_user_namespace\", PGC_SIGHUP }, &Db_user_namespace,\n+ \t\tfalse, NULL, NULL\n+ \t},\n \n \t{\n \t\t{ NULL, 0 }, NULL, false, NULL, NULL\nIndex: src/backend/utils/misc/postgresql.conf.sample\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/utils/misc/postgresql.conf.sample,v\nretrieving revision 1.44\ndiff -c -r1.44 postgresql.conf.sample\n*** src/backend/utils/misc/postgresql.conf.sample\t12 Aug 2002 00:36:12 -0000\t1.44\n--- src/backend/utils/misc/postgresql.conf.sample\t15 Aug 2002 15:32:42 -0000\n***************\n*** 113,119 ****\n #\n #\tMessage display\n #\n- \n #server_min_messages = notice\t# Values, in order of decreasing detail:\n \t\t\t\t# debug5, debug4, debug3, debug2, debug1,\n \t\t\t\t# info, notice, warning, error, log, fatal,\n--- 113,118 ----\n***************\n*** 201,203 ****\n--- 200,203 ----\n #sql_inheritance = true\n #transform_null_equals = false\n #statement_timeout = 0\t\t\t\t# 0 is disabled\n+ #db_user_namespace = false\nIndex: src/include/libpq/libpq-be.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/libpq/libpq-be.h,v\nretrieving revision 1.32\ndiff -c -r1.32 libpq-be.h\n*** src/include/libpq/libpq-be.h\t20 Jun 2002 20:29:49 -0000\t1.32\n--- 
src/include/libpq/libpq-be.h\t15 Aug 2002 15:32:43 -0000\n***************\n*** 59,65 ****\n \n \tProtocolVersion proto;\n \tchar\t\tdatabase[SM_DATABASE + 1];\n! \tchar\t\tuser[SM_USER + 1];\n \tchar\t\toptions[SM_OPTIONS + 1];\n \tchar\t\ttty[SM_TTY + 1];\n \tchar\t\tauth_arg[MAX_AUTH_ARG];\n--- 59,65 ----\n \n \tProtocolVersion proto;\n \tchar\t\tdatabase[SM_DATABASE + 1];\n! \tchar\t\tuser[SM_DATABASE_USER + 1];\n \tchar\t\toptions[SM_OPTIONS + 1];\n \tchar\t\ttty[SM_TTY + 1];\n \tchar\t\tauth_arg[MAX_AUTH_ARG];\n***************\n*** 72,78 ****\n \tSSL\t\t *ssl;\n \tX509\t *peer;\n \tchar\t\tpeer_dn[128 + 1];\n! \tchar\t\tpeer_cn[SM_USER + 1];\n \tunsigned long count;\n #endif\n } Port;\n--- 72,78 ----\n \tSSL\t\t *ssl;\n \tX509\t *peer;\n \tchar\t\tpeer_dn[128 + 1];\n! \tchar\t\tpeer_cn[SM_DATABASE_USER + 1];\n \tunsigned long count;\n #endif\n } Port;\nIndex: src/include/libpq/pqcomm.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/libpq/pqcomm.h,v\nretrieving revision 1.65\ndiff -c -r1.65 pqcomm.h\n*** src/include/libpq/pqcomm.h\t12 Aug 2002 14:35:26 -0000\t1.65\n--- src/include/libpq/pqcomm.h\t15 Aug 2002 15:32:43 -0000\n***************\n*** 114,119 ****\n--- 114,121 ----\n #define SM_DATABASE\t\t64\n /* SM_USER should be the same size as the others. bjm 2002-06-02 */\n #define SM_USER\t\t\t32\n+ /* We append database name if db_user_namespace true. 
*/\n+ #define SM_DATABASE_USER (SM_DATABASE+SM_USER)\n #define SM_OPTIONS\t\t64\n #define SM_UNUSED\t\t64\n #define SM_TTY\t\t\t64\n***************\n*** 124,135 ****\n--- 126,139 ----\n {\n \tProtocolVersion protoVersion;\t\t/* Protocol version */\n \tchar\t\tdatabase[SM_DATABASE];\t/* Database name */\n+ \t\t\t\t/* Db_user_namespace appends dbname */\n \tchar\t\tuser[SM_USER];\t/* User name */\n \tchar\t\toptions[SM_OPTIONS];\t/* Optional additional args */\n \tchar\t\tunused[SM_UNUSED];\t\t/* Unused */\n \tchar\t\ttty[SM_TTY];\t/* Tty for debug output */\n } StartupPacket;\n \n+ extern bool Db_user_namespace;\n \n /* These are the authentication requests sent by the backend. */",
"msg_date": "Thu, 15 Aug 2002 11:54:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Thu, 15 Aug 2002, Bruce Momjian wrote:\n\n>\n> OK, no one complained/commented on my idea of having global users have a\n> trailing '@', so here is a patch that implements that. It has the\n> advantages of:\n\nProbably because not everyone saw it. I know I didn't. This entire\nissue is growing more and more complex. How about a configure item\nto not even compile it in? Or better yet, a configure item to put\nit there with the default off.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 15 Aug 2002 12:06:31 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "\nVince, you were in the CC, and it went to hackers:\n\t\n\tMessage 772/835 Bruce Momjian \n\t Aug 14, 2002 08:30:47 pm -0400\n\tSubject: Re: [HACKERS] Open 7.3 items\n\tTo: Tom Lane <tgl@sss.pgh.pa.us>\n\tDate: Wed, 14 Aug 2002 20:30:47 -0400 (EDT)\n\tcc: Lamar Owen <lamar.owen@wgcr.org>, Vince Vielhaber <vev@michvhf.com>,\n\t PostgreSQL-development <pgsql-hackers@postgresql.org>\n\tX-Virus-Scanned: by AMaViS new-20020517\n\tPrecedence: bulk\n\tSender: pgsql-hackers-owner@postgresql.org\n\tX-Virus-Scanned: by AMaViS new-20020517\n\n---------------------------------------------------------------------------\n\nVince Vielhaber wrote:\n> On Thu, 15 Aug 2002, Bruce Momjian wrote:\n> \n> >\n> > OK, no one complained/commented on my idea of having global users have a\n> > trailing '@', so here is a patch that implements that. It has the\n> > advantages of:\n> \n> Probably because not everyone saw it. I know I didn't. This entire\n> issue is growing more and more complex. How about a configure item\n> to not even compile it in? Or better yet, a configure item to put\n> it there with the default off.\n> \n> Vince.\n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> http://www.camping-usa.com http://www.cloudninegifts.com\n> http://www.meanstreamradio.com http://www.unknown-artists.com\n> ==========================================================================\n> \n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 15 Aug 2002 12:13:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> On Thu, 15 Aug 2002, Bruce Momjian wrote:\n> \n> >\n> > OK, no one complained/commented on my idea of having global users have a\n> > trailing '@', so here is a patch that implements that. It has the\n> > advantages of:\n> \n> Probably because not everyone saw it. I know I didn't. This entire\n> issue is growing more and more complex. How about a configure item\n> to not even compile it in? Or better yet, a configure item to put\n> it there with the default off.\n\nI think I am prety close, and I don't see a configure flag as any better\nthan a GUC option that is off by default.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 15 Aug 2002 12:16:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Thu, 15 Aug 2002, Bruce Momjian wrote:\n\n>\n> Vince, you were in the CC, and it went to hackers:\n\nOh, I'm not saying I didn't get it, I'm saying I didn't see it in\nthe message. It looked as if you were only replying to Tom so after\nreading the jist of it I moved on. When you included it a little\nwhile ago I wondered what you were referring to so I read the whole\nthing more carefully and realized that I missed the end.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 15 Aug 2002 12:34:40 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Thu, 15 Aug 2002, Bruce Momjian wrote:\n\n> Vince Vielhaber wrote:\n> > On Thu, 15 Aug 2002, Bruce Momjian wrote:\n> >\n> > >\n> > > OK, no one complained/commented on my idea of having global users have a\n> > > trailing '@', so here is a patch that implements that. It has the\n> > > advantages of:\n> >\n> > Probably because not everyone saw it. I know I didn't. This entire\n> > issue is growing more and more complex. How about a configure item\n> > to not even compile it in? Or better yet, a configure item to put\n> > it there with the default off.\n>\n> I think I am prety close, and I don't see a configure flag as any better\n> than a GUC option that is off by default.\n\nBut how many people would even use it? I can't see adding the bloat\nunnecessarily and risking it accidently being turned on. Am I wrong\nand really alot of people actually want/need this?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 15 Aug 2002 12:38:45 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> On Thu, 15 Aug 2002, Bruce Momjian wrote:\n> \n> > Vince Vielhaber wrote:\n> > > On Thu, 15 Aug 2002, Bruce Momjian wrote:\n> > >\n> > > >\n> > > > OK, no one complained/commented on my idea of having global users have a\n> > > > trailing '@', so here is a patch that implements that. It has the\n> > > > advantages of:\n> > >\n> > > Probably because not everyone saw it. I know I didn't. This entire\n> > > issue is growing more and more complex. How about a configure item\n> > > to not even compile it in? Or better yet, a configure item to put\n> > > it there with the default off.\n> >\n> > I think I am prety close, and I don't see a configure flag as any better\n> > than a GUC option that is off by default.\n> \n> But how many people would even use it? I can't see adding the bloat\n> unnecessarily and risking it accidently being turned on. Am I wrong\n> and really alot of people actually want/need this?\n\nWell, the demand seems to be larger than I thought, considering the\nnumber of people who have chimed in and want certain features, like\nmultiple global users. I see this being using more by ISP's and\nuniversities that need better user/db partitioning.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 15 Aug 2002 12:41:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Thu, 15 Aug 2002, Bruce Momjian wrote:\n\n> Vince Vielhaber wrote:\n> > On Thu, 15 Aug 2002, Bruce Momjian wrote:\n> >\n> > > Vince Vielhaber wrote:\n> > > > On Thu, 15 Aug 2002, Bruce Momjian wrote:\n> > > >\n> > > > >\n> > > > > OK, no one complained/commented on my idea of having global users have a\n> > > > > trailing '@', so here is a patch that implements that. It has the\n> > > > > advantages of:\n> > > >\n> > > > Probably because not everyone saw it. I know I didn't. This entire\n> > > > issue is growing more and more complex. How about a configure item\n> > > > to not even compile it in? Or better yet, a configure item to put\n> > > > it there with the default off.\n> > >\n> > > I think I am prety close, and I don't see a configure flag as any better\n> > > than a GUC option that is off by default.\n> >\n> > But how many people would even use it? I can't see adding the bloat\n> > unnecessarily and risking it accidently being turned on. Am I wrong\n> > and really alot of people actually want/need this?\n>\n> Well, the demand seems to be larger than I thought, considering the\n> number of people who have chimed in and want certain features, like\n> multiple global users. I see this being using more by ISP's and\n> universities that need better user/db partitioning.\n\nI don't know that concern over a possible limited number of global\nusers is directly proportional to the desire for the feature.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 15 Aug 2002 12:50:58 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> + /* We append database name if db_user_namespace true. */\n> + #define SM_DATABASE_USER (SM_DATABASE+SM_USER)\n\nIs this calculation correct? I'd think you'd need at least one more\ncharacter to allow for the \"@\". And I'm not sure about whether trailing\nnulls are or need to be counted. There seem to be some places in your\npatch where things are dimensioned SM_DATABASE_USER and some where it's\nSM_DATABASE_USER+1; why the inconsistency, and which is right?\n\nOther than getting the array sizes right, it does look like a nice\npatch; very small, which is what I'd hoped for. The notion of having to\nsay \"postgres@\" still seems kinda ugly, but given the simplicity of the\npatch I'm willing to live with that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Aug 2002 12:56:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "On Thu, 15 Aug 2002, Tom Lane wrote:\n\n> Other than getting the array sizes right, it does look like a nice\n> patch; very small, which is what I'd hoped for. The notion of having to\n> say \"postgres@\" still seems kinda ugly, but given the simplicity of the\n> patch I'm willing to live with that.\n\nGoing from postgres to postgres@ ??? I don't care how simple the patch\nis, I'd rather it was configurable to keep it out completely. That's\nnot just ugly, that's coyote ugly!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 15 Aug 2002 13:00:37 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "On Thursday 15 August 2002 11:54 am, Bruce Momjian wrote:\n> OK, no one complained/commented on my idea of having global users have a\n> trailing '@', so here is a patch that implements that. It has the\n> advantages of:\n\nAs it's substantially the same as user@template1, I am of course OK with it. \n:-) Easier to type than user@template1, too.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 15 Aug 2002 13:04:16 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> Going from postgres to postgres@ ??? I don't care how simple the patch\n> is, I'd rather it was configurable to keep it out completely. That's\n> not just ugly, that's coyote ugly!\n\nYeah, but it doesn't affect you unless you turn on the GUC parameter.\nMost people will never even know this code is there.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Aug 2002 13:07:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "On Thu, 15 Aug 2002, Tom Lane wrote:\n\n> Vince Vielhaber <vev@michvhf.com> writes:\n> > Going from postgres to postgres@ ??? I don't care how simple the patch\n> > is, I'd rather it was configurable to keep it out completely. That's\n> > not just ugly, that's coyote ugly!\n>\n> Yeah, but it doesn't affect you unless you turn on the GUC parameter.\n> Most people will never even know this code is there.\n\nBut it doesn't need to affect anyone, even if it's enabled. Isn't\nthe lack of an @ just as good as an @ at the end of the username?\nGets rid of the ugliness and won't break things if it's suddenly\nenabled.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 15 Aug 2002 13:13:52 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> But it doesn't need to affect anyone, even if it's enabled. Isn't\n> the lack of an @ just as good as an @ at the end of the username?\n\nNo, because there isn't any @ in the incoming connection request in the\nnormal-user case: just a user name and a database name, which *we* have\nto assemble into user@database.\n\nWe can't really expect the users to do this for us (give user@database\nas their full user name). There are a number of reasons why I don't\nwanna do that, but the real showstopper is that the username field of\nthe connection request packet is only 32 bytes wide, and we cannot\nenlarge it without a protocol breakage. Fitting \"user@database\" in 32\nbytes would be awfully restrictive about your user and database names.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Aug 2002 13:21:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > + /* We append database name if db_user_namespace true. */\n> > + #define SM_DATABASE_USER (SM_DATABASE+SM_USER)\n> \n> Is this calculation correct? I'd think you'd need at least one more\n> character to allow for the \"@\". And I'm not sure about whether trailing\n> nulls are or need to be counted. There seem to be some places in your\n> patch where things are dimensioned SM_DATABASE_USER and some where it's\n> SM_DATABASE_USER+1; why the inconsistency, and which is right?\n\nYes, there was some inconsistency. The new patch fixes that up; \nattached.\n\n> Other than getting the array sizes right, it does look like a nice\n> patch; very small, which is what I'd hoped for. The notion of having to\n> say \"postgres@\" still seems kinda ugly, but given the simplicity of the\n> patch I'm willing to live with that.\n\nGlad we have something now everyone likes, or at least can live with.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: doc/src/sgml/runtime.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/runtime.sgml,v\nretrieving revision 1.125\ndiff -c -r1.125 runtime.sgml\n*** doc/src/sgml/runtime.sgml\t15 Aug 2002 14:26:15 -0000\t1.125\n--- doc/src/sgml/runtime.sgml\t15 Aug 2002 16:34:12 -0000\n***************\n*** 1191,1196 ****\n--- 1191,1211 ----\n </varlistentry>\n \n <varlistentry>\n+ <term><varname>DB_USER_NAMESPACE</varname> (<type>boolean</type>)</term>\n+ <listitem>\n+ <para>\n+ Appends <literal>@</> and the database name to the user name when\n+ connecting to the database. This allows per-database users. \n+ User names ending with <literal>@</> are considered global and may \n+ connect to any database. 
It is recommended you create at least one \n+ global user, e.g. <literal>postgres@</>, before enabling this feature. \n+ Also, when creating user names containing <literal>@</>, you will need \n+ to quote the user name.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n+ <varlistentry>\n <indexterm>\n <primary>deadlock</primary>\n <secondary>timeout</secondary>\nIndex: src/backend/libpq/auth.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/libpq/auth.c,v\nretrieving revision 1.82\ndiff -c -r1.82 auth.c\n*** src/backend/libpq/auth.c\t20 Jun 2002 20:29:28 -0000\t1.82\n--- src/backend/libpq/auth.c\t15 Aug 2002 16:34:13 -0000\n***************\n*** 117,123 ****\n \t\t\t version, PG_KRB4_VERSION);\n \t\treturn STATUS_ERROR;\n \t}\n! \tif (strncmp(port->user, auth_data.pname, SM_USER) != 0)\n \t{\n \t\telog(LOG, \"pg_krb4_recvauth: name \\\"%s\\\" != \\\"%s\\\"\",\n \t\t\t port->user, auth_data.pname);\n--- 117,123 ----\n \t\t\t version, PG_KRB4_VERSION);\n \t\treturn STATUS_ERROR;\n \t}\n! \tif (strncmp(port->user, auth_data.pname, SM_DATABASE_USER) != 0)\n \t{\n \t\telog(LOG, \"pg_krb4_recvauth: name \\\"%s\\\" != \\\"%s\\\"\",\n \t\t\t port->user, auth_data.pname);\n***************\n*** 290,296 ****\n \t}\n \n \tkusername = pg_an_to_ln(kusername);\n! \tif (strncmp(port->user, kusername, SM_USER))\n \t{\n \t\telog(LOG, \"pg_krb5_recvauth: user name \\\"%s\\\" != krb5 name \\\"%s\\\"\",\n \t\t\t port->user, kusername);\n--- 290,296 ----\n \t}\n \n \tkusername = pg_an_to_ln(kusername);\n! 
\tif (strncmp(port->user, kusername, SM_DATABASE_USER))\n \t{\n \t\telog(LOG, \"pg_krb5_recvauth: user name \\\"%s\\\" != krb5 name \\\"%s\\\"\",\n \t\t\t port->user, kusername);\nIndex: src/backend/postmaster/postmaster.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/postmaster/postmaster.c,v\nretrieving revision 1.283\ndiff -c -r1.283 postmaster.c\n*** src/backend/postmaster/postmaster.c\t10 Aug 2002 20:29:18 -0000\t1.283\n--- src/backend/postmaster/postmaster.c\t15 Aug 2002 16:34:15 -0000\n***************\n*** 116,122 ****\n sigset_t\tUnBlockSig,\n \t\t\tBlockSig,\n \t\t\tAuthBlockSig;\n- \n #else\n int\t\t\tUnBlockSig,\n \t\t\tBlockSig,\n--- 116,121 ----\n***************\n*** 191,196 ****\n--- 190,197 ----\n bool\t\tHostnameLookup;\t\t/* for ps display */\n bool\t\tShowPortNumber;\n bool\t\tLog_connections = false;\n+ bool\t\tDb_user_namespace = false;\n+ \n \n /* Startup/shutdown state */\n static pid_t StartupPID = 0,\n***************\n*** 1161,1166 ****\n--- 1162,1177 ----\n \tif (port->user[0] == '\\0')\n \t\telog(FATAL, \"no PostgreSQL user name specified in startup packet\");\n \n+ \t/* Append database name for per-db user namespace, exclude global users. */\n+ \tif (Db_user_namespace && strlen(port->user) > 0 &&\n+ \t\tport->user[strlen(port->user)-1] != '@')\n+ \t{\n+ \t\tchar hold_user[SM_DATABASE_USER+1];\n+ \t\tsnprintf(hold_user, SM_DATABASE_USER+1, \"%s@%s\", port->user,\n+ \t\t\t\t port->database);\n+ \t\tstrcpy(port->user, hold_user);\n+ \t}\n+ \n \t/*\n \t * If we're going to reject the connection due to database state, say\n \t * so now instead of wasting cycles on an authentication exchange.\n***************\n*** 2587,2597 ****\n \tif (FindExec(fullprogname, argv[0], \"postmaster\") < 0)\n \t\treturn false;\n \n! \tfilename = palloc(strlen(DataDir) + 20);\n \tsprintf(filename, \"%s/postmaster.opts\", DataDir);\n \n! \tfp = fopen(filename, \"w\");\n! 
\tif (fp == NULL)\n \t{\n \t\tpostmaster_error(\"cannot create file %s: %s\",\n \t\t\t\t\t\t filename, strerror(errno));\n--- 2598,2607 ----\n \tif (FindExec(fullprogname, argv[0], \"postmaster\") < 0)\n \t\treturn false;\n \n! \tfilename = palloc(strlen(DataDir) + 17);\n \tsprintf(filename, \"%s/postmaster.opts\", DataDir);\n \n! \tif ((fp = fopen(filename, \"w\")) == NULL)\n \t{\n \t\tpostmaster_error(\"cannot create file %s: %s\",\n \t\t\t\t\t\t filename, strerror(errno));\nIndex: src/backend/utils/misc/guc.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/utils/misc/guc.c,v\nretrieving revision 1.82\ndiff -c -r1.82 guc.c\n*** src/backend/utils/misc/guc.c\t15 Aug 2002 02:51:26 -0000\t1.82\n--- src/backend/utils/misc/guc.c\t15 Aug 2002 16:34:17 -0000\n***************\n*** 483,488 ****\n--- 483,492 ----\n \t\t{ \"transform_null_equals\", PGC_USERSET }, &Transform_null_equals,\n \t\tfalse, NULL, NULL\n \t},\n+ \t{\n+ \t\t{ \"db_user_namespace\", PGC_SIGHUP }, &Db_user_namespace,\n+ \t\tfalse, NULL, NULL\n+ \t},\n \n \t{\n \t\t{ NULL, 0 }, NULL, false, NULL, NULL\nIndex: src/backend/utils/misc/postgresql.conf.sample\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/utils/misc/postgresql.conf.sample,v\nretrieving revision 1.44\ndiff -c -r1.44 postgresql.conf.sample\n*** src/backend/utils/misc/postgresql.conf.sample\t12 Aug 2002 00:36:12 -0000\t1.44\n--- src/backend/utils/misc/postgresql.conf.sample\t15 Aug 2002 16:34:17 -0000\n***************\n*** 113,119 ****\n #\n #\tMessage display\n #\n- \n #server_min_messages = notice\t# Values, in order of decreasing detail:\n \t\t\t\t# debug5, debug4, debug3, debug2, debug1,\n \t\t\t\t# info, notice, warning, error, log, fatal,\n--- 113,118 ----\n***************\n*** 201,203 ****\n--- 200,203 ----\n #sql_inheritance = true\n #transform_null_equals = false\n #statement_timeout = 0\t\t\t\t# 0 is 
disabled\n+ #db_user_namespace = false\nIndex: src/include/libpq/libpq-be.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/libpq/libpq-be.h,v\nretrieving revision 1.32\ndiff -c -r1.32 libpq-be.h\n*** src/include/libpq/libpq-be.h\t20 Jun 2002 20:29:49 -0000\t1.32\n--- src/include/libpq/libpq-be.h\t15 Aug 2002 16:34:18 -0000\n***************\n*** 59,65 ****\n \n \tProtocolVersion proto;\n \tchar\t\tdatabase[SM_DATABASE + 1];\n! \tchar\t\tuser[SM_USER + 1];\n \tchar\t\toptions[SM_OPTIONS + 1];\n \tchar\t\ttty[SM_TTY + 1];\n \tchar\t\tauth_arg[MAX_AUTH_ARG];\n--- 59,65 ----\n \n \tProtocolVersion proto;\n \tchar\t\tdatabase[SM_DATABASE + 1];\n! \tchar\t\tuser[SM_DATABASE_USER + 1];\n \tchar\t\toptions[SM_OPTIONS + 1];\n \tchar\t\ttty[SM_TTY + 1];\n \tchar\t\tauth_arg[MAX_AUTH_ARG];\nIndex: src/include/libpq/pqcomm.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/libpq/pqcomm.h,v\nretrieving revision 1.65\ndiff -c -r1.65 pqcomm.h\n*** src/include/libpq/pqcomm.h\t12 Aug 2002 14:35:26 -0000\t1.65\n--- src/include/libpq/pqcomm.h\t15 Aug 2002 16:34:18 -0000\n***************\n*** 114,119 ****\n--- 114,121 ----\n #define SM_DATABASE\t\t64\n /* SM_USER should be the same size as the others. bjm 2002-06-02 */\n #define SM_USER\t\t\t32\n+ /* We append database name if db_user_namespace true. 
*/\n+ #define SM_DATABASE_USER (SM_DATABASE+SM_USER+1) /* +1 for @ */\n #define SM_OPTIONS\t\t64\n #define SM_UNUSED\t\t64\n #define SM_TTY\t\t\t64\n***************\n*** 124,135 ****\n--- 126,139 ----\n {\n \tProtocolVersion protoVersion;\t\t/* Protocol version */\n \tchar\t\tdatabase[SM_DATABASE];\t/* Database name */\n+ \t\t\t\t/* Db_user_namespace appends dbname */\n \tchar\t\tuser[SM_USER];\t/* User name */\n \tchar\t\toptions[SM_OPTIONS];\t/* Optional additional args */\n \tchar\t\tunused[SM_UNUSED];\t\t/* Unused */\n \tchar\t\ttty[SM_TTY];\t/* Tty for debug output */\n } StartupPacket;\n \n+ extern bool Db_user_namespace;\n \n /* These are the authentication requests sent by the backend. */",
"msg_date": "Thu, 15 Aug 2002 13:43:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
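The buffer arithmetic and name-mangling step in the patch above can be modeled outside the backend. This is a hypothetical Python sketch of the C logic (constants taken from pqcomm.h; `mangle_user` is an illustrative name, not a backend function):

```python
# Sizes from src/include/libpq/pqcomm.h
SM_DATABASE = 64
SM_USER = 32
# Room for user + '@' + database; the +1 is the '@' separator
SM_DATABASE_USER = SM_DATABASE + SM_USER + 1

def mangle_user(user: str, database: str) -> str:
    """Model of the postmaster logic: append '@<db>' to local user
    names; names already ending in '@' are global and pass through."""
    if user and not user.endswith("@"):
        # mirrors snprintf(hold_user, SM_DATABASE_USER + 1, "%s@%s", ...)
        return f"{user}@{database}"[:SM_DATABASE_USER]
    return user

print(mangle_user("dave", "db1"))       # dave@db1
print(mangle_user("postgres@", "db1"))  # postgres@  (global user)
```

Note that a maximal user (32 chars) plus '@' plus a maximal database name (64 chars) is exactly 97 characters, which is why `SM_DATABASE_USER` needs the extra byte beyond `SM_DATABASE + SM_USER`.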
{
"msg_contents": "\n> But how many people would even use it? I can't see adding the bloat\n> unnecessarily and risking it accidentally being turned on. Am I wrong,\n> and really a lot of people actually want/need this?\n\nAt an absolute minimum there are two: myself and Marc.\n\nThat said, this is a semi-required step to offering PostgreSQL as a\nservice to clients. The refined permissions were a much more important\nstep.\n\nSo, take the number of people actively watching -hackers and use that as\na percentage.\n",
"msg_date": "15 Aug 2002 14:10:24 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Thu, 15 Aug 2002, Tom Lane wrote:\n\n> Vince Vielhaber <vev@michvhf.com> writes:\n> > But it doesn't need to affect anyone, even if it's enabled. Isn't\n> > the lack of an @ just as good as an @ at the end of the username?\n>\n> No, because there isn't any @ in the incoming connection request in the\n> normal-user case: just a user name and a database name, which *we* have\n> to assemble into user@database.\n>\n> We can't really expect the users to do this for us (give user@database\n> as their full user name). There are a number of reasons why I don't\n> wanna do that, but the real showstopper is that the username field of\n> the connection request packet is only 32 bytes wide, and we cannot\n> enlarge it without a protocol breakage. Fitting \"user@database\" in 32\n> bytes would be awfully restrictive about your user and database names.\n\nOk, I misunderstood. I thought it was the user going to have to type\nthat in based on some of yesterday's comments.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 15 Aug 2002 14:30:34 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
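Tom's arithmetic about the 32-byte startup-packet field can be checked directly. A Python sketch (`SM_USER` from pqcomm.h; `fits_in_packet` is an invented helper) of why client-assembled `user@database` names would be so restrictive:

```python
SM_USER = 32  # startup-packet user field, fixed by the wire protocol

def fits_in_packet(user: str, database: str) -> bool:
    """Would a client-assembled 'user@database' fit in the 32-byte
    field (31 usable characters plus a NUL terminator)?"""
    return len(f"{user}@{database}") <= SM_USER - 1

print(fits_in_packet("dave", "db1"))                           # True
print(fits_in_packet("accounting", "customer_invoices_2002"))  # False
```

With the '@' separator included, user and database names would have to share just 30 characters between them, which is why the assembly has to happen server-side, where the combined buffer can be `SM_DATABASE_USER` wide.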
{
"msg_contents": "Bruce Momjian writes:\n\n> OK, I have another idea. What if we make global users end with an @, so\n> dave@ is a global user. We can easily check for that in the postmaster\n> and not append the dbname. I know it makes @ a special character, but\n> considering the problem of namespace collision, it seems better than\n> what we have now. We could add the install user too if we wish, or just\n> tell them to make sure they add a user@ before turning on the feature.\n\nI don't like where this is going. The original plan was to create a\nfeature that was simple and transparent. Now we have a feature that might\nbe simple to implement, but is neither simple to understand nor\ntransparent. Instead it uglifies the common interfaces.\n\nI don't see what the problem is of dumping out the entire content of\npg_shadow into a flat file. First you look for a non-@ user, then you\nlook for an @ user that matches the database.\n\nThe interface should drive the implementation, not the other way around.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 15 Aug 2002 21:30:45 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I don't see what the problem is of dumping out the entire content of\n> pg_shadow into a flat file. First you look for a non-@ user, then you\n> look for an @ user that matches the database.\n\nWhile I'd prefer that approach myself, the way Bruce is proposing does\nhave a definite advantage: there is no problem with confusion between\nglobal users and database-local users of the same username. \"foo@\" is\nglobal, \"foo\" is not.\n\nMy own feeling is that the confusion argument is a weak one, and that\nnot having to use \"@\" to log in as a global user would be worth having\nto avoid duplicating global and local names. But I'm not sufficiently\nexcited about it to volunteer to do the work ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Aug 2002 15:51:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "On Thu, 15 Aug 2002, Tom Lane wrote:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > I don't see what the problem is of dumping out the entire content of\n> > pg_shadow into a flat file. First you look for a non-@ user, then you\n> > look for an @ user that matches the database.\n>\n> While I'd prefer that approach myself, the way Bruce is proposing does\n> have a definite advantage: there is no problem with confusion between\n> global users and database-local users of the same username. \"foo@\" is\n> global, \"foo\" is not.\n>\n> My own feeling is that the confusion argument is a weak one, and that\n> not having to use \"@\" to log in as a global user would be worth having\n> to avoid duplicating global and local names. But I'm not sufficiently\n> excited about it to volunteer to do the work ;-)\n\nHere we go again. I thought you just said that the @ wouldn't be\nsomething a user would have to do. I understood that to be any user.\nIt's back to ugly again.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 15 Aug 2002 16:04:24 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "vev@michvhf.com (Vince Vielhaber) wrote:\n> Here we go again. I thought you just said that the @ wouldn't be\n> something a user would have to do. I understood that to be any user.\n> It's back to ugly again.\n> \n> Vince.\n\nIf it means anything to you, I agree that it should be a configure/compile \ntime option and not a GUC variable -- no, actually, this whole thing should \njust be distributed as a diff in contrib, and if someone wants it they could \npatch it in by hand; that's just as asinine as the current implementation.\n\nWhat about actually incorporating this into the privileges system?",
"msg_date": "Thu, 15 Aug 2002 23:03:23 +0000 (UTC)",
"msg_from": "ngpg@grymmjack.com",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > I don't see what the problem is of dumping out the entire content of\n> > pg_shadow into a flat file. First you look for a non-@ user, then you\n> > look for an @ user that matches the database.\n> \n> While I'd prefer that approach myself, the way Bruce is proposing does\n> have a definite advantage: there is no problem with confusion between\n> global users and database-local users of the same username. \"foo@\" is\n> global, \"foo\" is not.\n> \n> My own feeling is that the confusion argument is a weak one, and that\n> not having to use \"@\" to log in as a global user would be worth having\n> to avoid duplicating global and local names. But I'm not sufficiently\n> excited about it to volunteer to do the work ;-)\n\nIf we don't suffix global users with '@', a global user named 'dave'\ncould not attach to a database called 'db1' as himself if a user called\n'dave@db1' existed. If you have a super-user, who you want to be able to\nconnect to any database, the creation of that name in any database would\nblock the superuser from connecting as themselves. That is the\nconfusion I want to avoid. \n\nI have seen some negative reactions to the feature. I am willing to ask\nfor a vote, if that is what people want. If not, I will apply the patch\nin the next day or two.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 15 Aug 2002 21:12:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "> I have seen some negative reactions to the feature. I am willing to ask\n> for a vote, if that is what people want. If not, I will apply the patch\n> in the next day or two.\n\nPlease apply.\n\n",
"msg_date": "15 Aug 2002 21:14:42 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> If we don't suffix global users with '@', a global user named 'dave'\n> could not attach to a database called 'db1' as himself if a user called\n> 'dave@db1' existed.\n\nNo, it's the other way around (assuming you check user before user@db):\nthe existence of a global user would prevent similarly-named local users\nfrom connecting. This does not strike me as too terrible, assuming that\nthere are not very many global users.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Aug 2002 22:20:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
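The lookup order described here (bare name first, then the database-qualified name) was, as noted, never actually coded. A hypothetical Python sketch of such a two-pass search over the flat-file user list, with `lookup_user` and the sample names invented for illustration:

```python
def lookup_user(shadow: set, user: str, database: str):
    """Two-pass search: an exact (global) name wins, otherwise fall
    back to the database-qualified local name, else no match."""
    if user in shadow:          # global user shadows local ones
        return user
    local = f"{user}@{database}"
    if local in shadow:
        return local
    return None

shadow = {"dave", "dave@db1", "alice@db2"}
print(lookup_user(shadow, "dave", "db1"))   # dave  (global wins)
print(lookup_user(shadow, "alice", "db2"))  # alice@db2
```

This makes Tom's point concrete: the global "dave" blocks the local "dave@db1" from ever being matched, which is the collision the trailing-'@' convention avoids.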
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > If we don't suffix global users with '@', a global user named 'dave'\n> > could not attach to a database called 'db1' as himself if a user called\n> > 'dave@db1' existed.\n> \n> No, it's the other way around (assuming you check user before user@db):\n> the existence of a global user would prevent similarly-named local users\n> from connecting. This does not strike me as too terrible, assuming that\n> there are not very many global users.\n\nYes, something like that. It could go either way. I never actually\ncoded the double checking.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 16 Aug 2002 00:10:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Thu, 15 Aug 2002, Bruce Momjian wrote:\n\n> I have seen some negative reactions to the feature. I am willing to ask\n> for a vote, if that is what people want. If not, I will apply the patch\n> in the next day or two.\n\nSo are you calling for a vote or just willing to ask for one? I vote for\nputting it in contrib and letting whoever wants it apply it and use it.\nThe more we discuss it the worse it looks.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 16 Aug 2002 05:21:45 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> So are you calling for a vote or just willing to ask for one? I vote for\n> putting it in contrib and letting whoever wants it apply it and use it.\n\nThe trouble with putting it in contrib is that that makes it effectively\nunavailable to anyone who installs from RPMs, or otherwise doesn't build\nfrom source for themselves. Putting a patch diff in contrib is a bad\nidea anyway since the patch will suffer bit-rot in no time, as the\nreferenced files change.\n\nSince the patch is small and doesn't change behavior or performance if\nyou don't enable the feature, I don't think there's a good reason to\npush it off to contrib just because it's ugly.\n\n> The more we discuss it the worse it looks.\n\nI still like the other way better --- but I'm still not prepared to do\nthe legwork to make it happen, so I have to defer to whatever Bruce is\nwilling to implement.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Aug 2002 09:47:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "On Fri, 16 Aug 2002, Tom Lane wrote:\n\n> Vince Vielhaber <vev@michvhf.com> writes:\n> > So are you calling for a vote or just willing to ask for one? I vote for\n> > putting it in contrib and letting whoever wants it apply it and use it.\n>\n> The trouble with putting it in contrib is that that makes it effectively\n> unavailable to anyone who installs from RPMs, or otherwise doesn't build\n> from source for themselves. Putting a patch diff in contrib is a bad\n> idea anyway since the patch will suffer bit-rot in no time, as the\n> referenced files change.\n\nRPMs aren't a good enough reason to put it in. All features aren't\ninstalled in an RPM, why would this need to? Besides, anything that\nis runtime configurable can end up getting its default changed on a\nwhim. Then again as long as 7.2.1 is stable enough for me there's\nno reason to upgrade 'cuze I damn sure ain't going back and changing\nall sorts of programs and scripts that have global users.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 16 Aug 2002 10:21:12 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Vince Vielhaber writes:\n > [ 'user@' patch ]\n > whim. Then again as long as 7.2.1 is stable enough for me there's\n > no reason to upgrade 'cuze I damn sure ain't going back and changing\n > all sorts of programs and scripts that have global users.\n\nHaving read bits and pieces of this thread, can those in favour\nconfirm that this would be an effect of this patch? If so I fail to\nsee the usefulness of this and indeed it would be very harmful to\nexisting installations! All use of PostgreSQL utilities in scripts for\nour product always do a '-U sprint' to use a global user, this aids\nour internal development and makes installation notes for clients\neasier...\n\nAlso what effect would adding significance to '@' in the context of\nusernames have, if any, on the current use of it as a database/host\nseparator (in ECPG, certainly would be useful in the utilities too)?\n\nThanks, Lee.\n",
"msg_date": "Fri, 16 Aug 2002 15:40:15 +0100",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Lee Kindness <lkindness@csl.co.uk> writes:\n> Vince Vielhaber writes:\n>>> [ 'user@' patch ]\n>>> whim. Then again as long as 7.2.1 is stable enough for me there's\n>>> no reason to upgrade 'cuze I damn sure ain't going back and changing\n>>> all sorts of programs and scripts that have global users.\n\n> Having read bits and pieces of this thread, can those in favour\n> confirm that this would be an effect of this patch?\n\nI think Vince is talking through his hat. The proposed flag wouldn't\never be enabled by default. If someone did turn it on in their\ninstallation \"on a whim\", they'd soon turn it off again if they didn't\nlike the effects. I do not see much difference between the above\nargument and arguing \"we shouldn't have i18n support, because if I\nturned it on on a whim I wouldn't be able to read my error messages\".\n\nOnce again: *no one* has at any time suggested that any form of this\npatch should affect the default behavior in the slightest.\n\n> Also what effect would adding significance to '@' in the context of\n> usernames have, if any, on the current use of it as a database/host\n> separator (in ECPG, certainly would be useful in the utilities too)?\n\nWell, I don't see any difficulty there, but if you are aware of a\ncontext where it'd be a problem, point it out!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Aug 2002 10:48:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "On Fri, Aug 16, 2002 at 10:21:12AM -0400, Vince Vielhaber wrote:\n \n> RPMs aren't a good enough reason to put it in. All features aren't\n> installed in an RPM, why would this need to? Besides, anything that\n> is runtime configurable can end up getting its default changed on a\n> whim. Then again as long as 7.2.1 is stable enough for me there's\n> no reason to upgrade 'cuze I damn sure ain't going back and changing\n> all sorts of programs and scripts that have global users.\n \nSo, Vince, do you have problems with the various GUC-based optimizer\nhooks getting set to other than the default? I'd think you'd notice \nif suddenly indexscans all went away, or any of these:\n\n#enable_seqscan = true\n#enable_indexscan = true\n#enable_tidscan = true\n#enable_sort = true\n#enable_nestloop = true\n#enable_mergejoin = true\n#enable_hashjoin = true\n\nMy point is that your resistance to a GUC-controlled runtime configurable\non the basis of 'it might get changed accidentally' makes little sense to\nme, given all the other runtime config settings that never do get changed.\nWhat makes you think this one will be more susceptible to accidental\nflipping?\n\nI'm not sure whose 'whim' it is that you're afraid of: perhaps you have a\nparticularly sadistic DBA to deal with? ;-) And of course, this being \nfree software and all, no one is forcing an upgrade on you.\n\nRoss",
"msg_date": "Fri, 16 Aug 2002 09:51:15 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Fri, 2002-08-16 at 09:51, Ross J. Reedstrom wrote:\n> On Fri, Aug 16, 2002 at 10:21:12AM -0400, Vince Vielhaber wrote:\n> \n> > RPMs aren't a good enough reason to put it in. All features aren't\n> > installed in an RPM, why would this need to? Besides, anything that\n> > is runtime configurable can end up getting its default changed on a\n> > whim. Then again as long as 7.2.1 is stable enough for me there's\n> > no reason to upgrade 'cuze I damn sure ain't going back and changing\n> > all sorts of programs and scripts that have global users.\n> \n> So, Vince, do you have problems with the various GUC based optimizer\n> hooks getting set to other than the default? I'd think you'd notice \n> if suddenly indexscans all went away, or any of these:\n> \n> #enable_seqscan = true\n> #enable_indexscan = true\n> #enable_tidscan = true\n> #enable_sort = true\n> #enable_nestloop = true\n> #enable_mergejoin = true\n> #enable_hashjoin = true\n> \n> My point is that your resistance to a GUC controlled runtime configurable\n> on the basis of 'it might get changed accidently' makes little sense to\n> me, given all the other runtime config settings that never do get changed.\n> What makes you think this one will be more susceptible to accidental\n> flipping?\n> \n> I'm not sure who's 'whim' it is that your afraid of: perhaps you have a\n> paticularly sadistic DBA to deal with? ;-) And of course, this being \n> free software and all, noone is forcing an upgrade on you.\nAND, I thought the general consensus was **AWAY** from configure time\ndirectives and to GUC variables whenever **POSSIBLE**. \n\nLER\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "16 Aug 2002 10:09:46 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> On Thu, 15 Aug 2002, Bruce Momjian wrote:\n> \n> > I have seen some negative reactions to the feature. I am willing to ask\n> > for a vote, if that is what people want. If not, I will apply the patch\n> > in the next day or two.\n> \n> So are you calling for a vote or just willing to ask for one? I vote for\n> putting it in contrib and letting whoever wants it apply it and use it.\n> The more we discuss it the worse it looks.\n\nI can do a vote. However, seeing many positive comments about the\npatch, and 1-2 negative ones (with no suggestion on how to improve it),\nI don't think the negative votes will win.\n\nI usually do a vote when the email comments are coming in kind of close.\n\nSpecifically, in the thread, I have Vince and Peter as negative, and >7\npositive, I think.\n\nLook at the constraints I am under to implement what is effectively\nusername schemas:\n\n\tsmall patch, no bloat, because it isn't a core feature\n\tmultiple global users\n\tno namespace collisions between global/non-global users\n\tzero performance impact\n\t32-byte user string coming from the client\n\nSpecifically, what is ugly about it? Is it that global users have an @\nat the end of their names? How do we prevent namespace collisions\n_without_ doing this? I am all ears.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 359-1001\n  +  If your life is a hard drive,     |  13 Roberts Road\n  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 16 Aug 2002 11:45:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Specifically, what is ugly about it? Is it that global users have an @\n> at the end of their names? How do we prevent namespace collisions\n> _without_ doing this? I am all ears.\n\nThe folks who are unhappy about this design basically think that the\nnamespace collisions issue should not be considered a vital requirement;\nwhereupon you don't have to have the '@' because a search in the\npg_shadow flat file would work well enough.\n\nIt comes down to a judgment call about which is uglier, putting '@' on\nglobal usernames or having to avoid namespace collisions.\n\nAt this point I think we've wasted more than enough time on the\nargument; I haven't seen any new ideas recently, nor any change in\nanyone's position. Since no one seems to want to do the work to make a\nbetter implementation, I vote we accept the patch we have and move on.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Aug 2002 12:46:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "On Fri, 16 Aug 2002, Tom Lane wrote:\n\n> Lee Kindness <lkindness@csl.co.uk> writes:\n> > Vince Vielhaber writes:\n> >>> [ 'user@' patch ]\n> >>> whim. Then again as long as 7.2.1 is stable enough for me there's\n> >>> no reason to upgrade 'cuze I damn sure ain't going back and changing\n> >>> all sorts of programs and scripts that have global users.\n>\n> > Having read bits and pieces of this thread, can those in favour\n> > confirm that this would be an effect of this patch?\n>\n> I think Vince is talking through his hat. The proposed flag wouldn't\n> ever be enabled by default. If someone did turn it on in their\n> installation \"on a whim\", they'd soon turn it off again if they didn't\n> like the effects. I do not see much difference between the above\n> argument and arguing \"we shouldn't have i18n support, because if I\n> turned it on on a whim I wouldn't be able to read my error messages\".\n>\n> Once again: *no one* has at any time suggested that any form of this\n> patch should affect the default behavior in the slightest.\n\nNot yet they haven't. What happens when it's decided that this\n*feature* is a good thing and should be the default? Maybe not\nnow, but can you guarantee that that won't happen in say 7.4? Or\nmaybe 8.0? I can hear it now, \"Well we're giving you an entire\nversion to change your scripts\".\n\nThere's not even a consensus that this is the right way to do it;\nyou even said you'd prefer it was implemented in another way but\ndon't have the time to do it. 
Since when does this group rush to\nstuff features in without agreement even on HOW to implement it?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 16 Aug 2002 13:01:21 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Vince Vielhaber wrote:\n> > Once again: *no one* has at any time suggested that any form of this\n> > patch should affect the default behavior in the slightest.\n> \n> Not yet they haven't. What happens when it's decided that this\n> *feature* is a good thing and should be the default? Maybe not\n> now, but can you guarantee that that won't happen in say 7.4? Or\n> maybe 8.0? I can hear it now, \"Well we're giving you an entire\n> version to change your scripts\".\n\nI can't argue hypotheticals with you, but if we decided to make this a\ndefault behavior, we would probably push the functionality down into\nCREATE USER, create a new column in pg_shadow, lengthen the username\npassed from the client, and do it that way. However, because it is not\non by default _and_ we don't want to add visibility to a functionality\nthat is off by default, we are doing it this way.\n\nRemember, non-local users already have an @ in their username. I am\njust adding @ to the global users too. This functionality actually\nallows you to keep your old users in pg_shadow, and once you turn on the\nfeature, those users become unusable. When you turn the feature off,\nthey are back again.\n\nI know the trailing @ is ugly, but it prevents surprises when connecting\nto the database.\n\n> There's not even a consensus that this is the right way to do it,\n> you even said you'd prefer it was implemented in another way but\n> don't have the time to do it. Since when does this group rush to\n> stuff features in without agreement even on HOW to implement it?\n\nThis is an argument I don't want to bow to. How many features have we\nleft undone, for release after release, because we couldn't find a\nperfect way to do it, so we did nothing, and users went elsewhere for\ntheir database needs? We have had enough discussion to know that there\nisn't a perfect solution in this case, so we are going to implement the\nbest we can, and if we have to revisit it in 8.0, so be it. 
I am sure\nyou will still be around to help craft that solution.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 16 Aug 2002 13:30:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Fri, 16 Aug 2002, Ross J. Reedstrom wrote:\n\n> On Fri, Aug 16, 2002 at 10:21:12AM -0400, Vince Vielhaber wrote:\n>\n> > RPMs aren't a good enough reason to put it in. All features aren't\n> > installed in an RPM, why would this need to? Besides, anything that\n> > is runtime configurable can end up getting its default changed on a\n> > whim. Then again as long as 7.2.1 is stable enough for me there's\n> > no reason to upgrade 'cuze I damn sure ain't going back and changing\n> > all sorts of programs and scripts that have global users.\n>\n> So, Vince, do you have problems with the various GUC based optimizer\n> hooks getting set to other than the default? I'd think you'd notice\n> if suddenly indexscans all went away, or any of these:\n>\n> #enable_seqscan = true\n> #enable_indexscan = true\n> #enable_tidscan = true\n> #enable_sort = true\n> #enable_nestloop = true\n> #enable_mergejoin = true\n> #enable_hashjoin = true\n>\n> My point is that your resistance to a GUC controlled runtime configurable\n> on the basis of 'it might get changed accidently' makes little sense to\n> me, given all the other runtime config settings that never do get changed.\n> What makes you think this one will be more susceptible to accidental\n> flipping?\n\nMy point has nothing to do with resistance to GUC configurables. Someone\nWILL decide that having it as a default is a *Good Thing* because it's\nthere and is useful to them and in its current implementation there's not\neven a concensus that it's the right way to do it. 
It's being rushed into\nthis version unnecessarily.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 16 Aug 2002 13:34:05 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> My point has nothing to do with resistance to GUC configurables. Someone\n> WILL decide that having it as a default is a *Good Thing* because it's\n> there and is useful to them\n\nWhich someone would this be? There's no chance that such a proposal \nwould pass a pghackers vote, and certainly no chance that someone\ncould commit such a change into CVS without everyone noticing.\n\n> and in its current implementation there's not\n> even a concensus that it's the right way to do it. It's being rushed into\n> this version unnecessarily.\n\nIt's being rushed into this version because we need a stopgap solution.\nI don't see it as anything but a stopgap. The fact that it's a very\nsmall patch is good, because it can be replaced with minimal effort once\nsomeone has the time to design and implement a better mechanism for\nmulti-database user management. AFAICT a proper solution will involve\nconsiderable work, and I don't see it happening in time for 7.3.\n\nAlso, ugly as this may be, it's still better than the old solution for\npeople who are trying to support multiple similarly-named users in\ndifferent databases. The old hack required external password files\nwhich mean manual management, admin involvement in any password change,\netc. With this approach users can set their password normally even if\nthey're being restricted to one database. So realistically I think this\ndoes not affect people who aren't using it, and for people who do want\nto use it it's a step forward, even if not as far forward as we'd like.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Aug 2002 14:17:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "BTW, I just thought of a small improvement to your patch that eliminates\nsome of the ugliness. Suppose that when we recognize an attempt to\nconnect as a global user (ie, feature flag is on and last character of\nusername is '@'), we strip off the '@' before proceeding. Then we would\nhave:\n\tglobal users appear in pg_shadow as foo\n\tlocal users appear in pg_shadow as foo@db\nand what this would mean is that you can flip between feature-enabled\nand feature-disabled states without breaking your global logins. So you\ndon't need the extra step of creating a \"postgres@\" before turning on\nthe feature. (Which was pretty ugly anyway, since even though postgres@\ncould be made a superuser, he wouldn't be the same user as postgres ---\nthis affects table ownership, for example, and would be a serious issue\nif you wanted any non-superuser global users.)\n\nI suppose some might argue that having to say postgres@ to log in,\nwhen your username is really just postgres as far as you can see in the\ndatabase, is a tad confusing. But the whole thing is an acknowledged\nwart anyway, and I think getting rid of the two problems mentioned above\nis worth it.\n\nAlso, if we do this then it's important to strip a trailing '@' only\nif it's the *only* one in the given username. Else a local user\n'foo@db1' could cheat to log into db2 by saying username = 'foo@db1@'\nwith requested database db2. But I can't see any other security hole.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Aug 2002 14:29:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Tom Lane wrote:\n> BTW, I just thought of a small improvement to your patch that eliminates\n> some of the ugliness. Suppose that when we recognize an attempt to\n> connect as a global user (ie, feature flag is on and last character of\n> username is '@'), we strip off the '@' before proceeding. Then we would\n> have:\n> \tglobal users appear in pg_shadow as foo\n> \tlocal users appear in pg_shadow as foo@db\n> and what this would mean is that you can flip between feature-enabled\n> and feature-disabled states without breaking your global logins. So you\n> don't need the extra step of creating a \"postgres@\" before turning on\n> the feature. (Which was pretty ugly anyway, since even though postgres@\n> could be made a superuser, he wouldn't be the same user as postgres ---\n> this affects table ownership, for example, and would be a serious issue\n> if you wanted any non-superuser global users.)\n> \n> I suppose some might argue that having to say postgres@ to log in,\n> when your username is really just postgres as far as you can see in the\n> database, is a tad confusing. But the whole thing is an acknowledged\n> wart anyway, and I think getting rid of the two problems mentioned above\n> is worth it.\n\nSure. If I can get one more 'yes' I will submit a new patch with the\nchange. It does prevent the namespace collision without mucking up\npg_shadow. We only need to tell people that global users need to supply\ntheir username to the client as user@. Is that cleaner?\n\n> Also, if we do this then it's important to strip a trailing '@' only\n> if it's the *only* one in the given username. Else a local user\n> 'foo@db1' could cheat to log into db2 by saying username = 'foo@db1@'\n> with requested database db2. But I can't see any other security hole.\n\nEwe, I didn't think of that. 
Good point.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 16 Aug 2002 15:03:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Fri, 2002-08-16 at 20:03, Bruce Momjian wrote:\n> Sure. If I can get one more 'yes' I will submit a new patch with the\n> change. It does prevent the namespace collision without mucking up\n> pg_shadow. We only need to tell people that global users need to supply\n> their username to the client as user@. Is that cleaner?\n\nI will vote yes for this change. I think the flexibility this new\nsystem offers will make it much easier for people to offer PostgreSQL\nhosting facilities, of which I would like to see many more.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"And whatsoever ye shall ask in my name, that will I \n do, that the Father may be glorified in the Son.\" \n John 14:13 \n\n",
"msg_date": "16 Aug 2002 20:31:01 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "pgman@candle.pha.pa.us (Bruce Momjian) wrote\n> \n> I know the trailing @ is ugly, but it prevents surpises when connecting\n> to the database.\n> \n\nif you would make the magic character a variable then perhaps you could \nprevent the ugly... if/when you turn off the feature, you could set the \nPGSQL_STUPID_MAGIC_CHARACTER to '', then you would be appending an empty \nstring instead of a @, when you want to turn it back on, set the variable \nback to '@'... and if you change the character, well dont..\n",
"msg_date": "Sat, 17 Aug 2002 00:40:32 +0000 (UTC)",
"msg_from": "ngpg@grymmjack.com",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "ngpg@grymmjack.com wrote:\n> pgman@candle.pha.pa.us (Bruce Momjian) wrote\n> > \n> > I know the trailing @ is ugly, but it prevents surpises when connecting\n> > to the database.\n> > \n> \n> if you would make the magic character a variable then perhaps you could \n> prevent the ugly... if/when you turn off the feature, you could set the \n> PGSQL_STUPID_MAGIC_CHARACTER to '', then you would be appending an empty \n> string instead of a @, when you want to turn it back on, set the variable \n> back to '@'... and if you change the character, well dont..\n\nIt already does that. When it is off, it works just like it does in 7.2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 16 Aug 2002 22:13:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "OK, here is the patch with the suggested changes. I am sending the\npatch to hackers because there has been so much interest in this.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> BTW, I just thought of a small improvement to your patch that eliminates\n> some of the ugliness. Suppose that when we recognize an attempt to\n> connect as a global user (ie, feature flag is on and last character of\n> username is '@'), we strip off the '@' before proceeding. Then we would\n> have:\n> \tglobal users appear in pg_shadow as foo\n> \tlocal users appear in pg_shadow as foo@db\n> and what this would mean is that you can flip between feature-enabled\n> and feature-disabled states without breaking your global logins. So you\n> don't need the extra step of creating a \"postgres@\" before turning on\n> the feature. (Which was pretty ugly anyway, since even though postgres@\n> could be made a superuser, he wouldn't be the same user as postgres ---\n> this affects table ownership, for example, and would be a serious issue\n> if you wanted any non-superuser global users.)\n> \n> I suppose some might argue that having to say postgres@ to log in,\n> when your username is really just postgres as far as you can see in the\n> database, is a tad confusing. But the whole thing is an acknowledged\n> wart anyway, and I think getting rid of the two problems mentioned above\n> is worth it.\n> \n> Also, if we do this then it's important to strip a trailing '@' only\n> if it's the *only* one in the given username. Else a local user\n> 'foo@db1' could cheat to log into db2 by saying username = 'foo@db1@'\n> with requested database db2. But I can't see any other security hole.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n\nIndex: doc/src/sgml/runtime.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/runtime.sgml,v\nretrieving revision 1.125\ndiff -c -r1.125 runtime.sgml\n*** doc/src/sgml/runtime.sgml\t15 Aug 2002 14:26:15 -0000\t1.125\n--- doc/src/sgml/runtime.sgml\t17 Aug 2002 04:14:34 -0000\n***************\n*** 1191,1196 ****\n--- 1191,1216 ----\n </varlistentry>\n \n <varlistentry>\n+ <term><varname>DB_USER_NAMESPACE</varname> (<type>boolean</type>)</term>\n+ <listitem>\n+ <para>\n+ This allows per-database user names. You can create users as <literal>\n+ username@dbname</>. When <literal>username</> is passed by the client,\n+ <literal>@</> and the database name is appended to the user name and\n+ that database-specific user name is looked up by the server. \n+ When creating user names containing <literal>@</>, you will need\n+ to quote the user name.\n+ </para>\n+ <para>\n+ With this option enabled, you can still create ordinary global \n+ users. Simply append <literal>@</> when specifying the user name\n+ in the client. The <literal>@</> will be stripped off and looked up\n+ by the server. \n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n+ <varlistentry>\n <indexterm>\n <primary>deadlock</primary>\n <secondary>timeout</secondary>\nIndex: src/backend/libpq/auth.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/libpq/auth.c,v\nretrieving revision 1.82\ndiff -c -r1.82 auth.c\n*** src/backend/libpq/auth.c\t20 Jun 2002 20:29:28 -0000\t1.82\n--- src/backend/libpq/auth.c\t17 Aug 2002 04:14:35 -0000\n***************\n*** 117,123 ****\n \t\t\t version, PG_KRB4_VERSION);\n \t\treturn STATUS_ERROR;\n \t}\n! 
\tif (strncmp(port->user, auth_data.pname, SM_USER) != 0)\n \t{\n \t\telog(LOG, \"pg_krb4_recvauth: name \\\"%s\\\" != \\\"%s\\\"\",\n \t\t\t port->user, auth_data.pname);\n--- 117,123 ----\n \t\t\t version, PG_KRB4_VERSION);\n \t\treturn STATUS_ERROR;\n \t}\n! \tif (strncmp(port->user, auth_data.pname, SM_DATABASE_USER) != 0)\n \t{\n \t\telog(LOG, \"pg_krb4_recvauth: name \\\"%s\\\" != \\\"%s\\\"\",\n \t\t\t port->user, auth_data.pname);\n***************\n*** 290,296 ****\n \t}\n \n \tkusername = pg_an_to_ln(kusername);\n! \tif (strncmp(port->user, kusername, SM_USER))\n \t{\n \t\telog(LOG, \"pg_krb5_recvauth: user name \\\"%s\\\" != krb5 name \\\"%s\\\"\",\n \t\t\t port->user, kusername);\n--- 290,296 ----\n \t}\n \n \tkusername = pg_an_to_ln(kusername);\n! \tif (strncmp(port->user, kusername, SM_DATABASE_USER))\n \t{\n \t\telog(LOG, \"pg_krb5_recvauth: user name \\\"%s\\\" != krb5 name \\\"%s\\\"\",\n \t\t\t port->user, kusername);\nIndex: src/backend/postmaster/postmaster.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/postmaster/postmaster.c,v\nretrieving revision 1.283\ndiff -c -r1.283 postmaster.c\n*** src/backend/postmaster/postmaster.c\t10 Aug 2002 20:29:18 -0000\t1.283\n--- src/backend/postmaster/postmaster.c\t17 Aug 2002 04:14:40 -0000\n***************\n*** 116,122 ****\n sigset_t\tUnBlockSig,\n \t\t\tBlockSig,\n \t\t\tAuthBlockSig;\n- \n #else\n int\t\t\tUnBlockSig,\n \t\t\tBlockSig,\n--- 116,121 ----\n***************\n*** 191,196 ****\n--- 190,197 ----\n bool\t\tHostnameLookup;\t\t/* for ps display */\n bool\t\tShowPortNumber;\n bool\t\tLog_connections = false;\n+ bool\t\tDb_user_namespace = false;\n+ \n \n /* Startup/shutdown state */\n static pid_t StartupPID = 0,\n***************\n*** 1161,1166 ****\n--- 1162,1182 ----\n \tif (port->user[0] == '\\0')\n \t\telog(FATAL, \"no PostgreSQL user name specified in startup packet\");\n \n+ \tif (Db_user_namespace)\n+ {\n+ \t\t/* If user@, it 
is a global user, remove '@' */\n+ \t\tif (strchr(port->user, '@') == port->user + strlen(port->user)-1)\n+ \t\t\t*strchr(port->user, '@') = '\\0';\n+ \t\telse\n+ \t\t{\n+ \t\t\t/* Append '@' and dbname */\n+ \t\t\tchar hold_user[SM_DATABASE_USER+1];\n+ \t\t\tsnprintf(hold_user, SM_DATABASE_USER+1, \"%s@%s\", port->user,\n+ \t\t\t\t\t port->database);\n+ \t\t\tstrcpy(port->user, hold_user);\n+ \t\t}\n+ \t}\n+ \n \t/*\n \t * If we're going to reject the connection due to database state, say\n \t * so now instead of wasting cycles on an authentication exchange.\n***************\n*** 2587,2597 ****\n \tif (FindExec(fullprogname, argv[0], \"postmaster\") < 0)\n \t\treturn false;\n \n! \tfilename = palloc(strlen(DataDir) + 20);\n \tsprintf(filename, \"%s/postmaster.opts\", DataDir);\n \n! \tfp = fopen(filename, \"w\");\n! \tif (fp == NULL)\n \t{\n \t\tpostmaster_error(\"cannot create file %s: %s\",\n \t\t\t\t\t\t filename, strerror(errno));\n--- 2603,2612 ----\n \tif (FindExec(fullprogname, argv[0], \"postmaster\") < 0)\n \t\treturn false;\n \n! \tfilename = palloc(strlen(DataDir) + 17);\n \tsprintf(filename, \"%s/postmaster.opts\", DataDir);\n \n! 
\tif ((fp = fopen(filename, \"w\")) == NULL)\n \t{\n \t\tpostmaster_error(\"cannot create file %s: %s\",\n \t\t\t\t\t\t filename, strerror(errno));\nIndex: src/backend/utils/misc/guc.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/utils/misc/guc.c,v\nretrieving revision 1.82\ndiff -c -r1.82 guc.c\n*** src/backend/utils/misc/guc.c\t15 Aug 2002 02:51:26 -0000\t1.82\n--- src/backend/utils/misc/guc.c\t17 Aug 2002 04:14:49 -0000\n***************\n*** 483,488 ****\n--- 483,492 ----\n \t\t{ \"transform_null_equals\", PGC_USERSET }, &Transform_null_equals,\n \t\tfalse, NULL, NULL\n \t},\n+ \t{\n+ \t\t{ \"db_user_namespace\", PGC_SIGHUP }, &Db_user_namespace,\n+ \t\tfalse, NULL, NULL\n+ \t},\n \n \t{\n \t\t{ NULL, 0 }, NULL, false, NULL, NULL\nIndex: src/backend/utils/misc/postgresql.conf.sample\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/utils/misc/postgresql.conf.sample,v\nretrieving revision 1.44\ndiff -c -r1.44 postgresql.conf.sample\n*** src/backend/utils/misc/postgresql.conf.sample\t12 Aug 2002 00:36:12 -0000\t1.44\n--- src/backend/utils/misc/postgresql.conf.sample\t17 Aug 2002 04:14:50 -0000\n***************\n*** 113,119 ****\n #\n #\tMessage display\n #\n- \n #server_min_messages = notice\t# Values, in order of decreasing detail:\n \t\t\t\t# debug5, debug4, debug3, debug2, debug1,\n \t\t\t\t# info, notice, warning, error, log, fatal,\n--- 113,118 ----\n***************\n*** 201,203 ****\n--- 200,203 ----\n #sql_inheritance = true\n #transform_null_equals = false\n #statement_timeout = 0\t\t\t\t# 0 is disabled\n+ #db_user_namespace = false\nIndex: src/include/libpq/libpq-be.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/libpq/libpq-be.h,v\nretrieving revision 1.32\ndiff -c -r1.32 libpq-be.h\n*** src/include/libpq/libpq-be.h\t20 Jun 2002 20:29:49 -0000\t1.32\n--- 
src/include/libpq/libpq-be.h\t17 Aug 2002 04:14:50 -0000\n***************\n*** 59,65 ****\n \n \tProtocolVersion proto;\n \tchar\t\tdatabase[SM_DATABASE + 1];\n! \tchar\t\tuser[SM_USER + 1];\n \tchar\t\toptions[SM_OPTIONS + 1];\n \tchar\t\ttty[SM_TTY + 1];\n \tchar\t\tauth_arg[MAX_AUTH_ARG];\n--- 59,65 ----\n \n \tProtocolVersion proto;\n \tchar\t\tdatabase[SM_DATABASE + 1];\n! \tchar\t\tuser[SM_DATABASE_USER + 1];\n \tchar\t\toptions[SM_OPTIONS + 1];\n \tchar\t\ttty[SM_TTY + 1];\n \tchar\t\tauth_arg[MAX_AUTH_ARG];\nIndex: src/include/libpq/pqcomm.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/libpq/pqcomm.h,v\nretrieving revision 1.65\ndiff -c -r1.65 pqcomm.h\n*** src/include/libpq/pqcomm.h\t12 Aug 2002 14:35:26 -0000\t1.65\n--- src/include/libpq/pqcomm.h\t17 Aug 2002 04:14:50 -0000\n***************\n*** 114,119 ****\n--- 114,121 ----\n #define SM_DATABASE\t\t64\n /* SM_USER should be the same size as the others. bjm 2002-06-02 */\n #define SM_USER\t\t\t32\n+ /* We append database name if db_user_namespace true. */\n+ #define SM_DATABASE_USER (SM_DATABASE+SM_USER+1) /* +1 for @ */\n #define SM_OPTIONS\t\t64\n #define SM_UNUSED\t\t64\n #define SM_TTY\t\t\t64\n***************\n*** 124,135 ****\n--- 126,139 ----\n {\n \tProtocolVersion protoVersion;\t\t/* Protocol version */\n \tchar\t\tdatabase[SM_DATABASE];\t/* Database name */\n+ \t\t\t\t/* Db_user_namespace appends dbname */\n \tchar\t\tuser[SM_USER];\t/* User name */\n \tchar\t\toptions[SM_OPTIONS];\t/* Optional additional args */\n \tchar\t\tunused[SM_UNUSED];\t\t/* Unused */\n \tchar\t\ttty[SM_TTY];\t/* Tty for debug output */\n } StartupPacket;\n \n+ extern bool Db_user_namespace;\n \n /* These are the authentication requests sent by the backend. */",
"msg_date": "Sat, 17 Aug 2002 00:16:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "\nSample run:\n\t\n\t$ psql -U postgres test\n\tpsql: FATAL: user \"postgres@test\" does not exist\n\n\t$ psql -U postgres@ test\n\tWelcome to psql 7.3devel, the PostgreSQL interactive terminal.\n\t\n\tType: \\copyright for distribution terms\n\t \\h for help with SQL commands\n\t \\? for help on internal slash commands\n\t \\g or terminate with semicolon to execute query\n\t \\q to quit\n\t\n\ttest=> \n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> BTW, I just thought of a small improvement to your patch that eliminates\n> some of the ugliness. Suppose that when we recognize an attempt to\n> connect as a global user (ie, feature flag is on and last character of\n> username is '@'), we strip off the '@' before proceeding. Then we would\n> have:\n> \tglobal users appear in pg_shadow as foo\n> \tlocal users appear in pg_shadow as foo@db\n> and what this would mean is that you can flip between feature-enabled\n> and feature-disabled states without breaking your global logins. So you\n> don't need the extra step of creating a \"postgres@\" before turning on\n> the feature. (Which was pretty ugly anyway, since even though postgres@\n> could be made a superuser, he wouldn't be the same user as postgres ---\n> this affects table ownership, for example, and would be a serious issue\n> if you wanted any non-superuser global users.)\n> \n> I suppose some might argue that having to say postgres@ to log in,\n> when your username is really just postgres as far as you can see in the\n> database, is a tad confusing. But the whole thing is an acknowledged\n> wart anyway, and I think getting rid of the two problems mentioned above\n> is worth it.\n> \n> Also, if we do this then it's important to strip a trailing '@' only\n> if it's the *only* one in the given username. Else a local user\n> 'foo@db1' could cheat to log into db2 by saying username = 'foo@db1@'\n> with requested database db2. 
But I can't see any other security hole.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 17 Aug 2002 00:17:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "\nOK, I think we are doing this backwards. Instead of adding '@' to\nglobal users, and then removing it in the backend, why don't we have\nlocal users end with '@', that way, global users continue to connect\njust as they have before, and local users connect with @, so dave@db1\nconnects as 'dave@' and if he has other database access, he can use the\nsame 'dave@' name.\n\nThat removes some of the uglification, I think.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> BTW, I just thought of a small improvement to your patch that eliminates\n> some of the ugliness. Suppose that when we recognize an attempt to\n> connect as a global user (ie, feature flag is on and last character of\n> username is '@'), we strip off the '@' before proceeding. Then we would\n> have:\n> \tglobal users appear in pg_shadow as foo\n> \tlocal users appear in pg_shadow as foo@db\n> and what this would mean is that you can flip between feature-enabled\n> and feature-disabled states without breaking your global logins. So you\n> don't need the extra step of creating a \"postgres@\" before turning on\n> the feature. (Which was pretty ugly anyway, since even though postgres@\n> could be made a superuser, he wouldn't be the same user as postgres ---\n> this affects table ownership, for example, and would be a serious issue\n> if you wanted any non-superuser global users.)\n> \n> I suppose some might argue that having to say postgres@ to log in,\n> when your username is really just postgres as far as you can see in the\n> database, is a tad confusing. But the whole thing is an acknowledged\n> wart anyway, and I think getting rid of the two problems mentioned above\n> is worth it.\n> \n> Also, if we do this then it's important to strip a trailing '@' only\n> if it's the *only* one in the given username. Else a local user\n> 'foo@db1' could cheat to log into db2 by saying username = 'foo@db1@'\n> with requested database db2. 
But I can't see any other security hole.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 17 Aug 2002 12:26:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, I think we are doing this backwards. Instead of adding '@' to\n> global users, and then removing it in the backend, why don't we have\n> local users end with '@', that way, global users continue to connect\n> just as they have before, and local users connect with @, so dave@db1\n> connects as 'dave@' and if he has other database access, he can use the\n> same 'dave@' name.\n\nNo, *that* would be backwards. In installations that are using this\nfeature, the vast majority of the users are going to be local ones.\nAnd the global users will be the presumably-more-sophisticated admins.\nPutting the onus of the '@' decoration on the local users instead of\nthe global ones is exactly the wrong way to go.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Aug 2002 12:47:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, I think we are doing this backwards. Instead of adding '@' to\n> > global users, and then removing it in the backend, why don't we have\n> > local users end with '@', that way, global users continue to connect\n> > just as they have before, and local users connect with @, so dave@db1\n> > connects as 'dave@' and if he has other database access, he can use the\n> > same 'dave@' name.\n> \n> No, *that* would be backwards. In installations that are using this\n> feature, the vast majority of the users are going to be local ones.\n> And the global users will be the presumably-more-sophisticated admins.\n> Putting the onus of the '@' decoration on the local users instead of\n> the global ones is exactly the wrong way to go.\n\nOK, but it looks slightly less ugly.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 17 Aug 2002 19:41:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, here is the patch with the suggested changes. I am sending the\n> patch to hackers because there has been so much interest in this.\n\nOne minor gripe:\n\n> + \t\t/* If user@, it is a global user, remove '@' */\n> + \t\tif (strchr(port->user, '@') == port->user + strlen(port->user)-1)\n\nThis code is correct, but it tempts someone to replace the strchr()\nwith a single-character check on the last character of the string.\nWhich would introduce the security hole we discussed before. The\ncode is okay, but *please* improve the comment to point out that you\nare also excluding the case where there are @'s to the left of the\nlast character.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Aug 2002 22:36:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "\nOK, applied, with that change.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, here is the patch with the suggested changes. I am sending the\n> > patch to hackers because there has been so much interest in this.\n> \n> One minor gripe:\n> \n> > + \t\t/* If user@, it is a global user, remove '@' */\n> > + \t\tif (strchr(port->user, '@') == port->user + strlen(port->user)-1)\n> \n> This code is correct, but it tempts someone to replace the strchr()\n> with a single-character check on the last character of the string.\n> Which would introduce the security hole we discussed before. The\n> code is okay, but *please* improve the comment to point out that you\n> are also excluding the case where there are @'s to the left of the\n> last character.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 17 Aug 2002 23:04:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Tom Lane writes:\n\n> BTW, I just thought of a small improvement to your patch that eliminates\n> some of the ugliness. Suppose that when we recognize an attempt to\n> connect as a global user (ie, feature flag is on and last character of\n> username is '@'), we strip off the '@' before proceeding.\n\nI'm missing how hard it is to change \"last character of username is @\" to\n\"no @ in username\". This would seem to be a two-line change somewhere.\n\nI'm concerned that we leave essentially no migration path, that is, the\nability to turn the feature on to try it out without immediately breaking\nevery application.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 18 Aug 2002 11:37:20 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I'm concerned that we leave essentially no migration path, that is, the\n> ability to turn the feature on to try it out without immediately breaking\n> every application.\n\nUh ... what? I fail to understand your objection. AFAICS the only\napps that could be \"broken\" are scripts that have usernames hardwired\ninto them ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Aug 2002 12:55:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "On Sat, 17 Aug 2002, Bruce Momjian wrote:\n\n>\n> OK, I think we are doing this backwards. Instead of adding '@' to\n> global users, and then removing it in the backend, why don't we have\n> local users end with '@', that way, global users continue to connect\n> just as they have before, and local users connect with @, so dave@db1\n> connects as 'dave@' and if he has other database access, he can use the\n> same 'dave@' name.\n>\n> That removes some of the uglification, I think.\n\nThen why was it when I mentioned global users not having the @ you shot\nit down as not possible?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Sun, 18 Aug 2002 13:10:12 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Sat, 17 Aug 2002, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, I think we are doing this backwards. Instead of adding '@' to\n> > global users, and then removing it in the backend, why don't we have\n> > local users end with '@', that way, global users continue to connect\n> > just as they have before, and local users connect with @, so dave@db1\n> > connects as 'dave@' and if he has other database access, he can use the\n> > same 'dave@' name.\n>\n> No, *that* would be backwards. In installations that are using this\n> feature, the vast majority of the users are going to be local ones.\n> And the global users will be the presumably-more-sophisticated admins.\n> Putting the onus of the '@' decoration on the local users instead of\n> the global ones is exactly the wrong way to go.\n\nUnsophisticated users is hardly a reason. After all they do have an\n@ in their email address. If they're told the username is foo@ then\ntheir username is foo@. What's so difficult about that?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n",
"msg_date": "Sun, 18 Aug 2002 13:13:17 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > I'm concerned that we leave essentially no migration path, that is, the\n> > ability to turn the feature on to try it out without immediately breaking\n> > every application.\n>\n> Uh ... what? I fail to understand your objection. AFAICS the only\n> apps that could be \"broken\" are scripts that have usernames hardwired\n> into them ...\n\nI'm completely lost between all the proposals about where the @ is going\nto be specified, added, or removed. What happens on the client side and\nwhat happens on the server side?\n\nAll I would like to see is that I can turn on this feature and nothing\nchanges as long as I don't add any \"local users\". Yes, that includes\nhard-wired user names on the client side. Of course there are various\ndegrees of hard-wiring, but what if the ISP admin updates to 7.3 and wants\nto turn on the feature for new clients? Does he tell all his existing\nclients that they must update their user names? Possibly, these users got\ntheir database access with a shell account and don't specify the user name\nat all because it defaults to the OS user name. Does that continue to\nwork?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 18 Aug 2002 23:36:01 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I'm completely lost between all the proposals about where the @ is going\n> to be specified, added, or removed. What happens on the client side and\n> what happens on the server side?\n\nWell, the way things stand as of CVS tip is that (assuming you have this\nfeature turned on in postgresql.conf):\n\n* If a connection request has a username with a trailing '@' (and no\nembedded '@'), then the '@' is stripped and connection proceeds.\n\n* Otherwise, '@dbname' is appended to the given username and connection\nproceeds.\n\nSo a \"global\" user foo has to say username=\"foo@\" in his connection\nrequest, but he's just \"foo\" in pg_shadow. A \"local\" user foo has to\nsay \"foo\" in his connection request, and he's \"foo@somedb\" in pg_shadow.\n\n> All I would like to see is that I can turn on this feature and nothing\n> changes as long as I don't add any \"local users\". Yes, that includes\n> hard-wired user names on the client side.\n\nWell, we could have that by inverting the use of '@'; but as I commented\nbefore, it makes more sense to me to make the global users say '@' than\nto make the local users do so, because I think in an installation that\nwants this feature there will be lots more local than global users.\nI really don't put that much weight on the compatibility argument you\nmake --- not that I don't see your point, but that I don't think it\noutweighs convenience of day-to-day use after one has gotten the system\nset up. (Also, compatibility cuts both ways: it seems just as likely\nto me that the clients with hardwired usernames are going to be ones\nyou want to connect as local users, as that they are going to be ones\nyou want to connect as global users. 
Maybe more likely, if you grant\nthe assumption that there will be more local than global users.)\n\nIt might be worth recalling the reason that we are going through this\npushup in the first place: Marc wants to be able to assign the same\nusername to two different users who want to access two different\ndatabases. If he would be happy with the answer \"give them two\ndifferent usernames\", we'd not be having this discussion at all.\nDo you think he will be happy with the answer \"you can give them\nthe same username as long as it ends in '@'\"? I think it's highly\nunlikely that he'll be satisfied with that --- he wants to *not*\nhave constraints on the names he gives out for local users.\n\n> Of course there are various\n> degrees of hard-wiring, but what if the ISP admin updates to 7.3 and wants\n> to turn on the feature for new clients? Does he tell all his existing\n> clients that they must update their user names? Possibly, these users got\n> their database access with a shell account and don't specify the user name\n> at all because it defaults to the OS user name. Does that continue to\n> work?\n\nIt works great if the ISP intends to make them all local users, which\nseems more likely to me than the other case.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Aug 2002 18:08:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "I'm trying to make postGIS work with pg7.3devel. But a problem is occuring that did not appear in pg7.2. When I execute:\n\nALTER TABLE geotest ADD CHECK ( geometrytype(geopoint)='POINT'\nOR NULL=geopoint);\n\nI get: \"ERROR: copyObject: don't know how to copy node type 506\"\n\nBut when I execute:\n\nALTER TABLE geotest ADD CHECK ( geometrytype(geopoint)='POINT');\n\nIt works fine, which, due to the error message it seems that it is trying to assign rather to NULL, rather than compare (else what object needs to be copied in \"NULL=geopoint\"?). Is this a bug, a change in NULL, or a change in user defined datatypes?\nThanks;\nEric\n",
"msg_date": "Sun, 18 Aug 2002 17:25:48 -0500 (EST)",
"msg_from": "redmonde@purdue.edu",
"msg_from_op": false,
"msg_subject": "assigning to NULL?"
},
{
"msg_contents": "redmonde@purdue.edu writes:\n> I get: \"ERROR: copyObject: don't know how to copy node type 506\"\n\nThis is a bug in someone's recent patch ... but you don't want to say\n\"NULL=geopoint\" anyway, do you? Surely it should be \"geopoint IS NULL\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Aug 2002 18:46:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: assigning to NULL? "
},
{
"msg_contents": "tgl@sss.pgh.pa.us (Tom Lane) wrote\n\n> * If a connection request has a username with a trailing '@' (and no\n> embedded '@'), then the '@' is stripped and connection proceeds.\n> \n> * Otherwise, '@dbname' is appended to the given username and\n> connection proceeds.\n<snip>\n> It might be worth recalling the reason that we are going through this\n> pushup in the first place: Marc wants to be able to assign the same\n> username to two different users who want to access two different\n> databases. If he would be happy with the answer \"give them two\n> different usernames\", we'd not be having this discussion at all.\n> Do you think he will be happy with the answer \"you can give them\n> the same username as long as it ends in '@'\"? I think it's highly\n> unlikely that he'll be satisfied with that --- he wants to *not*\n> have constraints on the names he gives out for local users.\n\n\nWhat about usernames that have trailing or embedded @'s? I mean you are \neseentially making the @ a magic character. I admit I havent looked at \nthe source, but doesnt this method effectively put a constraint on the \nuse of @? What if an isp, that could use this feature, already has \nusernames with @'s in them (say a customers email address, etc)? Will \nthey need to assign all new usernames to make this thing function?\n\nWhat if you want to give one person (one username) access to 2 db's? \nDoes that mean, under the current scheme, that the two accounts you \ncreate can have the same username but have different passwords? What if \nyou want to erase the \"one\" account (do you have to remember to erase all \nn accounts you created with the same username, or all n except the ones \nthat were never mean to be the same person but share the same username)?\n\nNormally a user has a unique name. Does anyone see a problem if/when the \nwhole db access thing becomes part of the privileges system? 
If you \nimplement the \"multiple users same username\", then you'll have to \nreassign all but one of the users to new usernames.\n",
"msg_date": "Mon, 19 Aug 2002 00:01:02 +0000 (UTC)",
"msg_from": "ngpg@grymmjack.com",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "I'd have thought that if a matching user couldn't be found in the\nspecified database then it would default to searching through the\nglobal users? Would be more intuitive...\n\nLee.\n\nBruce Momjian writes:\n > Sample run:\n > \t$ psql -U postgres test\n > \tpsql: FATAL: user \"postgres@test\" does not exist\n > \n > \t$ psql -U postgres@ test\n > \tWelcome to psql 7.3devel, the PostgreSQL interactive terminal.\n",
"msg_date": "Mon, 19 Aug 2002 09:04:24 +0100",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "\nI think we need to resolve this discussion from a week ago. The current\ncode is this:\n\n\tglobal usernames are stored just like before, e.g. postgres\n\tlocal users are stored as user@dbname\n\twhen connecting, global users add '@' to their names\n\twhen connecting, local users use just their user name, no @dbname\n\nTom likes this because it is the fewer global users who have to append\nthe '@'.\n\nVince and Peter think that it should be local users adding '@' when\nconnecting because:\n\n\tthey have an @ sign in their name anyway\n\tglobal users should be able to connect unchanged\n\nI can foresee a time when we will have longer usernames, and local users\nwill be able to connect with the full user@dbname, and we can allow\nuser@ as a shortcut.\n\nIn summary, I prefer to change the code to have local users append the\n'@'.\n\nComments? \n\nIt is an easy change and prevents what is a very confusing situation\nwhere we add '@' for users who don't have @, and remove '@' for users\nwho have it.\n\n---------------------------------------------------------------------------\n\nPeter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > Peter Eisentraut <peter_e@gmx.net> writes:\n> > > I'm concerned that we leave essentially no migration path, that is, the\n> > > ability to turn the feature on to try it out without immediately breaking\n> > > every application.\n> >\n> > Uh ... what? I fail to understand your objection. AFAICS the only\n> > apps that could be \"broken\" are scripts that have usernames hardwired\n> > into them ...\n> \n> I'm completely lost between all the proposals about where the @ is going\n> to be specified, added, or removed. What happens on the client side and\n> what happens on the server side?\n> \n> All I would like to see is that I can turn on this feature and nothing\n> changes as long as I don't add any \"local users\". Yes, that includes\n> hard-wired user names on the client side. 
Of course there are various\n> degrees of hard-wiring, but what if the ISP admin updates to 7.3 and wants\n> to turn on the feature for new clients? Does he tell all his existing\n> clients that they must update their user names? Possibly, these users got\n> their database access with a shell account and don't specify the user name\n> at all because it defaults to the OS user name. Does that continue to\n> work?\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 27 Aug 2002 15:19:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Tuesday 27 August 2002 03:19 pm, Bruce Momjian wrote:\n> I think we need to resolve this discussion from a week ago. The current\n> code is this:\n\nI thought it WAS resolved, to do:\n\n> \tglobal usernames are stored just like before, e.g. postgres\n> \tlocal users are stored as user@dbname\n> \twhen connecting, global users add '@' to their names\n> \twhen connecting, local users use just their user name, no @dbname\n\n> Tom likes this because it is the fewer global users who have to append\n> the '@'.\n\nAt least that was my perception of the uneasy consensus reached.\n\nBasically, this tags the @ as magic saying, during the client connect process, \n'I'm GLOBAL, treat me differently'. Now that I actually understand how this \nis supposed to work, which your four lines above elucidate nicely, I am in \nmore agreement than I was that this is the right answer to this issue.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 27 Aug 2002 15:35:41 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Lamar Owen wrote:\n> On Tuesday 27 August 2002 03:19 pm, Bruce Momjian wrote:\n> > I think we need to resolve this discussion from a week ago. The current\n> > code is this:\n> \n> I thought it WAS resolved, to do:\n> \n> > \tglobal usernames are stored just like before, e.g. postgres\n> > \tlocal users are stored as user@dbname\n> > \twhen connecting, global users add '@' to their names\n> > \twhen connecting, local users use just their user name, no @dbname\n> \n> > Tom likes this because it is the fewer global users who have to append\n> > the '@'.\n> \n> At least that was my perception of the uneasy consensus reached.\n> \n> Basically, this tags the @ as magic saying, during the client connect process, \n> 'I'm GLOBAL, treat me differently'. Now that I actually understand how this \n> is supposed to work, which your four lines above elucidate nicely, I am in \n> more agreement than I was that this is the right answer to this issue.\n\nOK, you have now split the vote because we have two for the change, and\ntwo against. Why do you prefer to tag the globals? Is it Tom's\nargument? I think it is kind of strange to tag the globals when it is\nthe locals who have @ in their username, and when they do:\n\n\t$ psql -U dave test\n\tWelcome to psql 7.3devel, the PostgreSQL interactive terminal.\n\t\n\tType: \\copyright for distribution terms\n\t \\h for help with SQL commands\n\t \\? for help on internal slash commands\n\t \\g or terminate with semicolon to execute query\n\t \\q to quit\n\t\n\ttest=> select current_user;\n\t current_user \n\t--------------\n\t dave@test\n\t(1 row)\n\nthey will see their full username.\n\nI can go either way. I am just saying we need to hear from more people\nto make sure we are doing this properly.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 27 Aug 2002 15:43:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I can go either way. I am just saying we need to hear from more people\n> to make sure we are doing this properly.\n\nLikewise. In particular I'd like to hear from Marc, who after all\nis the one who caused us to consider this hack in the first place.\nDoes it satisfy his requirement? Is one way or the other preferable\nfor his actual usage?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 27 Aug 2002 15:47:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "On Tuesday 27 August 2002 03:43 pm, Bruce Momjian wrote:\n> Lamar Owen wrote:\n> > On Tuesday 27 August 2002 03:19 pm, Bruce Momjian wrote:\n> > I thought it WAS resolved, to do:\n\n> > > Tom likes this because it is the fewer global users who have to append\n> > > the '@'.\n\n> > At least that was my perception of the uneasy consensus reached.\n\n> OK, you have now split the vote because we have two for the change, and\n> two against. Why do you prefer to tag the globals? Is it Tom's\n> argument? I think it is kind of strange to tag the globals when it is\n> the locals who have @ in their username, and when they do:\n\nI agree with what Tom said, and understand why he said it. And I thought you \ndid, too -- I have apparently misunderstood (again!) the issue.\n\nIn the local-enabled scheme, ISTM the majority of users will be local users. \nThe goal is transparent virtual databases -- at least that's what I consider \nthe goal. As far as the user is concerned, the other databases might as well \nnot even exist -- all they are doing is connecting to their database. Since \nthey have to give the database name as part of the connection, it just makes \nsense that they should have the closest to default behavior.\n\nIn the case of a virtual hosting postmaster, global users would likely be \nDBA's, although they might not be. These users are going to be the \nexception, not the rule -- thus a character to tag their 'exceptional' \nnature.\n\nYou may not even want your virtual host local users to realize that there is \nanother user by that name. Thus, the standard notation is the least \nintrusive for the very users that need uninstrusive notation.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 27 Aug 2002 16:05:36 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "It should also be noted that it's easy to get the DBAs to change their\nusername in the future when / if the @ hack goes away BUT it will be\ndifficult to change the usernames of the hundreds to thousands of\ncustomer accounts.\n\nFor an upgrade, we'd end up making a script in the upgrade to keep them\nthe same (with the @) then have a control panel code in place to suggest\nto the user that they may stop using the @ if they wish <click here>\ntype of thing.\n\n\n> > > > Tom likes this because it is the fewer global users who have to append\n> > > > the '@'.\n> \n> > > At least that was my perception of the uneasy consensus reached.\n> \n> > OK, you have now split the vote because we have two for the change, and\n> > two against. Why do you prefer to tag the globals? Is it Tom's\n> > argument? I think it is kind of strange to tag the globals when it is\n> > the locals who have @ in their username, and when they do:\n\n> In the case of a virtual hosting postmaster, global users would likely be \n> DBA's, although they might not be. These users are going to be the \n> exception, not the rule -- thus a character to tag their 'exceptional' \n> nature.\n\n\n",
"msg_date": "27 Aug 2002 16:10:56 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Lamar Owen wrote:\n> On Tuesday 27 August 2002 03:43 pm, Bruce Momjian wrote:\n> > Lamar Owen wrote:\n> > > On Tuesday 27 August 2002 03:19 pm, Bruce Momjian wrote:\n> > > I thought it WAS resolved, to do:\n> \n> > > > Tom likes this because it is the fewer global users who have to append\n> > > > the '@'.\n> \n> > > At least that was my perception of the uneasy consensus reached.\n> \n> > OK, you have now split the vote because we have two for the change, and\n> > two against. Why do you prefer to tag the globals? Is it Tom's\n> > argument? I think it is kind of strange to tag the globals when it is\n> > the locals who have @ in their username, and when they do:\n> \n> I agree with what Tom said, and understand why he said it. And I thought you \n> did, too -- I have apparently misunderstood (again!) the issue.\n\nI try not to interject my opinions into emails where I am asking for\ndisucsion so people can more clearly see the options and vote\naccordingly.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 27 Aug 2002 16:19:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "\nOK, we have enough votes to keep the existing behavior, unless Marc\nappears and says he doesn't like it. ;-)\n\nThanks.\n\n---------------------------------------------------------------------------\n\nRod Taylor wrote:\n> It should also be noted that it's easy to get the DBAs to change their\n> username in the future when / if the @ hack goes away BUT it will be\n> difficult to change the usernames of the hundreds to thousands of\n> customer accounts.\n> \n> For an upgrade, we'd end up making a script in the upgrade to keep them\n> the same (with the @) then have a control panel code in place to suggest\n> to the user that they may stop using the @ if they wish <click here>\n> type of thing.\n> \n> \n> > > > > Tom likes this because it is the fewer global users who have to append\n> > > > > the '@'.\n> > \n> > > > At least that was my perception of the uneasy consensus reached.\n> > \n> > > OK, you have now split the vote because we have two for the change, and\n> > > two against. Why do you prefer to tag the globals? Is it Tom's\n> > > argument? I think it is kind of strange to tag the globals when it is\n> > > the locals who have @ in their username, and when they do:\n> \n> > In the case of a virtual hosting postmaster, global users would likely be \n> > DBA's, although they might not be. These users are going to be the \n> > exception, not the rule -- thus a character to tag their 'exceptional' \n> > nature.\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 27 Aug 2002 16:20:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Tue, 2002-08-27 at 21:05, Lamar Owen wrote:\n> On Tuesday 27 August 2002 03:43 pm, Bruce Momjian wrote:\n> > Lamar Owen wrote:\n> > > On Tuesday 27 August 2002 03:19 pm, Bruce Momjian wrote:\n> > > I thought it WAS resolved, to do:\n> \n> > > > Tom likes this because it is the fewer global users who have to append\n> > > > the '@'.\n> \n> > > At least that was my perception of the uneasy consensus reached.\n> \n> > OK, you have now split the vote because we have two for the change, and\n> > two against. Why do you prefer to tag the globals? Is it Tom's\n> > argument? I think it is kind of strange to tag the globals when it is\n> > the locals who have @ in their username, and when they do:\n> \n> I agree with what Tom said, and understand why he said it. And I thought you \n> did, too -- I have apparently misunderstood (again!) the issue.\n> \n> In the local-enabled scheme, ISTM the majority of users will be local users. \n> The goal is transparent virtual databases -- at least that's what I consider \n> the goal. As far as the user is concerned, the other databases might as well \n> not even exist -- all they are doing is connecting to their database. Since \n> they have to give the database name as part of the connection, it just makes \n> sense that they should have the closest to default behavior.\n> \n> In the case of a virtual hosting postmaster, global users would likely be \n> DBA's, although they might not be. These users are going to be the \n> exception, not the rule -- thus a character to tag their 'exceptional' \n> nature.\n> \n> You may not even want your virtual host local users to realize that there is \n> another user by that name. Thus, the standard notation is the least \n> intrusive for the very users that need uninstrusive notation.\n\nHas this behaviour been carried through into GRANT and REVOKE? If the\nobject is transparency for local users, it should be possible in\ndatabase \"test\" to say \"GRANT ... 
TO fred\" and have \"fred\" understood as\n\"fred@test\".\n\nIf that is the case, then I will support the current position.\n\n\nIt follows from the objective of transparency that, when reporting a\nuser name, local users should be reported without the database suffix,\ni.e., \"fred\" not \"fred@test\". Global users should be reported with the\ntrailing \"@\". This should cause no problem, because we have no\ncross-database communication; it should be impossible for \"george@dummy\"\nto have any connection with database \"test\".\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"But the end of all things is at hand; be ye therefore \n sober, and watch unto prayer. And above all things \n have fervent love among yourselves; for love shall \n cover the multitude of sins.\" I Peter 4:7,8\n\n",
"msg_date": "27 Aug 2002 22:05:59 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Oliver Elphick wrote:\n> > I agree with what Tom said, and understand why he said it. And I thought you \n> > did, too -- I have apparently misunderstood (again!) the issue.\n> > \n> > In the local-enabled scheme, ISTM the majority of users will be local users. \n> > The goal is transparent virtual databases -- at least that's what I consider \n> > the goal. As far as the user is concerned, the other databases might as well \n> > not even exist -- all they are doing is connecting to their database. Since \n> > they have to give the database name as part of the connection, it just makes \n> > sense that they should have the closest to default behavior.\n> > \n> > In the case of a virtual hosting postmaster, global users would likely be \n> > DBA's, although they might not be. These users are going to be the \n> > exception, not the rule -- thus a character to tag their 'exceptional' \n> > nature.\n> > \n> > You may not even want your virtual host local users to realize that there is \n> > another user by that name. Thus, the standard notation is the least \n> > intrusive for the very users that need uninstrusive notation.\n> \n> Has this behaviour been carried through into GRANT and REVOKE? If the\n> object is transparency for local users, it should be possible in\n> database \"test\" to say \"GRANT ... TO fred\" and have \"fred\" understood as\n> \"fred@test\".\n\nNo changes have been made anywhere except for the username passed by the\nclient. All reporting of user names and all administration go by their\nfull pg_shadow username, so global user dave@ is dave in pg_shadow, and\ndave is dave@db1 in pg_shadow. One goal of this patch was a small\nfootprint.\n\n> If that is the case, then I will support the current position.\n> \n> \n> It follows from the objective of transparency that, when reporting a\n> user name, local users should be reported without the database suffix,\n> i.e., \"fred\" not \"fred@test\". 
Global users should be reported with the\n> trailing \"@\". This should cause no problem, because we have no\n> cross-database communication; it should be impossible for \"george@dummy\"\n> to have any connection with database \"test\".\n\nNope, none of this is done and I don't think there is a demand to do it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 27 Aug 2002 17:11:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Tue, 2002-08-27 at 22:11, Bruce Momjian wrote:\n> Oliver Elphick wrote:\n> > Has this behaviour been carried through into GRANT and REVOKE? If the\n> > object is transparency for local users, it should be possible in\n> > database \"test\" to say \"GRANT ... TO fred\" and have \"fred\" understood as\n> > \"fred@test\".\n> \n> No changes have been made anywhere except for the username passed by the\n> client. All reporting of user names and all administration go by their\n> full pg_shadow username, so global user dave@ is dave in pg_shadow, and\n> dave is dave@db1 in pg_shadow. One goal of this patch was a small\n> footprint.\n\nThat is understandable, but it means that there is an inconsistency of\nusage for _every_ user.\n\nYou connect as \"postgres@\" and \"fred\", but for all other purposes -\nCREATE USER, GRANT, REVOKE, CURRENT_USER, etc. you will be \"postgres\"\nand \"fred@database\". This seems likely to cause users confusion, don't\nyou think? It also means that any applications which test usernames\nwill have to be altered to strip off the \"@database\".\n\nSo it seems to me that you have achieved a small footprint within the\ncode, but potentially at the cost of a larger impact on users.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"But the end of all things is at hand; be ye therefore \n sober, and watch unto prayer. And above all things \n have fervent love among yourselves; for love shall \n cover the multitude of sins.\" I Peter 4:7,8\n\n",
"msg_date": "27 Aug 2002 22:33:35 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> \tglobal usernames are stored just like before, e.g. postgres\n> \tlocal users are stored as user@dbname\n> \twhen connecting, global users add '@' to their names\n> \twhen connecting, local users use just their user name, no @dbname\n\nI'm OK with this in principle. But I must say I was quite confused\nbecause the \"@\" symbol appears in diametrically opposite contexts:\n\na) designate local users on the server\n\nb) designate global users in the client\n\nPerhaps I might have been less confused if meaning (b) used a different\ncharacter, say \"username!\". This might be equally confusing to the next\nperson -- just my observation.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 27 Aug 2002 23:36:36 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n> So it seems to me that you have achieved a small footprint within the\n> code, but potentially at the cost of a larger impact on users.\n\nI don't think anyone will deny that this is a kluge. However, we are\nnot going to resurrect the separate-password-file thing (that was a\nworse kluge, especially when used for this purpose), and we are not\ngoing to postpone 7.3 while we think about a nicer solution. So, simple\nis beautiful for now. If there are enough people actually *using* this\nfeature to make it worth improving, we can improve it ... later.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 27 Aug 2002 17:44:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Perhaps I might have been less confused if meaning (b) used a different\n> character, say \"username!\".\n\nWell, maybe ... but do we want to create two special characters in\nusernames, instead of one? @ still has to be considered special in\nincoming usernames, else you have no security against local users\nconnecting to other databases.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 27 Aug 2002 17:46:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "On Tue, 2002-08-27 at 22:44, Tom Lane wrote:\n> Oliver Elphick <olly@lfix.co.uk> writes:\n> > So it seems to me that you have achieved a small footprint within the\n> > code, but potentially at the cost of a larger impact on users.\n> \n> I don't think anyone will deny that this is a kluge. However, we are\n> not going to resurrect the separate-password-file thing (that was a\n> worse kluge, especially when used for this purpose), and we are not\n> going to postpone 7.3 while we think about a nicer solution. So, simple\n> is beautiful for now. If there are enough people actually *using* this\n> feature to make it worth improving, we can improve it ... later.\n\nCould we then have a TODO item:\n\n * Make local and global user representation consistent throughout.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"But the end of all things is at hand; be ye therefore \n sober, and watch unto prayer. And above all things \n have fervent love among yourselves; for love shall \n cover the multitude of sins.\" I Peter 4:7,8\n\n",
"msg_date": "27 Aug 2002 22:58:30 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n> This should cause no problem, because we have no\n> cross-database communication; it should be impossible for \"george@dummy\"\n> to have any connection with database \"test\".\n\nNot so; you need look no further than the owner column of pg_database\nto find a case where people can see usernames that might be local to\nother databases. Group membership lists might well contain users\nfrom multiple databases, too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 27 Aug 2002 18:10:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n> Could we then have a TODO item:\n> * Make local and global user representation consistent throughout.\n\nThat's hardly an appropriately expansive TODO item. I prefer\n\n * Provide a real solution for database-local users\n\n;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 27 Aug 2002 18:17:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
},
{
"msg_contents": "Tom Lane wrote:\n> Oliver Elphick <olly@lfix.co.uk> writes:\n> > Could we then have a TODO item:\n> > * Make local and global user representation consistent throughout.\n> \n> That's hardly an appropriately expansive TODO item. I prefer\n> \n> * Provide a real solution for database-local users\n\nI say let's get it out in the field and see what people ask for. For\nall we know, they may be very happy with this, nor not use it at all.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 27 Aug 2002 21:07:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > \tglobal usernames are stored just like before, e.g. postgres\n> > \tlocal users are stored as user@dbname\n> > \twhen connecting, global users add '@' to their names\n> > \twhen connecting, local users use just their user name, no @dbname\n> \n> I'm OK with this in principle. But I must say I was quite confused\n> because the \"@\" symbol appears in diametrically opposite contexts:\n> \n> a) designate local users on the server\n> \n> b) designate global users in the client\n> \n> Perhaps I might have been less confused if meaning (b) used a different\n> character, say \"username!\". This might be equally confusing to the next\n> person -- just my observation.\n\nThere is no question it is 100% confusing. You are not alone.\n\nWhat keeps us from unconfusing it is the desire to make local usernames\nclean looking, I think.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 27 Aug 2002 21:08:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Tue, 2002-08-27 at 23:10, Tom Lane wrote: \n> Oliver Elphick <olly@lfix.co.uk> writes:\n> > This should cause no problem, because we have no\n> > cross-database communication; it should be impossible for \"george@dummy\"\n> > to have any connection with database \"test\".\n> \n> Not so; you need look no further than the owner column of pg_database\n> to find a case where people can see usernames that might be local to\n> other databases. Group membership lists might well contain users\n> from multiple databases, too.\n\nI suspect I have a different view of the ultimate aim of this feature.\n\nIf we go to a thorough solution for virtual local databases, local users\nof other databases ought to be completely invisible. I suppose that\nmeans that to a local user, pg_database would be a view showing only\ntemplate[01] and the local database. pg_shadow, too, would show only\nglobal users and local users in the same database.\n\nI can't see how a group within a local database could contain users from\nother databases. In the context in which this is being used, each\ndatabase belongs to a different customer; each database needs to be\ninvisible to other customers. How then should it be possible to have\ngroup lists containing users from different local databases? Groups\nshould be local as well as users.\n\nPerhaps I like complicating things too much...\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Use hospitality one to another without grudging.\" \n I Peter 4:9 \n\n",
"msg_date": "28 Aug 2002 08:22:53 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n> If we go to a thorough solution for virtual local databases, local users\n> of other databases ought to be completely invisible.\n\nPerhaps. I'm not convinced of that, but it's a defensible position.\n\n> I can't see how a group within a local database could contain users from\n> other databases.\n\nThis presupposes that groups become local to databases, which is not\na foregone conclusion in my mind at all. Perhaps we'll need to invent\nthe concept of local and global groups, to go along with local and\nglobal users.\n\nAnyway, this is all designing far in advance of available use-cases.\nMarc was satisfying his needs (so far as he said, anyway) with a\npassword-based scheme even klugier than what we're going to put in 7.3.\nWe don't have other usage examples at all. And with the availability\nof schemas in 7.3, I think that multiple databases per installation\nis going to become less common to begin with --- people will more often\nuse multiple schemas in one big database if they want the option of\ndata sharing, or completely separate installations if they want airtight\nseparation.\n\nAccordingly, I'm disinclined to start actually inventing features in\nthis area until I see more evidence that it's worth the trouble.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 28 Aug 2002 09:14:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items "
}
]
[
{
    "msg_contents": "\nOkay ... since this is pretty much going to be 'one camp for, one camp\nagainst' without anything to really back up either camps perspectives /\narguments, I did some research on CVS in order to find a nice, effective\nmiddle ground ... and it actually works quite sweet ...\n\nBasically, CVS let's you \"merge\" modules into one mega-module, which is\nwhat I've just done ...\n\nI pulled out two sub-directories from the pgsql CVS repository ...\ncontrib/earthdistance and src/interfaces/libpqxx ... if you do:\n\ncvs checkout -P libpqxx\n\nyou'll see *just* the libpqxx code ... same with doing a checkout of\n'earthdistance' ...\n\nIf you checkout pgsql, it will get the source tree *minus* libpqxx and\nearthdistance ...\n\nAnd, finally, if you want it all:\n\ncvs checkout -P pgsql-all\n\nNow, for those that are going to yell out about already checked out code\n... don't, I've already tested that too ...\n\n From within your checked out code, remove the above two directories, then\nfrom the top level (ie. if you have your src in src/pgsql, go to the src\ndirectory, *not* the root of the source tree itself) and type:\n\ncvs checkout -P contrib interfaces\n\nand you will be 'in sync' ...\n\nNext thing I'm going to do is modify the scripts that build the .tar.gz\npackages so that they now build a libpqxx.tar.gz and earthdistance.tar.gz\ndistribution, so that they can be downloaded seperately, as well as\npulling out the README files so that ppl know what they are useful for in\nthe first place ...\n\nFor those that like to work on the 'mega source tree', nothing has changed\n... you check out pgsql-all, and commits will commit to the right places\nin the repository ...\n\nIf ppl want to download everything, including the kitchen sink, then they\nwill have that option ... but if ppl want to just download what they need,\nthat option is now open to them as well ...\n\nNow comes the 'tricky part, which I'm hoping someone like Peter might know\nhow to do ... I know there is a way\nof writing configure to 'run\nconfigure' in sub projects .. for instance, with libpqxx ... at the top\nlevel of pgsql, one could type:\n\n./configure --enable-libpqxx\n\nand it would run the appropriate configure in the libpqxx subdirectory and\nadd libpqxx to the 'targets' in src/interfaces/Makefile ...\n\nIf we can get *that* done, removing something like libpq++ would be a\nsimple matter of removing the option from configure.in at the top level,\nand removing its inclusion in the pgsql-all meta on the CVS repository ...\n\nnice and clean ...\n\nI've only done libpqxx and earthdistance right now ... none of the history\nis lost, but I want to get the packaging issue worked out using those two\nfirst, before I dive into the others ... also, want to make sure that I\nhaven't overlooked anything in my testing ...\n\n*Eventually*, a simple checkout of 'pgsql' should result in a \"server\nonly\" distribution that we can pull bits and pieces into transparently ...\n\nthe really cool thing is that using PHP, I could probably come up with a\nnice simple interface that ppl could create a custom download bundle:\n\n\t\"I want the base server + jdbc + the earth distance module\"\n\nand it would package that up and present it to them to download ... then\neveryone can package whatever they want and download it ...\n\n\n\n\n\n",
"msg_date": "Wed, 31 Jul 2002 22:38:47 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "> Okay ... since this is pretty much going to be 'one camp for, one camp\n> against' without anything to really back up either camps perspectives /\n> arguments, I did some research on CVS in order to find a nice, effective\n> middle ground ... and it actually works quite sweet ...\n\nPersonally, I'd like to be able to have a \"client-only\" distro, but other\nthan that I'm not fussed. I like the convenience of having all the contribs\nin contrib, however some sort of CPgAN thing would be sweet as.\n\nCurrently, the guy who maintains the FreeBSD postgres port has hacked up a\nclient only version that does all the building, but only installs the client\nstuff.\n\nChris\n\n",
"msg_date": "Thu, 1 Aug 2002 10:06:07 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "On Thu, 1 Aug 2002, Christopher Kings-Lynne wrote:\n\n> > Okay ... since this is pretty much going to be 'one camp for, one camp\n> > against' without anything to really back up either camps perspectives /\n> > arguments, I did some research on CVS in order to find a nice, effective\n> > middle ground ... and it actually works quite sweet ...\n>\n> Personally, I'd like to be able to have a \"client-only\" distro, but other\n> than that I'm not fussed. I like the convenience of having all the contribs\n> in contrib, however some sort of CPgAN thing would be sweet as.\n>\n> Currently, the guy who maintains the FreeBSD postgres port has hacked up a\n> client only version that does all the building, but only installs the client\n> stuff.\n\nYa, which is what the MySQL port does also ... but you still have to\ndownload the complete distro ... the other thing to consider is that we\nhave how many mirrors right now? That are paying for the bandwidth to\nmirror the software and provide it to the community? If 100 ppl that just\nneed, for instance, the libpq libraries could download *just* the\nlibpq.tar.gz file and install it, how much bandwidth does that save\noverall? Bruce figured that the libpq.tar.gz stuff he did for me saved\n~1/10 the space of the whole distro ... so if the distro is ~8Meg, that\nmeans that the libpq is ~800k ... so we're talking about a saving of,\nwhat, ~720Meg of bandwidth? It doesn't sound like alot when you aren't\npaying for it, but it definitely adds up quickly to those that are ...\n\n",
"msg_date": "Wed, 31 Jul 2002 23:26:10 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "...\n> *Eventually*, a simple checkout of 'pgsql' should result in a \"server\n> only\" distribution that we can pull bits and pieces into transparently ...\n\nI'm still not quite sure where this is headed or why, but if nothing\nelse pgsql could and should be the whole thing, and pgsql-server could\nbe the server subset. Server-only makes sense for only some users, and\nnothing apparently makes sense for everyone. But we may as well have the\nold names do the old things...\n\n - Thomas\n",
"msg_date": "Wed, 31 Jul 2002 22:53:09 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "On Wed, 31 Jul 2002, Thomas Lockhart wrote:\n\n> ...\n> > *Eventually*, a simple checkout of 'pgsql' should result in a \"server\n> > only\" distribution that we can pull bits and pieces into transparently ...\n>\n> I'm still not quite sure where this is headed or why, but if nothing\n> else pgsql could and should be the whole thing, and pgsql-server could\n> be the server subset. Server-only makes sense for only some users, and\n> nothing apparently makes sense for everyone. But we may as well have the\n> old names do the old things...\n\nwill make those changes in the morning ...\n\n\n",
"msg_date": "Thu, 1 Aug 2002 04:16:24 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
    "msg_contents": "On Wednesday 31 July 2002 09:38 pm, Marc G. Fournier wrote:\n> Okay ... since this is pretty much going to be 'one camp for, one camp\n> against' without anything to really back up either camps perspectives /\n> arguments, I did some research on CVS in order to find a nice, effective\n> middle ground ... and it actually works quite sweet ...\n\nMAn, did I ever pick a bad day to be offline for a whole day. :-)\n\nIf anyone cares to look, or for that matter cares at all, something similar is \nalready being done in the binary RPMs. I have split, sliced, and diced this \ndistribution for over three years. So, let me share some of the experiences \nlearned from that exercise.\n\n> Now comes the 'tricky part, which I'm hoping someone like Peter might know\n> how to do ... I know there is a way of writing configure to 'run\n> configure' in sub projects .. for instance, with libpqxx ... at the top\n> level of pgsql, one could type:\n\n> ./configure --enable-libpqxx\n\nI like this idea, up to an extent. I guess it boils down to this:\n1.)\tWhat is the minimum build environment necessary to build anything in the \nsource distribution;\n2.)\tWhat degree of granularity is desired;\n3.)\tWe must not assume the presence of a full source tree to build _anything_, \nonly the minimum build system, which can then handle everything else as a \nmodule.\n\nThen I could much more easily package a 'postgresql-devel' package that would \nallow, for instance, PostGIS to be built as a server module without having to \nhave the entire source distribution tree in place (which is currently \nrequired). As PostGIS is a separately developed and distributed module, it \nseems reasonable that it should be buildable without the full source tree in \nplace. \n\nAs to getting rid of bloat, I'm all for splitting out contrib modules, client \nlibraries, and the like into separate projects. If the build system is a \nunit, and is not dependent upon the server or libpq source code, then it is \neasier and more consistent to just require it to be in place to build ANY \ncontrib or client module.\n\nWe're too big and too spread out. It was said (by Andrew) that we're hard to \ninstall -- why is that? Is it possible that it is _because_ of the number of \npieces we include?\n\nAnd the sooner our very old perl client goes away, the better I like it. It \nis a good client, don't get me wrong: but DBD:Pg is the standard now.\n\nBut, if you are an RPM user, you can already just download the pieces for a \nminimal client-side system. And you have been able to do so for right at \nthree years, give or take.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 1 Aug 2002 11:25:47 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "> And the sooner our very old perl client goes away, the better I \n> like it. It \n> is a good client, don't get me wrong: but DBD:Pg is the standard now.\n> \n\nThis may sound like a dumb question, but DBD::Pg == DBI right ? not pg.pm\n\nI code perl about 25 hours a week, and DBI has never failed me yet.\n\nJeff.\n",
"msg_date": "Thu, 1 Aug 2002 13:05:28 -0300",
"msg_from": "\"Jeff MacDonald\" <jeff@tsunamicreek.com>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "On Thursday 01 August 2002 12:05 pm, Jeff MacDonald wrote:\n> > And the sooner our very old perl client goes away, the better I\n> > like it. It\n> > is a good client, don't get me wrong: but DBD:Pg is the standard now.\n\n> This may sound like a dumb question, but DBD::Pg == DBI right ? not pg.pm\n\nRight.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 1 Aug 2002 12:20:05 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "Lamar Owen wrote:\n> And the sooner our very old perl client goes away, the better I like it. It \n> is a good client, don't get me wrong: but DBD:Pg is the standard now.\n\nI have been in contact with Edmund about moving DBD into our CVS and\nupdating CPAN ourselves. Should have a final answer soon.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 13:27:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "Bruce Momjian wrote:\n> Lamar Owen wrote:\n> > And the sooner our very old perl client goes away, the better I like it. It \n> > is a good client, don't get me wrong: but DBD:Pg is the standard now.\n> \n> I have been in contact with Edmund about moving DBD into our CVS and\n> updating CPAN ourselves. Should have a final answer soon.\n\nOK, I got the go-ahead from Edmund. We will have DBD:pg in the 7.3\ntarball. I will add it to CVS today or tomorrow.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 14:21:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "On Thu, 1 Aug 2002, Bruce Momjian wrote:\n\n> Bruce Momjian wrote:\n> > Lamar Owen wrote:\n> > > And the sooner our very old perl client goes away, the better I like it. It\n> > > is a good client, don't get me wrong: but DBD:Pg is the standard now.\n> >\n> > I have been in contact with Edmund about moving DBD into our CVS and\n> > updating CPAN ourselves. Should have a final answer soon.\n>\n> OK, I got the go-ahead from Edmund. We will have DBD:pg in the 7.3\n> tarball. I will add it to CVS today or tomorrow.\n\nHmm,\n\naccording README from DBD-Pg ( 1.13 ), it's maintained now by\nJeffrey W. Baker (jwbaker@acm.org)\n\nDBD::Pg -- a PostgreSQL interface for Perl 5.\n\n $Id: README,v 1.3 2002/04/10 02:01:38 jwb Exp $\n\n Copyright (c) 1997,1998,1999,2000 Edmund Mergl\n Copyright (c) 2002 Jeffrey W. Baker\n Portions Copyright (c) 1994,1995,1996,1997 Tim Bunce\n\nI'm a little bit aware about DBD::Pg because it needs many changes and\ntesting to be 7.3 compliant.\n\nJeffrey, wake up :-)\n\n\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 1 Aug 2002 21:52:15 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "Oleg Bartunov wrote:\n> On Thu, 1 Aug 2002, Bruce Momjian wrote:\n> \n> > Bruce Momjian wrote:\n> > > Lamar Owen wrote:\n> > > > And the sooner our very old perl client goes away, the better I like it. It\n> > > > is a good client, don't get me wrong: but DBD:Pg is the standard now.\n> > >\n> > > I have been in contact with Edmund about moving DBD into our CVS and\n> > > updating CPAN ourselves. Should have a final answer soon.\n> >\n> > OK, I got the go-ahead from Edmund. We will have DBD:pg in the 7.3\n> > tarball. I will add it to CVS today or tomorrow.\n> \n> Hmm,\n> \n> according README from DBD-Pg ( 1.13 ), it's maintained now by\n> Jeffrey W. Baker (jwbaker@acm.org)\n> \n> DBD::Pg -- a PostgreSQL interface for Perl 5.\n> \n> $Id: README,v 1.3 2002/04/10 02:01:38 jwb Exp $\n> \n> Copyright (c) 1997,1998,1999,2000 Edmund Mergl\n> Copyright (c) 2002 Jeffrey W. Baker\n> Portions Copyright (c) 1994,1995,1996,1997 Tim Bunce\n> \n> I'm a little bit aware about DBD::Pg because it needs many changes and\n> testing to be 7.3 compliant.\n> \n> Jeffrey, wake up :-)\n\nOh, that's strange. I wonder why Edmund didn't mention that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 14:53:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "On Thu, 1 Aug 2002, Bruce Momjian wrote:\n\n> Bruce Momjian wrote:\n> > Lamar Owen wrote:\n> > > And the sooner our very old perl client goes away, the better I like it. It\n> > > is a good client, don't get me wrong: but DBD:Pg is the standard now.\n> >\n> > I have been in contact with Edmund about moving DBD into our CVS and\n> > updating CPAN ourselves. Should have a final answer soon.\n>\n> OK, I got the go-ahead from Edmund. We will have DBD:pg in the 7.3\n> tarball. I will add it to CVS today or tomorrow.\n\nIf you know how to, please import it as a seperate module, do not add it\nto the pgsql tree itself ... if you don't know how, please just forward it\nonto me and I'll get it into the repository ...\n\n",
"msg_date": "Thu, 1 Aug 2002 16:11:02 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "\nUmmm ... stupid question, but can we even bring this into the 'core'?\n\n You may distribute under the terms of either the GNU General Public\n License or the Artistic License, as specified in the Perl README file.\n\n\n\nOn Thu, 1 Aug 2002, Oleg Bartunov wrote:\n\n> On Thu, 1 Aug 2002, Bruce Momjian wrote:\n>\n> > Bruce Momjian wrote:\n> > > Lamar Owen wrote:\n> > > > And the sooner our very old perl client goes away, the better I like it. It\n> > > > is a good client, don't get me wrong: but DBD:Pg is the standard now.\n> > >\n> > > I have been in contact with Edmund about moving DBD into our CVS and\n> > > updating CPAN ourselves. Should have a final answer soon.\n> >\n> > OK, I got the go-ahead from Edmund. We will have DBD:pg in the 7.3\n> > tarball. I will add it to CVS today or tomorrow.\n>\n> Hmm,\n>\n> according README from DBD-Pg ( 1.13 ), it's maintained now by\n> Jeffrey W. Baker (jwbaker@acm.org)\n>\n> DBD::Pg -- a PostgreSQL interface for Perl 5.\n>\n> $Id: README,v 1.3 2002/04/10 02:01:38 jwb Exp $\n>\n> Copyright (c) 1997,1998,1999,2000 Edmund Mergl\n> Copyright (c) 2002 Jeffrey W. Baker\n> Portions Copyright (c) 1994,1995,1996,1997 Tim Bunce\n>\n> I'm a little bit aware about DBD::Pg because it needs many changes and\n> testing to be 7.3 compliant.\n>\n> Jeffrey, wake up :-)\n>\n>\n> >\n> >\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n\n",
"msg_date": "Thu, 1 Aug 2002 16:13:51 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "Marc G. Fournier wrote:\n> \n> Ummm ... stupid question, but can we even bring this into the 'core'?\n> \n> You may distribute under the terms of either the GNU General Public\n> License or the Artistic License, as specified in the Perl README file.\n\nArtistic License is fine, I think.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 15:19:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "On Thursday 01 August 2002 02:21 pm, Bruce Momjian wrote:\n> Bruce Momjian wrote:\n> > Lamar Owen wrote:\n> > > And the sooner our very old perl client goes away, the better I like\n> > > it. It is a good client, don't get me wrong: but DBD:Pg is the\n> > > standard now.\n\n> > I have been in contact with Edmund about moving DBD into our CVS and\n> > updating CPAN ourselves. Should have a final answer soon.\n\n> OK, I got the go-ahead from Edmund. We will have DBD:pg in the 7.3\n> tarball. I will add it to CVS today or tomorrow.\n\nUm, is putting it into our tarball necessary, or even desireable? It is \nseparately maintained as part of CPAN. Is there some fundamental reason that \nwe _must_ ship a perl client (the same goes for tcl/tk/python/c++ as well) if \nit is adequately maintained in the standard location? Perlers know to go to \nCPAN. Likewise Pythonistas have their own place.\n\nAnd that's the crux of Marc's message, I believe -- why can't we minimize \nhere?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 1 Aug 2002 15:35:55 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Artistic License is fine, I think.\n\nThe Artistic License doesn't even qualify as Free Software as far as the\nFSF is concerned.\n\nMore generally, it is a different license, and that is a problem.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 1 Aug 2002 22:37:04 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "Bruce Momjian writes:\n\n> OK, I got the go-ahead from Edmund. We will have DBD:pg in the 7.3\n> tarball. I will add it to CVS today or tomorrow.\n\nPleeeeease, no more Perl modules in our CVS! The ones we have are already\nmessy enough to build.\n\nI thought we were talking about trimming the source tree, not adding more.\nWhy not put it on gborg or somewhere else?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 1 Aug 2002 22:37:19 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "On Thursday 01 August 2002 04:37 pm, Peter Eisentraut wrote:\n> I thought we were talking about trimming the source tree, not adding more.\n> Why not put it on gborg or somewhere else?\n\nIt's already in CPAN. A link to CPAN should suffice, IMHO.\n\nI also thought we were discussing trimming the tree; and that was a good \nfeeling.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 1 Aug 2002 16:44:08 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "On Thu, 1 Aug 2002, Lamar Owen wrote:\n\n> On Thursday 01 August 2002 04:37 pm, Peter Eisentraut wrote:\n> > I thought we were talking about trimming the source tree, not adding more.\n> > Why not put it on gborg or somewhere else?\n>\n> It's already in CPAN. A link to CPAN should suffice, IMHO.\n>\n> I also thought we were discussing trimming the tree; and that was a good\n> feeling.\n\nOh thank you to both you and Peter .... I was feeling sooooo alone out\nhere ...\n\n\n",
"msg_date": "Thu, 1 Aug 2002 18:09:19 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > Artistic License is fine, I think.\n> \n> The Artistic License doesn't even qualify as Free Software as far as the\n> FSF is concerned.\n> \n> More generally, it is a different license, and that is a problem.\n\nWell, our ODBC is LGPL. I wonder if Edmund can/would change the\nlicense.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 17:16:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "Lamar Owen wrote:\n> On Thursday 01 August 2002 04:37 pm, Peter Eisentraut wrote:\n> > I thought we were talking about trimming the source tree, not adding more.\n> > Why not put it on gborg or somewhere else?\n> \n> It's already in CPAN. A link to CPAN should suffice, IMHO.\n> \n> I also thought we were discussing trimming the tree; and that was a good \n> feeling.\n\nLamar, you said earlier today:\n\n> And the sooner our very old perl client goes away, the better I like it. It \n> is a good client, don't get me wrong: but DBD:Pg is the standard now.\n\nSo I assumed you wanted DBD:Pg. DBD:Pg is a good example of an\ninterface that hasn't advanced a quickly as it would have had it been in\nour CVS tree. I have received a number of bug reports for it, and have\nthem in my mailbox. I have no idea if they made it into the CPAN\nversion. Moving interfaces out can be a problem too unless there is a\nlarge enough group to grow it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 17:22:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "On Thursday 01 August 2002 05:22 pm, Bruce Momjian wrote:\n> Lamar Owen wrote:\n> > It's already in CPAN. A link to CPAN should suffice, IMHO.\n\n> > I also thought we were discussing trimming the tree; and that was a good\n> > feeling.\n\n> Lamar, you said earlier today:\n> > And the sooner our very old perl client goes away, the better I like it. \n> > It is a good client, don't get me wrong: but DBD:Pg is the standard now.\n\n> So I assumed you wanted DBD:Pg.\n\nI'm sorry I gave that impression; I was advocating removing the old Pg module \nin favor of people using the DBD::Pg module, which is distributed separately \nin CPAN. I wasn't advocating bringing DBD::Pg into our distribution; my \napologies for giving the wrong impression. \n\nDBD::Pg is typically distributed separately even in things such as Red Hat \nLinux, where it lives as a separate RPM. There's also qt-PostgreSQL, \nphp-pgsql, mod_auth_pgsql, and others that are doing quite OK outside our \nCVS. The OpenACS/AOLserver PostgreSQL driver is a good example of a client \noutside our CVS that is being very well maintained by its group.\n\nWe should be providing the core client library, the backend, and \ndocumentation. Contrib modules (earthdistance, etc), other clients, and \nthings that don't fit in the core should be separately tarballed -- not \nnecessarily separately CVS'd -- the AOLserver CVS, for instance, has a number \nof modules, all of which are somewhat independent.\n\nThose modules then need to be buildable with a set of headers and makefile \nincludes ONLY. Without assuming any paths.\n\n> DBD:Pg is a good example of an\n> interface that hasn't advanced a quickly as it would have had it been in\n> our CVS tree. I have received a number of bug reports for it, and have\n> them in my mailbox. I have no idea if they made it into the CPAN\n> version. 
Moving interfaces out can be a problem too unless there is a\n> large enough group to grow it.\n\nEven if it's in a CVS at postgresql.org it doesn't necessarily need to be in \nthe main tarball. Even stuff in our CVS can languish -- witness the pgaccess \nrevival outside the CVS tree.\n\nThe main tarball needs dramatic splitting into independent pieces, with a \nbuild framework that can deal with the pieces. If I want a perl client, the \nbackend, and PostGIS, I should be able to download the build system, the perl \nclient, the backend, and PostGIS and make it work. And each module shouldn't \nrequire the source of the other modules to build -- just the necessary bits \nin headers. If I want just the python client, I should be able to download \nthe client-side development headers, configure, and makefile, then download \nthe python client, and build it.\n\nAnd if I already have headers installed as part of a RPM or from source, I \nshouldn't need config.guess, configure, makefile.global, or anything but a \nset of C headers to get the python client to build. IOW the python client \nshould be somewhat independent.\n\nIt CAN be done -- the AOLServer/OpenACS PostgreSQL driver does it -- all it \nneeds is the path to the headers and to libpq and it's off to the races.\n\nI guess I'm saying that we're too big and too popular to include the kitchen \nsink in the tarball anymore. We need to think more modularly -- and not \nassume a source tree with those modules.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 1 Aug 2002 18:27:13 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "\n\nSomeone said earlier cvsup would have problems but the anonymous cvs would work\nfine. \n\nWell I've just had a weirdness reconfiguring and rebuilding my few weeks old\n7.3dev tree and so deleted it and tried using the anoncvs to get pgsql. Running\nconfigure gives me the error:\n\n./configure: ./src/template/linux: No such file or directory\n\nand all ./src contains is:\n\ntotal 44\ndrwxr-xr-x 2 software software 4096 Aug 1 23:27 CVS\n-rw-r--r-- 1 software software 119 Jul 30 1999 DEVELOPERS\n-rw-r--r-- 1 software software 1039 Jul 30 18:47 Makefile\n-rw-r--r-- 1 software software 13288 Jul 27 21:10 Makefile.global.in\n-rw-r--r-- 1 software software 10853 Jul 27 21:10 Makefile.shlib\ndrwxr-xr-x 23 software software 4096 Aug 1 23:27 backend\n\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n",
"msg_date": "Thu, 1 Aug 2002 23:35:31 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Marc G. Fournier wrote:\n>> Ummm ... stupid question, but can we even bring this into the 'core'?\n>> \n>> You may distribute under the terms of either the GNU General Public\n>> License or the Artistic License, as specified in the Perl README file.\n\n> Artistic License is fine, I think.\n\nThe Artistic License is cool, but it's not BSD. Are we going to drop\nour recent agreement to try to move the distribution to all-BSD terms\nso easily?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Aug 2002 00:38:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat, Part Deux ... "
},
{
"msg_contents": "On Fri, 2 Aug 2002, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Marc G. Fournier wrote:\n> >> Ummm ... stupid question, but can we even bring this into the 'core'?\n> >>\n> >> You may distribute under the terms of either the GNU General Public\n> >> License or the Artistic License, as specified in the Perl README file.\n>\n> > Artistic License is fine, I think.\n>\n> The Artistic License is cool, but it's not BSD. Are we going to drop\n> our recent agreement to try to move the distribution to all-BSD terms\n> so easily?\n\nthat's kinda what I'm wondering ... but, as someone else already asked\nalso, is there a reason why we are even bringing this into the\ndistirbution, vs, say, moving it to GBorg? I do like the ability for us\nto modify the code itself, but haven't put it into the repository yet\nsince at least one person (or was it two?) questioned whether it should\neven be in there ...\n\nPersonally, I'd love to put it into GBorg and have someone assigned as a\nmaintainer for any patch submissions, or someone that wants to actively\nwork on it ...\n\n\n",
"msg_date": "Fri, 2 Aug 2002 03:48:54 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Trimming the Fat, Part Deux ... "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The Artistic License doesn't even qualify as Free Software as far as the\n> FSF is concerned.\n\nErrr, http://www.fsf.org/licenses/license-list.html lists it as not\nonly Free Software but GPL-compatible.\n\nMike.\n",
"msg_date": "02 Aug 2002 08:47:38 -0400",
"msg_from": "Michael Alan Dorman <mdorman@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat, Part Deux ..."
}
] |
[
{
"msg_contents": "How do I find the current username a query is running as from within the\nbackend? I'm thinking of improving the permission denied errors a bit...\n\nI see GetUserId(), but how do I get a char* of the username...\n\nChris\n\n",
"msg_date": "Thu, 1 Aug 2002 10:09:11 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Question"
},
{
"msg_contents": "On Thu, Aug 01, 2002 at 10:09:11AM +0800, Christopher Kings-Lynne wrote:\n> How do I find the current username a query is running as from within the\n> backend? I'm thinking of improving the permission denied errors a bit...\n> \n> I see GetUserId(), but how do I get a char* of the username...\n\nChecking the code for current_user, I see GetUserName(), which takes a\nUserID and returns the username, as a char *.\n\nRoss\n",
"msg_date": "Tue, 6 Aug 2002 22:15:14 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Question"
}
] |
[
{
"msg_contents": "If you have RelationGetRelationName(rel) to get the name of a relation, how\ndo you get it's fully qualified schema name? Or how do I get the schema\nname for the relation?\n\nChris\n\n",
"msg_date": "Thu, 1 Aug 2002 10:48:46 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Another quick question..."
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> If you have RelationGetRelationName(rel) to get the name of a relation, how\n> do you get it's fully qualified schema name? Or how do I get the schema\n> name for the relation?\n\nWell, you can do get_namespace_name(rel->rd_rel->relnamespace), but\nI don't really agree with changing error messages to *always* quote\nthe schema name. I think that'd be overly verbose. An appropriate\nsolution is to mention the schema name only when it's necessary to\nidentify the relation (ie, the rel would not be found in your current\nsearch path).\n\ngenerate_relation_name() in backend/utils/adt/ruleutils.c illustrates\nhow to do this. Maybe that code ought to be promoted into some more\nwidely useful location. See also the recently added format_procedure()\nand format_operator() in regproc.c.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Aug 2002 00:26:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Another quick question... "
},
{
"msg_contents": "> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > If you have RelationGetRelationName(rel) to get the name of a\n> relation, how\n> > do you get it's fully qualified schema name? Or how do I get the schema\n> > name for the relation?\n>\n> Well, you can do get_namespace_name(rel->rd_rel->relnamespace), but\n> I don't really agree with changing error messages to *always* quote\n> the schema name. I think that'd be overly verbose. An appropriate\n> solution is to mention the schema name only when it's necessary to\n> identify the relation (ie, the rel would not be found in your current\n> search path).\n\nThe problem I see is that imagine you're browsing your logs. If you see an\nerror message (or a notice) that refers just to a table, you have no idea\nwhich schema the table is in...\n\nChris\n\n",
"msg_date": "Thu, 1 Aug 2002 12:52:58 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Another quick question... "
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> Well, you can do get_namespace_name(rel->rd_rel->relnamespace), but\n>> I don't really agree with changing error messages to *always* quote\n>> the schema name. I think that'd be overly verbose.\n\n> The problem I see is that imagine you're browsing your logs. If you see an\n> error message (or a notice) that refers just to a table, you have no idea\n> which schema the table is in...\n\nWell, yeah, you may need to do a little bit of detective work to\ninterpret a logged error message. Perhaps the table name in question\nwas dropped (and even recreated) since the logged event. Perhaps it's\nin another database than you thought. Or perhaps it's a long-dead temp\ntable. Then we could move to the same set of issues with respect to\nthe types, functions, operators, etc that might impinge on the error\ncondition.\n\nI tend to think that error messages should be written for the\nconvenience of the interactive user, who has some chance of knowing\nthe context. Being verbose \"for the log\" just makes the log bigger\n(note recent complaints about volume of log output...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Aug 2002 01:20:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Another quick question... "
}
] |
[
{
"msg_contents": "Are snapshots still being generated on ftp.postgresql.org (and rsync)?\nI've just noticed that the date for the last /dev/*-snapshot* is May 8th.\nWhat's the deal?\n\n- Brandon\n\n\n----------------------------------------------------------------------------\n c: 917-697-8665 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n",
"msg_date": "Wed, 31 Jul 2002 23:49:20 -0400 (EDT)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": true,
"msg_subject": "-snapshot?"
},
{
"msg_contents": "\nthanks for the heads up, fixed ... part of the generation code was flawed,\nin that it tried to move a directory that didn't exist, failed and exited\nthe script *roll eyes* added in an 'if' to make sure the directory\nexists, and am running it manually now ...\n\n\nOn Wed, 31 Jul 2002, bpalmer wrote:\n\n> Are snapshots still being generated on ftp.postgresql.org (and rsync)?\n> I've just noticed that the date for the last /dev/*-snapshot* is May 8th.\n> What's the deal?\n>\n> - Brandon\n>\n>\n> ----------------------------------------------------------------------------\n> c: 917-697-8665 h: 201-798-4983\n> b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Thu, 1 Aug 2002 01:04:13 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: -snapshot?"
}
] |
[
{
"msg_contents": "\nI've just updated the README.cvsup file in order to reflect the changes,\nto provide a sample of how to download the whole thing, as well as\ninstructions on how to do \"just\" a particular module:\n\n========================[ Updated README.cvsup ]=========================\n# This file represents the standard CVSup distribution file\n# for the PostgreSQL ORDBMS project\n#\n# Defaults that apply to all the collections\n*default host=mcvsup.postgresql.org\n*default release=cvs\n*default delete use-rel-suffix\n*default tag=.\n\n# The following config will take the existing CVS repository\n# modules and combine them into a directory structure containing\n# all modules placed correctly in the tree.\n#\n# It is recommended that you only change the base= directory,\n# which is where the top of the tree, and any administrative\n# files that CVSup uses, will be downloaded into. The\n# subsequent prefix= tags are relative to base\n#\n# Any one of the below can be commented out to download *just*\n# that particular module ... the default here is to download\n# and place everything. If, for instance, you are only interested\n# in libpqxx, it is recommended that you comment out everything\n# below this line except for the base= directive, and the module\n# itself, so that it installs the code into $base/libpqxx\n#\n\n*default base=/home/scrappy/cvsup.src\n\npgsql\n\n*default prefix=pgsql/src/interfaces\nlibpqxx\n\n*default prefix=pgsql/contrib\nearthdistance\n\n\n",
"msg_date": "Thu, 1 Aug 2002 00:56:11 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Trimming the Fat: Getting code via CVSup ..."
},
{
"msg_contents": "On Thu, Aug 01, 2002 at 12:56:11AM -0300, Marc G. Fournier wrote:\n> I've just updated the README.cvsup file in order to reflect the changes,\n> to provide a sample of how to download the whole thing, as well as\n> instructions on how to do \"just\" a particular module:\n\nI'm using the following cvsup file, and the 'pgsql' module is empty\n(i.e. contains zero files):\n\n*default host=mcvsup.postgresql.org\n*default compress\n*default delete\n*default release=cvs\n*default delete use-rel-suffix\n*default base=/mnt/vol2/cvsup/pgsql\n\n*default prefix=/var/lib/cvs\npgsql\n*default prefix=/var/lib/cvs/pgsql/src/interfaces\nlibpqxx\n*default prefix=/var/lib/cvs/pgsql/contrib\nearthdistance\n\nReplacing 'pgsql' with 'pgsql-server' yields a 'no such module' warning\nmessage and the module is skipped.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Thu, 1 Aug 2002 12:47:57 -0400",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat: Getting code via CVSup ..."
},
{
"msg_contents": "\nTry now ... the download isn't going to work right though, as CVSup\ndoesn't honor the modules file in CVSROOT which defines how 'pgsql' is\n*supposed* to work ... I'm going to work on this later tonight and see if\nI can find something in the docs that will allow it to work properly, but\nright now, other then gettin gat the sub-modules themselves, CVSup is\nbroken ... anon-cvs will fair you better ...\n\n\nOn Thu, 1 Aug 2002, Neil Conway wrote:\n\n> On Thu, Aug 01, 2002 at 12:56:11AM -0300, Marc G. Fournier wrote:\n> > I've just updated the README.cvsup file in order to reflect the changes,\n> > to provide a sample of how to download the whole thing, as well as\n> > instructions on how to do \"just\" a particular module:\n>\n> I'm using the following cvsup file, and the 'pgsql' module is empty\n> (i.e. contains zero files):\n>\n> *default host=mcvsup.postgresql.org\n> *default compress\n> *default delete\n> *default release=cvs\n> *default delete use-rel-suffix\n> *default base=/mnt/vol2/cvsup/pgsql\n>\n> *default prefix=/var/lib/cvs\n> pgsql\n> *default prefix=/var/lib/cvs/pgsql/src/interfaces\n> libpqxx\n> *default prefix=/var/lib/cvs/pgsql/contrib\n> earthdistance\n>\n> Replacing 'pgsql' with 'pgsql-server' yields a 'no such module' warning\n> message and the module is skipped.\n>\n> Cheers,\n>\n> Neil\n>\n> --\n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n>\n\n",
"msg_date": "Thu, 1 Aug 2002 14:00:50 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Trimming the Fat: Getting code via CVSup ..."
},
{
"msg_contents": "> Try now ... the download isn't going to work right though, as CVSup\n> doesn't honor the modules file in CVSROOT which defines how 'pgsql' is\n> *supposed* to work ... I'm going to work on this later tonight and see if\n> I can find something in the docs that will allow it to work properly, but\n> right now, other then gettin gat the sub-modules themselves, CVSup is\n> broken ... anon-cvs will fair you better ...\n\nGoodness no! CVSup will work just fine, you just need to make sure that\nthe modules \"package\" is available to be downloaded too. So the\nclient-side CVSup configuration file will change a bit, but nothing else\nneeds to worry.\n\nThis should not stay broken for more than a few hours, otherwise we\nshould revert the changes until we've worked out more details.\n\nIt may be that the server-side CVSup can be configured to include both\npackages, but I would guess not since CVS organizes modules above (or\nbeside) all packages, rather than underneath any of them.\n\n - Thomas\n",
"msg_date": "Thu, 01 Aug 2002 10:21:34 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Trimming the Fat: Getting code via CVSup ..."
}
] |
[
{
"msg_contents": "Neil Conway said:\n>> FUNC_MAX_ARGS - disk/performance penalty for increase, 24, 32?\n>\n>Until someone takes the time to determine what the performance\n>implications of this change will be, I don't think we should\n>change this. Given that no one has done any testing, I'm not\n>convinced that there's a lot of demand for this anyway.\n\n\nThere's a huge demand for this from the folks involved with OpenACS. \nAlready many of the functions have run up against the 16 column limit.\nOverloading is an ugly cludge for some functions which have 'default'\nargs, but it's not a complete solution.\n\nNot that it has proven to be slower, but if it were but the difference\nwas small, I'd say that forcing a recomplile to eek out a little extra\nperformance is better than forcing it to make code work in the first\nplace.\n\n32 args, please!\n\nCheers.\n\n\n",
"msg_date": "31 Jul 2002 22:42:17 -0600",
"msg_from": "Stephen Deasey <stephen@bollocks.net>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Thu, 2002-08-01 at 00:42, Stephen Deasey wrote:\n> Neil Conway said:\n> >> FUNC_MAX_ARGS - disk/performance penalty for increase, 24, 32?\n> >\n> >Until someone takes the time to determine what the performance\n> >implications of this change will be, I don't think we should\n> >change this. Given that no one has done any testing, I'm not\n> >convinced that there's a lot of demand for this anyway.\n> \n> \n> There's a huge demand for this from the folks involved with OpenACS. \n> Already many of the functions have run up against the 16 column limit.\n> Overloading is an ugly cludge for some functions which have 'default'\n> args, but it's not a complete solution.\n> \n> Not that it has proven to be slower, but if it were but the difference\n> was small, I'd say that forcing a recomplile to eek out a little extra\n> performance is better than forcing it to make code work in the first\n> place.\n> \n> 32 args, please!\n\n32 at a bare minimum. Someone needs to dig out what the problem is and\nmake the cost increase with length. > 128 args is easily feasibly given\nsome Oracle systems I've seen -- DB functions as middleware.\n\n",
"msg_date": "02 Aug 2002 14:09:58 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Iavor Raytchev [mailto:iavor.raytchev@verysmall.org] \n> Sent: 31 July 2002 22:12\n> To: pgsql-hackers\n> Cc: pgaccess - developers\n> Subject: Re: [HACKERS] Open 7.3 items\n> \n> \n> \n> > > psql is very definitely not ready, nor is pgaccess.\n> \n> I could not really trace who said this.\n> \n> To my understanding nobody is currently testing how pgaccess \n> is dealing with 7.3 Am I wrong?\n\nIf my experience with pgAdmin is anything to go on then you've got a\n*huge* amount of work to do for 7.3 if you haven't done anything yet.\n\nRegards, Dave.\n",
"msg_date": "Thu, 1 Aug 2002 08:44:52 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
}
] |
[
{
"msg_contents": "\n> > > NAMEDATALEN - disk/performance penalty for increase, 64, 128?\n> > > FUNC_MAX_ARGS - disk/performance penalty for increase, 24, 32?\n> > \n> > At the moment I don't see a lot of solid evidence that increasing\n> > NAMEDATALEN has any performance penalty. Someone reported about\n> > a 10% slowdown on pgbench with NAMEDATALEN=128 ... but Neil Conway\n> > tried to reproduce the result, and got about a 10% *speedup*.\n> > Personally I think 10% is well within the noise spectrum for\n> > pgbench, and so it's difficult to claim that we have established\n> > any performance difference at all. I have not tried to measure\n> > FUNC_MAX_ARGS differences.\n> \n> Yes, we need someone to benchmark both the NAMEDATALEN and FUNC_MAX_ARGS\n> to prove we are not causing performance problems.\n\nI think a valid NAMEDATALEN benchmark would need to use a lot of tables,\nlike 1000-6000 with 10-100 columns each. The last bench was iirc done with \npgbench that only uses a few tables. (The name type is fixed length) \n\nAndreas\n",
"msg_date": "Thu, 1 Aug 2002 10:22:25 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
}
] |
[
{
"msg_contents": "\n> But the message I was replying to was a similar union query, and I was\n> thinking that that person might be having a similar initial intuitive\n> reaction, \"well, it looks kinda the same.\" I just wanted to note that\n> you need to check this stuff with explain, rather than \n> blindly assuming\n> you know what's going on.\n\nI had a \"union all\" view, which is actually a quite different animal than \na \"union\" view which needs to eliminate duplicates before further processing.\n\nAndreas\n",
"msg_date": "Thu, 1 Aug 2002 12:23:03 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Rules and Views "
},
{
"msg_contents": "On Thu, 1 Aug 2002, Zeugswetter Andreas SB SD wrote:\n\n> I had a \"union all\" view, which is actually a quite different animal than\n> a \"union\" view which needs to eliminate duplicates before further processing.\n\nI had the same problem with UNION ALL.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Thu, 1 Aug 2002 19:29:04 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views "
},
{
"msg_contents": "On Thu, 2002-08-01 at 12:29, Curt Sampson wrote:\n> On Thu, 1 Aug 2002, Zeugswetter Andreas SB SD wrote:\n> \n> > I had a \"union all\" view, which is actually a quite different animal than\n> > a \"union\" view which needs to eliminate duplicates before further processing.\n> \n> I had the same problem with UNION ALL.\n>\n\nCould someone give an example where it is not safe to push the WHERE\nclause down to individual parts of UNION (or UNION ALL) wher these parts\nare simple (non-aggregate) queries?\n\nI can see that it has to be made into HAVING in subquery if UNION's\nsubqueries are aggregate (GROUP BY) queries, but can anyone give an\nexample where the meaning of the query changes for non-aggregate\nsubqueries.\n\n---------------\nHannu\n",
"msg_date": "01 Aug 2002 15:58:45 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views"
},
{
"msg_contents": "On 1 Aug 2002, Hannu Krosing wrote:\n\n> On Thu, 2002-08-01 at 12:29, Curt Sampson wrote:\n> > On Thu, 1 Aug 2002, Zeugswetter Andreas SB SD wrote:\n> >\n> > > I had a \"union all\" view, which is actually a quite different animal than\n> > > a \"union\" view which needs to eliminate duplicates before further processing.\n> >\n> > I had the same problem with UNION ALL.\n> >\n>\n> Could someone give an example where it is not safe to push the WHERE\n> clause down to individual parts of UNION (or UNION ALL) wher these parts\n> are simple (non-aggregate) queries?\n\nFor union, queries that want to do something like use a temporary\nsequence to act sort of like rownum and do row limiting. Admittedly\nthat's already pretty much unspecified behavior, but it does change\nthe behavior in the place of duplicate removal. In addition, I think\nusing bits of the spec we don't completely support you can have the\nsame issue with the undefined behavior of which duplicate is returned\nfor values that aren't the same but are equal, for example where the\nduplicate removal is in one collation but the outer comparison has\na different explicitly given one.\n\nI haven't come up with any useful examples, and not really any for\nunion all, however.\n\n\n",
"msg_date": "Thu, 1 Aug 2002 07:43:57 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views"
},
{
"msg_contents": "\nOn Thu, 1 Aug 2002, Stephan Szabo wrote:\n\n> On 1 Aug 2002, Hannu Krosing wrote:\n>\n> > On Thu, 2002-08-01 at 12:29, Curt Sampson wrote:\n> > > On Thu, 1 Aug 2002, Zeugswetter Andreas SB SD wrote:\n> > >\n> > > > I had a \"union all\" view, which is actually a quite different animal than\n> > > > a \"union\" view which needs to eliminate duplicates before further processing.\n> > >\n> > > I had the same problem with UNION ALL.\n> > >\n> >\n> > Could someone give an example where it is not safe to push the WHERE\n> > clause down to individual parts of UNION (or UNION ALL) wher these parts\n> > are simple (non-aggregate) queries?\n>\n> For union, queries that want to do something like use a temporary\n> sequence to act sort of like rownum and do row limiting. Admittedly\n> that's already pretty much unspecified behavior, but it does change\n> the behavior in the place of duplicate removal. In addition, I think\n> using bits of the spec we don't completely support you can have the\n> same issue with the undefined behavior of which duplicate is returned\n> for values that aren't the same but are equal, for example where the\n> duplicate removal is in one collation but the outer comparison has\n> a different explicitly given one.\n\nReplying to myself, you can do this right now with char columns if you\njust push the conditions down blindly, something like:\n\ncreate table t1(a char(5));\ncreate table t2(a char(6));\n\ninsert into t1 values ('aaaaa');\ninsert into t2 values ('aaaaa');\n\nselect * from (select * from t2 union select * from t1) as f where\n a::text='aaaaa';\nselect * from (select * from t2 where a::text='aaaaa' union\n select * from t1 where a::text='aaaaa') as f;\n\nThe first select gives no rows, the second gives one. We'd have\nto transform the second where clause to something like\ncast(a as char(6))::text='aaaaa' in order to get the same effect\nI think.\n\n",
"msg_date": "Thu, 1 Aug 2002 08:04:03 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views"
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> For union, queries that want to do something like use a temporary\n> sequence to act sort of like rownum and do row limiting. Admittedly\n> that's already pretty much unspecified behavior, but it does change\n> the behavior in the place of duplicate removal. In addition, I think\n> using bits of the spec we don't completely support you can have the\n> same issue with the undefined behavior of which duplicate is returned\n> for values that aren't the same but are equal, for example where the\n> duplicate removal is in one collation but the outer comparison has\n> a different explicitly given one.\n\nHmm. I think this consideration boils down to whether the WHERE clause\ncan give different results for rows that appear equal under the rules of\nUNION/EXCEPT/INTERSECT. If it gives the same result for any two such\nrows, then it's safe to push down; otherwise not.\n\nIt's not too difficult to come up with examples. I invite you to play\nwith\n\nselect z,length(z) from\n(select 'abc '::char(7) as z intersect\nselect 'abc '::char(8) as z) ss;\n\nand contemplate the effects of pushing down a qual involving length(z).\n\nWhether this particular case is very important in the real world is hard\nto say. But there might be more-important cases out there.\n\nAnd yet, I think we can do it anyway. The score card looks like this to\nme:\n\nUNION ALL: always safe to push down, since the rows will be passed\nindependently to the outer WHERE anyway.\n\nUNION: it's unspecified which of a set of \"equal\" rows will be returned,\nand therefore the behavior would be unspecified anyway if the outer\nWHERE can distinguish the rows - you might get 1 row of the set out or\nnone. If we push down, then we create a situation where the returned\nrow will always be one that passes the outer WHERE, but that is a legal\nbehavior.\n\nINTERSECT: again it's unspecified which of a set of \"equal\" rows will be\nreturned, and so you might get 1 row out or none. If we push down then\nit's still unspecified whether you get a row out (example: if the outer\nWHERE will pass only for rows of the left table and not the right, then\npush down will result in no rows of the \"equal\" set being emitted, but\nthat's a legal behavior).\n\nINTERSECT ALL: if a set of \"equal\" rows contains M rows from the left\ntable and N from the right table, you're supposed to get min(M,N) rows\nof the set out of the INTERSECT ALL. Again you can't say which of the\nset you will get, so the outer WHERE might let anywhere between 0 and\nmin(M,N) rows out. With push down, M and N will be reduced by the WHERE\nbefore we do the intersection, so you still have 0 to min(M,N) rows out.\nThe behavior will change, but it's still legal per spec AFAICT.\n\nEXCEPT, EXCEPT ALL: the same sort of analysis seems to hold.\n\nIn short, it looks to me like the spec was carefully designed to allow\npush down. Pushing down a condition of this sort *does* change the\nbehavior, but the new behavior is still within spec.\n\nThe above analysis assumes that the WHERE condition is \"stable\", ie its\nresults for a row don't depend on the order in which the rows are tested\nor anything as weird as that. But we're assuming that already when we\npush down a qual in a non-set-operation case, I think.\n\nComments? Are there any other considerations to worry about?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Aug 2002 12:02:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views "
},
{
"msg_contents": "\nOn Thu, 1 Aug 2002, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > For union, queries that want to do something like use a temporary\n> > sequence to act sort of like rownum and do row limiting. Admittedly\n> > that's already pretty much unspecified behavior, but it does change\n> > the behavior in the place of duplicate removal. In addition, I think\n> > using bits of the spec we don't completely support you can have the\n> > same issue with the undefined behavior of which duplicate is returned\n> > for values that aren't the same but are equal, for example where the\n> > duplicate removal is in one collation but the outer comparison has\n> > a different explicitly given one.\n>\n> Hmm. I think this consideration boils down to whether the WHERE clause\n> can give different results for rows that appear equal under the rules of\n> UNION/EXCEPT/INTERSECT. If it gives the same result for any two such\n> rows, then it's safe to push down; otherwise not.\n>\n> It's not too difficult to come up with examples. I invite you to play\n> with\n>\n> select z,length(z) from\n> (select 'abc '::char(7) as z intersect\n> select 'abc '::char(8) as z) ss;\n>\n> and contemplate the effects of pushing down a qual involving length(z).\n>\n> Whether this particular case is very important in the real world is hard\n> to say. But there might be more-important cases out there.\n>\n> And yet, I think we can do it anyway. The score card looks like this to\n> me:\n>\n> UNION ALL: always safe to push down, since the rows will be passed\n> independently to the outer WHERE anyway.\n>\n> UNION: it's unspecified which of a set of \"equal\" rows will be returned,\n> and therefore the behavior would be unspecified anyway if the outer\n> WHERE can distinguish the rows - you might get 1 row of the set out or\n> none. If we push down, then we create a situation where the returned\n> row will always be one that passes the outer WHERE, but that is a legal\n> behavior.\n>\n> INTERSECT: again it's unspecified which of a set of \"equal\" rows will be\n> returned, and so you might get 1 row out or none. If we push down then\n> it's still unspecified whether you get a row out (example: if the outer\n> WHERE will pass only for rows of the left table and not the right, then\n> push down will result in no rows of the \"equal\" set being emitted, but\n> that's a legal behavior).\n>\n> INTERSECT ALL: if a set of \"equal\" rows contains M rows from the left\n> table and N from the right table, you're supposed to get min(M,N) rows\n> of the set out of the INTERSECT ALL. Again you can't say which of the\n> set you will get, so the outer WHERE might let anywhere between 0 and\n> min(M,N) rows out. With push down, M and N will be reduced by the WHERE\n> before we do the intersection, so you still have 0 to min(M,N) rows out.\n> The behavior will change, but it's still legal per spec AFAICT.\n>\n\n> EXCEPT, EXCEPT ALL: the same sort of analysis seems to hold.\n\nActually I think in except you may only push down to the left, since in\nthis case you know that any duplicate from the right will not be\nreturned (since there must be none). So, you can't potentially drop\na row from the right side that may have been a duplicate of a left\nside row that does match the condition.\n\nIf we assume two collations one case sensitive one not with the\nexcept in the non-sensitive and the where in the sensitive and\na left with 'A' and right with 'a', it'd be incorrect to push a\ncase sensitive where foo='A' down to the right since that'd change the\noutput from zero rows to one.\n\nSomething similar for except all since lowering the number of rows\non the right can increase the number of returned rows above\nm-n (if say all m dups match the condition and none of n do)\n\n\n> The above analysis assumes that the WHERE condition is \"stable\", ie its\n> results for a row don't depend on the order in which the rows are tested\n> or anything as weird as that. But we're assuming that already when we\n> push down a qual in a non-set-operation case, I think.\n\nIn which case we don't have to worry about the nextval() case.\n\n",
"msg_date": "Thu, 1 Aug 2002 09:27:58 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views "
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> Actually I think in except you may only push down to the left, since in\n> this case you know that any duplicate from the right will not be\n> returned (since there must be none). So, you can't potentially drop\n> a row from the right side that may have been a duplicate of a left\n> side row that does match the condition.\n\nBut we *want* to push down --- the point is to get some selectivity\ninto the bottom queries. You're right that in a plain EXCEPT it would\nbe possible to push only to the left, but that doesn't give the\nperformance improvement we want.\n\n> If we assume two collations one case sensitive one not with the\n> except in the non-sensitive and the where in the sensitive and\n> a left with 'A' and right with 'a', it'd be incorrect to push a\n> case sensitive where foo='A' down to the right since that'd change the\n> output from zero rows to one.\n\nYou missed my point. Per spec, either zero or one rows out of the whole\nthing is okay, because either the 'A' or the 'a' row might be returned\nas the representative row for the group by the EXCEPT. Yes, the\nbehavior may change, but it's still within spec.\n\n> In which case we don't have to worry about the nextval() case.\n\nYeah, I think nextval() and random() and so forth can be ignored;\nthe transformations we already do will confuse the results for such\ncases, so one more isn't gonna make it worse.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Aug 2002 12:42:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views "
},
{
"msg_contents": "On Thu, 1 Aug 2002, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n\n> > If we assume two collations one case sensitive one not with the\n> > except in the non-sensitive and the where in the sensitive and\n> > a left with 'A' and right with 'a', it'd be incorrect to push a\n> > case sensitive where foo='A' down to the right since that'd change the\n> > output from zero rows to one.\n>\n> You missed my point. Per spec, either zero or one rows out of the whole\n> thing is okay, because either the 'A' or the 'a' row might be returned\n> as the representative row for the group by the EXCEPT. Yes, the\n> behavior may change, but it's still within spec.\n\nExcept can't return 'A' or 'a', there is no representative row because\nn>0. That's the difference with UNION and INTERSECT.\n\n\"If EXCEPT is specified, then\n Case:\n A) If m>0 and n=0, then T contains exactly one duplicate of R.\n B) Otherwise, T contains no duplicate of R.\"\n\nSo if T1 has a #dups>0 and T2 has a #dups>0 we should get\nno rows, but what if T1' (with the clause) has a #dups>0 but\nT2' has a #dups=0?\n\n\n",
"msg_date": "Thu, 1 Aug 2002 10:19:07 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views "
},
{
"msg_contents": "On Thu, 2002-08-01 at 18:02, Tom Lane wrote:\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > For union, queries that want to do something like use a temporary\n> > sequence to act sort of like rownum and do row limiting. Admittedly\n> > that's already pretty much unspecified behavior, but it does change\n> > the behavior in the place of duplicate removal. In addition, I think\n> > using bits of the spec we don't completely support you can have the\n> > same issue with the undefined behavior of which duplicate is returned\n> > for values that aren't the same but are equal, for example where the\n> > duplicate removal is in one collation but the outer comparison has\n> > a different explicitly given one.\n> \n> Hmm. I think this consideration boils down to whether the WHERE clause\n> can give different results for rows that appear equal under the rules of\n> UNION/EXCEPT/INTERSECT.\n\nYes. I originally started to ponder this when trying to draw up a plan\nfor automatic generation of ON UPDATE DO INSTEAD rules for views. While\npushing down the WHERE clause is just a performance thing for SELECT it\nis essential for ON UPDATE rules.\n\n> If it gives the same result for any two such\n> rows, then it's safe to push down; otherwise not.\n> \n> It's not too difficult to come up with examples. I invite you to play\n> with\n> \n> select z,length(z) from\n> (select 'abc '::char(7) as z intersect\n> select 'abc '::char(8) as z) ss;\n> \n> and contemplate the effects of pushing down a qual involving length(z).\n\nI guess the pushdown must also push implicit conversions done to parts\nof union.\n\nif that conversion were applied to z's in both parts of UNION then the\nresult should be the same.\n\n\nselect z,length(z) from\n (\n select 'abc '::char(7) as z\n union\n select 'abc '::char(8) as z\n ) ss where length(z) = 7;\n\nbecomes:\n\nselect z,length(z) from\n (\n select 'abc '::char(7) as z\n where length(cast('abc '::char(7) as char(7))) = 7\n union\n select 'abc '::char(8) as z\n where length(cast('abc '::char(8) as char(7))) = 7\n ) ss ;\n\nwhich both return 'abc ', 7\n\nOf course it is beneficial to detect when the conversion is not needed,\nso that indexes will be used if available. \n\n---------------\nHannu\n\n",
"msg_date": "01 Aug 2002 19:58:29 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views"
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> So if T1 has a #dups>0 and T2 has a #dups>0 we should get\n> no rows, but what if T1' (with the clause) has a #dups>0 but\n> T2' has a #dups=0?\n\nUm, you're right --- pushing down into the right-hand side would reduce\nN, thereby possibly *increasing* the number of output rows not reducing\nit. My mistake ... should have worked out the EXCEPT case in more\ndetail.\n\nThis says that we can't push down at all in the EXCEPT ALL case, I\nthink, and I'm leery about whether we should push for EXCEPT. But\nthe UNION and INTERSECT cases are probably the important ones anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Aug 2002 14:59:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views "
},
{
"msg_contents": "\nOn Thu, 1 Aug 2002, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > So if T1 has a #dups>0 and T2 has a #dups>0 we should get\n> > no rows, but what if T1' (with the clause) has a #dups>0 but\n> > T2' has a #dups=0?\n>\n> Um, you're right --- pushing down into the right-hand side would reduce\n> N, thereby possibly *increasing* the number of output rows not reducing\n> it. My mistake ... should have worked out the EXCEPT case in more\n> detail.\n>\n> This says that we can't push down at all in the EXCEPT ALL case, I\n> think, and I'm leery about whether we should push for EXCEPT. But\n> the UNION and INTERSECT cases are probably the important ones anyway.\n\nI think that we can push to the left in both (should is a separate issue).\n\nIf the condition is true for all of the left hand dups, we can\nchoose to have emitted such rows as the output of the EXCEPT ALL in\nthe theoretical case so that the output is the same, max(0, m-n) rows.\nIf the condition is false for any of the left hand dups, we can safely\nreturn any number of rows between 0 and max(0,m-n) rows since we can\nsay that the difference were rows that failed the where clause. If\nwe push the condition down, we'll get some number m1 rows that succeed\nthe condition (with m1<m), so returning max(0, m1-n) should be safe.\nIf the condition is false for all of the rows, m1=0 so we'll correctly\nreturn no rows.\n\nI think.\n\n\n",
"msg_date": "Thu, 1 Aug 2002 13:01:00 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Rules and Views "
}
] |
[
{
"msg_contents": "\n... is once more 'normal' ...\n\nthere are three modules right now set up:\n\nearthdistance\nlibpqxx\npgsql-server\n\npgsql combines all three of the above to transparently give the equivalent\nof the whole distribution from its component parts ...\n\n\n\n",
"msg_date": "Thu, 1 Aug 2002 10:24:09 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "cvs checkout pgsql "
},
{
"msg_contents": "Marc G. Fournier wrote:\n> \n> ... is once more 'normal' ...\n> \n> there are three modules right now setup:\n> \n> earthdistance\n> libpqxx\n> pgsql-server\n> \n> pgsql combines all three of the above to transparently give the equivalent\n> of the whole distribution from its component parts ...\n\nMarc, I have to ask, why did you split them up that way. I thought\n/contrib, /interfaces, /doc would be the way to go.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 13:45:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cvs checkout pgsql"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> ... is once more 'normal' ...\n\nUh, it's completely broken as far as I can tell.\n\n$ pwd\n/home/postgres/pgsql/src/bin/pg_dump\n$ cvs status\ncvs server: Examining .\ncvs server: failed to create lock directory for `/cvsroot/pgsql/src/bin/pg_dump'\n (/cvsroot/pgsql/src/bin/pg_dump/#cvs.lock): No such file or directory\ncvs server: failed to obtain dir lock in repository `/cvsroot/pgsql/src/bin/pg_dump'\ncvs [server aborted]: read lock failed - giving up\n$\n\nAlso, http://developer.postgresql.org/cvsweb.cgi/pgsql/ isn't working.\n\nThis makes it a little difficult to get any work done :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Aug 2002 14:41:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cvs checkout pgsql "
},
{
"msg_contents": "Tom Lane wrote:\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > ... is once more 'normal' ...\n> \n> Uh, it's completely broken as far as I can tell.\n> \n> $ pwd\n> /home/postgres/pgsql/src/bin/pg_dump\n> $ cvs status\n> cvs server: Examining .\n> cvs server: failed to create lock directory for `/cvsroot/pgsql/src/bin/pg_dump'\n> (/cvsroot/pgsql/src/bin/pg_dump/#cvs.lock): No such file or directory\n> cvs server: failed to obtain dir lock in repository `/cvsroot/pgsql/src/bin/pg_dump'\n> cvs [server aborted]: read lock failed - giving up\n> $\n> \n> Also, http://developer.postgresql.org/cvsweb.cgi/pgsql/ isn't working.\n> \n> This makes it a little difficult to get any work done :-(\n\nYes, I just deleted my CVS tree and re-checked out.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 14:41:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cvs checkout pgsql"
},
{
"msg_contents": "On Thu, 1 Aug 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > ... is once more 'normal' ...\n>\n> Uh, it's completely broken as far as I can tell.\n>\n> $ pwd\n> /home/postgres/pgsql/src/bin/pg_dump\n> $ cvs status\n> cvs server: Examining .\n> cvs server: failed to create lock directory for `/cvsroot/pgsql/src/bin/pg_dump'\n> (/cvsroot/pgsql/src/bin/pg_dump/#cvs.lock): No such file or directory\n> cvs server: failed to obtain dir lock in repository `/cvsroot/pgsql/src/bin/pg_dump'\n> cvs [server aborted]: read lock failed - giving up\n> $\n>\n> Also, http://developer.postgresql.org/cvsweb.cgi/pgsql/ isn't working.\n>\n> This makes it a little difficult to get any work done :-(\n\nI made a large change this morning based on a comment by Thomas ... the\nway I *had* it was to get the whole thing, you had to checkout pgsql-all\n... Thomas suggested leaving it as pgsql, with pgsql-server being 'the\nbase', but in order to do that, I had to move the pgsql repository to\npgsql-server, since otherwise modules makes it recursive on my ... its a\n'one time only change', but it requires a fresh checkout to get the\npatches in properly ... same thing that I explained yesterday about how\nyou'll get a 'No such file or directory' if you tried to update, say,\nlibpqxx, since its path is no longer\n/cvsroot/pgsql/src/interfaces/libpqxx, but is /cvsroot/interfaces/libpqxx\n... the fact that its different modules is transparent to the programmer\n*unless*, of course, its already checked out at the time :(\n\n Script started on Thu Aug 1 15:15:48 2002\n> setenv CVSROOT /cvsroot\n> cvs -q checkout -P pgsql\nU pgsql/COPYRIGHT\nU pgsql/GNUmakefile.in\nU pgsql/HISTORY\nU pgsql/INSTALL\n<alot of lines deleted>\nU pgsql/src/interfaces/libpqxx/test/test5.cxx\nU pgsql/src/interfaces/libpqxx/test/test6.cxx\nU pgsql/src/interfaces/libpqxx/test/test7.cxx\nU pgsql/src/interfaces/libpqxx/test/test8.cxx\nU pgsql/src/interfaces/libpqxx/test/test9.cxx\n> cd pgsql/src/bin/pg_dump\ncvs s> cvs status\ncvs status: Examining .\n===================================================================\nFile: Makefile Status: Up-to-date\n\n Working revision: 1.36 Sat Jul 27 20:10:05 2002\n Repository revision: 1.36 /cvsroot/pgsql-server/src/bin/pg_dump/Makefile,v\n Sticky Tag: (none)\n Sticky Date: (none)\n Sticky Options: (none)\n\n===================================================================\nFile: README Status: Up-to-date\n\n Working revision: 1.5 Fri Jul 21 11:40:08 2000\n Repository revision: 1.5 /cvsroot/pgsql-server/src/bin/pg_dump/README,v\n Sticky Tag: (none)\n Sticky Date: (none)\n Sticky Options: (none)\n\n<alot more lines deleted>\n\n===================================================================\nFile: zh_CN.po Status: Up-to-date\n\n Working revision: 1.5 Mon Dec 10 18:45:57 2001\n Repository revision: 1.5 /cvsroot/pgsql-server/src/bin/pg_dump/zh_CN.po,v\n Sticky Tag: (none)\n Sticky Date: (none)\n Sticky Options: (none)\n\n===================================================================\nFile: zh_TW.po Status: Up-to-date\n\n Working revision: 1.3 Thu Nov 29 18:59:28 2001\n Repository revision: 1.3 /cvsroot/pgsql-server/src/bin/pg_dump/zh_TW.po,v\n Sticky Tag: (none)\n Sticky Date: (none)\n Sticky Options: (none)\n\n> exit\nexit\n\nScript done on Thu Aug 1 15:16:51 2002\n\n\n",
"msg_date": "Thu, 1 Aug 2002 16:22:31 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: cvs checkout pgsql "
},
{
"msg_contents": "On Thu, 1 Aug 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > ... is once more 'normal' ...\n>\n> Uh, it's completely broken as far as I can tell.\n>\n> $ pwd\n> /home/postgres/pgsql/src/bin/pg_dump\n> $ cvs status\n> cvs server: Examining .\n> cvs server: failed to create lock directory for `/cvsroot/pgsql/src/bin/pg_dump'\n> (/cvsroot/pgsql/src/bin/pg_dump/#cvs.lock): No such file or directory\n> cvs server: failed to obtain dir lock in repository `/cvsroot/pgsql/src/bin/pg_dump'\n> cvs [server aborted]: read lock failed - giving up\n> $\n>\n> Also, http://developer.postgresql.org/cvsweb.cgi/pgsql/ isn't working.\n>\n> This makes it a little difficult to get any work done :-(\n\nWhat'r you typin about? It works fine. Ok, ok.. It does *NOW*. :)\nThere's probably still stuff that's broke that I haven't discovered\nyet.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 1 Aug 2002 15:33:23 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: cvs checkout pgsql "
},
{
"msg_contents": "On Thu, 2002-08-01 at 15:33, Vince Vielhaber wrote:\n> On Thu, 1 Aug 2002, Tom Lane wrote:\n> \n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > ... is once more 'normal' ...\n> >\n> > Uh, it's completely broken as far as I can tell.\n> >\n> > $ pwd\n> > /home/postgres/pgsql/src/bin/pg_dump\n> > $ cvs status\n> > cvs server: Examining .\n> > cvs server: failed to create lock directory for `/cvsroot/pgsql/src/bin/pg_dump'\n> > (/cvsroot/pgsql/src/bin/pg_dump/#cvs.lock): No such file or directory\n> > cvs server: failed to obtain dir lock in repository `/cvsroot/pgsql/src/bin/pg_dump'\n> > cvs [server aborted]: read lock failed - giving up\n> > $\n> >\n> > Also, http://developer.postgresql.org/cvsweb.cgi/pgsql/ isn't working.\n> >\n> > This makes it a little difficult to get any work done :-(\n> \n> What'r you typin about? It works fine. Ok, ok.. It does *NOW*. :)\n> There's probably still stuff that's broke that I haven't discovered\n> yet.\n\nWell, of course that specific URL doesn't work because it's actually\npgsql-server.\n\n",
"msg_date": "01 Aug 2002 15:48:22 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: cvs checkout pgsql"
},
{
"msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> Also, http://developer.postgresql.org/cvsweb.cgi/pgsql/ isn't working.\n>> \n>> What'r you typin about? It works fine. Ok, ok.. It does *NOW*. :)\n\n> Well, of course that specific URL doesn't work because it's actually\n> pgsql-server.\n\nWell, it did work before, and I'd really like it to work again. I do\nnot want to think about how Marc's broken apart the distribution when\nI'm looking at my local tree, and I don't want to think about it when\nI'm looking at cvsweb either.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Aug 2002 15:53:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cvs checkout pgsql "
},
{
"msg_contents": "On 1 Aug 2002, Rod Taylor wrote:\n\n> On Thu, 2002-08-01 at 15:33, Vince Vielhaber wrote:\n> > On Thu, 1 Aug 2002, Tom Lane wrote:\n> >\n> > > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > > ... is once more 'normal' ...\n> > >\n> > > Uh, it's completely broken as far as I can tell.\n> > >\n> > > $ pwd\n> > > /home/postgres/pgsql/src/bin/pg_dump\n> > > $ cvs status\n> > > cvs server: Examining .\n> > > cvs server: failed to create lock directory for `/cvsroot/pgsql/src/bin/pg_dump'\n> > > (/cvsroot/pgsql/src/bin/pg_dump/#cvs.lock): No such file or directory\n> > > cvs server: failed to obtain dir lock in repository `/cvsroot/pgsql/src/bin/pg_dump'\n> > > cvs [server aborted]: read lock failed - giving up\n> > > $\n> > >\n> > > Also, http://developer.postgresql.org/cvsweb.cgi/pgsql/ isn't working.\n> > >\n> > > This makes it a little difficult to get any work done :-(\n> >\n> > What'r you typin about? It works fine. Ok, ok.. It does *NOW*. :)\n> > There's probably still stuff that's broke that I haven't discovered\n> > yet.\n>\n> Well, of course that specific URL doesn't work because it's actually\n> pgsql-server.\n\nBut the developer's web page is updated with the new info.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 1 Aug 2002 16:00:12 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: cvs checkout pgsql"
},
{
"msg_contents": "On Thu, 1 Aug 2002, Tom Lane wrote:\n\n> Rod Taylor <rbt@zort.ca> writes:\n> > Also, http://developer.postgresql.org/cvsweb.cgi/pgsql/ isn't working.\n> >>\n> >> What'r you typin about? It works fine. Ok, ok.. It does *NOW*. :)\n>\n> > Well, of course that specific URL doesn't work because it's actually\n> > pgsql-server.\n>\n> Well, it did work before, and I'd really like it to work again. I do\n> not want to think about how Marc's broken apart the distribution when\n> I'm looking at my local tree, and I don't want to think about it when\n> I'm looking at cvsweb either.\n\nYou don't have to think about anything ... god, does *nobody* read what\nthey are replying to??\n\nif you do a cvs checkout of pgsql, you will get exactly what you are used\nto ... you can update, commit, look at status, look at logs, etc ... cvs\nitself is setup to handle pulling in and placing the required modules when\nyou do the checkout of pgsql ...\n\nI could move docs into $CVSROOT/this/is/a/stupid/directory/structure/docs\nand except for the fact that you already have a copy checked out pointing\nto the old path, a fresh checkout would still place that in pgsql/docs,\nwhere you've grown used to it being ...\n\n\n\n",
"msg_date": "Thu, 1 Aug 2002 18:12:16 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: cvs checkout pgsql "
},
{
"msg_contents": "Looks like I replied to the wrong thread...here's a repeat...\n\nSeems the CVS server is not working correctly. I just deleted my CVS\ntree and did a fresh checkout of the pgsql module. Everything seemingly\nwent well. After the check out completed, I did:\n\n[gcope@mouse pgsql]$ ./configure --with-tcl --with-java --with-python\n--with-perl \nchecking build system type... i686-pc-linux-gnu\nchecking host system type... i686-pc-linux-gnu\nchecking which template to use... linux\nchecking whether to build with 64-bit integer date/time support... no\nchecking whether to build with recode support... no\nchecking whether NLS is wanted... no\nchecking for default port number... 5432\nchecking for default soft limit on number of connections... 32\nchecking for gcc... gcc\nchecking for C compiler default output... a.out\nchecking whether the C compiler works... yes\nchecking whether we are cross compiling... no\nchecking for suffix of executables... \nchecking for suffix of object files... o\nchecking whether we are using the GNU C compiler... yes\nchecking whether gcc accepts -g... yes\n./configure: ./src/template/linux: No such file or directory\n\nSo, I did, \"./configure\" which yields the same result. So, thinking\nmaybe I just had a poorly timed checkout, I did an update. Doing so,\nlooks like this:\n[gcope@mouse pgsql]$ cvs -z3 update -dP\n? config.log\ncvs server: Updating .\ncvs server: Updating ChangeLogs\ncvs server: Updating MIGRATION\ncvs server: Updating config\n.\n.\n.\nsrc/backend/utils/mb/conversion_procs/utf8_and_iso8859_1\ncvs server: Updating\nsrc/backend/utils/mb/conversion_procs/utf8_and_johab\ncvs server: Updating src/backend/utils/mb/conversion_procs/utf8_and_sjis\ncvs server: cannot open directory\n/projects/cvsroot/pgsql/src/backend/utils/mb/conversion_procs/utf8_and_sjis: No such file or directory\ncvs server: skipping directory\nsrc/backend/utils/mb/conversion_procs/utf8_and_sjis\ncvs server: Updating src/backend/utils/mb/conversion_procs/utf8_and_tcvn\ncvs server: cannot open directory\n/projects/cvsroot/pgsql/src/backend/utils/mb/conversion_procs/utf8_and_tcvn: No such file or directory\ncvs server: skipping directory\nsrc/backend/utils/mb/conversion_procs/utf8_and_tcvn\ncvs server: Updating src/backend/utils/mb/conversion_procs/utf8_and_uhc\ncvs server: cannot open directory\n/projects/cvsroot/pgsql/src/backend/utils/mb/conversion_procs/utf8_and_uhc: No such file or directory\ncvs server: skipping directory\nsrc/backend/utils/mb/conversion_procs/utf8_and_uhc\ncvs server: Updating src/backend/utils/misc\ncvs server: cannot open directory\n/projects/cvsroot/pgsql/src/backend/utils/misc: No such file or\ndirectory\ncvs server: skipping directory src/backend/utils/misc\ncvs server: Updating src/backend/utils/mmgr\ncvs server: cannot open directory\n/projects/cvsroot/pgsql/src/backend/utils/mmgr: No such file or\ndirectory\ncvs server: skipping directory src/backend/utils/mmgr\ncvs server: Updating src/backend/utils/sort\ncvs server: cannot open directory\n/projects/cvsroot/pgsql/src/backend/utils/sort: No such file or\ndirectory\ncvs server: skipping directory src/backend/utils/sort\ncvs server: Updating src/backend/utils/time\ncvs server: cannot open directory\n/projects/cvsroot/pgsql/src/backend/utils/time: No such file or\ndirectory\ncvs server: skipping directory src/backend/utils/time\ncvs server: Updating src/bin\ncvs server: cannot open directory /projects/cvsroot/pgsql/src/bin: No\nsuch file or directory\ncvs server: skipping directory src/bin\n\n\nSo, I'm fairly sure something is awry.\n\nGreg\n\nUnrelated quote:\n> I could move docs into $CVSROOT/this/is/a/stupid/directory/structure/docs\n> and except for the fact that you already have a copy checked out pointing\n> to the old path, a fresh checkout would still place that in pgsql/docs,\n> where you've grown used to it being ...\n> \n\nYou do realize that by moving modules outside of the base project,\nyou're forcing more work for the masses. That is, if you plan to have\n/doc, /pgsql-server, /contrib, etc, people will now have to create a new\n.../pgsql directory locally which means, locally, people will have\n.../pgsql/pgsql-server, .../pgsql/contrib, etc...why force onto the\ndeveloper what CVS should already be doing. I don't know about you\nguys, but when I check out pgsql, I certainly expect everything to be\nthere. If that's not what I want, then I should be more explicit in\nwhat I pick for checkout.\n\nJust some food for thought...this is a common peeve of mine when people\ndecide they need to restructure their repository...seems like this is\nalways done and almost always a poor choice that isn't realized until\nit's all done and over with.\n\nGreg",
"msg_date": "01 Aug 2002 17:45:39 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: cvs checkout pgsql"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n>> Well, it did work before, and I'd really like it to work again. I do\n>> not want to think about how Marc's broken apart the distribution when\n>> I'm looking at my local tree, and I don't want to think about it when\n>> I'm looking at cvsweb either.\n\n> You don't have to think about anything ... god, does *nobody* read what\n> they are replying to??\n\n> if you do a cvs checkout of pgsql, you will get exactly what you are used\n> to ...\n\nYes, I did a re-checkout, remerge of the rather large patch I'm working\non, rebuild, etc. I've more or less recovered here, after wasting an\nhour or two that I had other uses for. But cvsweb is still broken:\nthere is AFAICT no page that presents a unified view of the CVS tree\nanymore. If it's merely moved, how about telling me where?\n\nI quite concur with Peter's remarks: some discussion of this sort of\nchange would have been appropriate in advance, rather than after.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Aug 2002 00:07:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cvs checkout pgsql "
}
] |
[
{
"msg_contents": "I have discussed the idea of contributing our Win32 work to the\nPostgreSQL project with management.\n\nWe have also converted all of the utilities (initdb, psql, pg_dump,\npg_restore, pg_id, pg_passwd, etc.)\n\nManagement is (rightfully) concerned about recouping the many thousands\nof dollars invested in the Win32 conversion.\n\nI pointed out that future versions would contain our enhancements and\ntherefore benefit us directly.\n\nI pointed out that maintenance is 80% of the cost of a software system\nand a world-wide team of good programmers would be maintaining the code\nfor all to benefit.\n\nAnd last but not least, good computer Kharma.\n;-)\n\nThey will have to cogitate over it a bit, I think. If they agree, I\nwill make a file with our source tree available on an ftp site.\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Wednesday, July 31, 2002 1:48 PM\n> To: Dann Corbit\n> Cc: Tom Lane; PostgreSQL-development\n> Subject: Re: [HACKERS] Open 7.3 items\n> \n> \n> \n> If you can contribute it, I think it would be valuable to the \n> two other\n> Win32 projects that are working on porting the 7.3 code to Win32.\n> \n> I don't think they will have any code ready for 7.3 but they \n> may have a\n> few pieces they want to get in to make their 7.3 patching job easier,\n> like renaming macros or variables or something.\n> \n> \n> --------------------------------------------------------------\n> -------------\n> \n> Dann Corbit wrote:\n> > > -----Original Message-----\n> > > From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> > > Sent: Tuesday, July 30, 2002 9:50 PM\n> > > To: Bruce Momjian\n> > > Cc: PostgreSQL-development\n> > > Subject: Re: [HACKERS] Open 7.3 items \n> > [snip]\n> > \n> > > > Win32 - timefame?\n> > \n> > I may be able to contribute the Win32 stuff we have done here. (Not\n> > sure of it, but they do seem more open to the idea now). 
\n> It's only for\n> > 7.1.3, and so I am not sure how helpful it would be. There \n> is also a\n> > bunch of stuff that looks like this in the code:\n> > \n> > #ifdef ICKY_WIN32_KLUDGE\n> > /* Bletcherous hack to make it work in Win32 goes here... */\n> > #else\n> > /* Normal code goes here... */\n> > #endif\n> > \n> > Let me know if you are interested.\n> > \n> > ---------------------------(end of \n> broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, \n> Pennsylvania 19026\n> \n",
"msg_date": "Thu, 1 Aug 2002 11:41:11 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Thu, 1 Aug 2002, Dann Corbit wrote:\n\n> I have discussed the idea of contributing our Win32 work to the\n> PostgreSQL project with management.\n>\n> We have also converted all of the utilities (initdb, psql, pg_dump,\n> pg_restore, pg_id, pg_passwd, etc.)\n>\n> Management is (rightfully) concerned about recouping the many thousands\n> of dollars invested in the Win32 conversion.\n\nAsk them if they are willing to pay us for the many thousands of dollars\nwe've all invested in giving them something to even convert? *grin*\n\n\n\n",
"msg_date": "Thu, 1 Aug 2002 15:58:29 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
},
{
"msg_contents": "On Thu, 2002-08-01 at 20:41, Dann Corbit wrote:\n> I have discussed the idea of contributing our Win32 work to the\n> PostgreSQL project with management.\n> \n> We have also converted all of the utilities (initdb, psql, pg_dump,\n> pg_restore, pg_id, pg_passwd, etc.)\n> \n> Management is (rightfully) concerned about recouping the many thousands\n> of dollars invested in the Win32 conversion.\n> \n> I pointed out that future versions would contain our enhancements and\n> therefore benefit us directly.\n\nAlso, having some other win32 port as an official version would make it\neven harder for them to recoup their money. Saying that your verison is\nthe base of the \"official\" is always a good selling point.\n\nThey can advance (a little) their recouping only in case when not\ncontributing delays the native win32 port by some significant amount of\ntime and at the same time postgreSQL somehow magically becomes popular\namong Win32 folks.\n\nI doubt that the last two can happen simultaneously.\n\n> I pointed out that maintenance is 80% of the cost of a software system\n> and a world-wide team of good programmers would be maintaining the code\n> for all to benefit.\n\nIt also gives your customers a guarantee for the case you company goes\nbelly-up, which /could/ also be a selling point ;)\n\n> And last but not least, good computer Kharma.\n> ;-)\n\nYou could also mention the argument of \"having bigger pies vs. having a\nbigger slice of a tiny pie\" ;)\n\n---------------\nHannu\n\n",
"msg_date": "02 Aug 2002 12:45:25 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Open 7.3 items"
}
] |
[
{
"msg_contents": "Do we have any way to disable foreign key constraints? If not, I would\nlike to add it to TODO. I was asked for this feature several times at\nO'Reilly. \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 14:45:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Disable foreign key constraints"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Do we have any way to disable foreign key constraints?\n\nYou can drop and re-add the constraint relatively painlessly in 7.3.\nFormerly it took hacking on pg_trigger.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Aug 2002 15:19:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Disable foreign key constraints "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Do we have any way to disable foreign key constraints?\n> \n> You can drop and re-add the constraint relatively painlessly in 7.3.\n> Formerly it took hacking on pg_trigger.\n\nOK, let's see if it comes up after 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 15:21:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Disable foreign key constraints"
}
] |
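The drop-and-re-add workaround Tom describes above can be sketched as follows. This is a minimal illustration using the 7.3-era `ALTER TABLE ... DROP CONSTRAINT` / `ADD CONSTRAINT` syntax; the table names (`orders`, `customers`) and the constraint name are hypothetical, not taken from the thread:

```sql
-- Temporarily disable the foreign key by dropping it...
ALTER TABLE orders DROP CONSTRAINT orders_customer_fkey;

-- ...do the bulk load or other work that would violate/slow the check...

-- ...then re-add it. Re-adding validates existing rows against customers.
ALTER TABLE orders ADD CONSTRAINT orders_customer_fkey
    FOREIGN KEY (customer_id) REFERENCES customers (id);
```

Before 7.3 the equivalent required manually deleting and recreating the RI trigger rows in `pg_trigger`, which is why this only became "relatively painless" in 7.3.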
[
{
"msg_contents": "UNSUSCRIBE\n",
"msg_date": "Thu, 1 Aug 2002 20:48:34 +0200",
"msg_from": "\"Bernardo Pons\" <bernardo@atlas-iap.es>",
"msg_from_op": true,
"msg_subject": ""
}
] |
[
{
"msg_contents": "All this talk of modularity reminds me of a pet peeve: doing\ndump/restore upgrades when your databases include extension functions is\nhighly clunky, because extension functions include the fully qualified\npath to the linking library. So, for example\n\ncreate function geometry_in(opaque)\n RETURNS GEOMETRY\n AS '/opt/pgsql72/lib/contrib/libpostgis.so.0.7'\n LANGUAGE 'c' with (isstrict);\n\nIf I do a pg_dumpall on an old database and try to pipe into a new\ndatabase, things can get messy pretty fast. It would be nice if pgsql\nhad a 'default library location' which it tried to load linking\nlibraries from, in much the same way apache uses libexec. Then my\ndefinition could just be:\n\ncreate function geometry_in(opaque)\n RETURNS GEOMETRY\n AS 'libpostgis.so.0.7'\n LANGUAGE 'c' with (isstrict);\n\nWhich would be alot more portable across installations. I mean, right\nnow I can render my database inoperative just by moving my executable\ninstallation tree to a new path. Nice.\n\n-- \n __\n /\n | Paul Ramsey\n | Refractions Research\n | Email: pramsey@refractions.net\n | Phone: (250) 885-0632\n \\_\n",
"msg_date": "Thu, 01 Aug 2002 13:01:15 -0700",
"msg_from": "Paul Ramsey <pramsey@refractions.net>",
"msg_from_op": true,
"msg_subject": "Module Portability"
},
{
"msg_contents": "Contrib modules does have such possibility.\nFor example:\n\nCREATE FUNCTION ltree_in(opaque)\nRETURNS opaque\nAS 'MODULE_PATHNAME'\nLANGUAGE 'c' with (isstrict);\n\n\n\tOleg\nOn Thu, 1 Aug 2002, Paul Ramsey wrote:\n\n> All this talk of modularity reminds me of a pet peeve: doing\n> dump/restore upgrades when your databases include extension functions is\n> highly clunky, because extension functions include the fully qualified\n> path to the linking library. So, for example\n>\n> create function geometry_in(opaque)\n> RETURNS GEOMETRY\n> AS '/opt/pgsql72/lib/contrib/libpostgis.so.0.7'\n> LANGUAGE 'c' with (isstrict);\n>\n> If I do a pg_dumpall on an old database and try to pipe into a new\n> database, things can get messy pretty fast. It would be nice if pgsql\n> had a 'default library location' which it tried to load linking\n> libraries from, in much the same way apache uses libexec. Then my\n> definition could just be:\n>\n> create function geometry_in(opaque)\n> RETURNS GEOMETRY\n> AS 'libpostgis.so.0.7'\n> LANGUAGE 'c' with (isstrict);\n>\n> Which would be alot more portable across installations. I mean, right\n> now I can render my database inoperative just by moving my executable\n> installation tree to a new path. Nice.\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 1 Aug 2002 23:02:00 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: Module Portability"
},
{
"msg_contents": "Correct me if I am wrong (I often am) but isn't MODULE_PATHNAME replaced\nby the fully qualified module path during the build process? I mean, the\nsource code is movable, but a running installed system, with real data,\nis not movable, because MODULE_PATHNAME gets mapped to\n/usr/local/mypgsql/lib/blah.so or somesuch.\n\nOleg Bartunov wrote:\n> \n> Contrib modules does have such possibility.\n> For example:\n> \n> CREATE FUNCTION ltree_in(opaque)\n> RETURNS opaque\n> AS 'MODULE_PATHNAME'\n> LANGUAGE 'c' with (isstrict);\n> \n> Oleg\n> On Thu, 1 Aug 2002, Paul Ramsey wrote:\n> \n> > All this talk of modularity reminds me of a pet peeve: doing\n> > dump/restore upgrades when your databases include extension functions is\n> > highly clunky, because extension functions include the fully qualified\n> > path to the linking library. So, for example\n> >\n> > create function geometry_in(opaque)\n> > RETURNS GEOMETRY\n> > AS '/opt/pgsql72/lib/contrib/libpostgis.so.0.7'\n> > LANGUAGE 'c' with (isstrict);\n> >\n> > If I do a pg_dumpall on an old database and try to pipe into a new\n> > database, things can get messy pretty fast. It would be nice if pgsql\n> > had a 'default library location' which it tried to load linking\n> > libraries from, in much the same way apache uses libexec. Then my\n> > definition could just be:\n> >\n> > create function geometry_in(opaque)\n> > RETURNS GEOMETRY\n> > AS 'libpostgis.so.0.7'\n> > LANGUAGE 'c' with (isstrict);\n> >\n> > Which would be alot more portable across installations. I mean, right\n> > now I can render my database inoperative just by moving my executable\n> > installation tree to a new path. 
Nice.\n> >\n> >\n> \n> Regards,\n> Oleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\n-- \n __\n /\n | Paul Ramsey\n | Refractions Research\n | Email: pramsey@refractions.net\n | Phone: (250) 885-0632\n \\_\n",
"msg_date": "Thu, 01 Aug 2002 13:08:12 -0700",
"msg_from": "Paul Ramsey <pramsey@refractions.net>",
"msg_from_op": true,
"msg_subject": "Re: Module Portability"
},
{
"msg_contents": "Paul Ramsey writes:\n\n> It would be nice if pgsql had a 'default library location'\n\nSure. That's why it was implemented in 7.2.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 1 Aug 2002 22:36:50 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Module Portability"
},
{
"msg_contents": "Color me embarrassed. /\n\n\nPeter Eisentraut wrote:\n> \n> Paul Ramsey writes:\n> \n> > It would be nice if pgsql had a 'default library location'\n> \n> Sure.  That's why it was implemented in 7.2.\n> \n> --\n> Peter Eisentraut   peter_e@gmx.net\n\n-- \n      __\n     /\n     | Paul Ramsey\n     | Refractions Research\n     | Email: pramsey@refractions.net\n     | Phone: (250) 885-0632\n     \\_\n",
"msg_date": "Thu, 01 Aug 2002 14:04:35 -0700",
"msg_from": "Paul Ramsey <pramsey@refractions.net>",
"msg_from_op": true,
"msg_subject": "Re: Module Portability"
},
{
"msg_contents": "Just use $libdir...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Paul Ramsey\n> Sent: Friday, 2 August 2002 4:01 AM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] Module Portability\n> \n> \n> All this talk of modularity reminds me of a pet peeve: doing\n> dump/restore upgrades when your databases include extension functions is\n> highly clunky, because extension functions include the fully qualified\n> path to the linking library. So, for example\n> \n> create function geometry_in(opaque)\n> RETURNS GEOMETRY\n> AS '/opt/pgsql72/lib/contrib/libpostgis.so.0.7'\n> LANGUAGE 'c' with (isstrict);\n> \n> If I do a pg_dumpall on an old database and try to pipe into a new\n> database, things can get messy pretty fast. It would be nice if pgsql\n> had a 'default library location' which it tried to load linking\n> libraries from, in much the same way apache uses libexec. Then my\n> definition could just be:\n> \n> create function geometry_in(opaque)\n> RETURNS GEOMETRY\n> AS 'libpostgis.so.0.7'\n> LANGUAGE 'c' with (isstrict);\n> \n> Which would be alot more portable across installations. I mean, right\n> now I can render my database inoperative just by moving my executable\n> installation tree to a new path. Nice.\n> \n> -- \n> __\n> /\n> | Paul Ramsey\n> | Refractions Research\n> | Email: pramsey@refractions.net\n> | Phone: (250) 885-0632\n> \\_\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n",
"msg_date": "Fri, 2 Aug 2002 09:37:43 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Module Portability"
}
] |
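The resolution of the thread above, for reference: since PostgreSQL 7.2 the `$libdir` macro expands to the server's installed library directory, so the function definition from the first message can be written portably (library file name shown without a version suffix here as an assumption; use whatever name your build installs):

```sql
-- Portable form: $libdir is resolved by the server at load time,
-- so a dump/restore works across installations with different prefixes.
CREATE FUNCTION geometry_in(opaque)
    RETURNS geometry
    AS '$libdir/libpostgis'
    LANGUAGE 'c' WITH (isstrict);
```

A `pg_dumpall` of definitions written this way restores cleanly into an installation rooted at a different path, which was the original complaint.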
[
{
"msg_contents": "Ok, here are some crude benchmarks to attempt to measure the effect of\nchanging FUNC_MAX_ARGS. The benchmark script executed:\n\nCREATE FUNCTION test_func(int, int, int, int, int, int, int, int)\nRETURNS INTEGER AS 'SELECT $1 + $2 + $3 + $4 + $5 + $6 + $7 + $8'\nLANGUAGE 'sql' VOLATILE;\n\nFollowed by 30,000 calls of:\n\nSELECT test_func(i, i, i, i, i, i, i, i);\n\n(Where i was the iteration number)\n\nI ran the test several times and averaged the results -- the wall-clock\ntime remained very consistent throughout the runs. Each execution of the\nscript took about 30 seconds. The machine was otherwise idle, and all\nother PostgreSQL settings were at their default values.\n\nWith FUNC_MAX_ARGS=16:\n\n28.832\n28.609\n28.726\n28.680\n\n(average = 28.6 seconds)\n\nWith FUNC_MAX_ARGS=32:\n\n29.097\n29.337\n29.138\n28.985\n29.231\n\n(average = 29.15 seconds)\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Thu, 1 Aug 2002 18:23:38 -0400",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": true,
"msg_subject": "FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "\nThanks. That looks acceptable to me, and a good test.\n\n---------------------------------------------------------------------------\n\nNeil Conway wrote:\n> Ok, here are some crude benchmarks to attempt to measure the effect of\n> changing FUNC_MAX_ARGS. The benchmark script executed:\n> \n> CREATE FUNCTION test_func(int, int, int, int, int, int, int, int)\n> RETURNS INTEGER AS 'SELECT $1 + $2 + $3 + $4 + $5 + $6 + $7 + $8'\n> LANGUAGE 'sql' VOLATILE;\n> \n> Followed by 30,000 calls of:\n> \n> SELECT test_func(i, i, i, i, i, i, i, i);\n> \n> (Where i was the iteration number)\n> \n> I ran the test several times and averaged the results -- the wall-clock\n> time remained very consistent throughout the runs. Each execution of the\n> script took about 30 seconds. The machine was otherwise idle, and all\n> other PostgreSQL settings were at their default values.\n> \n> With FUNC_MAX_ARGS=16:\n> \n> 28.832\n> 28.609\n> 28.726\n> 28.680\n> \n> (average = 28.6 seconds)\n> \n> With FUNC_MAX_ARGS=32:\n> \n> 29.097\n> 29.337\n> 29.138\n> 28.985\n> 29.231\n> \n> (average = 29.15 seconds)\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 18:25:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "> > With FUNC_MAX_ARGS=16:\n> > (average = 28.6 seconds)\n> > With FUNC_MAX_ARGS=32:\n> > (average = 29.15 seconds)\n\nThat is almost a 2 percent cost. Shall we challenge someone to get us\nback 2 percent from somewhere before the 7.3 release? Optimizing a hot\nspot might do it...\n\n - Thomas\n",
"msg_date": "Thu, 01 Aug 2002 22:06:59 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "On Thu, 1 Aug 2002, Thomas Lockhart wrote:\n\n> > > With FUNC_MAX_ARGS=16:\n> > > (average = 28.6 seconds)\n> > > With FUNC_MAX_ARGS=32:\n> > > (average = 29.15 seconds)\n>\n> That is almost a 2 percent cost. Shall we challenge someone to get us\n> back 2 percent from somewhere before the 7.3 release? Optimizing a hot\n> spot might do it...\n\nThe other side of the coin ... have you, in the past, gained enough\nperformance to allow us a 2% slip for v7.3?\n\nSomeone mentioned that the SQL spec called for a 128byte NAMELENTH ... is\nthere similar for FUNC_MAX_ARGS that we aren't adhering to? Or is that\none semi-arbitrary?\n\n",
"msg_date": "Fri, 2 Aug 2002 03:55:01 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "On Fri, 2002-08-02 at 09:28, Marc G. Fournier wrote:\n> On Fri, 2 Aug 2002, Andrew Sullivan wrote:\n> \n> > On Fri, Aug 02, 2002 at 10:39:47AM -0400, Tom Lane wrote:\n> > >\n> > > Actually, plpgsql is pretty expensive too. The thing to be benchmarking\n> > > is applications of plain old built-in-C functions and operators.\n> >\n> > I thought part of the justification for this was for the OpenACS\n> > guys; don't they write everything in TCL?\n> \n> Nope, the OpenACS stuff relies on plpgsql functions ... the 'TCL' is the\n> web pages themselves, vs using something like PHP ... I may be wrong, but\n> I do not believe any of the functions are in TCL ...\n> \n\nNope, some are written in Tcl. Most are in plpgsql, mainly I believe so\nthat the port between Oracle and PG is easier.\n\n --brett\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n-- \nBrett Schwarz\nbrett_schwarz AT yahoo.com\n\n",
"msg_date": "02 Aug 2002 05:13:15 -0700",
"msg_from": "Brett Schwarz <brett_schwarz@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Perhaps I'm not remembering correctly, but don't SQL functions still\nhave an abnormally high cost of execution compared to plpgsql? \n\nWant to try the same thing with a plpgsql function?\n\n\nOn Thu, 2002-08-01 at 18:23, Neil Conway wrote: \n> Ok, here are some crude benchmarks to attempt to measure the effect of\n> changing FUNC_MAX_ARGS. The benchmark script executed:\n> \n> CREATE FUNCTION test_func(int, int, int, int, int, int, int, int)\n> RETURNS INTEGER AS 'SELECT $1 + $2 + $3 + $4 + $5 + $6 + $7 + $8'\n> LANGUAGE 'sql' VOLATILE;\n> \n> Followed by 30,000 calls of:\n> \n> SELECT test_func(i, i, i, i, i, i, i, i);\n> \n> (Where i was the iteration number)\n> \n> I ran the test several times and averaged the results -- the wall-clock\n> time remained very consistent throughout the runs. Each execution of the\n> script took about 30 seconds. The machine was otherwise idle, and all\n> other PostgreSQL settings were at their default values.\n> \n> With FUNC_MAX_ARGS=16:\n> \n> 28.832\n> 28.609\n> 28.726\n> 28.680\n> \n> (average = 28.6 seconds)\n> \n> With FUNC_MAX_ARGS=32:\n> \n> 29.097\n> 29.337\n> 29.138\n> 28.985\n> 29.231\n> \n> (average = 29.15 seconds)\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n",
"msg_date": "02 Aug 2002 10:18:50 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> Perhaps I'm not remembering correctly, but don't SQL functions still\n> have an abnormally high cost of execution compared to plpgsql? \n\n> Want to try the same thing with a plpgsql function?\n\nActually, plpgsql is pretty expensive too. The thing to be benchmarking\nis applications of plain old built-in-C functions and operators.\n\nAlso, there are two components that I'd be worried about: one is the\nparser's costs of operator/function lookup, and the other is runtime\noverhead. Runtime overhead is most likely concentrated in the fmgr.c\ninterface functions, which tend to do MemSets to zero out function\ncall records. I had had a todo item to eliminate the memset in favor\nof just zeroing what needs to be zeroed, at least in the one- and two-\nargument cases which are the most heavily trod code paths. This will\nbecome significantly more important if FUNC_MAX_ARGS increases.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Aug 2002 10:39:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks "
},
{
"msg_contents": "On Fri, Aug 02, 2002 at 10:39:47AM -0400, Tom Lane wrote:\n> \n> Actually, plpgsql is pretty expensive too. The thing to be benchmarking\n> is applications of plain old built-in-C functions and operators.\n\nI thought part of the justification for this was for the OpenACS\nguys; don't they write everything in TCL?\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Fri, 2 Aug 2002 11:32:07 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "On Fri, 2 Aug 2002, Andrew Sullivan wrote:\n\n> On Fri, Aug 02, 2002 at 10:39:47AM -0400, Tom Lane wrote:\n> >\n> > Actually, plpgsql is pretty expensive too. The thing to be benchmarking\n> > is applications of plain old built-in-C functions and operators.\n>\n> I thought part of the justification for this was for the OpenACS\n> guys; don't they write everything in TCL?\n\nNope, the OpenACS stuff relies on plpgsql functions ... the 'TCL' is the\nweb pages themselves, vs using something like PHP ... I may be wrong, but\nI do not believe any of the functions are in TCL ...\n\n",
"msg_date": "Fri, 2 Aug 2002 13:28:48 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": ">>>>> \"Marc\" == Marc G Fournier <scrappy@hub.org> writes:\n\n Marc> On Fri, 2 Aug 2002, Andrew Sullivan wrote:\n >> On Fri, Aug 02, 2002 at 10:39:47AM -0400, Tom Lane wrote: > >\n >> Actually, plpgsql is pretty expensive too. The thing to be\n >> benchmarking > is applications of plain old built-in-C\n >> functions and operators.\n >> \n >> I thought part of the justification for this was for the\n >> OpenACS guys; don't they write everything in TCL?\n\n Marc> Nope, the OpenACS stuff relies on plpgsql functions ... the\n Marc> 'TCL' is the web pages themselves, vs using something like\n Marc> PHP ... I may be wrong, but I do not believe any of the\n Marc> functions are in TCL ...\n\nThat's true. We have intentionally avoided adding pl/tcl functions\ninto the db. The postgresql db stuff relies extensively on plpgsql,\nwhile the oracle db stuff relies on pl/sql to provide equivalent\nfunctionality. \n\nOn the other hand, all of the web server stuff is implemented on Aolserver\nwhich uses Tcl as a scripting language.\n\nRegards,\n\nDan\n",
"msg_date": "Fri, 2 Aug 2002 12:35:10 -0400 (EDT)",
"msg_from": "Daniel Wickstrom <danw@rtp.ericsson.se>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "On Fri, Aug 02, 2002 at 12:35:10PM -0400, Daniel Wickstrom wrote:\n> \n> On the other hand, all of the web server stuff is implemented on Aolserver\n> which uses Tcl as a scripting language.\n\nI think this is why I was confused. Thanks, all.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Fri, 2 Aug 2002 14:05:56 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Andrew Sullivan <andrew@libertyrms.info> writes:\n> On Fri, Aug 02, 2002 at 10:39:47AM -0400, Tom Lane wrote:\n>> Actually, plpgsql is pretty expensive too. The thing to be benchmarking\n>> is applications of plain old built-in-C functions and operators.\n\n> I thought part of the justification for this was for the OpenACS\n> guys; don't they write everything in TCL?\n\nNot relevant. The concern about increasing FUNC_MAX_ARGS is the\noverhead it might add to existing functions that don't need any\nmore arguments. Worst case for that (percentagewise) will be\nsmall built-in functions, like say int4add.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Aug 2002 15:16:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks "
},
{
"msg_contents": "Thomas Lockhart wrote:\n> > > With FUNC_MAX_ARGS=16:\n> > > (average = 28.6 seconds)\n> > > With FUNC_MAX_ARGS=32:\n> > > (average = 29.15 seconds)\n> \n> That is almost a 2 percent cost. Shall we challenge someone to get us\n> back 2 percent from somewhere before the 7.3 release? Optimizing a hot\n> spot might do it...\n\nI wasn't terribly concerned because this wasn't a 2% on normal workload\ntest, it was a 2% bang on function calls as fast as you can test.\n \n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 2 Aug 2002 21:17:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "...\n> I wasn't terribly concerned because this wasn't a 2% on normal workload\n> test, it was a 2% bang on function calls as fast as you can test.\n\nYeah, good point. But if we can get it back somehow that would be even\nbetter :)\n\n - Thomas\n",
"msg_date": "Fri, 02 Aug 2002 19:44:34 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I wasn't terribly concerned because this wasn't a 2% on normal workload\n> test, it was a 2% bang on function calls as fast as you can test.\n\nNo, it was a 2% hit on rather slow functions with only one call made\nper query issued by the client. This is not much of a stress test.\n\nA more impressive comparison would be\n\nselect 2+2+2+2+2+2+ ... (iterate 10000 times or so)\n\nand see how much that slows down.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 03 Aug 2002 00:05:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks "
},
{
"msg_contents": "Tom Lane wrote:\n> No, it was a 2% hit on rather slow functions with only one call made\n> per query issued by the client. This is not much of a stress test.\n> \n> A more impressive comparison would be\n> \n> select 2+2+2+2+2+2+ ... (iterate 10000 times or so)\n> \n> and see how much that slows down.\n\nI ran a crude test as follows (using a PHP script on the same machine. \nNothing else going on at the same time):\n\ndo 100 times\n select 2+2+2+2+2+2+ ... iterated 9901 times\n\n\n#define INDEX_MAX_KEYS\t\t16, 32, 64, & 128\n#define FUNC_MAX_ARGS\t\tINDEX_MAX_KEYS\nmake all\nmake install\ninitdb\n\nThe results were as follows:\nINDEX_MAX_KEYS 16 32 64 128\n -----+-------+------+--------\nTime in seconds 48 49 51 55\n\n\nJoe\n\n",
"msg_date": "Fri, 02 Aug 2002 23:00:47 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I ran a crude test as follows (using a PHP script on the same machine. \n> Nothing else going on at the same time):\n\n> do 100 times\n> select 2+2+2+2+2+2+ ... iterated 9901 times\n\n> The results were as follows:\n> INDEX_MAX_KEYS 16 32 64 128\n> -----+-------+------+--------\n> Time in seconds 48 49 51 55\n\nOkay, that seems like a good basic test.\n\nDid you happen to make any notes about the disk space occupied by the\ndatabase? One thing I was worried about was the bloat that'd occur\nin pg_proc, pg_index, and pg_proc_proname_args_nsp_index. Aside from\ncosting disk space, this would indirectly slow things down due to more\nI/O to read these tables --- an effect that probably your test couldn't\nmeasure, since it wasn't touching very many entries in any of those\ntables.\n\nLooks like we could go for 32 without much problem, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 03 Aug 2002 12:41:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks "
},
{
"msg_contents": "Tom Lane wrote:\n> Did you happen to make any notes about the disk space occupied by the\n> database? One thing I was worried about was the bloat that'd occur\n> in pg_proc, pg_index, and pg_proc_proname_args_nsp_index. Aside from\n> costing disk space, this would indirectly slow things down due to more\n> I/O to read these tables --- an effect that probably your test couldn't\n> measure, since it wasn't touching very many entries in any of those\n> tables.\n\nNo, but it's easy enough to repeat. I'll do that today sometime.\n\nJoe\n\n\n",
"msg_date": "Sat, 03 Aug 2002 10:11:50 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> How hard would it be to change pg_proc.proargtypes from oidvector to _oid\n\nLack of btree index support for _oid would be the first hurdle.\n\nEven if we wanted to do that work, there'd be some serious breakage\nof client queries because of the historical differences in output format\nand subscripting. (oidvector indexes from 0, _oid from 1. Which is\npretty bogus, but if the regression tests are anything to judge by there\nare probably a lot of queries out there that know this.)\n\n> This could also get the requested 2% speedup,\n\nI'm not convinced that _oid would be faster.\n\nAll in all, it doesn't seem worth the trouble compared to just kicking\nFUNC_MAX_ARGS up a notch. At least not right now. I think we've\ncreated quite enough system-catalog changes for one release cycle ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 03 Aug 2002 14:20:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks "
},
{
"msg_contents": "On Sat, 2002-08-03 at 18:41, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> > I ran a crude test as follows (using a PHP script on the same machine. \n> > Nothing else going on at the same time):\n> \n> > do 100 times\n> > select 2+2+2+2+2+2+ ... iterated 9901 times\n> \n> > The results were as follows:\n> > INDEX_MAX_KEYS 16 32 64 128\n> > -----+-------+------+--------\n> > Time in seconds 48 49 51 55\n> \n> Okay, that seems like a good basic test.\n> \n> Did you happen to make any notes about the disk space occupied by the\n> database? One thing I was worried about was the bloat that'd occur\n> in pg_proc, pg_index, and pg_proc_proname_args_nsp_index. Aside from\n> costing disk space, this would indirectly slow things down due to more\n> I/O to read these tables --- an effect that probably your test couldn't\n> measure, since it wasn't touching very many entries in any of those\n> tables.\n\nHow hard would it be to change pg_proc.proargtypes from oidvector to _oid and hope \nthat toasting will take care of the rest ?\n\nThis could also get the requested 2% speedup, not to mention the\npotential for up to 64K arguments ;)\n\n---------------\nHannu\n",
"msg_date": "03 Aug 2002 20:56:56 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Tom Lane wrote:\n> Did you happen to make any notes about the disk space occupied by the\n> database? One thing I was worried about was the bloat that'd occur\n> in pg_proc, pg_index, and pg_proc_proname_args_nsp_index. Aside from\n> costing disk space, this would indirectly slow things down due to more\n> I/O to read these tables --- an effect that probably your test couldn't\n> measure, since it wasn't touching very many entries in any of those\n> tables.\n\n#define INDEX_MAX_KEYS 16\n#define FUNC_MAX_ARGS INDEX_MAX_KEYS\ndu -h --max-depth=1 /opt/data/pgsql/data/base/\n2.7M /opt/data/pgsql/data/base/1\n2.7M /opt/data/pgsql/data/base/16862\n2.7M /opt/data/pgsql/data/base/16863\n2.7M /opt/data/pgsql/data/base/16864\n3.2M /opt/data/pgsql/data/base/16865\n2.7M /opt/data/pgsql/data/base/16866\n17M /opt/data/pgsql/data/base\n\n#define INDEX_MAX_KEYS 32\n#define FUNC_MAX_ARGS INDEX_MAX_KEYS\n du -h --max-depth=1 /opt/data/pgsql/data/base/\n3.1M /opt/data/pgsql/data/base/1\n3.1M /opt/data/pgsql/data/base/16862\n3.1M /opt/data/pgsql/data/base/16863\n3.1M /opt/data/pgsql/data/base/16864\n3.6M /opt/data/pgsql/data/base/16865\n3.1M /opt/data/pgsql/data/base/16866\n19M /opt/data/pgsql/data/base\n\n#define INDEX_MAX_KEYS 64\n#define FUNC_MAX_ARGS INDEX_MAX_KEYS\ndu -h --max-depth=1 /opt/data/pgsql/data/base/\n3.9M /opt/data/pgsql/data/base/1\n3.9M /opt/data/pgsql/data/base/16862\n3.9M /opt/data/pgsql/data/base/16863\n3.9M /opt/data/pgsql/data/base/16864\n4.4M /opt/data/pgsql/data/base/16865\n3.9M /opt/data/pgsql/data/base/16866\n24M /opt/data/pgsql/data/base\n\n#define INDEX_MAX_KEYS 128\n#define FUNC_MAX_ARGS INDEX_MAX_KEYS\ndu -h --max-depth=1 /opt/data/pgsql/data/base/\n5.7M /opt/data/pgsql/data/base/1\n5.7M /opt/data/pgsql/data/base/16862\n5.7M /opt/data/pgsql/data/base/16863\n5.7M /opt/data/pgsql/data/base/16864\n6.3M /opt/data/pgsql/data/base/16865\n5.7M /opt/data/pgsql/data/base/16866\n35M /opt/data/pgsql/data/base\n\n\nJoe\n\n",
"msg_date": "Sat, 03 Aug 2002 14:24:45 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "On Sat, 2002-08-03 at 23:20, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > How hard would it be to change pg_proc.proargtypes from oidvector to _oid\n> \n> Lack of btree index support for _oid would be the first hurdle.\n\nIs that index really needed, or is it there just to enforce uniqueness ?\n\nWould the lookup not be in some internal cache most of the time ?\n\nAlso, (imho ;) btree index support should be done for all array types\nwhich have comparison ops for elements at once (with semantics similar\nto string) not one by one for individual types. It should be in some\nways quite similar to multi-key indexes, so perhaps some code could be\nborrowed from there.\n\nOtoh, It should be a SMOP to write support for b-tree indexes just for\n_oid :-p , most likely one could re-use code from oidvector ;)\n\n> Even if we wanted to do that work, there'd be some serious breakage\n> of client queries because of the historical differences in output format\n> and subscripting. (oidvector indexes from 0, _oid from 1. Which is\n> pretty bogus, but if the regression tests are anything to judge by there\n> are probably a lot of queries out there that know this.)\n\nI would guess that oidvector is sufficiently obscure type and that\nnobody actually uses oidvector for user tables. \n\nIt is also only used in two tables and one index in system tables:\n\nhannu=# select relname,relkind from pg_class where oid in (\nhannu-# select attrelid from pg_attribute where atttypid=30);\n relname | relkind \n---------------------------------+---------\n pg_index | r\n pg_proc_proname_narg_type_index | i\n pg_proc | r\n(3 rows)\n\n> > This could also get the requested 2% speedup,\n> \n> I'm not convinced that _oid would be faster.\n\nNeither am I, but it _may_ be that having generally shorter oid arrays\nwins us enough ;)\n\n> All in all, it doesn't seem worth the trouble compared to just kicking\n> FUNC_MAX_ARGS up a notch. At least not right now. 
I think we've\n> created quite enough system-catalog changes for one release cycle ;-)\n\nBut going to _oid will free us from arbitrary limits on argument count.\nOr at least from small arbitrary limits, as there will probably still be\nthe at-least-three-btree-keys-must-fit-in-page limit (makes > 2600\nargs/function) and maybe some other internal limits as well.\n\n------------------\nHannu\n\n",
"msg_date": "04 Aug 2002 02:38:01 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Hannu Krosing wrote:\n> On Sat, 2002-08-03 at 23:20, Tom Lane wrote:\n> > Hannu Krosing <hannu@tm.ee> writes:\n> > > How hard would it be to change pg_proc.proargtypes from oidvector to _oid\n> > \n> > Lack of btree index support for _oid would be the first hurdle.\n> \n> Is that index really needed, or is it there just to enforce uniqueness ?\n\nNeeded to look up functions based on their args.\n\nThe big issue of using arrays is that we don't have cache capability for\nvariable length fields. Until we get that, we are stuck with\nNAMEDATALEN taking the full length, and oidvector taking the full\nlength.\n\nAnd if we went with variable length, there may be a performance penalty.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 3 Aug 2002 20:40:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "\nOK, time to get moving folks. Looks like the increase in the function\nargs to 32 and the NAMEDATALEN to 128 has been sufficiently tested. Tom\nhas some ideas on removing some memset() calls for function args to\nspeed things up, but we don't have to wait for that go get going. The\nend of August is nearing.\n\nIs there any reason to delay the change further?\n\n---------------------------------------------------------------------------\n\nJoe Conway wrote:\n> Tom Lane wrote:\n> > Did you happen to make any notes about the disk space occupied by the\n> > database? One thing I was worried about was the bloat that'd occur\n> > in pg_proc, pg_index, and pg_proc_proname_args_nsp_index. Aside from\n> > costing disk space, this would indirectly slow things down due to more\n> > I/O to read these tables --- an effect that probably your test couldn't\n> > measure, since it wasn't touching very many entries in any of those\n> > tables.\n> \n> #define INDEX_MAX_KEYS 16\n> #define FUNC_MAX_ARGS INDEX_MAX_KEYS\n> du -h --max-depth=1 /opt/data/pgsql/data/base/\n> 2.7M /opt/data/pgsql/data/base/1\n> 2.7M /opt/data/pgsql/data/base/16862\n> 2.7M /opt/data/pgsql/data/base/16863\n> 2.7M /opt/data/pgsql/data/base/16864\n> 3.2M /opt/data/pgsql/data/base/16865\n> 2.7M /opt/data/pgsql/data/base/16866\n> 17M /opt/data/pgsql/data/base\n> \n> #define INDEX_MAX_KEYS 32\n> #define FUNC_MAX_ARGS INDEX_MAX_KEYS\n> du -h --max-depth=1 /opt/data/pgsql/data/base/\n> 3.1M /opt/data/pgsql/data/base/1\n> 3.1M /opt/data/pgsql/data/base/16862\n> 3.1M /opt/data/pgsql/data/base/16863\n> 3.1M /opt/data/pgsql/data/base/16864\n> 3.6M /opt/data/pgsql/data/base/16865\n> 3.1M /opt/data/pgsql/data/base/16866\n> 19M /opt/data/pgsql/data/base\n> \n> #define INDEX_MAX_KEYS 64\n> #define FUNC_MAX_ARGS INDEX_MAX_KEYS\n> du -h --max-depth=1 /opt/data/pgsql/data/base/\n> 3.9M /opt/data/pgsql/data/base/1\n> 3.9M /opt/data/pgsql/data/base/16862\n> 3.9M /opt/data/pgsql/data/base/16863\n> 3.9M 
/opt/data/pgsql/data/base/16864\n> 4.4M /opt/data/pgsql/data/base/16865\n> 3.9M /opt/data/pgsql/data/base/16866\n> 24M /opt/data/pgsql/data/base\n> \n> #define INDEX_MAX_KEYS 128\n> #define FUNC_MAX_ARGS INDEX_MAX_KEYS\n> du -h --max-depth=1 /opt/data/pgsql/data/base/\n> 5.7M /opt/data/pgsql/data/base/1\n> 5.7M /opt/data/pgsql/data/base/16862\n> 5.7M /opt/data/pgsql/data/base/16863\n> 5.7M /opt/data/pgsql/data/base/16864\n> 6.3M /opt/data/pgsql/data/base/16865\n> 5.7M /opt/data/pgsql/data/base/16866\n> 35M /opt/data/pgsql/data/base\n> \n> \n> Joe\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 3 Aug 2002 21:04:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n>> Lack of btree index support for _oid would be the first hurdle.\n\n> Is that index really needed, or is it there just to enforce uniqueness ?\n\nBoth.\n\n> Also, (imho ;) btree index support should be done for all array types\n> which have comparison ops for elements at once (with semantics similar\n> to string) not one by one for individual types.\n\nFine, send a patch ;-)\n\n>> Even if we wanted to do that work, there'd be some serious breakage\n>> of client queries because of the historical differences in output format\n>> and subscripting. (oidvector indexes from 0, _oid from 1. Which is\n>> pretty bogus, but if the regression tests are anything to judge by there\n>> are probably a lot of queries out there that know this.)\n\n> I would guess that oidvector is sufficiently obscure type and that\n> nobody actually uses oidvector for user tables. \n\nNo, you miss my point: client queries that do subscripting on\nproargtypes will break. Since the regression tests find this a useful\nthing to do, I suspect there are clients out there that do too.\n\n> But going to _oid will free us from arbitrary limits on argument count.\n\nI didn't say it wouldn't be a good idea in the long run. I'm saying I\ndon't think it's happening for 7.3, given that Aug 31 is not that far\naway anymore and that a lot of cleanup work remains undone on other\nalready-committed features. FUNC_MAX_ARGS=32 could happen for 7.3, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 03 Aug 2002 21:38:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks "
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, time to get moving folks. Looks like the increase in the function\n> args to 32 and the NAMEDATALEN to 128 has been sufficiently tested.\n\nI'm convinced by Joe's numbers that FUNC_MAX_ARGS = 32 shouldn't hurt\ntoo much. But have we done equivalent checks on NAMEDATALEN? In\nparticular, do we know what it does to the size of template1?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 03 Aug 2002 22:15:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, time to get moving folks. Looks like the increase in the function\n> > args to 32 and the NAMEDATALEN to 128 has been sufficiently tested.\n> \n> I'm convinced by Joe's numbers that FUNC_MAX_ARGS = 32 shouldn't hurt\n> too much. But have we done equivalent checks on NAMEDATALEN? In\n> particular, do we know what it does to the size of template1?\n\nNo, I thought we saw the number, was 30%? No, we did a test for 64.\nCan someone get us that number for 128?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 3 Aug 2002 22:54:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Tom Lane wrote:\n>>I'm convinced by Joe's numbers that FUNC_MAX_ARGS = 32 shouldn't hurt\n>>too much. But have we done equivalent checks on NAMEDATALEN? In\n>>particular, do we know what it does to the size of template1?\n> No, I thought we saw the number, was 30%? No, we did a test for 64.\n> Can someone get us that number for 128?\n> \n\nI'll do 32, 64, and 128 and report back on template1 size.\n\nJoe\n\n",
"msg_date": "Sat, 03 Aug 2002 23:58:59 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Tom Lane wrote:\n> \n>>Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>>\n>>>OK, time to get moving folks. Looks like the increase in the function\n>>>args to 32 and the NAMEDATALEN to 128 has been sufficiently tested.\n>>\n>>I'm convinced by Joe's numbers that FUNC_MAX_ARGS = 32 shouldn't hurt\n>>too much. But have we done equivalent checks on NAMEDATALEN? In\n>>particular, do we know what it does to the size of template1?\n> \n> \n> No, I thought we saw the number, was 30%? No, we did a test for 64.\n> Can someone get us that number for 128?\n> \n\nThese are all with FUNC_MAX_ARGS = 16.\n\n#define NAMEDATALEN 32\ndu -h --max-depth=1 /opt/data/pgsql/data/base/\n2.7M /opt/data/pgsql/data/base/1\n2.7M /opt/data/pgsql/data/base/16862\n2.7M /opt/data/pgsql/data/base/16863\n2.7M /opt/data/pgsql/data/base/16864\n3.2M /opt/data/pgsql/data/base/16865\n2.7M /opt/data/pgsql/data/base/16866\n2.7M /opt/data/pgsql/data/base/17117\n19M /opt/data/pgsql/data/base\n\n#define NAMEDATALEN 64\ndu -h --max-depth=1 /opt/data/pgsql/data/base/\n3.0M /opt/data/pgsql/data/base/1\n3.0M /opt/data/pgsql/data/base/16863\n3.0M /opt/data/pgsql/data/base/16864\n3.0M /opt/data/pgsql/data/base/16865\n3.5M /opt/data/pgsql/data/base/16866\n3.0M /opt/data/pgsql/data/base/16867\n19M /opt/data/pgsql/data/base\n\n#define NAMEDATALEN 128\ndu -h --max-depth=1 /opt/data/pgsql/data/base/\n3.8M /opt/data/pgsql/data/base/1\n3.8M /opt/data/pgsql/data/base/16863\n3.8M /opt/data/pgsql/data/base/16864\n3.8M /opt/data/pgsql/data/base/16865\n4.4M /opt/data/pgsql/data/base/16866\n3.8M /opt/data/pgsql/data/base/16867\n23M /opt/data/pgsql/data/base\n\n\nJoe\n\n",
"msg_date": "Sun, 04 Aug 2002 00:18:23 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Joe Conway wrote:\n> These are all with FUNC_MAX_ARGS = 16.\n> \n> #define NAMEDATALEN 32\n> du -h --max-depth=1 /opt/data/pgsql/data/base/\n> 2.7M /opt/data/pgsql/data/base/1\n> 2.7M /opt/data/pgsql/data/base/16862\n> 2.7M /opt/data/pgsql/data/base/16863\n> 2.7M /opt/data/pgsql/data/base/16864\n> 3.2M /opt/data/pgsql/data/base/16865\n> 2.7M /opt/data/pgsql/data/base/16866\n> 2.7M /opt/data/pgsql/data/base/17117\n> 19M /opt/data/pgsql/data/base\n> \n\nFWIW, this is FUNC_MAX_ARGS = 32 *and* NAMEDATALEN 128\ndu -h --max-depth=1 /opt/data/pgsql/data/base/\n4.1M /opt/data/pgsql/data/base/1\n4.1M /opt/data/pgsql/data/base/16863\n4.1M /opt/data/pgsql/data/base/16864\n4.1M /opt/data/pgsql/data/base/16865\n4.8M /opt/data/pgsql/data/base/16866\n4.1M /opt/data/pgsql/data/base/16867\n26M /opt/data/pgsql/data/base\n\nJoe\n\n",
"msg_date": "Sun, 04 Aug 2002 16:17:49 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> These are all with FUNC_MAX_ARGS = 16.\n\n> #define NAMEDATALEN 32\n> 2.7M /opt/data/pgsql/data/base/1\n\n> #define NAMEDATALEN 64\n> 3.0M /opt/data/pgsql/data/base/1\n\n> #define NAMEDATALEN 128\n> 3.8M /opt/data/pgsql/data/base/1\n\nBased on Joe's numbers, I'm kind of thinking that we should go for\nFUNC_MAX_ARGS=32 and NAMEDATALEN=64 as defaults in 7.3.\n\nAlthough NAMEDATALEN=128 would be needed for full SQL compliance,\nthe space penalty seems severe. I'm thinking we should back off\nuntil someone wants to do the legwork needed to make the name type\nbe truly variable-length.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 04 Aug 2002 23:56:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks "
},
{
"msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> \n>>These are all with FUNC_MAX_ARGS = 16.\n> \n> \n>>#define NAMEDATALEN 32\n>>2.7M /opt/data/pgsql/data/base/1\n> \n> \n>>#define NAMEDATALEN 64\n>>3.0M /opt/data/pgsql/data/base/1\n> \n> \n>>#define NAMEDATALEN 128\n>>3.8M /opt/data/pgsql/data/base/1\n> \n> \n> Based on Joe's numbers, I'm kind of thinking that we should go for\n> FUNC_MAX_ARGS=32 and NAMEDATALEN=64 as defaults in 7.3.\n> \n> Although NAMEDATALEN=128 would be needed for full SQL compliance,\n> the space penalty seems severe. I'm thinking we should back off\n> until someone wants to do the legwork needed to make the name type\n> be truly variable-length.\n\nFWIW, I reran the speed benchmark (select 2+2+2...) with \nFUNC_MAX_ARGS=32 and NAMEDATALEN=128 and still got 49 seconds, i.e. \nNAMEDATALEN=128 didn't impact performance of that particular test.\n\nThe results were as follows:\nINDEX_MAX_KEYS 16 32 64 128\n -----+-------+------+--------\nTime in seconds 48 49 51 55\n ^^^^^^^^\n reran with NAMEDATALEN=128, same result\n\nWhat will the impact be on a medium to large production database? In \nother words, is the bloat strictly to the system catalogs based on how \nextensive your database schema (bad choice of words now, but I don't \nknow a better term for this) is? Or will the bloat scale with the size \nof the database including data?\n\nJoe\n\n",
"msg_date": "Sun, 04 Aug 2002 21:08:25 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> > These are all with FUNC_MAX_ARGS = 16.\n> \n> > #define NAMEDATALEN 32\n> > 2.7M /opt/data/pgsql/data/base/1\n> \n> > #define NAMEDATALEN 64\n> > 3.0M /opt/data/pgsql/data/base/1\n> \n> > #define NAMEDATALEN 128\n> > 3.8M /opt/data/pgsql/data/base/1\n> \n> Based on Joe's numbers, I'm kind of thinking that we should go for\n> FUNC_MAX_ARGS=32 and NAMEDATALEN=64 as defaults in 7.3.\n> \n> Although NAMEDATALEN=128 would be needed for full SQL compliance,\n> the space penalty seems severe. I'm thinking we should back off\n> until someone wants to do the legwork needed to make the name type\n> be truly variable-length.\n\nI prefer 64 for NAMEDATALEN myself. Standards compliance is nice, but\nrealistically it seems a shame to waste so much space on an excessive\nlength that will never be used.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 5 Aug 2002 01:21:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Bruce Momjian wrote:\n> I prefer 64 for NAMEDATALEN myself. Standards compliance is nice, but\n> realistically it seems a shame to waste so much space on an excessive\n> length that will never be used.\n> \n\nBut is the space wasted really never more than a few MB's, even if the \ndatabase itself is say 1 GB? If so, and if the speed penalty is small to \nnon-existent, I'd rather be spec compliant. That way nobody has a good \nbasis for complaining ;-)\n\nI guess I'll try another test with a larger data-set.\n\nJoe\n\n",
"msg_date": "Sun, 04 Aug 2002 23:08:17 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Joe Conway wrote:\n> But is the space wasted really never more than a few MB's, even if the \n> database itself is say 1 GB? If so, and if the speed penalty is small to \n> non-existent, I'd rather be spec compliant. That way nobody has a good \n> basis for complaining ;-)\n> \n> I guess I'll try another test with a larger data-set.\n> \n\nStarting with pg_dumpall file at 138M.\n\n\n#define INDEX_MAX_KEYS\t\t16\n#define FUNC_MAX_ARGS\t\tINDEX_MAX_KEYS\n#define NAMEDATALEN 32\ndu -h --max-depth=1 /opt/data/pgsql/data/base/\n2.7M /opt/data/pgsql/data/base/1\n2.7M /opt/data/pgsql/data/base/16862\n119M /opt/data/pgsql/data/base/16863\n3.1M /opt/data/pgsql/data/base/696496\n3.1M /opt/data/pgsql/data/base/696623\n3.1M /opt/data/pgsql/data/base/696750\n2.8M /opt/data/pgsql/data/base/696877\n2.8M /opt/data/pgsql/data/base/696889\n2.8M /opt/data/pgsql/data/base/696901\n2.8M /opt/data/pgsql/data/base/696912\n18M /opt/data/pgsql/data/base/696924\n3.0M /opt/data/pgsql/data/base/878966\n2.7M /opt/data/pgsql/data/base/881056\n2.7M /opt/data/pgsql/data/base/881075\n2.8M /opt/data/pgsql/data/base/881078\n3.1M /opt/data/pgsql/data/base/881093\n3.1M /opt/data/pgsql/data/base/881225\n2.8M /opt/data/pgsql/data/base/881604\n3.3M /opt/data/pgsql/data/base/881620\n31M /opt/data/pgsql/data/base/881807\n31M /opt/data/pgsql/data/base/1031939\n32M /opt/data/pgsql/data/base/1181250\n31M /opt/data/pgsql/data/base/1332676\n309M /opt/data/pgsql/data/base\n\n\n#define INDEX_MAX_KEYS\t\t32\n#define FUNC_MAX_ARGS\t\tINDEX_MAX_KEYS\n#define NAMEDATALEN 128\ndu -h --max-depth=1 /opt/data/pgsql/data/base/\n4.1M /opt/data/pgsql/data/base/1\n4.1M /opt/data/pgsql/data/base/16863\n121M /opt/data/pgsql/data/base/16864\n4.6M /opt/data/pgsql/data/base/696497\n4.6M /opt/data/pgsql/data/base/696624\n4.6M /opt/data/pgsql/data/base/696751\n4.2M /opt/data/pgsql/data/base/696878\n4.2M /opt/data/pgsql/data/base/696890\n4.2M /opt/data/pgsql/data/base/696902\n4.2M 
/opt/data/pgsql/data/base/696913\n20M /opt/data/pgsql/data/base/696925\n4.5M /opt/data/pgsql/data/base/878967\n4.2M /opt/data/pgsql/data/base/881057\n4.1M /opt/data/pgsql/data/base/881076\n4.2M /opt/data/pgsql/data/base/881079\n4.7M /opt/data/pgsql/data/base/881094\n4.7M /opt/data/pgsql/data/base/881226\n4.2M /opt/data/pgsql/data/base/881605\n4.9M /opt/data/pgsql/data/base/881621\n33M /opt/data/pgsql/data/base/881808\n33M /opt/data/pgsql/data/base/1031940\n33M /opt/data/pgsql/data/base/1181251\n33M /opt/data/pgsql/data/base/1332677\n343M /opt/data/pgsql/data/base\n\nSo the 119MB database only grows to 121MB. In fact, each of the > 10MB \ndatabases seems to grow only about 2MB. Based on this, I'd go with:\n\n#define INDEX_MAX_KEYS\t\t32\n#define FUNC_MAX_ARGS\t\tINDEX_MAX_KEYS\n#define NAMEDATALEN 128\n\nand take spec compliance.\n\nJoe\n\n",
"msg_date": "Sun, 04 Aug 2002 23:36:50 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Tom Lane wrote:\n>> Although NAMEDATALEN=128 would be needed for full SQL compliance,\n>> the space penalty seems severe.\n\n> What will the impact be on a medium to large production database? In \n> other words, is the bloat strictly to the system catalogs based on how \n> extensive your database schema (bad choice of words now, but I don't \n> know a better term for this) is? Or will the bloat scale with the size \n> of the database including data?\n\nThe bloat would scale with the size of your schema, not with the amount\nof data in your tables (unless you have \"name\" columns in your user\ntables, which is something we've always discouraged). template1 is\nclearly a worst-case scenario, percentagewise, for NAMEDATALEN.\n\nI'm quite prepared to believe that the net cost is \"a couple megs per\ndatabase\" more or less independent of how much data you store. Maybe\nthat's negligible these days, or maybe it isn't ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Aug 2002 09:33:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks "
},
{
"msg_contents": "Tom Lane wrote:\n> The bloat would scale with the size of your schema, not with the amount\n> of data in your tables (unless you have \"name\" columns in your user\n> tables, which is something we've always discouraged). template1 is\n> clearly a worst-case scenario, percentagewise, for NAMEDATALEN.\n> \n> I'm quite prepared to believe that the net cost is \"a couple megs per\n> database\" more or less independent of how much data you store. Maybe\n> that's negligible these days, or maybe it isn't ...\n\nSeems to me it's negligible for the vast majority of applications. I \n*know* it is for any appplication that I have.\n\nWe can always tell people who are doing embedded application work to \nbump *down* NAMEDATALEN.\n\nJoe\n\n\n",
"msg_date": "Mon, 05 Aug 2002 08:18:38 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> We can always tell people who are doing embedded application work to \n> bump *down* NAMEDATALEN.\n\nGood point. Okay, I'm OK with 128 ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Aug 2002 11:26:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks "
},
{
"msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> > We can always tell people who are doing embedded application work to \n> > bump *down* NAMEDATALEN.\n> \n> Good point. Okay, I'm OK with 128 ...\n\nYes, good point. I think the major issue is pushing stuff out of the\ncache because we have longer names. Did we see performance hit at 128? \nSeems it more that just disk space.\n\nI don't have trouble with 128, but other than standards compliance, I\ncan't see many people getting >64 names.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 5 Aug 2002 12:21:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I don't have trouble with 128, but other than standards compliance, I\n> can't see many people getting >64 names.\n\nOne nice thing about 128 is you can basically forget about the weird\ntruncation behavior on generated sequence names for serial columns\n--- \"tablename_colname_seq\" will be correct for essentially all\npractical cases. At 64 you might still need to think about it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Aug 2002 12:25:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks "
},
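Tom's point about generated sequence names can be sketched quickly. The snippet below is a simplified, hypothetical model of the backend's rule that an identifier keeps at most NAMEDATALEN - 1 bytes (the real code is more careful and shortens the table/column parts so the "_seq" suffix survives); the table and column names here are made up for illustration:

```python
def truncate_name(ident: str, namedatalen: int) -> str:
    # PostgreSQL stores identifiers in a fixed NAMEDATALEN-byte buffer,
    # so at most NAMEDATALEN - 1 bytes survive (one byte is the NUL
    # terminator).  Simplified sketch; not the actual backend logic.
    return ident[:namedatalen - 1]

# Hypothetical serial-column sequence name: table "customer_accounts",
# column "account_id" -> "customer_accounts_account_id_seq" (32 chars).
seq = "customer_accounts_account_id_seq"

print(truncate_name(seq, 32))  # clipped at NAMEDATALEN = 32
print(truncate_name(seq, 64))  # intact at NAMEDATALEN = 64
```

At 32 the name loses its trailing character, which is why a user guessing "tablename_colname_seq" could guess wrong; at 64 or 128 essentially all practical cases fit.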
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I don't have trouble with 128, but other than standards compliance, I\n> > can't see many people getting >64 names.\n> \n> One nice thing about 128 is you can basically forget about the weird\n> truncation behavior on generated sequence names for serial columns\n> --- \"tablename_colname_seq\" will be correct for essentially all\n> practical cases. At 64 you might still need to think about it.\n\nOh, good point. Does anyone remember the performance hit for 64 vs 128\nnamedatalen?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 5 Aug 2002 12:54:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Joe Conway writes:\n\n> I'd rather be spec compliant. That way nobody has a good basis for\n> complaining ;-)\n\nHow long until someone figures out that to be spec-compliant you need\nNAMEDATALEN 129? ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 5 Aug 2002 21:40:30 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Well, in fact it's not just a question of disk space.\n\nThe following numbers are stats for total elapsed time of \"make\ninstallcheck\" over ten trials:\n\nNAMEDATALEN = 32, FUNC_MAX_ARGS = 16\n\n min | max | avg | stddev\n-------+-------+--------+-------------------\n 25.59 | 27.61 | 26.612 | 0.637003401351409\n\nNAMEDATALEN = 64, FUNC_MAX_ARGS = 32\n\n min | max | avg | stddev\n-------+-------+--------+-----------------\n 26.32 | 29.27 | 27.415 | 1.0337982824947\n\nNAMEDATALEN = 128, FUNC_MAX_ARGS = 32\n\n min | max | avg | stddev\n-------+-------+--------+------------------\n 27.44 | 30.79 | 29.603 | 1.26148105195622\n\nI'm not sure about the trend of increasing standard deviation --- that\nmay reflect more disk I/O being done, and perhaps more checkpoints\noccurring during the test. But in any case it's clear that there's a\nnontrivial runtime cost here. Does a 10% slowdown bother you?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Aug 2002 17:31:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks "
},
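The arithmetic behind "Does a 10% slowdown bother you?" can be checked directly from the averages Tom posted:

```python
# Average "make installcheck" elapsed times from the trials above,
# in seconds, keyed by NAMEDATALEN.
avg = {32: 26.612, 64: 27.415, 128: 29.603}

def slowdown(base: float, test: float) -> float:
    """Percentage slowdown of `test` relative to `base`."""
    return (test / base - 1.0) * 100.0

print(f"64  vs 32: {slowdown(avg[32], avg[64]):.1f}%")   # about  3.0%
print(f"128 vs 32: {slowdown(avg[32], avg[128]):.1f}%")  # about 11.2%
```

So the jump from 32 to 64 costs roughly 3% on this workload, while 32 to 128 costs just over 11%.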
{
"msg_contents": "Tom Lane wrote:\n> Well, in fact it's not just a question of disk space.\n> \n> The following numbers are stats for total elapsed time of \"make\n> installcheck\" over ten trials:\n> \n<snip>\n> I'm not sure about the trend of increasing standard deviation --- that\n> may reflect more disk I/O being done, and perhaps more checkpoints\n> occurring during the test. But in any case it's clear that there's a\n> nontrivial runtime cost here. Does a 10% slowdown bother you?\n\nHmmm -- didn't Neil do some kind of test that had different results, \ni.e. not much performance difference? I wonder if the large number of \nDDL commands in installcheck doesn't skew the results against longer \nNAMEDATALEN compared to other benchmarks?\n\n# pwd\n/opt/src/pgsql/src/test/regress/sql\n# grep -i 'CREATE\\|DROP' * | wc -l\n 1114\n\n\nJoe\n\n",
"msg_date": "Mon, 05 Aug 2002 16:08:40 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n>> I'm not sure about the trend of increasing standard deviation --- that\n>> may reflect more disk I/O being done, and perhaps more checkpoints\n>> occurring during the test. But in any case it's clear that there's a\n>> nontrivial runtime cost here. Does a 10% slowdown bother you?\n\n> Hmmm -- didn't Neil do some kind of test that had different results, \n> i.e. not much performance difference?\n\nWell, one person had reported a 10% slowdown in pgbench, but Neil saw\na 10% speedup. Given the well-known difficulty of getting any\nreproducible numbers out of pgbench, I don't trust either number very\nfar; but unless some other folk are willing to repeat the experiment\nI think we can only conclude that pgbench isn't affected much by\nNAMEDATALEN.\n\n> I wonder if the large number of \n> DDL commands in installcheck doesn't skew the results against longer \n> NAMEDATALEN compared to other benchmarks?\n\nDepends on what you consider skewed, I suppose. pgbench touches only a\nvery small number of relations, and starts no new backends over the\nlength of its run, thus everything gets cached and stays cached. At\nbest I'd consider it an existence proof that some applications won't be\nhurt.\n\nDo you have another application you'd consider a more representative\nbenchmark?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Aug 2002 19:46:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks "
},
{
"msg_contents": "Tom Lane wrote:\n> Depends on what you consider skewed, I suppose. pgbench touches only a\n> very small number of relations, and starts no new backends over the\n> length of its run, thus everything gets cached and stays cached. At\n> best I'd consider it an existence proof that some applications won't be\n> hurt.\n> \n> Do you have another application you'd consider a more representative\n> benchmark?\n\nI'm not sure. Maybe OSDB? I'll see if I can get it running over the next \nfew days. Anyone else have other suggestions?\n\nJoe\n\n",
"msg_date": "Mon, 05 Aug 2002 17:45:33 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "> I don't have trouble with 128, but other than standards compliance, I\n> can't see many people getting >64 names.\n\nDon't forget that 128 is for *bytes*, not for characters(this is still\nture with 7.3). In CJK(Chinese, Japanese and Korean) single character\ncan eat up to 3 bytes if the encoding is UTF-8.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 06 Aug 2002 11:08:41 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n>> I don't have trouble with 128, but other than standards compliance, I\n>> can't see many people getting >64 names.\n\n> Don't forget that 128 is for *bytes*, not for characters(this is still\n> ture with 7.3). In CJK(Chinese, Japanese and Korean) single character\n> can eat up to 3 bytes if the encoding is UTF-8.\n\nTrue, but in those languages a typical name would be many fewer\ncharacters than it is in Western alphabets, no? I'd guess (with\nno evidence though) that the effect would more or less cancel out.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Aug 2002 22:54:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks "
},
{
"msg_contents": "\nAs long as we allocate the full length for the funcarg and name types,\nwe are going to have performance/space issues with increasing them,\nespecially since we are looking at doubling or quadrupling those values.\n\nYou can say that the test below isn't a representative benchmark, but I\nam sure it is typical of _some_ of our users, so it may still be a\nsignificant test. We don't get good benchmark numbers by accident. It\nis this type of analysis that keeps us sharp.\n\nI think funcargs of 32 and name of 64 is the way to go for 7.3. If we\nfind we need longer names or we find we can make them variable length,\nwe can revisit the issue. However, variable length has a performance\ncost as well, so it is not certain we will ever make them variable\nlength.\n\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Well, in fact it's not just a question of disk space.\n> \n> The following numbers are stats for total elapsed time of \"make\n> installcheck\" over ten trials:\n> \n> NAMEDATALEN = 32, FUNC_MAX_ARGS = 16\n> \n> min | max | avg | stddev\n> -------+-------+--------+-------------------\n> 25.59 | 27.61 | 26.612 | 0.637003401351409\n> \n> NAMEDATALEN = 64, FUNC_MAX_ARGS = 32\n> \n> min | max | avg | stddev\n> -------+-------+--------+-----------------\n> 26.32 | 29.27 | 27.415 | 1.0337982824947\n> \n> NAMEDATALEN = 128, FUNC_MAX_ARGS = 32\n> \n> min | max | avg | stddev\n> -------+-------+--------+------------------\n> 27.44 | 30.79 | 29.603 | 1.26148105195622\n> \n> I'm not sure about the trend of increasing standard deviation --- that\n> may reflect more disk I/O being done, and perhaps more checkpoints\n> occurring during the test. But in any case it's clear that there's a\n> nontrivial runtime cost here. 
Does a 10% slowdown bother you?\n> \n> \t\t\tregards, tom lane\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Aug 2002 02:10:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Bruce Momjian wrote:\n> As long as we allocate the full length for the funcarg and name types,\n> we are going to have performance/space issues with increasing them,\n> especially since we are looking at doubling or quadrupling those values.\n> \n> You can say that the test below isn't a representative benchmark, but I\n> am sure it is typical of _some_ of our users, so it may still be a\n> significant test. We don't get good benchmark numbers by accident. It\n> is this type of analysis that keeps us sharp.\n\nI'm running the OSDB benchmark right now. So far the Single user test \nresults are done, and the overall results is like this:\n\nNAMEDATALEN = 32, FUNC_MAX_ARGS = 32\n\"Single User Test\" 2205.89 seconds (0:36:45.89)\n\nNAMEDATALEN = 128, FUNC_MAX_ARGS = 32\n\"Single User Test\" 2256.16 seconds (0:37:36.16)\n\nSo the difference in performance for this benchmark is not nearly so \nlarge, more like 2%. The multi-user portion of the second test is \nrunning right now, so I'll report final results in the morning. I might \nalso run this on the same machine against 7.2.1 to see where we would \nstand in comparison to the last release. But that won't happen until \ntomorrow some time.\n\nJoe\n\n",
"msg_date": "Mon, 05 Aug 2002 23:20:24 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Joe Conway wrote:\n> Bruce Momjian wrote:\n> \n>> As long as we allocate the full length for the funcarg and name types,\n>> we are going to have performance/space issues with increasing them,\n>> especially since we are looking at doubling or quadrupling those values.\n>>\n>> You can say that the test below isn't a representative benchmark, but I\n>> am sure it is typical of _some_ of our users, so it may still be a\n>> significant test. We don't get good benchmark numbers by accident. It\n>> is this type of analysis that keeps us sharp.\n> \n> \n> I'm running the OSDB benchmark right now. So far the Single user test \n> results are done, and the overall results is like this:\n> \n> NAMEDATALEN = 32, FUNC_MAX_ARGS = 32\n> \"Single User Test\" 2205.89 seconds (0:36:45.89)\n> \n> NAMEDATALEN = 128, FUNC_MAX_ARGS = 32\n> \"Single User Test\" 2256.16 seconds (0:37:36.16)\n> \n> So the difference in performance for this benchmark is not nearly so \n> large, more like 2%. The multi-user portion of the second test is \n> running right now, so I'll report final results in the morning. I might \n> also run this on the same machine against 7.2.1 to see where we would \n> stand in comparison to the last release. But that won't happen until \n> tomorrow some time.\n> \n\nHere's the multi-user test summary. Very little difference. The details \nof the OSDB output are attached.\n\nNAMEDATALEN = 32, FUNC_MAX_ARGS = 32\n\"Multi-User Test\" 3403.84 seconds (0:56:43.84)\n\nNAMEDATALEN = 128, FUNC_MAX_ARGS = 32\n\"Multi-User Test\" 3412.18 seconds (0:56:52.18)\n\nJoe",
"msg_date": "Tue, 06 Aug 2002 00:00:29 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Hi,\n\n> Here's the multi-user test summary. Very little difference. The details\n> of the OSDB output are attached.\n>\n> NAMEDATALEN = 32, FUNC_MAX_ARGS = 32\n> \"Multi-User Test\" 3403.84 seconds (0:56:43.84)\n>\n> NAMEDATALEN = 128, FUNC_MAX_ARGS = 32\n> \"Multi-User Test\" 3412.18 seconds (0:56:52.18)\n\nSeeing these numbers, I would definitely vote for standards-compliance with\nNAMEDATALEN = 128. 8.34 seconds more on 3400 seconds is just a 0.25%\nincrease.\n\nSander.\n\n\n\n",
"msg_date": "Tue, 6 Aug 2002 11:05:52 +0200",
"msg_from": "\"Sander Steffann\" <steffann@nederland.net>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "On Tue, 6 Aug 2002, Bruce Momjian wrote:\n\n> \n> As long as we allocate the full length for the funcarg and name types,\n> we are going to have performance/space issues with increasing them,\n> especially since we are looking at doubling or quadrupling those values.\n> \n> [snip]\n> \n> I think funcargs of 32 and name of 64 is the way to go for 7.3. If we\n> find we need longer names or we find we can make them variable length,\n> we can revisit the issue. However, variable length has a performance\n> cost as well, so it is not certain we will ever make them variable\n> length.\n\nI was thinking of looking at turning names to varchars/text in order to test\nthe performance hit [in the first instance]. However doing a\n\n find . -name \\*\\.\\[ch\\] | xargs grep NAMEDATALEN | wc -l\n\ngives 185 hits and some of those are setting other macros. It seems to me there\nis a fair amount of work involved in just getting variable length names into\nthe system so that they can be tested.\n\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n",
"msg_date": "Tue, 6 Aug 2002 10:47:21 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> I was thinking of looking at turning names to varchars/text in order to test\n> the performance hit [in the first instance]. However doing a\n> find . -name \\*\\.\\[ch\\] | xargs grep NAMEDATALEN | wc -l\n> gives 185 hits and some of those are setting other macros. It seems to\n> me there is a fair amount of work involved in just getting variable\n> length names into the system so that they can be tested.\n\nAnd that is not even the tip of the iceberg. The real reason that NAME\nis fixed-length is so that it can be accessed as a member of a C\nstructure. Moving NAME into the variable-length category would make it\nmuch more painful to access than it is now, and would require\nrearranging the field order in every system catalog that has a name field.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Aug 2002 09:36:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks "
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> I'm not pretending to know anything about it, but can't this be made\n> into a pointer that is accessed as a member of a C structure. This\n> should not need rearranging the field order.\n\nYou can't store pointers on disk. At least not usefully.\n\n> From what I remember the main concern was lack of support for varlen\n> types in cache manager (whatever it means) ?\n\nThat would be a localized fix; I'm not very worried about it. A\nsystem-wide change in notation for getting at NAMEs would be quite\npainful, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Aug 2002 10:17:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks "
},
{
"msg_contents": "On Tue, 2002-08-06 at 15:36, Tom Lane wrote:\n> \"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> > I was thinking of looking at turning names to varchars/text in order to test\n> > the performance hit [in the first instance]. However doing a\n> > find . -name \\*\\.\\[ch\\] | xargs grep NAMEDATALEN | wc -l\n> > gives 185 hits and some of those are setting other macros. It seems to\n> > me there is a fair amount of work involved in just getting variable\n> > length names into the system so that they can be tested.\n> \n> And that is not even the tip of the iceberg. The real reason that NAME\n> is fixed-length is so that it can be accessed as a member of a C\n> structure.\n\nI'm not pretending to know anything about it, but can't this be made\ninto a pointer that is accessed as a member of a C structure. This\nshould not need rearranging the field order.\n\nAnd if we were lucky enough, then change from char[32] to char* will be\ninvisible for most places that use it.\n\n> Moving NAME into the variable-length category would make it\n> much more painful to access than it is now, and would require\n> rearranging the field order in every system catalog that has a name field.\n\n From what I remember the main concern was lack of support for varlen\ntypes in cache manager (whatever it means) ?\n\n---------------\nHannu\n\n",
"msg_date": "06 Aug 2002 17:02:16 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Joe Conway writes:\n\n> Here's the multi-user test summary. Very little difference. The details\n> of the OSDB output are attached.\n\nThe fact that the OSDB benchmark has just about the least possible test\ncoverage of identifier handling and you still get a 2% performance drop is\nsomething I'm concerned about.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 6 Aug 2002 23:18:06 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Joe Conway writes:\n>>Here's the multi-user test summary. Very little difference. The details\n>>of the OSDB output are attached.\n> \n> The fact that the OSDB benchmark has just about the least possible test\n> coverage of identifier handling and you still get a 2% performance drop is\n> something I'm concerned about.\n> \n\nOf course that's on the single user test only. In the multi-user test \nthe two are neck-and-neck. If you really want to be concerned, see the \nattached. This lines up results from:\n\nREL7_2_STABLE with NAMEDATALEN = 32 and FUNC_MAX_ARGS = 16\n7.3devel with NAMEDATALEN = 32 and FUNC_MAX_ARGS = 32\n7.3devel with NAMEDATALEN = 128 and FUNC_MAX_ARGS = 32\n\nIn the single-user test, REL7_2_STABLE is best by about 10%. But in the \nmulti-user test (10 users), *both* 7.3devel tests are about 3.5% faster \nthan 7.2.\n\nJoe",
"msg_date": "Tue, 06 Aug 2002 14:25:40 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "> > Don't forget that 128 is for *bytes*, not for characters(this is still\n> > ture with 7.3). In CJK(Chinese, Japanese and Korean) single character\n> > can eat up to 3 bytes if the encoding is UTF-8.\n> \n> True, but in those languages a typical name would be many fewer\n> characters than it is in Western alphabets, no? I'd guess (with\n> no evidence though) that the effect would more or less cancel out.\n\nThat's only true for \"kanji\" characters. There are alphabet like\nphonogram characters called \"katakana\" and \"hiragana\". The former is\noften used to express things imported from foreign languages (That\nmeans Japanse has more and more things expressed in katakana than\nbefore). Since they are phonogram, they tend to be longer\ncharacters. For example, if I would like to have \"object id\" column\nand want to name it using \"katakana\", it would be around 8 characters,\nthat is 24 bytes in UTF-8 encoding.\n\nI'm not sure if Chinese or Korean has similar things though.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 07 Aug 2002 21:56:20 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks "
},
{
"msg_contents": "I'm not sure if could explain welll, but...\n\n> Is this process irreversible ?\n> \n> I.e. will words like \"mirku\" or \"taikin katchuretchu\" (if i remember\n> correctly my reading form an old dictionary, these were imported words\n> for \"milk\" and \"chicken cutlets\") never get \"kanji\" characters ?\n\nI guess \"mirk\" --> \"mi-ru-ku\" (3 katakana), \"taikin katchuretchu\" -->\n\"chi-ki-n ka-tsu-re-tsu\" (3 + 4 katakana).\n\nI don't think it's not irreversible. For example, we have kanji\ncharacters \"gyuu nyuu\" (2 kanji characters) having same meaning as\nmilk = miruku, but we cannot interchange \"gyuu nyuu\" with \"miruku\" in\nmost cases.\n\n> BTW, it seems that even with 3 bytes/char tai-kin is shorter than\n> chicken ;)\n\nDepends. For example, \"pu-ro-se-su\" (= process) will be totally 12\nbytes in UTF-8.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 08 Aug 2002 00:00:53 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Off-topic: FUNC_MAX_ARGS benchmarks"
},
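Tatsuo's byte counts are easy to verify: each katakana character is 3 bytes in UTF-8, so a katakana name burns through a byte-based NAMEDATALEN three times as fast as an ASCII one. A quick check using his examples:

```python
# Katakana for "process" ("pu-ro-se-su"), Tatsuo's example above.
name = "プロセス"
print(len(name))                  # 4 characters ...
print(len(name.encode("utf-8")))  # ... but 12 bytes in UTF-8

# "mi-ru-ku" (milk): 3 katakana characters, 9 bytes.
milk = "ミルク"
print(len(milk.encode("utf-8")))
```

So a NAMEDATALEN of 128 bytes leaves room for only 42 katakana characters, versus 127 ASCII ones.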
{
"msg_contents": "On Wed, 2002-08-07 at 14:56, Tatsuo Ishii wrote:\n> > > Don't forget that 128 is for *bytes*, not for characters(this is still\n> > > ture with 7.3). In CJK(Chinese, Japanese and Korean) single character\n> > > can eat up to 3 bytes if the encoding is UTF-8.\n> > \n> > True, but in those languages a typical name would be many fewer\n> > characters than it is in Western alphabets, no? I'd guess (with\n> > no evidence though) that the effect would more or less cancel out.\n> \n> That's only true for \"kanji\" characters. There are alphabet like\n> phonogram characters called \"katakana\" and \"hiragana\". The former is\n> often used to express things imported from foreign languages (That\n> means Japanse has more and more things expressed in katakana than\n> before).\n\nIs this process irreversible ?\n\nI.e. will words like \"mirku\" or \"taikin katchuretchu\" (if i remember\ncorrectly my reading form an old dictionary, these were imported words\nfor \"milk\" and \"chicken cutlets\") never get \"kanji\" characters ?\n\nBTW, it seems that even with 3 bytes/char tai-kin is shorter than\nchicken ;)\n\n-------------\nHannu\n\n",
"msg_date": "07 Aug 2002 17:30:11 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Off-topic: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "\nOK, seems we have not come to a decision yet on this.\n\nDo we have agreement to increate FUNC_MAX_ARGS to 32?\n\nNAMEDATALEN will be 64 or 128 in 7.3. At this point, we better decide\nwhich one we prefer.\n\nThe conservative approach would be to go for 64 and perhaps increase it\nagain in 7.4 after we get feedback and real-world usage. If we go to\n128, we will have trouble decreasing it if there are performance\nproblems.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> >> I'm not sure about the trend of increasing standard deviation --- that\n> >> may reflect more disk I/O being done, and perhaps more checkpoints\n> >> occurring during the test. But in any case it's clear that there's a\n> >> nontrivial runtime cost here. Does a 10% slowdown bother you?\n> \n> > Hmmm -- didn't Neil do some kind of test that had different results, \n> > i.e. not much performance difference?\n> \n> Well, one person had reported a 10% slowdown in pgbench, but Neil saw\n> a 10% speedup. Given the well-known difficulty of getting any\n> reproducible numbers out of pgbench, I don't trust either number very\n> far; but unless some other folk are willing to repeat the experiment\n> I think we can only conclude that pgbench isn't affected much by\n> NAMEDATALEN.\n> \n> > I wonder if the large number of \n> > DDL commands in installcheck doesn't skew the results against longer \n> > NAMEDATALEN compared to other benchmarks?\n> \n> Depends on what you consider skewed, I suppose. pgbench touches only a\n> very small number of relations, and starts no new backends over the\n> length of its run, thus everything gets cached and stays cached. 
At\n> best I'd consider it an existence proof that some applications won't be\n> hurt.\n> \n> Do you have another application you'd consider a more representative\n> benchmark?\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 10 Aug 2002 19:21:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Do we have agreement to increate FUNC_MAX_ARGS to 32?\n\nI believe so.\n\n> NAMEDATALEN will be 64 or 128 in 7.3. At this point, we better decide\n> which one we prefer.\n> The conservative approach would be to go for 64 and perhaps increase it\n> again in 7.4 after we get feedback and real-world usage. If we go to\n> 128, we will have trouble decreasing it if there are performance\n> problems.\n\nIt seems fairly clear to me that there *are* performance problems,\nat least in some scenarios. I think we should go to 64. There doesn't\nseem to be a lot of real-world demand for more than that, despite what\nthe spec says ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 10 Aug 2002 20:22:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks "
},
{
"msg_contents": "Bruce Momjian wrote:\n> OK, seems we have not come to a decision yet on this.\n> \n> Do we have agreement to increate FUNC_MAX_ARGS to 32?\n> \n> NAMEDATALEN will be 64 or 128 in 7.3. At this point, we better decide\n> which one we prefer.\n> \n> The conservative approach would be to go for 64 and perhaps increase it\n> again in 7.4 after we get feedback and real-world usage. If we go to\n> 128, we will have trouble decreasing it if there are performance\n> problems.\n\nI guess I'd also agree with:\n FUNC_MAX_ARGS 32\n NAMEDATALEN 64\nand work on the performance issues for 7.4.\n\nJoe\n\n\n\n\n",
"msg_date": "Sat, 10 Aug 2002 18:20:37 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "> > NAMEDATALEN will be 64 or 128 in 7.3. At this point, we better decide\n> > which one we prefer.\n> >\n> > The conservative approach would be to go for 64 and perhaps increase it\n> > again in 7.4 after we get feedback and real-world usage. If we go to\n> > 128, we will have trouble decreasing it if there are performance\n> > problems.\n>\n> I guess I'd also agree with:\n> FUNC_MAX_ARGS 32\n> NAMEDATALEN 64\n> and work on the performance issues for 7.4.\n\nI agree too.\n\nChris\n\n\n",
"msg_date": "Sun, 11 Aug 2002 17:38:56 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "\nI am working on a patch to increase these as agreed. I found this\ninteresting, from the 6.3 release notes:\n\n Increase 16 char limit on system table/index names to 32 characters(Bruce)\n\nThe limited to be 16 chars until 6.3 in 1998-03-01.\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> > > NAMEDATALEN will be 64 or 128 in 7.3. At this point, we better decide\n> > > which one we prefer.\n> > >\n> > > The conservative approach would be to go for 64 and perhaps increase it\n> > > again in 7.4 after we get feedback and real-world usage. If we go to\n> > > 128, we will have trouble decreasing it if there are performance\n> > > problems.\n> >\n> > I guess I'd also agree with:\n> > FUNC_MAX_ARGS 32\n> > NAMEDATALEN 64\n> > and work on the performance issues for 7.4.\n> \n> I agree too.\n> \n> Chris\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 13 Aug 2002 14:16:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "I have applied the attached patch which changes NAMEDATALEN to 64 and\nFUNC_MAX_ARGS/INDEX_MAX_KEYS to 32. Hopefully this will keep people\nhappy for a few more years.\n\ninitdb required.\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> > > NAMEDATALEN will be 64 or 128 in 7.3. At this point, we better decide\n> > > which one we prefer.\n> > >\n> > > The conservative approach would be to go for 64 and perhaps increase it\n> > > again in 7.4 after we get feedback and real-world usage. If we go to\n> > > 128, we will have trouble decreasing it if there are performance\n> > > problems.\n> >\n> > I guess I'd also agree with:\n> > FUNC_MAX_ARGS 32\n> > NAMEDATALEN 64\n> > and work on the performance issues for 7.4.\n> \n> I agree too.\n> \n> Chris\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: doc/FAQ_DEV\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/FAQ_DEV,v\nretrieving revision 1.43\ndiff -c -r1.43 FAQ_DEV\n*** doc/FAQ_DEV\t17 Apr 2002 05:12:39 -0000\t1.43\n--- doc/FAQ_DEV\t13 Aug 2002 20:17:54 -0000\n***************\n*** 560,566 ****\n Table, column, type, function, and view names are stored in system\n tables in columns of type Name. Name is a fixed-length,\n null-terminated type of NAMEDATALEN bytes. (The default value for\n! NAMEDATALEN is 32 bytes.)\n typedef struct nameData\n {\n char data[NAMEDATALEN];\n--- 560,566 ----\n Table, column, type, function, and view names are stored in system\n tables in columns of type Name. Name is a fixed-length,\n null-terminated type of NAMEDATALEN bytes. (The default value for\n! 
NAMEDATALEN is 64 bytes.)\n typedef struct nameData\n {\n char data[NAMEDATALEN];\nIndex: doc/src/sgml/datatype.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/datatype.sgml,v\nretrieving revision 1.97\ndiff -c -r1.97 datatype.sgml\n*** doc/src/sgml/datatype.sgml\t5 Aug 2002 19:43:30 -0000\t1.97\n--- doc/src/sgml/datatype.sgml\t13 Aug 2002 20:17:56 -0000\n***************\n*** 914,920 ****\n <productname>PostgreSQL</productname>. The <type>name</type> type\n exists <emphasis>only</emphasis> for storage of internal catalog\n names and is not intended for use by the general user. Its length\n! is currently defined as 32 bytes (31 usable characters plus terminator)\n but should be referenced using the macro\n <symbol>NAMEDATALEN</symbol>. The length is set at compile time\n (and is therefore adjustable for special uses); the default\n--- 914,920 ----\n <productname>PostgreSQL</productname>. The <type>name</type> type\n exists <emphasis>only</emphasis> for storage of internal catalog\n names and is not intended for use by the general user. Its length\n! is currently defined as 64 bytes (63 usable characters plus terminator)\n but should be referenced using the macro\n <symbol>NAMEDATALEN</symbol>. The length is set at compile time\n (and is therefore adjustable for special uses); the default\n***************\n*** 943,950 ****\n </row>\n <row>\n \t<entry>name</entry>\n! \t<entry>32 bytes</entry>\n! \t<entry>Thirty-one character internal type</entry>\n </row>\n </tbody>\n </tgroup>\n--- 943,950 ----\n </row>\n <row>\n \t<entry>name</entry>\n! \t<entry>64 bytes</entry>\n! 
\t<entry>Sixty-three character internal type</entry>\n </row>\n </tbody>\n </tgroup>\nIndex: doc/src/sgml/indices.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/indices.sgml,v\nretrieving revision 1.35\ndiff -c -r1.35 indices.sgml\n*** doc/src/sgml/indices.sgml\t30 Jul 2002 17:34:37 -0000\t1.35\n--- doc/src/sgml/indices.sgml\t13 Aug 2002 20:17:56 -0000\n***************\n*** 236,242 ****\n \n <para>\n Currently, only the B-tree and GiST implementations support multicolumn\n! indexes. Up to 16 columns may be specified. (This limit can be\n altered when building <productname>PostgreSQL</productname>; see the\n file <filename>pg_config.h</filename>.)\n </para>\n--- 236,242 ----\n \n <para>\n Currently, only the B-tree and GiST implementations support multicolumn\n! indexes. Up to 32 columns may be specified. (This limit can be\n altered when building <productname>PostgreSQL</productname>; see the\n file <filename>pg_config.h</filename>.)\n </para>\nIndex: doc/src/sgml/manage.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/manage.sgml,v\nretrieving revision 1.22\ndiff -c -r1.22 manage.sgml\n*** doc/src/sgml/manage.sgml\t10 Aug 2002 19:35:00 -0000\t1.22\n--- doc/src/sgml/manage.sgml\t13 Aug 2002 20:17:57 -0000\n***************\n*** 70,76 ****\n You automatically become the\n database administrator of the database you just created. \n Database names must have an alphabetic first\n! character and are limited to 31 characters in length.\n <ProductName>PostgreSQL</ProductName> allows you to create any number of\n databases at a given site. \n </Para>\n--- 70,76 ----\n You automatically become the\n database administrator of the database you just created. \n Database names must have an alphabetic first\n! 
character and are limited to 63 characters in length.\n <ProductName>PostgreSQL</ProductName> allows you to create any number of\n databases at a given site. \n </Para>\nIndex: doc/src/sgml/start.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/start.sgml,v\nretrieving revision 1.23\ndiff -c -r1.23 start.sgml\n*** doc/src/sgml/start.sgml\t10 Aug 2002 19:35:00 -0000\t1.23\n--- doc/src/sgml/start.sgml\t13 Aug 2002 20:17:57 -0000\n***************\n*** 231,237 ****\n You can also create databases with other names.\n <productname>PostgreSQL</productname> allows you to create any\n number of databases at a given site. Database names must have an\n! alphabetic first character and are limited to 31 characters in\n length. A convenient choice is to create a database with the same\n name as your current user name. Many tools assume that database\n name as the default, so it can save you some typing. To create\n--- 231,237 ----\n You can also create databases with other names.\n <productname>PostgreSQL</productname> allows you to create any\n number of databases at a given site. Database names must have an\n! alphabetic first character and are limited to 63 characters in\n length. A convenient choice is to create a database with the same\n name as your current user name. Many tools assume that database\n name as the default, so it can save you some typing. To create\nIndex: doc/src/sgml/syntax.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/syntax.sgml,v\nretrieving revision 1.65\ndiff -c -r1.65 syntax.sgml\n*** doc/src/sgml/syntax.sgml\t10 Aug 2002 19:01:53 -0000\t1.65\n--- doc/src/sgml/syntax.sgml\t13 Aug 2002 20:17:58 -0000\n***************\n*** 120,127 ****\n The system uses no more than <symbol>NAMEDATALEN</symbol>-1\n characters of an identifier; longer names can be written in\n commands, but they will be truncated. 
By default,\n! <symbol>NAMEDATALEN</symbol> is 32 so the maximum identifier length\n! is 31 (but at the time the system is built,\n <symbol>NAMEDATALEN</symbol> can be changed in\n <filename>src/include/postgres_ext.h</filename>).\n </para>\n--- 120,127 ----\n The system uses no more than <symbol>NAMEDATALEN</symbol>-1\n characters of an identifier; longer names can be written in\n commands, but they will be truncated. By default,\n! <symbol>NAMEDATALEN</symbol> is 64 so the maximum identifier length\n! is 63 (but at the time the system is built,\n <symbol>NAMEDATALEN</symbol> can be changed in\n <filename>src/include/postgres_ext.h</filename>).\n </para>\nIndex: doc/src/sgml/ref/create_index.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/ref/create_index.sgml,v\nretrieving revision 1.35\ndiff -c -r1.35 create_index.sgml\n*** doc/src/sgml/ref/create_index.sgml\t30 Jul 2002 17:34:37 -0000\t1.35\n--- doc/src/sgml/ref/create_index.sgml\t13 Aug 2002 20:17:58 -0000\n***************\n*** 339,345 ****\n \n <para>\n Currently, only the B-tree and gist access methods support multicolumn\n! indexes. Up to 16 keys may be specified by default (this limit\n can be altered when building\n <application>PostgreSQL</application>). Only B-tree currently supports\n unique indexes.\n--- 339,345 ----\n \n <para>\n Currently, only the B-tree and gist access methods support multicolumn\n! indexes. Up to 32 keys may be specified by default (this limit\n can be altered when building\n <application>PostgreSQL</application>). 
Only B-tree currently supports\n unique indexes.\nIndex: doc/src/sgml/ref/current_user.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/ref/current_user.sgml,v\nretrieving revision 1.6\ndiff -c -r1.6 current_user.sgml\n*** doc/src/sgml/ref/current_user.sgml\t21 Apr 2002 19:02:39 -0000\t1.6\n--- doc/src/sgml/ref/current_user.sgml\t13 Aug 2002 20:17:59 -0000\n***************\n*** 77,83 ****\n Notes\n </TITLE>\n <PARA>\n! Data type \"name\" is a non-standard 31-character type for storing\n system identifiers.\n </PARA>\n </REFSECT2>\n--- 77,83 ----\n Notes\n </TITLE>\n <PARA>\n! Data type \"name\" is a non-standard 63-character type for storing\n system identifiers.\n </PARA>\n </REFSECT2>\nIndex: doc/src/sgml/ref/listen.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/ref/listen.sgml,v\nretrieving revision 1.13\ndiff -c -r1.13 listen.sgml\n*** doc/src/sgml/ref/listen.sgml\t21 Apr 2002 19:02:39 -0000\t1.13\n--- doc/src/sgml/ref/listen.sgml\t13 Aug 2002 20:17:59 -0000\n***************\n*** 146,152 ****\n it need not correspond to the name of any actual table. If\n <replaceable class=\"PARAMETER\">notifyname</replaceable>\n is enclosed in double-quotes, it need not even be a syntactically\n! valid name, but can be any string up to 31 characters long.\n </para>\n <para>\n In some previous releases of\n--- 146,152 ----\n it need not correspond to the name of any actual table. If\n <replaceable class=\"PARAMETER\">notifyname</replaceable>\n is enclosed in double-quotes, it need not even be a syntactically\n! 
valid name, but can be any string up to 63 characters long.\n </para>\n <para>\n In some previous releases of\nIndex: doc/src/sgml/ref/notify.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/ref/notify.sgml,v\nretrieving revision 1.17\ndiff -c -r1.17 notify.sgml\n*** doc/src/sgml/ref/notify.sgml\t21 Apr 2002 19:02:39 -0000\t1.17\n--- doc/src/sgml/ref/notify.sgml\t13 Aug 2002 20:17:59 -0000\n***************\n*** 180,186 ****\n it need not correspond to the name of any actual table. If\n <replaceable class=\"PARAMETER\">name</replaceable>\n is enclosed in double-quotes, it need not even be a syntactically\n! valid name, but can be any string up to 31 characters long.\n </para>\n <para>\n In some previous releases of\n--- 180,186 ----\n it need not correspond to the name of any actual table. If\n <replaceable class=\"PARAMETER\">name</replaceable>\n is enclosed in double-quotes, it need not even be a syntactically\n! valid name, but can be any string up to 63 characters long.\n </para>\n <para>\n In some previous releases of\nIndex: doc/src/sgml/ref/unlisten.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/ref/unlisten.sgml,v\nretrieving revision 1.18\ndiff -c -r1.18 unlisten.sgml\n*** doc/src/sgml/ref/unlisten.sgml\t21 Apr 2002 19:02:39 -0000\t1.18\n--- doc/src/sgml/ref/unlisten.sgml\t13 Aug 2002 20:17:59 -0000\n***************\n*** 114,120 ****\n <para>\n <replaceable class=\"PARAMETER\">notifyname</replaceable>\n need not be a valid class name but can be any string valid\n! as a name up to 32 characters long.\n </para>\n <para>\n The backend does not complain if you UNLISTEN something you were not\n--- 114,120 ----\n <para>\n <replaceable class=\"PARAMETER\">notifyname</replaceable>\n need not be a valid class name but can be any string valid\n! 
as a name up to 64 characters long.\n </para>\n <para>\n The backend does not complain if you UNLISTEN something you were not\nIndex: src/bin/psql/command.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/bin/psql/command.c,v\nretrieving revision 1.75\ndiff -c -r1.75 command.c\n*** src/bin/psql/command.c\t10 Aug 2002 03:56:23 -0000\t1.75\n--- src/bin/psql/command.c\t13 Aug 2002 20:18:01 -0000\n***************\n*** 1513,1519 ****\n \tsys = malloc(strlen(editorName) + strlen(fname) + 32 + 1);\n \tif (!sys)\n \t\treturn false;\n! \tsprintf(sys, \"exec %s %s\", editorName, fname);\n \tresult = system(sys);\n \tif (result == -1)\n \t\tpsql_error(\"could not start editor %s\\n\", editorName);\n--- 1513,1519 ----\n \tsys = malloc(strlen(editorName) + strlen(fname) + 32 + 1);\n \tif (!sys)\n \t\treturn false;\n! \tsnprintf(sys, 32, \"exec %s %s\", editorName, fname);\n \tresult = system(sys);\n \tif (result == -1)\n \t\tpsql_error(\"could not start editor %s\\n\", editorName);\nIndex: src/include/pg_config.h.in\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/pg_config.h.in,v\nretrieving revision 1.26\ndiff -c -r1.26 pg_config.h.in\n*** src/include/pg_config.h.in\t31 Jul 2002 17:19:54 -0000\t1.26\n--- src/include/pg_config.h.in\t13 Aug 2002 20:18:02 -0000\n***************\n*** 162,168 ****\n * switch statement in fmgr_oldstyle() in src/backend/utils/fmgr/fmgr.c.\n * But consider converting such functions to new-style instead...\n */\n! #define INDEX_MAX_KEYS\t\t16\n #define FUNC_MAX_ARGS\t\tINDEX_MAX_KEYS\n \n /*\n--- 162,168 ----\n * switch statement in fmgr_oldstyle() in src/backend/utils/fmgr/fmgr.c.\n * But consider converting such functions to new-style instead...\n */\n! 
#define INDEX_MAX_KEYS\t\t32\n #define FUNC_MAX_ARGS\t\tINDEX_MAX_KEYS\n \n /*\nIndex: src/include/postgres_ext.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/postgres_ext.h,v\nretrieving revision 1.10\ndiff -c -r1.10 postgres_ext.h\n*** src/include/postgres_ext.h\t30 Apr 2002 19:53:03 -0000\t1.10\n--- src/include/postgres_ext.h\t13 Aug 2002 20:18:02 -0000\n***************\n*** 41,46 ****\n *\n * NOTE that databases with different NAMEDATALEN's cannot interoperate!\n */\n! #define NAMEDATALEN 32\n \n #endif\n--- 41,46 ----\n *\n * NOTE that databases with different NAMEDATALEN's cannot interoperate!\n */\n! #define NAMEDATALEN 64\n \n #endif\nIndex: src/include/catalog/catversion.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/catalog/catversion.h,v\nretrieving revision 1.147\ndiff -c -r1.147 catversion.h\n*** src/include/catalog/catversion.h\t9 Aug 2002 16:45:14 -0000\t1.147\n--- src/include/catalog/catversion.h\t13 Aug 2002 20:18:02 -0000\n***************\n*** 53,58 ****\n */\n \n /*\t\t\t\t\t\t\tyyyymmddN */\n! #define CATALOG_VERSION_NO\t200208091\n \n #endif\n--- 53,58 ----\n */\n \n /*\t\t\t\t\t\t\tyyyymmddN */\n! 
#define CATALOG_VERSION_NO\t200208131\n \n #endif\nIndex: src/interfaces/jdbc/org/postgresql/errors.properties\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/interfaces/jdbc/org/postgresql/errors.properties,v\nretrieving revision 1.13\ndiff -c -r1.13 errors.properties\n*** src/interfaces/jdbc/org/postgresql/errors.properties\t24 Jun 2002 06:16:27 -0000\t1.13\n--- src/interfaces/jdbc/org/postgresql/errors.properties\t13 Aug 2002 20:18:02 -0000\n***************\n*** 61,67 ****\n postgresql.res.colrange:The column index is out of range.\n postgresql.res.nextrequired:Result set not positioned properly, perhaps you need to call next().\n postgresql.serial.interface:You cannot serialize an interface.\n! postgresql.serial.namelength:Class & Package name length cannot be longer than 32 characters. {0} is {1} characters.\n postgresql.serial.noclass:No class found for {0}\n postgresql.serial.table:The table for {0} is not in the database. Contact the DBA, as the database is in an inconsistent state.\n postgresql.serial.underscore:Class names may not have _ in them. You supplied {0}.\n--- 61,67 ----\n postgresql.res.colrange:The column index is out of range.\n postgresql.res.nextrequired:Result set not positioned properly, perhaps you need to call next().\n postgresql.serial.interface:You cannot serialize an interface.\n! postgresql.serial.namelength:Class & Package name length cannot be longer than 64 characters. {0} is {1} characters.\n postgresql.serial.noclass:No class found for {0}\n postgresql.serial.table:The table for {0} is not in the database. Contact the DBA, as the database is in an inconsistent state.\n postgresql.serial.underscore:Class names may not have _ in them. 
You supplied {0}.\nIndex: src/interfaces/jdbc/org/postgresql/util/Serialize.java\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/interfaces/jdbc/org/postgresql/util/Serialize.java,v\nretrieving revision 1.11\ndiff -c -r1.11 Serialize.java\n*** src/interfaces/jdbc/org/postgresql/util/Serialize.java\t23 Jul 2002 03:59:55 -0000\t1.11\n--- src/interfaces/jdbc/org/postgresql/util/Serialize.java\t13 Aug 2002 20:18:03 -0000\n***************\n*** 57,63 ****\n * There are a number of limitations placed on the java class to be\n * used by Serialize:\n * <ul>\n! * <li>The class name must be less than 32 chars long and must be all lowercase.\n * This is due to limitations in Postgres about the size of table names.\n * The name must be all lowercase since table names in Postgres are\n * case insensitive and the relname is stored in lowercase. Unless some\n--- 57,63 ----\n * There are a number of limitations placed on the java class to be\n * used by Serialize:\n * <ul>\n! * <li>The class name must be less than 64 chars long and must be all lowercase.\n * This is due to limitations in Postgres about the size of table names.\n * The name must be all lowercase since table names in Postgres are\n * case insensitive and the relname is stored in lowercase. Unless some\n***************\n*** 577,583 ****\n \t *\n \t * Because of this, a Class name may not have _ in the name.<p>\n \t * Another limitation, is that the entire class name (including packages)\n! \t * cannot be longer than 32 characters (a limit forced by PostgreSQL).\n \t *\n \t * @param name Class name\n \t * @return PostgreSQL table name\n--- 577,583 ----\n \t *\n \t * Because of this, a Class name may not have _ in the name.<p>\n \t * Another limitation, is that the entire class name (including packages)\n! 
\t * cannot be longer than 64 characters (a limit forced by PostgreSQL).\n \t *\n \t * @param name Class name\n \t * @return PostgreSQL table name\n***************\n*** 590,605 ****\n \t\tif (name.indexOf(\"_\") > -1)\n \t\t\tthrow new PSQLException(\"postgresql.serial.underscore\");\n \n! \t\t// Postgres table names can only be 32 character long.\n! \t\t// Reserve 1 char, so allow only up to 31 chars.\n \t\t// If the full class name with package is too long\n \t\t// then just use the class name. If the class name is\n \t\t// too long throw an exception.\n \t\t//\n! \t\tif ( name.length() > 31 )\n \t\t{\n \t\t\tname = name.substring(name.lastIndexOf(\".\") + 1);\n! \t\t\tif ( name.length() > 31 )\n \t\t\t\tthrow new PSQLException(\"postgresql.serial.namelength\", name, new Integer(name.length()));\n \t\t}\n \t\treturn name.replace('.', '_');\n--- 590,605 ----\n \t\tif (name.indexOf(\"_\") > -1)\n \t\t\tthrow new PSQLException(\"postgresql.serial.underscore\");\n \n! \t\t// Postgres table names can only be 64 character long.\n! \t\t// Reserve 1 char, so allow only up to 63 chars.\n \t\t// If the full class name with package is too long\n \t\t// then just use the class name. If the class name is\n \t\t// too long throw an exception.\n \t\t//\n! \t\tif ( name.length() > 63 )\n \t\t{\n \t\t\tname = name.substring(name.lastIndexOf(\".\") + 1);\n! \t\t\tif ( name.length() > 63 )\n \t\t\t\tthrow new PSQLException(\"postgresql.serial.namelength\", name, new Integer(name.length()));\n \t\t}\n \t\treturn name.replace('.', '_');\nIndex: src/test/regress/expected/name.out\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/test/regress/expected/name.out,v\nretrieving revision 1.5\ndiff -c -r1.5 name.out\n*** src/test/regress/expected/name.out\t4 Jan 2000 16:19:34 -0000\t1.5\n--- src/test/regress/expected/name.out\t13 Aug 2002 20:18:04 -0000\n***************\n*** 19,104 ****\n --\n --\n CREATE TABLE NAME_TBL(f1 name);\n! 
INSERT INTO NAME_TBL(f1) VALUES ('ABCDEFGHIJKLMNOP');\n! INSERT INTO NAME_TBL(f1) VALUES ('abcdefghijklmnop');\n INSERT INTO NAME_TBL(f1) VALUES ('asdfghjkl;');\n INSERT INTO NAME_TBL(f1) VALUES ('343f%2a');\n INSERT INTO NAME_TBL(f1) VALUES ('d34aaasdf');\n INSERT INTO NAME_TBL(f1) VALUES ('');\n! INSERT INTO NAME_TBL(f1) VALUES ('1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ');\n SELECT '' AS seven, NAME_TBL.*;\n seven | f1 \n! -------+---------------------------------\n! | ABCDEFGHIJKLMNOP\n! | abcdefghijklmnop\n | asdfghjkl;\n | 343f%2a\n | d34aaasdf\n | \n! | 1234567890ABCDEFGHIJKLMNOPQRSTU\n (7 rows)\n \n! SELECT '' AS six, c.f1 FROM NAME_TBL c WHERE c.f1 <> 'ABCDEFGHIJKLMNOP';\n six | f1 \n! -----+---------------------------------\n! | abcdefghijklmnop\n | asdfghjkl;\n | 343f%2a\n | d34aaasdf\n | \n! | 1234567890ABCDEFGHIJKLMNOPQRSTU\n! (6 rows)\n \n! SELECT '' AS one, c.f1 FROM NAME_TBL c WHERE c.f1 = 'ABCDEFGHIJKLMNOP';\n one | f1 \n! -----+------------------\n! | ABCDEFGHIJKLMNOP\n! (1 row)\n \n! SELECT '' AS three, c.f1 FROM NAME_TBL c WHERE c.f1 < 'ABCDEFGHIJKLMNOP';\n three | f1 \n! -------+---------------------------------\n! | 343f%2a\n | \n! | 1234567890ABCDEFGHIJKLMNOPQRSTU\n! (3 rows)\n \n! SELECT '' AS four, c.f1 FROM NAME_TBL c WHERE c.f1 <= 'ABCDEFGHIJKLMNOP';\n four | f1 \n! ------+---------------------------------\n! | ABCDEFGHIJKLMNOP\n! | 343f%2a\n | \n! | 1234567890ABCDEFGHIJKLMNOPQRSTU\n! (4 rows)\n \n! SELECT '' AS three, c.f1 FROM NAME_TBL c WHERE c.f1 > 'ABCDEFGHIJKLMNOP';\n three | f1 \n! -------+------------------\n! | abcdefghijklmnop\n | asdfghjkl;\n | d34aaasdf\n! (3 rows)\n \n! SELECT '' AS four, c.f1 FROM NAME_TBL c WHERE c.f1 >= 'ABCDEFGHIJKLMNOP';\n four | f1 \n! ------+------------------\n! | ABCDEFGHIJKLMNOP\n! | abcdefghijklmnop\n | asdfghjkl;\n | d34aaasdf\n! (4 rows)\n \n SELECT '' AS seven, c.f1 FROM NAME_TBL c WHERE c.f1 ~ '.*';\n seven | f1 \n! -------+---------------------------------\n! | ABCDEFGHIJKLMNOP\n! 
| abcdefghijklmnop\n | asdfghjkl;\n | 343f%2a\n | d34aaasdf\n | \n! | 1234567890ABCDEFGHIJKLMNOPQRSTU\n (7 rows)\n \n SELECT '' AS zero, c.f1 FROM NAME_TBL c WHERE c.f1 !~ '.*';\n--- 19,104 ----\n --\n --\n CREATE TABLE NAME_TBL(f1 name);\n! INSERT INTO NAME_TBL(f1) VALUES ('1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQR');\n! INSERT INTO NAME_TBL(f1) VALUES ('1234567890abcdefghijklmnopqrstuvwxyz1234567890abcdefghijklmnopqr');\n INSERT INTO NAME_TBL(f1) VALUES ('asdfghjkl;');\n INSERT INTO NAME_TBL(f1) VALUES ('343f%2a');\n INSERT INTO NAME_TBL(f1) VALUES ('d34aaasdf');\n INSERT INTO NAME_TBL(f1) VALUES ('');\n! INSERT INTO NAME_TBL(f1) VALUES ('1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ');\n SELECT '' AS seven, NAME_TBL.*;\n seven | f1 \n! -------+-----------------------------------------------------------------\n! | 1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQ\n! | 1234567890abcdefghijklmnopqrstuvwxyz1234567890abcdefghijklmnopq\n | asdfghjkl;\n | 343f%2a\n | d34aaasdf\n | \n! | 1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQ\n (7 rows)\n \n! SELECT '' AS six, c.f1 FROM NAME_TBL c WHERE c.f1 <> '1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQR';\n six | f1 \n! -----+-----------------------------------------------------------------\n! | 1234567890abcdefghijklmnopqrstuvwxyz1234567890abcdefghijklmnopq\n | asdfghjkl;\n | 343f%2a\n | d34aaasdf\n | \n! (5 rows)\n \n! SELECT '' AS one, c.f1 FROM NAME_TBL c WHERE c.f1 = '1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQR';\n one | f1 \n! -----+-----------------------------------------------------------------\n! | 1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQ\n! | 1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQ\n! (2 rows)\n \n! SELECT '' AS three, c.f1 FROM NAME_TBL c WHERE c.f1 < '1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQR';\n three | f1 \n! 
-------+----\n | \n! (1 row)\n \n! SELECT '' AS four, c.f1 FROM NAME_TBL c WHERE c.f1 <= '1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQR';\n four | f1 \n! ------+-----------------------------------------------------------------\n! | 1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQ\n | \n! | 1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQ\n! (3 rows)\n \n! SELECT '' AS three, c.f1 FROM NAME_TBL c WHERE c.f1 > '1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQR';\n three | f1 \n! -------+-----------------------------------------------------------------\n! | 1234567890abcdefghijklmnopqrstuvwxyz1234567890abcdefghijklmnopq\n | asdfghjkl;\n+ | 343f%2a\n | d34aaasdf\n! (4 rows)\n \n! SELECT '' AS four, c.f1 FROM NAME_TBL c WHERE c.f1 >= '1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQR';\n four | f1 \n! ------+-----------------------------------------------------------------\n! | 1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQ\n! | 1234567890abcdefghijklmnopqrstuvwxyz1234567890abcdefghijklmnopq\n | asdfghjkl;\n+ | 343f%2a\n | d34aaasdf\n! | 1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQ\n! (6 rows)\n \n SELECT '' AS seven, c.f1 FROM NAME_TBL c WHERE c.f1 ~ '.*';\n seven | f1 \n! -------+-----------------------------------------------------------------\n! | 1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQ\n! | 1234567890abcdefghijklmnopqrstuvwxyz1234567890abcdefghijklmnopq\n | asdfghjkl;\n | 343f%2a\n | d34aaasdf\n | \n! | 1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQ\n (7 rows)\n \n SELECT '' AS zero, c.f1 FROM NAME_TBL c WHERE c.f1 !~ '.*';\n***************\n*** 108,118 ****\n \n SELECT '' AS three, c.f1 FROM NAME_TBL c WHERE c.f1 ~ '[0-9]';\n three | f1 \n! -------+---------------------------------\n | 343f%2a\n | d34aaasdf\n! | 1234567890ABCDEFGHIJKLMNOPQRSTU\n! 
(3 rows)\n \n SELECT '' AS two, c.f1 FROM NAME_TBL c WHERE c.f1 ~ '.*asdf.*';\n two | f1 \n--- 108,120 ----\n \n SELECT '' AS three, c.f1 FROM NAME_TBL c WHERE c.f1 ~ '[0-9]';\n three | f1 \n! -------+-----------------------------------------------------------------\n! | 1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQ\n! | 1234567890abcdefghijklmnopqrstuvwxyz1234567890abcdefghijklmnopq\n | 343f%2a\n | d34aaasdf\n! | 1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQ\n! (5 rows)\n \n SELECT '' AS two, c.f1 FROM NAME_TBL c WHERE c.f1 ~ '.*asdf.*';\n two | f1 \nIndex: src/test/regress/sql/name.sql\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/test/regress/sql/name.sql,v\nretrieving revision 1.5\ndiff -c -r1.5 name.sql\n*** src/test/regress/sql/name.sql\t4 Jan 2000 16:21:02 -0000\t1.5\n--- src/test/regress/sql/name.sql\t13 Aug 2002 20:18:04 -0000\n***************\n*** 14,22 ****\n \n CREATE TABLE NAME_TBL(f1 name);\n \n! INSERT INTO NAME_TBL(f1) VALUES ('ABCDEFGHIJKLMNOP');\n \n! INSERT INTO NAME_TBL(f1) VALUES ('abcdefghijklmnop');\n \n INSERT INTO NAME_TBL(f1) VALUES ('asdfghjkl;');\n \n--- 14,22 ----\n \n CREATE TABLE NAME_TBL(f1 name);\n \n! INSERT INTO NAME_TBL(f1) VALUES ('1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQR');\n \n! INSERT INTO NAME_TBL(f1) VALUES ('1234567890abcdefghijklmnopqrstuvwxyz1234567890abcdefghijklmnopqr');\n \n INSERT INTO NAME_TBL(f1) VALUES ('asdfghjkl;');\n \n***************\n*** 26,47 ****\n \n INSERT INTO NAME_TBL(f1) VALUES ('');\n \n! INSERT INTO NAME_TBL(f1) VALUES ('1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ');\n \n \n SELECT '' AS seven, NAME_TBL.*;\n \n! SELECT '' AS six, c.f1 FROM NAME_TBL c WHERE c.f1 <> 'ABCDEFGHIJKLMNOP';\n \n! SELECT '' AS one, c.f1 FROM NAME_TBL c WHERE c.f1 = 'ABCDEFGHIJKLMNOP';\n \n! SELECT '' AS three, c.f1 FROM NAME_TBL c WHERE c.f1 < 'ABCDEFGHIJKLMNOP';\n \n! 
SELECT '' AS four, c.f1 FROM NAME_TBL c WHERE c.f1 <= 'ABCDEFGHIJKLMNOP';\n \n! SELECT '' AS three, c.f1 FROM NAME_TBL c WHERE c.f1 > 'ABCDEFGHIJKLMNOP';\n \n! SELECT '' AS four, c.f1 FROM NAME_TBL c WHERE c.f1 >= 'ABCDEFGHIJKLMNOP';\n \n SELECT '' AS seven, c.f1 FROM NAME_TBL c WHERE c.f1 ~ '.*';\n \n--- 26,47 ----\n \n INSERT INTO NAME_TBL(f1) VALUES ('');\n \n! INSERT INTO NAME_TBL(f1) VALUES ('1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ');\n \n \n SELECT '' AS seven, NAME_TBL.*;\n \n! SELECT '' AS six, c.f1 FROM NAME_TBL c WHERE c.f1 <> '1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQR';\n \n! SELECT '' AS one, c.f1 FROM NAME_TBL c WHERE c.f1 = '1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQR';\n \n! SELECT '' AS three, c.f1 FROM NAME_TBL c WHERE c.f1 < '1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQR';\n \n! SELECT '' AS four, c.f1 FROM NAME_TBL c WHERE c.f1 <= '1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQR';\n \n! SELECT '' AS three, c.f1 FROM NAME_TBL c WHERE c.f1 > '1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQR';\n \n! SELECT '' AS four, c.f1 FROM NAME_TBL c WHERE c.f1 >= '1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABCDEFGHIJKLMNOPQR';\n \n SELECT '' AS seven, c.f1 FROM NAME_TBL c WHERE c.f1 ~ '.*';",
"msg_date": "Tue, 13 Aug 2002 16:39:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I have applied the attached patch which changes NAMEDATALEN to 64 and\n> FUNC_MAX_ARGS/INDEX_MAX_KEYS to 32.\n\nWhat is the reasoning behind the following change?\n\nIndex: src/bin/psql/command.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/bin/psql/command.c,v\nretrieving revision 1.75\ndiff -c -r1.75 command.c\n*** src/bin/psql/command.c\t10 Aug 2002 03:56:23 -0000\t1.75\n--- src/bin/psql/command.c\t13 Aug 2002 20:18:01 -0000\n***************\n*** 1513,1519 ****\n \tsys = malloc(strlen(editorName) + strlen(fname) + 32 + 1);\n \tif (!sys)\n \t\treturn false;\n! \tsprintf(sys, \"exec %s %s\", editorName, fname);\n \tresult = system(sys);\n \tif (result == -1)\n \t\tpsql_error(\"could not start editor %s\\n\", editorName);\n--- 1513,1519 ----\n \tsys = malloc(strlen(editorName) + strlen(fname) + 32 + 1);\n \tif (!sys)\n \t\treturn false;\n! \tsnprintf(sys, 32, \"exec %s %s\", editorName, fname);\n \tresult = system(sys);\n \tif (result == -1)\n \t\tpsql_error(\"could not start editor %s\\n\", editorName);\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "13 Aug 2002 17:00:31 -0400",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "\nGood question. Looked like a possible buffer overrun to me. Of course,\nI botched it up. I will fix it.\n\n---------------------------------------------------------------------------\n\nNeil Conway wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I have applied the attached patch which changes NAMEDATALEN to 64 and\n> > FUNC_MAX_ARGS/INDEX_MAX_KEYS to 32.\n> \n> What is the reasoning behind the following change?\n> \n> Index: src/bin/psql/command.c\n> ===================================================================\n> RCS file: /cvsroot/pgsql-server/src/bin/psql/command.c,v\n> retrieving revision 1.75\n> diff -c -r1.75 command.c\n> *** src/bin/psql/command.c\t10 Aug 2002 03:56:23 -0000\t1.75\n> --- src/bin/psql/command.c\t13 Aug 2002 20:18:01 -0000\n> ***************\n> *** 1513,1519 ****\n> \tsys = malloc(strlen(editorName) + strlen(fname) + 32 + 1);\n> \tif (!sys)\n> \t\treturn false;\n> ! \tsprintf(sys, \"exec %s %s\", editorName, fname);\n> \tresult = system(sys);\n> \tif (result == -1)\n> \t\tpsql_error(\"could not start editor %s\\n\", editorName);\n> --- 1513,1519 ----\n> \tsys = malloc(strlen(editorName) + strlen(fname) + 32 + 1);\n> \tif (!sys)\n> \t\treturn false;\n> ! \tsnprintf(sys, 32, \"exec %s %s\", editorName, fname);\n> \tresult = system(sys);\n> \tif (result == -1)\n> \t\tpsql_error(\"could not start editor %s\\n\", editorName);\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 13 Aug 2002 17:03:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "\nIn fact, I now see that there was no such problem. I do wonder why the\n32 is there, though? Shouldn't it be 6 or something like that?\n\n---------------------------------------------------------------------------\n\nNeil Conway wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I have applied the attached patch which changes NAMEDATALEN to 64 and\n> > FUNC_MAX_ARGS/INDEX_MAX_KEYS to 32.\n> \n> What is the reasoning behind the following change?\n> \n> Index: src/bin/psql/command.c\n> ===================================================================\n> RCS file: /cvsroot/pgsql-server/src/bin/psql/command.c,v\n> retrieving revision 1.75\n> diff -c -r1.75 command.c\n> *** src/bin/psql/command.c\t10 Aug 2002 03:56:23 -0000\t1.75\n> --- src/bin/psql/command.c\t13 Aug 2002 20:18:01 -0000\n> ***************\n> *** 1513,1519 ****\n> \tsys = malloc(strlen(editorName) + strlen(fname) + 32 + 1);\n> \tif (!sys)\n> \t\treturn false;\n> ! \tsprintf(sys, \"exec %s %s\", editorName, fname);\n> \tresult = system(sys);\n> \tif (result == -1)\n> \t\tpsql_error(\"could not start editor %s\\n\", editorName);\n> --- 1513,1519 ----\n> \tsys = malloc(strlen(editorName) + strlen(fname) + 32 + 1);\n> \tif (!sys)\n> \t\treturn false;\n> ! \tsnprintf(sys, 32, \"exec %s %s\", editorName, fname);\n> \tresult = system(sys);\n> \tif (result == -1)\n> \t\tpsql_error(\"could not start editor %s\\n\", editorName);\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 13 Aug 2002 17:05:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> In fact, I now see that there was no such problem. I do wonder why the\n> 32 is there, though? Shouldn't it be 6 or something like that?\n\nWhoever it was was too lazy to count accurately ;-)\n\nI guess I'd vote for changing the code to be\n\n\tsys = malloc(strlen(editorName) + strlen(fname) + 10 + 1);\n\tif (!sys)\n\t\treturn false;\n\tsprintf(sys, \"exec '%s' '%s'\", editorName, fname);\n\n(note the added quotes to provide a little protection against spaces\nand such). Then it's perfectly obvious what the calculation is doing.\nI don't care about wasting 20-some bytes, but confusing readers of the\ncode is worth avoiding.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Aug 2002 17:42:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks "
},
{
"msg_contents": "\nChange made.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > In fact, I now see that there was no such problem. I do wonder why the\n> > 32 is there, though? Shouldn't it be 6 or something like that?\n> \n> Whoever it was was too lazy to count accurately ;-)\n> \n> I guess I'd vote for changing the code to be\n> \n> \tsys = malloc(strlen(editorName) + strlen(fname) + 10 + 1);\n> \tif (!sys)\n> \t\treturn false;\n> \tsprintf(sys, \"exec '%s' '%s'\", editorName, fname);\n> \n> (note the added quotes to provide a little protection against spaces\n> and such). Then it's perfectly obvious what the calculation is doing.\n> I don't care about wasting 20-some bytes, but confusing readers of the\n> code is worth avoiding.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 01:49:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks"
}
] |
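The buffer-size arithmetic in Tom Lane's suggested fix can be checked with a small standalone C sketch. The helper name is illustrative, not the actual psql code; the constant 10 is the fixed text around the two `%s` slots ("exec " plus four quotes plus one separating space), and the +1 is the terminating NUL:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Build the shell command the way the proposed psql fix does:
 * "exec '<editor>' '<file>'".  Fixed overhead: "exec " (5 bytes)
 * + four single quotes (4) + one space (1) = 10 bytes, plus 1 for
 * the terminating NUL -- hence the + 10 + 1 in the malloc. */
static char *build_editor_command(const char *editorName, const char *fname)
{
    char *sys = malloc(strlen(editorName) + strlen(fname) + 10 + 1);

    if (!sys)
        return NULL;
    sprintf(sys, "exec '%s' '%s'", editorName, fname);
    return sys;
}
```

Unlike the original `+ 32`, every byte in the allocation is now accounted for, which is exactly the "don't confuse readers of the code" point being made above.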
[
{
"msg_contents": "It is possible I may be off line intermittently during the next few\nweeks. I am moving to a new house _and_ my ISP is switching ADSL\nproviders at the same time.\n\nThis may effect my email account and web pages. I have a secondary mail\nhost so I will not lose any mail, but I am not sure I can keep the web\nsites up.\n\nRight now this is just a warning. If something happens, I will send out\nand email or have someone else send it for me.\n\nCurrently, I host the SGML 15 minute snapshot, patches queue,\nTODO.detail, and open items web pages.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 18:33:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "My online status"
}
] |
[
{
"msg_contents": "OK, we need all the interfaces to properly handle schemas within the\nnext month. However, we don't have psql working properly yet. Once we\nget psql working, we can show examples of the changes we made and the\nother interfaces can use that as a model.\n\nIs anyone working on schema fixes for psql?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Aug 2002 18:45:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Schema issues"
}
] |
[
{
"msg_contents": "\n> Hmm. I think this consideration boils down to whether the WHERE clause\n> can give different results for rows that appear equal under the rules of\n> UNION/EXCEPT/INTERSECT. If it gives the same result for any two such\n> rows, then it's safe to push down; otherwise not.\n> \n> It's not too difficult to come up with examples. I invite you to play\n> with\n> \n> select z,length(z) from\n> (select 'abc '::char(7) as z intersect\n> select 'abc '::char(8) as z) ss;\n> \n> and contemplate the effects of pushing down a qual involving \n> length(z).\n\nI guess that is why e.g. Informix returns 3 for both of them. Imho that \nmakes a lot of sense. The trailing spaces in char's are supposed to be \nirrellevant. (But iirc this has already been discussed and rejected)\n\n> Whether this particular case is very important in the real world is hard\n> to say. But there might be more-important cases out there.\n> \n> And yet, I think we can do it anyway. The score card looks \n> like this to\n> me:\n> \n> UNION ALL: always safe to push down, since the rows will be passed\n> independently to the outer WHERE anyway.\n\nYes, that would imho also be the most important optimization.\n\n> UNION: it's unspecified which of a set of \"equal\" rows will be returned,\n> and therefore the behavior would be unspecified anyway if the outer\n> WHERE can distinguish the rows - you might get 1 row of the set out or\n> none. If we push down, then we create a situation where the returned\n> row will always be one that passes the outer WHERE, but that \n> is a legal behavior.\n> \n> INTERSECT: again it's unspecified which of a set of \"equal\" rows will be\n> returned, and so you might get 1 row out or none. 
If we push down then\n> it's still unspecified whether you get a row out (example: if the outer\n> WHERE will pass only for rows of the left table and not the right, then\n> push down will result in no rows of the \"equal\" set being emitted, but\n> that's a legal behavior).\n> \n> INTERSECT ALL: if a set of \"equal\" rows contains M rows from the left\n> table and N from the right table, you're supposed to get min(M,N) rows\n> of the set out of the INTERSECT ALL. Again you can't say which of the\n> set you will get, so the outer WHERE might let anywhere between 0 and\n> min(M,N) rows out. With push down, M and N will be reduced by the WHERE\n> before we do the intersection, so you still have 0 to \n> min(M,N) rows out.\n> The behavior will change, but it's still legal per spec AFAICT.\n> \n> EXCEPT, EXCEPT ALL: the same sort of analysis seems to hold.\n\nThe imho difficult question is, which select locks down the datatype to use\nfor this column. In a strict sense char(6) and char(7) are not the same\ntype. Since I would certainly not want to be that strict, it imho has to be \ndecided what type the union/intersect... is supposed to use.\nInformix converts them both to the longer char. I do not think it is\nvalid to return variable length char's.\n\ne.g.:\ncreate table atab1 (a char(6)); \ncreate table atab2 (a char(8));\ninsert into atab1 values ('abc');\ninsert into atab2 values ('abc');\ncreate view aview as select * from atab1 union all select * from atab2;\nselect '<'||a||'>' from aview;\nInformix:\n(expression)\n<abc >\n<abc >\nPostgreSQL:\n ?column?\n------------\n <abc >\n <abc >\n\nI am not sure eighter answer is strictly correct. I would probably have \nexpected <abc > <abc > (char(6)) since the first select is supposed to \nlock down the type, no ? \n\n> In short, it looks to me like the spec was carefully designed to allow\n> push down. 
Pushing down a condition of this sort *does* change the\n> behavior, but the new behavior is still within spec.\n\nI think this would be a great performance boost for views and thus\nworth a change in results that are within spec.\nWould you want to push down always ? There could be outer where clauses, \nthat are so expensive that you would not want to do them twice.\nIf it is all or nothing, I do think pushing down always is better than not.\n\nAndreas\n",
"msg_date": "Fri, 2 Aug 2002 11:57:07 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Rules and Views "
}
] |
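Tom's observation — that a qual like `length(z)` can distinguish rows the set operation considers "equal" — rests on char(n) comparison ignoring trailing blanks while length() does not. A standalone C sketch of that comparison rule (a hypothetical helper for illustration, not PostgreSQL's actual `bpchareq`):

```c
#include <string.h>

/* Compare two strings the way SQL char(n) comparison does:
 * trailing blanks are not significant.  Returns 1 if equal, 0 if not. */
static int bpchar_eq(const char *a, const char *b)
{
    size_t la = strlen(a);
    size_t lb = strlen(b);

    while (la > 0 && a[la - 1] == ' ')  /* strip trailing blanks */
        la--;
    while (lb > 0 && b[lb - 1] == ' ')
        lb--;
    return la == lb && memcmp(a, b, la) == 0;
}
```

Under this rule the char(7) and char(8) values in the `intersect` example compare equal, yet the stored strings have different lengths — so a pushed-down `WHERE length(z) = 7` could pass for one member of the "equal" set and fail for the other, which is precisely why push-down changes which row is returned while still staying within spec.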
[
{
"msg_contents": "I checked out a fresh copy of the dev code today, and get the following\nerrors when doing a configure with no options followed by a make all:\n\nmake[7]: Entering directory `/usr/local/src/pgsql/src/utils'\ngcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations\n-I../../src/include -I/usr/local/include -c -o dllinit.o dllinit.c\nmake[7]: Leaving directory `/usr/local/src/pgsql/src/utils'\ndlltool --export-all --output-def utf8_and_ascii.def utf8_and_ascii.o\ndllwrap -o utf8_and_ascii.dll --dllname utf8_and_ascii.dll --def\nutf8_and_ascii.def utf8_and_ascii.o\n../../../../../../src/utils/dllinit.o -lcygipc -lcrypt -L/usr/local/lib\nutf8_and_ascii.o(.text+0x31):utf8_and_ascii.c: undefined reference to\n`pg_ascii2mic'\nutf8_and_ascii.o(.text+0x55):utf8_and_ascii.c: undefined reference to\n`pg_mic2ascii'\ncollect2: ld returned 1 exit status\ndllwrap: gcc exited with status 1\nmake[6]: *** [utf8_and_ascii.dll] Error 1\nmake[6]: Leaving directory\n`/usr/local/src/pgsql/src/backend/utils/mb/conversion_procs/utf8_and_asc\nii'\nmake[5]: *** [all] Error 2\nmake[5]: Leaving directory\n`/usr/local/src/pgsql/src/backend/utils/mb/conversion_procs'\nmake[4]: *** [SUBSYS.o] Error 2\nmake[4]: Leaving directory `/usr/local/src/pgsql/src/backend/utils/mb'\nmake[3]: *** [mb-recursive] Error 2\nmake[3]: Leaving directory `/usr/local/src/pgsql/src/backend/utils'\nmake[2]: *** [utils-recursive] Error 2\nmake[2]: Leaving directory `/usr/local/src/pgsql/src/backend'\nmake[1]: *** [all] Error 2\nmake[1]: Leaving directory `/usr/local/src/pgsql/src'\nmake: *** [all] Error 2\n\nThe OS is CYGWIN_NT-5.1 PC9 1.3.10(0.51/3/2) 2002-02-25 11:14 i686\nunknown.\n\nRegards, Dave.\n",
"msg_date": "Fri, 2 Aug 2002 12:59:20 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Build errors with current CVS"
},
{
"msg_contents": "I think this happens because you are using cygwin envrionment. Under\ncygwin environment a shared object(dll) cannot be built until the\nbackend bild completes. I heard this theory from a cygwin expert in\nJapan. If this is correct, we have to move utils/mb/conversion_procs\nto right under src so that it builds *after* the backend build\nfinishes. Can anyone tell me this is not a wrong direction at least?\nI'm not a user of cygwin, and I cannot confirm it myself.\n--\nTatsuo Ishii\n\n> I checked out a fresh copy of the dev code today, and get the following\n> errors when doing a configure with no options followed by a make all:\n> \n> make[7]: Entering directory `/usr/local/src/pgsql/src/utils'\n> gcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations\n> -I../../src/include -I/usr/local/include -c -o dllinit.o dllinit.c\n> make[7]: Leaving directory `/usr/local/src/pgsql/src/utils'\n> dlltool --export-all --output-def utf8_and_ascii.def utf8_and_ascii.o\n> dllwrap -o utf8_and_ascii.dll --dllname utf8_and_ascii.dll --def\n> utf8_and_ascii.def utf8_and_ascii.o\n> ../../../../../../src/utils/dllinit.o -lcygipc -lcrypt -L/usr/local/lib\n> utf8_and_ascii.o(.text+0x31):utf8_and_ascii.c: undefined reference to\n> `pg_ascii2mic'\n> utf8_and_ascii.o(.text+0x55):utf8_and_ascii.c: undefined reference to\n> `pg_mic2ascii'\n[snip]\n> The OS is CYGWIN_NT-5.1 PC9 1.3.10(0.51/3/2) 2002-02-25 11:14 i686\n> unknown.\n",
"msg_date": "Sat, 03 Aug 2002 07:42:45 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Build errors with current CVS"
},
{
"msg_contents": "Tatsuo Ishii writes:\n\n> I think this happens because you are using cygwin envrionment. Under\n> cygwin environment a shared object(dll) cannot be built until the\n> backend bild completes.\n\nYes. AIX also has this problem.\n\n> I heard this theory from a cygwin expert in Japan. If this is correct,\n> we have to move utils/mb/conversion_procs to right under src so that it\n> builds *after* the backend build finishes.\n\nYou don't have to move them, but you need to adjust the build order.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 5 Aug 2002 21:22:23 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Build errors with current CVS"
},
{
"msg_contents": "> Tatsuo Ishii writes:\n> \n> > I think this happens because you are using cygwin envrionment. Under\n> > cygwin environment a shared object(dll) cannot be built until the\n> > backend bild completes.\n> \n> Yes. AIX also has this problem.\n> \n> > I heard this theory from a cygwin expert in Japan. If this is correct,\n> > we have to move utils/mb/conversion_procs to right under src so that it\n> > builds *after* the backend build finishes.\n> \n> You don't have to move them, but you need to adjust the build order.\n\nI have committed changes according to your suggestion.\n\nCan people on cygwin or AIX test the fix?\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 08 Aug 2002 17:26:15 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Build errors with current CVS"
}
] |
[
{
"msg_contents": "\nJust making sure that I haven't screwed up anything plugging in amavis ...\n\n\n",
"msg_date": "Fri, 2 Aug 2002 10:17:09 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Does this go out?"
},
{
"msg_contents": "got it...\n\nwhat's amivis ?\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Marc G. Fournier\n> Sent: Friday, August 02, 2002 10:17 AM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] Does this go out?\n>\n>\n>\n> Just making sure that I haven't screwed up anything plugging in amavis ...\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Fri, 2 Aug 2002 10:26:11 -0300",
"msg_from": "\"Jeff MacDonald\" <jeff@tsunamicreek.com>",
"msg_from_op": false,
"msg_subject": "Re: Does this go out?"
},
{
"msg_contents": "\nanti-virus software ... see http://www.amavis.org ...\n\nOn Fri, 2 Aug 2002, Jeff MacDonald wrote:\n\n> got it...\n>\n> what's amivis ?\n>\n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Marc G. Fournier\n> > Sent: Friday, August 02, 2002 10:17 AM\n> > To: pgsql-hackers@postgresql.org\n> > Subject: [HACKERS] Does this go out?\n> >\n> >\n> >\n> > Just making sure that I haven't screwed up anything plugging in amavis ...\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n>\n>\n\n",
"msg_date": "Fri, 2 Aug 2002 10:36:00 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Does this go out?"
}
] |
[
{
"msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n> OK, we need all the interfaces to properly handle schemas within the\n> next month. However, we don't have psql working properly yet. Once we\n> get psql working, we can show examples of the changes we made and the\n> other interfaces can use that as a model.\n>\n> Is anyone working on schema fixes for psql?\n\nYes, I did a lot of work on this, but got stuck on the whole \ncurrent_schemas problem, which apparently will need some \nbackend-to-sqlfunction mojo to fix. I'll clean up what I already have \nand try to submit it next week in anticipation of that last missing \npiece.\n\nGreg Sabino Mullane greg@turnstep.com\nPGP Key: 0x14964AC8 200208021015\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.7 (GNU/Linux)\n\niD8DBQE9SpfRvJuQZxSWSsgRAituAJ9t5rFarCQoylBq/467vmALSue9dACg2hxg\nGYQWUuPB2uUAxdCismtyOXc=\n=eLjg\n-----END PGP SIGNATURE-----\n\n\n",
"msg_date": "Fri, 2 Aug 2002 14:54:58 -0000",
"msg_from": "greg@turnstep.com",
"msg_from_op": true,
"msg_subject": "Re: Schema issues"
}
] |
[
{
"msg_contents": "> > The predicate for files we MUST (fuzzy) copy is: \n> > File exists at start of backup && File exists at end of backup\n> \n> Right, which seems to me to negate all these claims about needing a\n> (horribly messy) way to read uncommitted system catalog entries, do\n> blind reads, etc. What's wrong with just exec'ing tar after having\n> done a checkpoint?\n\nRight.\n\nIt looks like insert/update/etc ops over local relations are\nWAL-logged, and it's Ok (we have to do this).\n\nSo, we only have to use shared buffer pool for local (but probably\nnot for temporary) relations to close this issue, yes? I personally\ndon't see any performance issues if we do this.\n\nVadim\n",
"msg_date": "Fri, 2 Aug 2002 13:55:19 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: PITR, checkpoint, and local relations "
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> So, we only have to use shared buffer pool for local (but probably\n> not for temporary) relations to close this issue, yes? I personally\n> don't see any performance issues if we do this.\n\nHmm. Temporary relations are a whole different story.\n\nIt would be nice if updates on temp relations never got WAL-logged at\nall, but I'm not sure how feasible that is. Right now we don't really\ndistinguish temp relations from ordinary ones --- in particular, they\nhave pg_class entries, which surely will get WAL-logged even if we\npersuade the buffer manager not to do it for the data pages. Is that\na problem? Not sure.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Aug 2002 17:29:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PITR, checkpoint, and local relations "
}
] |
[
{
"msg_contents": "> > (In particular, I *strongly* object to using the buffer \n> manager at all\n> > for reading files for backup. That's pretty much \n> guaranteed to blow out\n> > buffer cache. Use plain OS-level file reads. An OS \n> directory search\n> > will do fine for finding what you need to read, too.)\n> \n> How do you get atomic block copies otherwise?\n\nYou don't need it.\nAs long as whole block is saved in log on first after\ncheckpoint (you made before backup) change to block.\n\nVadim\n",
"msg_date": "Fri, 2 Aug 2002 13:59:47 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: PITR, checkpoint, and local relations"
},
{
"msg_contents": "On Fri, 2002-08-02 at 16:59, Mikheev, Vadim wrote:\n\n> You don't need it.\n> As long as whole block is saved in log on first after\n> checkpoint (you made before backup) change to block.\n\nI thought half the point of PITR was to be able to turn off pre-image\nlogging so you can trade potential recovery time for speed without fear\nof data-loss. Didn't we have this discussion before?\n\nHow is this any worse than a table scan?\n \n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n",
"msg_date": "02 Aug 2002 17:12:20 -0400",
"msg_from": "\"J. R. Nield\" <jrnield@usol.com>",
"msg_from_op": false,
"msg_subject": "Re: PITR, checkpoint, and local relations"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of J. R. Nield\n> Sent: Friday, August 02, 2002 5:12 PM\n> To: Mikheev, Vadim\n> Cc: Tom Lane; Richard Tucker; Bruce Momjian; PostgreSQL Hacker\n> Subject: Re: [HACKERS] PITR, checkpoint, and local relations\n>\n>\n> On Fri, 2002-08-02 at 16:59, Mikheev, Vadim wrote:\n>\n> > You don't need it.\n> > As long as whole block is saved in log on first after\n> > checkpoint (you made before backup) change to block.\n>\n> I thought half the point of PITR was to be able to turn off pre-image\n> logging so you can trade potential recovery time for speed without fear\n> of data-loss. Didn't we have this discussion before?\nSuppose you can turn off/on PostgreSQL's atomic write on the fly. Which\nmeans turning on or off whether XLoginsert writes a copy of the block into\nthe log file upon first modification after a checkpoint.\nSo ALTER SYSTEM BEGIN BACKUP would turn on atomic write and then checkpoint\nthe database.\nSo while the OS copy of the data files is going on the atomic write would be\nenabled. So any read of a partial write would be fixed up by the usual crash\nrecovery mechanism.\n>\n> How is this any worse than a table scan?\n>\n> --\n> J. R. Nield\n> jrnield@usol.com\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Fri, 02 Aug 2002 17:40:26 -0400",
"msg_from": "Richard Tucker <richt@multera.com>",
"msg_from_op": false,
"msg_subject": "Re: PITR, checkpoint, and local relations"
}
] |
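The mechanism Vadim and Richard describe — saving the whole block in the log on its first change after a checkpoint — reduces to a single LSN comparison at WAL-insert time. A simplified sketch of that decision (names and types are illustrative, not the real `XLogInsert` internals):

```c
#include <stdint.h>
#include <stdbool.h>

typedef uint64_t XLogRecPtr;    /* stand-in for a WAL location */

/* A page needs a full image in WAL if it has not been modified since
 * the last checkpoint: its LSN then predates the checkpoint's redo
 * pointer.  Later changes to the same page can log only tuple-level
 * deltas, because replay restores the base image first and applies
 * the deltas on top -- this is what makes a torn (partial) on-disk
 * write harmless.  The enable flag models the proposed ALTER SYSTEM
 * BEGIN BACKUP behavior of forcing atomic writes on for the duration
 * of the file copy. */
static bool need_full_page_image(XLogRecPtr page_lsn,
                                 XLogRecPtr redo_ptr,
                                 bool atomic_writes_enabled)
{
    if (!atomic_writes_enabled)
        return false;           /* the speed-vs-safety trade-off above */
    return page_lsn <= redo_ptr;
}
```

With this in place, ALTER SYSTEM BEGIN BACKUP amounts to turning the flag on and then checkpointing, so every block copied by the OS-level backup has a base image in the log to recover from.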
[
{
"msg_contents": "> > So, we only have to use shared buffer pool for local (but probably\n> > not for temporary) relations to close this issue, yes? I personally\n> > don't see any performance issues if we do this.\n> \n> Hmm. Temporary relations are a whole different story.\n> \n> It would be nice if updates on temp relations never got WAL-logged at\n> all, but I'm not sure how feasible that is. Right now we don't really\n\nThere is no any point to log them.\n\n> distinguish temp relations from ordinary ones --- in particular, they\n> have pg_class entries, which surely will get WAL-logged even if we\n> persuade the buffer manager not to do it for the data pages. Is that\n> a problem? Not sure.\n\nIt was not about any problem. I just mean that local buffer pool\nstill could be used for temporary relations if someone thinks\nthat it has any sence, anyone?\n\nVadim\n",
"msg_date": "Fri, 2 Aug 2002 14:49:57 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: PITR, checkpoint, and local relations "
}
] |
[
{
"msg_contents": "> > You don't need it.\n> > As long as whole block is saved in log on first after\n> > checkpoint (you made before backup) change to block.\n> \n> I thought half the point of PITR was to be able to turn\n> off pre-image logging so you can trade potential recovery\n\nCorrection - *after*-image.\n\n> time for speed without fear of data-loss. Didn't we have\n> this discussion before?\n\nSorry, I missed this.\n\nSo, it's already discussed what to do about partial\nblock updates? When system crashed just after LSN,\nbut not actual tuple etc, was stored in on-disk block\nand on restart you compare log record' LSN with\ndata block' LSN, they are equal and so you *assume*\nthat actual data are in place too, what is not the case?\n\nI always thought that the whole point of PITR is to be\nable to restore DB fast (faster than pg_restore) *AND*\nup to the last committed transaction (assuming that\nlog is Ok).\n\nVadim\n",
"msg_date": "Fri, 2 Aug 2002 15:00:46 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: PITR, checkpoint, and local relations"
}
] |
[
{
"msg_contents": "> > How do you get atomic block copies otherwise?\n> \n> Eh? The kernel does that for you, as long as you're reading the\n> same-size blocks that the backends are writing, no?\n\nGood point.\n\nVadim\n",
"msg_date": "Fri, 2 Aug 2002 15:01:11 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: PITR, checkpoint, and local relations "
},
{
"msg_contents": "Are you sure this is true for all ports? And if so, why would it be\ncheaper for the kernel to do it in its buffer manager, compared to us\ndoing it in ours? This just seems bogus to rely on. Does anyone know\nwhat POSIX has to say about this? \n\nOn Fri, 2002-08-02 at 18:01, Mikheev, Vadim wrote:\n> > > How do you get atomic block copies otherwise?\n> > \n> > Eh? The kernel does that for you, as long as you're reading the\n> > same-size blocks that the backends are writing, no?\n> \n> Good point.\n> \n> Vadim\n> \n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n",
"msg_date": "02 Aug 2002 18:24:07 -0400",
"msg_from": "\"J. R. Nield\" <jrnield@usol.com>",
"msg_from_op": false,
"msg_subject": "Re: PITR, checkpoint, and local relations"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Mikheev, Vadim [mailto:vmikheev@SECTORBASE.COM]\n> Sent: Friday, August 02, 2002 6:01 PM\n> To: 'Tom Lane'; J. R. Nield\n> Cc: Richard Tucker; Bruce Momjian; PostgreSQL Hacker\n> Subject: RE: [HACKERS] PITR, checkpoint, and local relations\n>\n>\n> > > How do you get atomic block copies otherwise?\n> >\n> > Eh? The kernel does that for you, as long as you're reading the\n> > same-size blocks that the backends are writing, no?\n>\n> Good point.\n>\nWe know for sure the kernel does this? I think this is a dubious\nassumption.\n> Vadim\n>\n\n",
"msg_date": "Wed, 07 Aug 2002 11:32:01 -0400",
"msg_from": "Richard Tucker <richt@multera.com>",
"msg_from_op": false,
"msg_subject": "Re: PITR, checkpoint, and local relations"
},
{
"msg_contents": "Richard Tucker <richt@multera.com> writes:\n>>> Eh? The kernel does that for you, as long as you're reading the\n>>> same-size blocks that the backends are writing, no?\n\n> We know for sure the kernel does this? I think this is a dubious\n> assumption.\n\nYeah, as someone pointed out later, it doesn't work if the kernel's\ninternal buffer size is smaller than our BLCKSZ. So we do still need\nthe page images in WAL --- that protection against non-atomic writes\nat the hardware level should serve for this problem too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Aug 2002 11:40:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PITR, checkpoint, and local relations "
}
] |
[
{
"msg_contents": "> > > As long as whole block is saved in log on first after\n> > > checkpoint (you made before backup) change to block.\n> >\n> I thought half the point of PITR was to be able to\n> turn off pre-image logging so you can trade potential\n> recovery time for speed without fear of data-loss.\n> Didn't we have this discussion before?\n\n> Suppose you can turn off/on PostgreSQL's atomic write on\n> the fly. Which means turning on or off whether XLoginsert\n> writes a copy of the block into the log file upon first\n> modification after a checkpoint.\n> So ALTER SYSTEM BEGIN BACKUP would turn on atomic write\n> and then checkpoint the database.\n> So while the OS copy of the data files is going on the\n> atomic write would be enabled. So any read of a partial\n> write would be fixed up by the usual crash recovery mechanism.\n\nYes, simple way to satisfy everyone.\n\nVadim\n",
"msg_date": "Fri, 2 Aug 2002 15:15:32 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: PITR, checkpoint, and local relations"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Mikheev, Vadim\n> Sent: Friday, August 02, 2002 6:16 PM\n> To: 'richt@multera.com'; J. R. Nield\n> Cc: Tom Lane; Bruce Momjian; PostgreSQL Hacker\n> Subject: Re: [HACKERS] PITR, checkpoint, and local relations\n>\n>\n> > > > As long as whole block is saved in log on first after\n> > > > checkpoint (you made before backup) change to block.\n> > >\n> > I thought half the point of PITR was to be able to\n> > turn off pre-image logging so you can trade potential\n> > recovery time for speed without fear of data-loss.\n> > Didn't we have this discussion before?\n>\n> > Suppose you can turn off/on PostgreSQL's atomic write on\n> > the fly. Which means turning on or off whether XLoginsert\n> > writes a copy of the block into the log file upon first\n> > modification after a checkpoint.\n> > So ALTER SYSTEM BEGIN BACKUP would turn on atomic write\n> > and then checkpoint the database.\n> > So while the OS copy of the data files is going on the\n> > atomic write would be enabled. So any read of a partial\n> > write would be fixed up by the usual crash recovery mechanism.\n>\n> Yes, simple way to satisfy everyone.\n\nBy the way I could supply a patch which turns off the atomic write feature.\nIt is disabled via a configuration parameter. If the flag enabling /\ndisabling the feature were added to shared memory, XLogCtl struture, then it\ncould be toggled at runtime.\n\nSo I think what will work then is pg_copy (hot backup) would:\n1) Issue an ALTER SYSTEM BEGIN BACKUP command which turns on atomic write,\ncheckpoints the database and disables further checkpoints (so wal files\nwon't be reused) until the backup is complete.\n2) Change ALTER SYSTEM BACKUP DATABASE TO <directory> read the database\ndirectory to find which files it should backup rather than pg_class and for\neach file just use system(cp...) 
to copy it to the backup directory.\n3) ALTER SYSTEM FINISH BACKUP does at it does now and backs up the pg_xlog\ndirectory and renables database checkpointing.\n\nDoes this sound right?\n\nBTW I will be on vacation until next Wednesday.\n\n>\n> Vadim\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Fri, 02 Aug 2002 18:40:27 -0400",
"msg_from": "Richard Tucker <richt@multera.com>",
"msg_from_op": false,
"msg_subject": "Re: PITR, checkpoint, and local relations"
},
{
"msg_contents": "Richard Tucker <richt@multera.com> writes:\n> 1) Issue an ALTER SYSTEM BEGIN BACKUP command which turns on atomic write,\n> checkpoints the database and disables further checkpoints (so wal files\n> won't be reused) until the backup is complete.\n> 2) Change ALTER SYSTEM BACKUP DATABASE TO <directory> read the database\n> directory to find which files it should backup rather than pg_class and for\n> each file just use system(cp...) to copy it to the backup directory.\n> 3) ALTER SYSTEM FINISH BACKUP does at it does now and backs up the pg_xlog\n> directory and renables database checkpointing.\n\n> Does this sound right?\n\nI really dislike the notion of turning off checkpointing. What if the\nbackup process dies or gets stuck (eg, it's waiting for some operator to\nchange a tape, but the operator has gone to lunch)? IMHO, backup\nsystems that depend on breaking the system's normal operational behavior\nare broken. It should be sufficient to force a checkpoint when you\nstart and when you're done --- altering normal operation in between is\na bad design.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Aug 2002 19:49:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PITR, checkpoint, and local relations "
},
{
"msg_contents": "Tom Lane wrote:\n> Richard Tucker <richt@multera.com> writes:\n> > 1) Issue an ALTER SYSTEM BEGIN BACKUP command which turns on atomic write,\n> > checkpoints the database and disables further checkpoints (so wal files\n> > won't be reused) until the backup is complete.\n> > 2) Change ALTER SYSTEM BACKUP DATABASE TO <directory> read the database\n> > directory to find which files it should backup rather than pg_class and for\n> > each file just use system(cp...) to copy it to the backup directory.\n> > 3) ALTER SYSTEM FINISH BACKUP does at it does now and backs up the pg_xlog\n> > directory and renables database checkpointing.\n> \n> > Does this sound right?\n> \n> I really dislike the notion of turning off checkpointing. What if the\n> backup process dies or gets stuck (eg, it's waiting for some operator to\n> change a tape, but the operator has gone to lunch)? IMHO, backup\n> systems that depend on breaking the system's normal operational behavior\n> are broken. It should be sufficient to force a checkpoint when you\n> start and when you're done --- altering normal operation in between is\n> a bad design.\n\nYes, and we have the same issue with turning on/off after-image writes.\nHow do we reset this from a PITR crash?; however, the failure mode is\nonly poorer performance, but it may be that way for a long time without\nthe administrator knowing it.\n\nI wonder if we could SET the value in a transaction and keep the session\nconnection open. When we complete, we abort the transaction and\ndisconnect. If we die, the session terminates and the SET variable goes\nback to the original value. (I am using the ignore SET in aborted\ntransactions feature.)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 2 Aug 2002 20:50:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PITR, checkpoint, and local relations"
},
{
"msg_contents": "Maybe we don't have to turn off checkpointing but we DO have to make sure no\nwal files get re-used while the backup is running. The wal-files must be\narchived after everything else has been archived. Furthermore if we don't\nstop checkpointing then care must be taken to backup the pg_control file\nfirst.\n-regards\nricht\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Friday, August 02, 2002 7:49 PM\n> To: richt@multera.com\n> Cc: Mikheev, Vadim; J. R. Nield; Bruce Momjian; PostgreSQL Hacker\n> Subject: Re: [HACKERS] PITR, checkpoint, and local relations\n>\n>\n> Richard Tucker <richt@multera.com> writes:\n> > 1) Issue an ALTER SYSTEM BEGIN BACKUP command which turns on\n> atomic write,\n> > checkpoints the database and disables further checkpoints (so wal files\n> > won't be reused) until the backup is complete.\n> > 2) Change ALTER SYSTEM BACKUP DATABASE TO <directory> read the database\n> > directory to find which files it should backup rather than\n> pg_class and for\n> > each file just use system(cp...) to copy it to the backup directory.\n> > 3) ALTER SYSTEM FINISH BACKUP does at it does now and backs up\n> the pg_xlog\n> > directory and renables database checkpointing.\n>\n> > Does this sound right?\n>\n> I really dislike the notion of turning off checkpointing.  What if the\n> backup process dies or gets stuck (eg, it's waiting for some operator to\n> change a tape, but the operator has gone to lunch)?  IMHO, backup\n> systems that depend on breaking the system's normal operational behavior\n> are broken.  It should be sufficient to force a checkpoint when you\n> start and when you're done --- altering normal operation in between is\n> a bad design.\n>\n> \t\t\tregards, tom lane\n>\n\n",
"msg_date": "Wed, 07 Aug 2002 11:01:59 -0400",
"msg_from": "Richard Tucker <richt@multera.com>",
"msg_from_op": false,
"msg_subject": "Re: PITR, checkpoint, and local relations"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Friday, August 02, 2002 8:51 PM\n> To: Tom Lane\n> Cc: richt@multera.com; Mikheev, Vadim; J. R. Nield; PostgreSQL Hacker\n> Subject: Re: [HACKERS] PITR, checkpoint, and local relations\n>\n>\n> Tom Lane wrote:\n> > Richard Tucker <richt@multera.com> writes:\n> > > 1) Issue an ALTER SYSTEM BEGIN BACKUP command which turns on\n> atomic write,\n> > > checkpoints the database and disables further checkpoints (so\n> wal files\n> > > won't be reused) until the backup is complete.\n> > > 2) Change ALTER SYSTEM BACKUP DATABASE TO <directory> read\n> the database\n> > > directory to find which files it should backup rather than\n> pg_class and for\n> > > each file just use system(cp...) to copy it to the backup directory.\n> > > 3) ALTER SYSTEM FINISH BACKUP does at it does now and backs\n> up the pg_xlog\n> > > directory and renables database checkpointing.\n> >\n> > > Does this sound right?\n> >\n> > I really dislike the notion of turning off checkpointing.  What if the\n> > backup process dies or gets stuck (eg, it's waiting for some operator to\n> > change a tape, but the operator has gone to lunch)?  IMHO, backup\n> > systems that depend on breaking the system's normal operational behavior\n> > are broken.  It should be sufficient to force a checkpoint when you\n> > start and when you're done --- altering normal operation in between is\n> > a bad design.\n>\n> Yes, and we have the same issue with turning on/off after-image writes.\n> How do we reset this from a PITR crash?; however, the failure mode is\n> only poorer performance, but it may be that way for a long time without\n> the administrator knowing it.\n>\n> I wonder if we could SET the value in a transaction and keep the session\n> connection open.  When we complete, we abort the transaction and\n> disconnect.  If we die, the session terminates and the SET variable goes\n> back to the original value. (I am using the ignore SET in aborted\n> transactions feature.)\nI think all these concerns are addressed if the ALTER SYSTEM BACKUP is done\nas a single command.  In what I implemented the checkpoint process while\npolling for the checkpoint lock tested if backup processing was still alive\nand if not reset everything back to the pre-backup settings.\n\n>\n> --\n>   Bruce Momjian                        |  http://candle.pha.pa.us\n>   pgman@candle.pha.pa.us               |  (610) 853-3000\n>   +  If your life is a hard drive,     |  830 Blythe Avenue\n>   +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n>\n\n",
"msg_date": "Wed, 07 Aug 2002 11:12:00 -0400",
"msg_from": "Richard Tucker <richt@multera.com>",
"msg_from_op": false,
"msg_subject": "Re: PITR, checkpoint, and local relations"
}
] |
[
{
"msg_contents": "> Are you sure this is true for all ports?\n\nWell, maybe you're right and it's not.\nBut with \"after-image blocks in log after checkpoint\"\nyou really shouldn't worry about block atomicity, right?\nAnd ability to turn blocks logging on/off, as suggested\nby Richard, looks as appropriate for everyone, ?\n\n> And if so, why would it be cheaper for the kernel to do it in\n> its buffer manager, compared to us doing it in ours?  This just\n> seems bogus to rely on.  Does anyone know what POSIX has to say\n> about this? \n\nDoes \"doing it in ours\" mean reading all data files through\nour shared buffer pool? Sorry, I just don't see point in this\nwhen tar etc will work just fine. At least for the first release\ntar is SuperOK, because of there must be and will be other\nproblems/bugs, unrelated to how to read data files, and so\nthe sooner we start testing the better.\n\nVadim\n",
"msg_date": "Fri, 2 Aug 2002 16:42:57 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: PITR, checkpoint, and local relations"
}
] |
[
{
"msg_contents": "> So I think what will work then is pg_copy (hot backup) would:\n> 1) Issue an ALTER SYSTEM BEGIN BACKUP command which turns on \n> atomic write,\n> checkpoints the database and disables further checkpoints (so \n> wal files\n> won't be reused) until the backup is complete.\n> 2) Change ALTER SYSTEM BACKUP DATABASE TO <directory> read \n> the database\n> directory to find which files it should backup rather than \n> pg_class and for\n> each file just use system(cp...) to copy it to the backup directory.\n\nDid you consider saving backup on the client host (ie from where\npg_copy started)?\n\n> 3) ALTER SYSTEM FINISH BACKUP does at it does now and backs \n> up the pg_xlog\n> directory and renables database checkpointing.\n\nWell, wouldn't be single command ALTER SYSTEM BACKUP enough?\nWhat's the point to have 3 commands?\n\n(If all of this is already discussed then sorry - I'm not going\nto start new discussion).\n\nVadim\n",
"msg_date": "Fri, 2 Aug 2002 16:50:31 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: PITR, checkpoint, and local relations"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Mikheev, Vadim [mailto:vmikheev@SECTORBASE.COM]\n> Sent: Friday, August 02, 2002 7:51 PM\n> To: 'richt@multera.com'; J. R. Nield\n> Cc: Tom Lane; Bruce Momjian; PostgreSQL Hacker\n> Subject: RE: [HACKERS] PITR, checkpoint, and local relations\n>\n>\n> > So I think what will work then is pg_copy (hot backup) would:\n> > 1) Issue an ALTER SYSTEM BEGIN BACKUP command which turns on\n> > atomic write,\n> > checkpoints the database and disables further checkpoints (so\n> > wal files\n> > won't be reused) until the backup is complete.\n> > 2) Change ALTER SYSTEM BACKUP DATABASE TO <directory> read\n> > the database\n> > directory to find which files it should backup rather than\n> > pg_class and for\n> > each file just use system(cp...) to copy it to the backup directory.\n>\n> Did you consider saving backup on the client host (ie from where\n> pg_copy started)?\nNo, pg_copy just uses the libpq interface.\n>\n> > 3) ALTER SYSTEM FINISH BACKUP does at it does now and backs\n> > up the pg_xlog\n> > directory and renables database checkpointing.\n>\nI think now it could be just one command. My implementation was reading\npg_class to find the tables and indexes that needed backing up. Now reading\npg_database would be sufficient to find the directories containing files\nthat needed to be archived, so it could all be done in one command.\n\n> Well, wouldn't be single command ALTER SYSTEM BACKUP enough?\n> What's the point to have 3 commands?\n>\n> (If all of this is already discussed then sorry - I'm not going\n> to start new discussion).\n>\n> Vadim\n>\n\n",
"msg_date": "Wed, 07 Aug 2002 11:01:59 -0400",
"msg_from": "Richard Tucker <richt@multera.com>",
"msg_from_op": false,
"msg_subject": "Re: PITR, checkpoint, and local relations"
}
] |
[
{
"msg_contents": "> I really dislike the notion of turning off checkpointing. What if the\n> backup process dies or gets stuck (eg, it's waiting for some \n> operator to\n> change a tape, but the operator has gone to lunch)? IMHO, backup\n> systems that depend on breaking the system's normal \n> operational behavior\n> are broken. It should be sufficient to force a checkpoint when you\n> start and when you're done --- altering normal operation in between is\n> a bad design.\n\nBut you have to prevent log files reusing while you copy data files.\nThat's why I asked are 3 commands from pg_copy required and couldn't\nbe backup accomplished by issuing single command\n\nALTER SYSTEM BACKUP <dir | stdout (to copy data to client side)>\n\n(even from pgsql) so backup process would die with entire system -:)\nAs for tape changing, maybe we could use some timeout and then just\nstop backup process.\n\nVadim\n",
"msg_date": "Fri, 2 Aug 2002 17:00:25 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: PITR, checkpoint, and local relations "
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> It should be sufficient to force a checkpoint when you\n>> start and when you're done --- altering normal operation in between is\n>> a bad design.\n\n> But you have to prevent log files reusing while you copy data files.\n\nNo, I don't think so. If you are using PITR then you presumably have\nsome process responsible for archiving off log files on a continuous\nbasis. The backup process should leave that normal operational behavior\nin place, not muck with it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Aug 2002 20:05:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PITR, checkpoint, and local relations "
},
{
"msg_contents": "Tom Lane wrote:\n> \"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> >> It should be sufficient to force a checkpoint when you\n> >> start and when you're done --- altering normal operation in between is\n> >> a bad design.\n> \n> > But you have to prevent log files reusing while you copy data files.\n> \n> No, I don't think so. If you are using PITR then you presumably have\n> some process responsible for archiving off log files on a continuous\n> basis. The backup process should leave that normal operational behavior\n> in place, not muck with it.\n\nBut what if you normally continuous LOG to tape, and now you want to\nbackup to tape. You can't use the same tape drive for both operations.\nIs that typical? I know sites that had only one tape drive that did\nthat.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 2 Aug 2002 20:52:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PITR, checkpoint, and local relations"
},
{
"msg_contents": "On Fri, Aug 02, 2002 at 08:52:27PM -0400, Bruce Momjian wrote:\n\n> But what if you normally continuous LOG to tape, and now you want to\n> backup to tape. You can't use the same tape drive for both operations.\n> Is that typical? I know sites that had only one tape drive that did\n> that.\n\nI have seen such installations. They always seemed like a real false\neconomy to me. Tape drives are not so expensive that, if you really\nneed to ensure your data is well and truly safe, you can't afford two\nof them. But that's just my 2 cents. (Or, I guess in this case, 4\ncents.)\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Tue, 6 Aug 2002 14:35:55 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: PITR, checkpoint, and local relations"
},
{
"msg_contents": "When I've seen this done, I've seen DLT's used as they allow for\nmultiple channels to be streamed to tape at the same time. If your tape\ndevice does not allow for multiple, concurrent input streams, you're\ngoing to have to obtain multiple drives.\n\nPlease keep in mind, my DLT experience is limited.\n\nGreg\n\n\nOn Tue, 2002-08-06 at 13:35, Andrew Sullivan wrote:\n> On Fri, Aug 02, 2002 at 08:52:27PM -0400, Bruce Momjian wrote:\n> \n> > But what if you normally continuous LOG to tape, and now you want to\n> > backup to tape. You can't use the same tape drive for both operations.\n> > Is that typical? I know sites that had only one tape drive that did\n> > that.\n> \n> I have seen such installations. They always seemed like a real false\n> economy to me. Tape drives are not so expensive that, if you really\n> need to ensure your data is well and truly safe, you can't afford two\n> of them. But that's just my 2 cents. (Or, I guess in this case, 4\n> cents.)\n> \n> A\n> \n> -- \n> ----\n> Andrew Sullivan 87 Mowat Avenue \n> Liberty RMS Toronto, Ontario Canada\n> <andrew@libertyrms.info> M6K 3E3\n> +1 416 646 3304 x110\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org",
"msg_date": "06 Aug 2002 14:09:20 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: PITR, checkpoint, and local relations"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Friday, August 02, 2002 8:52 PM\n> To: Tom Lane\n> Cc: Mikheev, Vadim; richt@multera.com; J. R. Nield; PostgreSQL Hacker\n> Subject: Re: [HACKERS] PITR, checkpoint, and local relations\n>\n>\n> Tom Lane wrote:\n> > \"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> > >> It should be sufficient to force a checkpoint when you\n> > >> start and when you're done --- altering normal operation in\n> between is\n> > >> a bad design.\n> >\n> > > But you have to prevent log files reusing while you copy data files.\n> >\n> > No, I don't think so.  If you are using PITR then you presumably have\n> > some process responsible for archiving off log files on a continuous\n> > basis.  The backup process should leave that normal operational behavior\n> > in place, not muck with it.\n>\n> But what if you normally continuous LOG to tape, and now you want to\n> backup to tape.  You can't use the same tape drive for both operations.\n> Is that typical?  I know sites that had only one tape drive that did\n> that.\nOur implementation of pg_copy did not archive to tape.  This adds a lot of\ncomplications so I thought just make a disk to disk copy and then the disk\ncopy could be archived to tape at the user's discretion.\n>\n> --\n>   Bruce Momjian                        |  http://candle.pha.pa.us\n>   pgman@candle.pha.pa.us               |  (610) 853-3000\n>   +  If your life is a hard drive,     |  830 Blythe Avenue\n>   +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n>\n\n",
"msg_date": "Wed, 07 Aug 2002 11:12:00 -0400",
"msg_from": "Richard Tucker <richt@multera.com>",
"msg_from_op": false,
"msg_subject": "Re: PITR, checkpoint, and local relations"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Friday, August 02, 2002 8:06 PM\n> To: Mikheev, Vadim\n> Cc: richt@multera.com; J. R. Nield; Bruce Momjian; PostgreSQL Hacker\n> Subject: Re: [HACKERS] PITR, checkpoint, and local relations\n>\n>\n> \"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> >> It should be sufficient to force a checkpoint when you\n> >> start and when you're done --- altering normal operation in between is\n> >> a bad design.\n>\n> > But you have to prevent log files reusing while you copy data files.\n>\n> No, I don't think so. If you are using PITR then you presumably have\n> some process responsible for archiving off log files on a continuous\n> basis. The backup process should leave that normal operational behavior\n> in place, not muck with it.\nYou want the log files necessary for recovering the database to be in the\nbackup copy -- don't you?\n>\n> \t\t\tregards, tom lane\n>\n\n",
"msg_date": "Wed, 07 Aug 2002 11:12:00 -0400",
"msg_from": "Richard Tucker <richt@multera.com>",
"msg_from_op": false,
"msg_subject": "Re: PITR, checkpoint, and local relations"
},
{
"msg_contents": "Richard Tucker <richt@multera.com> writes:\n> But you have to prevent log files reusing while you copy data files.\n\n>> No, I don't think so. If you are using PITR then you presumably have\n>> some process responsible for archiving off log files on a continuous\n>> basis. The backup process should leave that normal operational behavior\n>> in place, not muck with it.\n\n> You want the log files necessary for recovering the database to be in the\n> backup copy -- don't you?\n\nWhy? As far as I can see, this entire feature only makes sense in the\ncontext where you are continuously archiving log files to someplace\n(let's say tape, for purposes of discussion). Every so often you make a\nbackup, and what that does is it lets you recycle the log-archive tapes\nolder than the start of the backup. You still need the log segments\nnewer than the start of the backup, and you might as well just keep the\ntapes that they're going to be on anyway. Doing it the way you propose\n(ie, causing a persistent change in the behavior of the log archiving\nprocess) simply makes the whole operation more complex and more fragile,\nwithout any actual gain in functionality that I can detect.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Aug 2002 11:23:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PITR, checkpoint, and local relations "
}
] |
[
{
"msg_contents": "> >> It should be sufficient to force a checkpoint when you\n> >> start and when you're done --- altering normal operation \n> in between is\n> >> a bad design.\n> \n> > But you have to prevent log files reusing while you copy data files.\n> \n> No, I don't think so. If you are using PITR then you presumably have\n> some process responsible for archiving off log files on a continuous\n> basis. The backup process should leave that normal \n> operational behavior in place, not muck with it.\n\nWell, PITR without log archiving could be alternative to\npg_dump/pg_restore, but I agreed that it's not the big\nfeature to worry about.\n\nVadim\n",
"msg_date": "Fri, 2 Aug 2002 17:31:04 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: PITR, checkpoint, and local relations "
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> No, I don't think so. If you are using PITR then you presumably have\n>> some process responsible for archiving off log files on a continuous\n>> basis. The backup process should leave that normal \n>> operational behavior in place, not muck with it.\n\n> Well, PITR without log archiving could be alternative to\n> pg_dump/pg_restore, but I agreed that it's not the big\n> feature to worry about.\n\nSeems like a pointless \"feature\" to me. A pg_dump dump serves just\nas well to capture a snapshot --- in fact better, since it's likely\nsmaller, definitely more portable, amenable to selective restore, etc.\n\nI think we should design the PITR dump to do a good job for PITR,\nnot a poor job of both PITR and pg_dump.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Aug 2002 20:44:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PITR, checkpoint, and local relations "
}
] |
[
{
"msg_contents": "I count about seventy occurrences of this code pattern:\n\n /* keep system catalog indices current */\n if (RelationGetForm(pg_rewrite_desc)->relhasindex)\n {\n Relation idescs[Num_pg_rewrite_indices];\n\n CatalogOpenIndices(Num_pg_rewrite_indices, Name_pg_rewrite_indices,\n idescs);\n CatalogIndexInsert(idescs, Num_pg_rewrite_indices, pg_rewrite_desc,\n ruletup);\n CatalogCloseIndices(Num_pg_rewrite_indices, idescs);\n }\n\nI believe this could be simplified to something like\n\n CatalogUpdateIndexes(Relation, HeapTuple, indexnamelist_constant,\n\t\t\t indexcount_constant);\n\nwith essentially no speed penalty.\n\nAn even more radical approach is to get rid of the hardwired index name\nlists in indexing.h, and instead expect CatalogOpenIndices to make use\nof the index OID lists that are maintained by the relcache (since 7.1 or\nso). Then the typical call would reduce to\n\n CatalogUpdateIndexes(Relation, HeapTuple);\n\nThis would simplify development/maintenance at the cost of a small\namount of CPU time building the index OID list whenever it wasn't\nalready cached. (OTOH ... I'm unsure whether opening an index by OID\nis any faster than opening it by name, but it's certainly plausible that\nit might be --- so we could find we buy back the time spent building\nrelcache index lists by making the actual index open step quicker.)\n\nComments? I want to do the first step in any case, but I'm not sure\nabout eliminating the index name lists.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Aug 2002 20:41:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Planned simplification of catalog index updates"
},
{
"msg_contents": "> An even more radical approach is to get rid of the hardwired index name\n> lists in indexing.h, and instead expect CatalogOpenIndices to make use\n> of the index OID lists that are maintained by the relcache (since 7.1 or\n> so). Then the typical call would reduce to\n> \n> CatalogUpdateIndexes(Relation, HeapTuple);\n\nThis would be great. Anyway to take it one step further and make it\ntransparent? Hide it in heap_insert / update?\n\n",
"msg_date": "02 Aug 2002 21:15:03 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Planned simplification of catalog index updates"
},
{
"msg_contents": "Tom Lane wrote:\n> This would simplify development/maintenance at the cost of a small\n> amount of CPU time building the index OID list whenever it wasn't\n> already cached. (OTOH ... I'm unsure whether opening an index by OID\n> is any faster than opening it by name, but it's certainly plausible that\n> it might be --- so we could find we buy back the time spent building\n> relcache index lists by making the actual index open step quicker.)\n> \n> Comments? I want to do the first step in any case, but I'm not sure\n> about eliminating the index name lists.\n\nAll your changes make sense to me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 2 Aug 2002 21:46:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Planned simplification of catalog index updates"
},
{
"msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n>> Then the typical call would reduce to\n>> \n>> CatalogUpdateIndexes(Relation, HeapTuple);\n\n> This would be great. Anyway to take it one step further and make it\n> transparent? Hide it in heap_insert / update?\n\nNo, that would be quite inappropriate. The control paths that we're\ntalking about here insert or update only one tuple per transaction.\nWe do *not* want to do (a) open all indexes, (b) process one tuple,\n(c) close all indexes in the performance-critical paths where many\ntuples are processed per transaction. (Even in the paths that use\nCatalogOpenIndexes, you wouldn't reduce it to a single call in\nthe routines that insert multiple tuples per call.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Aug 2002 23:26:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Planned simplification of catalog index updates "
},
{
"msg_contents": "I said:\n> An even more radical approach is to get rid of the hardwired index name\n> lists in indexing.h, and instead expect CatalogOpenIndices to make use\n> of the index OID lists that are maintained by the relcache (since 7.1 or\n> so). Then the typical call would reduce to\n> CatalogUpdateIndexes(Relation, HeapTuple);\n> This would simplify development/maintenance at the cost of a small\n> amount of CPU time building the index OID list whenever it wasn't\n> already cached. (OTOH ... I'm unsure whether opening an index by OID\n> is any faster than opening it by name, but it's certainly plausible that\n> it might be --- so we could find we buy back the time spent building\n> relcache index lists by making the actual index open step quicker.)\n\nIndeed, it seems this *is* faster. I used the regression tests as a\ncrude benchmark --- pg_bench wouldn't help since it doesn't do any\ncatalog updates in the inner loop. Over ten runs of \"make installcheck\"\non a RH 7.2 system, yesterday's CVS tip gave the following results for\nelapsed real time in seconds:\n\n min | max | avg | stddev \n-------+-------+------------------+-------------------\n 26.18 | 28.12 | 27.3590909090909 | 0.767247737637082\n\nWith modifications to do all system catalog index updates as above,\nI instead got:\n\n min | max | avg | stddev \n------+-------+--------+-------------------\n 24.3 | 26.72 | 25.833 | 0.674372959784605\n\nSo it seems to be a fairly reliable 5% win on the regression tests,\non top of being a lot simpler and more maintainable.\n\nI'm pretty pleased, and will be committing this as soon as CVS tip\nis back to a non-broken state ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 04 Aug 2002 18:33:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Planned simplification of catalog index updates "
},
{
"msg_contents": "Tom Lane wrote:\n> So it seems to be a fairly reliable 5% win on the regression tests,\n> on top of being a lot simpler and more maintainable.\n> \n> I'm pretty pleased, and will be committing this as soon as CVS tip\n> is back to a non-broken state ...\n> \n\nNice work!\n\nJoe\n\n",
"msg_date": "Sun, 04 Aug 2002 16:27:35 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Planned simplification of catalog index updates"
},
{
"msg_contents": "\nWow, that is a huge difference for such a small change;  makes you\nwonder what other optimizations are sitting out there.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> I said:\n> > An even more radical approach is to get rid of the hardwired index name\n> > lists in indexing.h, and instead expect CatalogOpenIndices to make use\n> > of the index OID lists that are maintained by the relcache (since 7.1 or\n> > so).  Then the typical call would reduce to\n> > CatalogUpdateIndexes(Relation, HeapTuple);\n> > This would simplify development/maintenance at the cost of a small\n> > amount of CPU time building the index OID list whenever it wasn't\n> > already cached.  (OTOH ... I'm unsure whether opening an index by OID\n> > is any faster than opening it by name, but it's certainly plausible that\n> > it might be --- so we could find we buy back the time spent building\n> > relcache index lists by making the actual index open step quicker.)\n> \n> Indeed, it seems this *is* faster.  I used the regression tests as a\n> crude benchmark --- pg_bench wouldn't help since it doesn't do any\n> catalog updates in the inner loop. Over ten runs of \"make installcheck\"\n> on a RH 7.2 system, yesterday's CVS tip gave the following results for\n> elapsed real time in seconds:\n> \n>   min  |  max  |       avg        |      stddev       \n> -------+-------+------------------+-------------------\n>  26.18 | 28.12 | 27.3590909090909 | 0.767247737637082\n> \n> With modifications to do all system catalog index updates as above,\n> I instead got:\n> \n>  min  |  max  |  avg   |      stddev       \n> ------+-------+--------+-------------------\n>  24.3 | 26.72 | 25.833 | 0.674372959784605\n> \n> So it seems to be a fairly reliable 5% win on the regression tests,\n> on top of being a lot simpler and more maintainable.\n> \n> I'm pretty pleased, and will be committing this as soon as CVS tip\n> is back to a non-broken state ...\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n>     (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 4 Aug 2002 21:13:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Planned simplification of catalog index updates"
}
] |
[
{
"msg_contents": "> > Well, PITR without log archiving could be alternative to\n> > pg_dump/pg_restore, but I agreed that it's not the big\n> > feature to worry about.\n> \n> Seems like a pointless \"feature\" to me. A pg_dump dump serves just\n> as well to capture a snapshot --- in fact better, since it's likely\n> smaller, definitely more portable, amenable to selective restore, etc.\n\nBut pg_restore probably will take longer time than copy data files\nback and re-apply log.\n\n> I think we should design the PITR dump to do a good job for PITR,\n> not a poor job of both PITR and pg_dump.\n\nAs I already said - agreed -:)\n\nVadim\n",
"msg_date": "Fri, 2 Aug 2002 18:07:40 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: PITR, checkpoint, and local relations "
}
] |
[
{
"msg_contents": "\nI wonder if we actually did the right thing with this.\n\nFor example:\n select cast('ab' as char(1));\n\nUsing sql92's definitions, I read TD as\na fixed length character string and\nSD as the same.\n\nWhich means I think the section that\ncomes into play is:\n\nSQL92 6.10 GR5 c ii\n\n ii) If the length in characters of SV is larger than LTD, then\n TV is the first LTD characters of SV. If any of the re-\n maining characters of SV are non-<space> characters, then a\n completion condition is raised: warning-string data, right\n truncation.\n\nIt looks like SQL99's cast specification is similar for this\ncase.\n\nWouldn't that mean the operation is supposed to succeed with\ndiagnostic information since it's a completion condition not\nan exception condition?\n\n",
"msg_date": "Fri, 2 Aug 2002 19:52:52 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": true,
"msg_subject": "char/varchar truncation"
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> I wonder if we actually did the right thing with this.\n> ...\n> Wouldn't that mean the operation is supposed to succeed with\n> diagnostic information since it's a completion condition not\n> an exception condition?\n\nHm. You are right: an explicit cast to varchar(n) has different\nbehavior according to the spec than a store assignment (ie,\nimplicit coercion) to varchar. The implicit coercion should fail.\n\nAFAIR our cast mechanisms aren't prepared to use two different\nroutines for these two cases. Looks like we have some work to do.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 03 Aug 2002 00:14:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: char/varchar truncation "
},
{
"msg_contents": "On Sat, 3 Aug 2002, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > I wonder if we actually did the right thing with this.\n> > ...\n> > Wouldn't that mean the operation is supposed to succeed with\n> > diagnostic information since it's a completion condition not\n> > an exception condition?\n>\n> Hm. You are right: an explicit cast to varchar(n) has different\n> behavior according to the spec than a store assignment (ie,\n> implicit coercion) to varchar. The implicit coercion should fail.\n>\n> AFAIR our cast mechanisms aren't prepared to use two different\n> routines for these two cases. Looks like we have some work to do.\n\nAs a note, looking at the spec again, cast(12 as char(1)) should fail\nwhile cast('12' as char(1)) should succeed with notice, and both 12 and\n'12' should fail when being put into a column of char(1). So, it's\ndependent on both whether cast() was used and on the source type.\n\nI went poking around a little bit looking at the stuff in\nparse_coerce.c and related, but haven't had time to look\ntoo deeply at its callers.\n\nRight now there are two paths that seem to be able to cause the length\nerrors. One is in coerce_type_typmod, the other is in the type's input\nfunction (for conversion from unknown). For the call to the input function\nfrom coerce_type it looks like we wouldn't need to pass a non -1 typmod\nsince the coerce_type_typmod that really should follow (since without it\nyou'd get broken behavior in the non-unknown case) would catch it. That'd\nallow us to fix the behavior through only one path. I assume that the\ninput function would continue working in the same fashion for non -1\ntypmods. This also gets us around needing to change the input function's\narguments.\n\nI believe that we'd want to store typmod conversion data in the pg_cast\nrow for the conversion we're doing. coerce_type_typmod could then look up\nthe function that way (rather than from the typename and oid). 
I'm a\nlittle worried about the fact that we would be doing more searches\non pg_cast. Haven't thought of a better way (admittedly having not\nsearched too hard yet either). This seems preferable to doing some kind\nof hardcoded check on the source type since it allows user conversions\nto work either way. One side effect of this is that we could end up with\nmore rows in pg_cast since int->char(n) is no longer quite like int->text\nand we'd want to be able to specify what happens for char(n)->char(m).\n\nDoes any of that seem reasonable as a starting point for exploration?\n\n",
"msg_date": "Mon, 5 Aug 2002 22:14:21 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": true,
"msg_subject": "Re: char/varchar truncation "
}
] |
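The distinction settled in this thread — an explicit CAST raises only a completion condition (a warning) on right truncation of a string source, while a store assignment raises an exception condition — can be sketched outside the backend. The following is a hypothetical Python model of the SQL92 rules, not PostgreSQL's parse_coerce.c code, and it covers only character-string sources (a numeric source such as cast(12 as char(1)) is required to fail outright):

```python
def sql_cast_char(value: str, n: int):
    """Explicit CAST(value AS char(n)) per SQL92 6.10 GR5 c ii:
    always succeeds; warns only if non-space characters are lost."""
    if len(value) <= n:
        return value.ljust(n), None  # pad with spaces, no warning
    warning = None
    if value[n:].strip(" ") != "":
        warning = "string data, right truncation"  # completion condition
    return value[:n], warning


def sql_assign_char(value: str, n: int):
    """Store assignment into a char(n) column: raises an exception
    condition unless the dropped characters are all spaces."""
    if len(value) <= n:
        return value.ljust(n)
    if value[n:].strip(" ") != "":
        raise ValueError("string data, right truncation")  # exception condition
    return value[:n]


# select cast('ab' as char(1)) succeeds with a warning ...
assert sql_cast_char("ab", 1) == ("a", "string data, right truncation")
# ... but storing 'ab' into a char(1) column must fail.
try:
    sql_assign_char("ab", 1)
    raise AssertionError("expected truncation error")
except ValueError:
    pass
```

Note the trailing-space case: both paths quietly accept 'a ' into char(1), since only spaces are dropped.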
[
{
"msg_contents": "I have been wondering if we should add a web page of a list of\ndevelopers working on PostgreSQL whose time is sponsored by companies.\n\nI know myself (SRA), Tom Lane (RH), Neil Conway (RH), and probably many\nothers sponsor PostgreSQL development, and I think we should have a way\nof recognizing such contributions to the project. This may also\nencourage additional developer time contributions.\n\nComments?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 3 Aug 2002 00:11:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Sponsored developers on web site"
}
] |
[
{
"msg_contents": "I just tried CLUSTER command at fts.postgresql.org to cluster\nfts index and got very visual performance win. Unfortunately\nI had to restore permissions and recreate other indices by hand.\nSo, I'm interested what's a future of CLUSTER command ?\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sat, 3 Aug 2002 19:47:10 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "fate of CLUSTER command ?"
},
{
"msg_contents": "Oleg Bartunov dijo: \n\n> I just tried CLUSTER command at fts.postgresql.org to cluster\n> fts index and got very visual performance win. Unfortunately\n> I had to restore permissions and recreate other indices by hand.\n> So, I'm interested what's a future of CLUSTER command ?\n\nI'm working on CLUSTER. I have a problem with dependency tracking right\nnow that I need to get fixed before the patch gets accepted, but that\nshouldn't take long (hopefully).\n\nThe patch supposedly fixes all the concerns about CLUSTER (permissions,\nother indexes, inheritance).\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\nOfficer Krupke, what are we to do?\nGee, officer Krupke, Krup you! (West Side Story, \"Gee, Officer Krupke\")\n\n",
"msg_date": "Sat, 3 Aug 2002 14:34:36 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: fate of CLUSTER command ?"
},
{
"msg_contents": "On Sat, 3 Aug 2002, Alvaro Herrera wrote:\n\n> Oleg Bartunov dijo:\n>\n> > I just tried CLUSTER command at fts.postgresql.org to cluster\n> > fts index and got very visual performance win. Unfortunately\n> > I had to restore permissions and recreate other indices by hand.\n> > So, I'm interested what's a future of CLUSTER command ?\n>\n> I'm working on CLUSTER. I have a problem with dependency tracking right\n> now that I need to get fixed before the patch gets accepted, but that\n> shouldn't take long (hopefully).\n>\n> The patch supposedly fixes all the concerns about CLUSTER (permissions,\n> other indexes, inheritance).\n>\n\nGood news. Will it go to 7.3 ?\n\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sat, 3 Aug 2002 22:15:28 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: fate of CLUSTER command ?"
},
{
"msg_contents": "Oleg Bartunov dijo: \n\n> On Sat, 3 Aug 2002, Alvaro Herrera wrote:\n> \n> > Oleg Bartunov dijo:\n> >\n> > > I just tried CLUSTER command at fts.postgresql.org to cluster\n> > > fts index and got very visual performance win. Unfortunately\n> > > I had to restore permissions and recreate other indices by hand.\n> > > So, I'm interested what's a future of CLUSTER command ?\n> >\n> > I'm working on CLUSTER. I have a problem with dependency tracking right\n> > now that I need to get fixed before the patch gets accepted, but that\n> > shouldn't take long (hopefully).\n> \n> God news. Will it go to 7.3 ?\n\nIn fact, I have just corrected the error and am submitting the patch for\nrevision and possible inclusion.\n\nPlease test it and check if it does what you need. Let me know if it\ndoesn't, because it should.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Find a bug in a program, and fix it, and the program will work today.\nShow the program how to find and fix a bug, and the program\nwill work forever\" (Oliver Silfridge)\n\n",
"msg_date": "Sat, 3 Aug 2002 16:25:32 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: fate of CLUSTER command ?"
},
{
"msg_contents": "Oleg Bartunov wrote:\n> I just tried CLUSTER command at fts.postgresql.org to cluster\n> fts index and got very visual performance win. Unfortunately\n> I had to restore permissions and recreate other indices by hand.\n> So, I'm interested what's a future of CLUSTER command ?\n\nYes, I have always liked CLUSTER with full text searches because you are\nusually hitting multiple rows with a single equality restriction, and\nCLUSTER puts all the hits on the same page.\n\nIf you look in contrib/fulltextindex, you will see mention of CLUSTER in\nthe README. It may make sense to add that to your documentation.\n\nAlso, is there any value to contrib/fulltextindex now that we have\ncontrib/tsearch?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 3 Aug 2002 20:57:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fate of CLUSTER command ?"
},
{
"msg_contents": "\nAlso, let me add that CLUSTER in 7.3 will be fully functional because we\nwill no longer be changing the oid of the table during cluster. This\nwill allow people to use CLUSTER more frequently/safely.\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> Oleg Bartunov wrote:\n> > I just tried CLUSTER command at fts.postgresql.org to cluster\n> > fts index and got very visual performance win. Unfortunately\n> > I had to restore permissions and recreate other indices by hand.\n> > So, I'm interested what's a future of CLUSTER command ?\n> \n> Yes, I have always liked CLUSTER with full text searches because you are\n> usually hitting multiple rows with a single equaltiy restriction, and\n> CLUSTER puts all the hits on the same page.\n> \n> If you look in contrib/fulltextindex, you will see mention of CLUSTER in\n> the README. It may make sense to add that to your documentation.\n> \n> Also, is there any value to contrib/fulltextindex now that we have\n> contrib/tsearch?\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 3 Aug 2002 21:02:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fate of CLUSTER command ?"
},
{
"msg_contents": "> Yes, I have always liked CLUSTER with full text searches because you are\n> usually hitting multiple rows with a single equaltiy restriction, and\n> CLUSTER puts all the hits on the same page.\n>\n> If you look in contrib/fulltextindex, you will see mention of CLUSTER in\n> the README. It may make sense to add that to your documentation.\n>\n> Also, is there any value to contrib/fulltextindex now that we have\n> contrib/tsearch?\n\nI haven't looked at tsearch yet, but I expect it's way better than\nfulltextindex. However there's more than a few of us using fulltextindex,\nso I think it will need to stay for some while. I'm working on a new\nversion of it for 7.3.\n\nI can put pointers in the README about checking out tsearch...\n\nChris\n\n\n",
"msg_date": "Sun, 4 Aug 2002 13:34:26 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: fate of CLUSTER command ?"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> Also, is there any value to contrib/fulltextindex now that we have\n>> contrib/tsearch?\n\n> I haven't looked at tsearch yet, but I expect it's way better than\n> fulltextindex. However there's more than a few of us using fulltextindex,\n> so I think it will need to stay for some while.\n\nRight, at least a couple releases.\n\n> I'm working on a new version of it for 7.3.\n\nWhat have you got in mind?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 04 Aug 2002 01:41:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fate of CLUSTER command ? "
},
{
"msg_contents": "On Sat, 3 Aug 2002, Bruce Momjian wrote:\n\n> Oleg Bartunov wrote:\n> > I just tried CLUSTER command at fts.postgresql.org to cluster\n> > fts index and got very visual performance win. Unfortunately\n> > I had to restore permissions and recreate other indices by hand.\n> > So, I'm interested what's a future of CLUSTER command ?\n>\n> Yes, I have always liked CLUSTER with full text searches because you are\n> usually hitting multiple rows with a single equaltiy restriction, and\n> CLUSTER puts all the hits on the same page.\n>\n> If you look in contrib/fulltextindex, you will see mention of CLUSTER in\n> the README. It may make sense to add that to your documentation.\n>\n\nI have to play to get feeling. I don't understand what happens if\nrows will be added to clustered table. Also, what will happens if\nthere are several other indices on the same table ? Does clustering\non one index will decrease performance of queries based on another\nindices ?\n\n\n> Also, is there any value to contrib/fulltextindex now that we have\n> contrib/tsearch?\n>\n\nthey 're different things.\n\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sun, 4 Aug 2002 10:21:39 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: fate of CLUSTER command ?"
},
{
"msg_contents": "On Sun, 4 Aug 2002, Christopher Kings-Lynne wrote:\n\n> > Yes, I have always liked CLUSTER with full text searches because you are\n> > usually hitting multiple rows with a single equaltiy restriction, and\n> > CLUSTER puts all the hits on the same page.\n> >\n> > If you look in contrib/fulltextindex, you will see mention of CLUSTER in\n> > the README. It may make sense to add that to your documentation.\n> >\n> > Also, is there any value to contrib/fulltextindex now that we have\n> > contrib/tsearch?\n>\n> I haven't looked at tsearch yet, but I expect it's way better than\n> fulltextindex. However there's more than a few of us using fulltextindex,\n> so I think it will need to stay for some while. I'm working on a new\n> version of it for 7.3.\n>\n\nI totally agree with Chris. FTI is a different thing.\nFTI is good for a more or less static document collection - the cost of\ninsert is high for inverted indices. We've developed tsearch keeping in\nmind incremental update.\n\nFTI should be faster for short queries while tsearch is better for long ones.\n\ntsearch development focused also on real IR support - language support,\nindexing of specified classes of lexemes, etc. We already have OpenFTS\nwhich has these features, but we want to move all functionality to\ntsearch.\n\n\n> I can put pointers in the README about checking out tsearch...\n>\n> Chris\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sun, 4 Aug 2002 10:42:33 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: fate of CLUSTER command ?"
},
{
"msg_contents": "> > I'm working on a new version of it for 7.3.\n>\n> What have you got in mind?\n\nWell I have patches from Florian and someone else. Some wide character\nstuff, non-indexable word support, full word match search support, speed and\nspace optimisations, etc.\n\nI'm just trying to set it up in a backwards-compatible way... I want the\ncontrib to build two separate .so files...\n\nChris\n\n\n",
"msg_date": "Sun, 4 Aug 2002 15:52:45 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: fate of CLUSTER command ? "
},
{
"msg_contents": "Oleg Bartunov wrote:\n> On Sat, 3 Aug 2002, Bruce Momjian wrote:\n> \n> > Oleg Bartunov wrote:\n> > > I just tried CLUSTER command at fts.postgresql.org to cluster\n> > > fts index and got very visual performance win. Unfortunately\n> > > I had to restore permissions and recreate other indices by hand.\n> > > So, I'm interested what's a future of CLUSTER command ?\n> >\n> > Yes, I have always liked CLUSTER with full text searches because you are\n> > usually hitting multiple rows with a single equaltiy restriction, and\n> > CLUSTER puts all the hits on the same page.\n> >\n> > If you look in contrib/fulltextindex, you will see mention of CLUSTER in\n> > the README. It may make sense to add that to your documentation.\n> >\n> \n> I have to play to get feeling. I don't understand what happens if\n> rows will be added to clustered table. Also, what will happens if\n> there are several other indices on the same table ? Does clustering\n> on one index will decrease performance of queries based on another\n> indices ?\n\nClustering on one index doesn't decrease the performance of the other\nindexes. Also, only >=7.3 will preserve all indexes during cluster.\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 4 Aug 2002 10:10:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fate of CLUSTER command ?"
},
{
"msg_contents": "> Clustering on one index doesn't decrease the performance of the other\n> indexes. Also, only >=7.3 will preserve all indexes during cluster.\n\nSure it must? Since you are rearranging all on-disk rows to match a\nparticular index (say user_id, username) then it will slow down other\nindexes (eg one just on username).\n\nChris\n\n",
"msg_date": "Mon, 5 Aug 2002 11:10:31 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: fate of CLUSTER command ?"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> > Clustering on one index doesn't decrease the performance of the other\n> > indexes. Also, only >=7.3 will preserve all indexes during cluster.\n> \n> Sure it must? Since you are rearranging all on-disk rows to match a\n> particular index (say user_id, username) then it will slow down other\n> indexes (eg one just on username).\n\nIt will slow down other index scans only if there was some clustering on\nthose indexes before you ran the CLUSTER command.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 4 Aug 2002 23:17:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fate of CLUSTER command ?"
},
{
"msg_contents": "On Sun, Aug 04, 2002 at 11:17:03PM -0400, Bruce Momjian wrote:\n> Christopher Kings-Lynne wrote:\n> > > Clustering on one index doesn't decrease the performance of the other\n> > > indexes. Also, only >=7.3 will preserve all indexes during cluster.\n> > \n> > Sure it must? Since you are rearranging all on-disk rows to match a\n> > particular index (say user_id, username) then it will slow down other\n> > indexes (eg one just on username).\n> \n> It will slow down other index scans only if there was some clustering on\n> those indexes before you ran the CLUSTER command.\n\nActually, it would depend on the level of correlation between the values\nindexed. If there's some correlation, performance using the second index\ncould improve some - if they're anti-correlated, it will decrease. If\nuncorrelated, there should be no effect.\n\nRoss\n",
"msg_date": "Wed, 7 Aug 2002 00:24:53 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: fate of CLUSTER command ?"
}
] |
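Bruce's point above — that CLUSTER pays off for full text search because a single equality restriction hits many rows, and clustering packs those hits onto few heap pages — can be illustrated with a toy model. The page capacity and row counts below are invented for illustration; this is not PostgreSQL code:

```python
ROWS_PER_PAGE = 100  # assumed heap page capacity


def pages_touched(matching_row_positions):
    """Distinct heap pages an index scan must fetch for the matches."""
    return len({pos // ROWS_PER_PAGE for pos in matching_row_positions})


# 500 matching rows stored contiguously (freshly CLUSTERed) ...
clustered = range(0, 500)
# ... versus the same 500 rows scattered through a 50000-row heap.
scattered = range(0, 50000, 100)

assert pages_touched(clustered) == 5
assert pages_touched(scattered) == 500
```

A 100x difference in heap pages fetched is the kind of "very visual performance win" Oleg reports.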
[
{
"msg_contents": "I couldn't keep up with the list traffic this week, but I thought I saw \nenough to convince me that after it was all said and done, I would still \nbe able to do `cvs co pgsql`. I'm finding today that after using cvsup \nto sync up, I can no longer check out pgsql, but pgsql-server instead. Is \nthis intended, or are there more changes left to be made?\n\nAlso, as a side note, the link for cvsup is broken:\n http://developer.postgresql.org/TODO/docs/cvsup.html\nand CVS tree Organization:\n http://developer.postgresql.org/TODO/docs/cvs-tree.html\n\nThanks,\n\nJoe\n\n",
"msg_date": "Sat, 03 Aug 2002 11:56:06 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "cvs changes and broken links"
},
{
"msg_contents": "On Sat, 3 Aug 2002, Joe Conway wrote:\n\n> I couldn't keep up with the list traffic this week, but I thought I saw\n> enough to convince me that after it was all said and done, I would still\n> be able to do `cvs co pgsql`. I'm finding today that after using cvsup\n> to sync up, I can no longer checkout pgsql, but pgsql-server instead. Is\n> this intended, or are there more changes left to be made?\n\nOkay, I've tested both using the regular CVS and the anoncvs servers, and\nboth co pgsql just fine ... can you email me (off list) the errors you are\nseeing, as well as your CVSROOT?\n\n\n",
"msg_date": "Sun, 4 Aug 2002 04:01:54 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: cvs changes and broken links"
}
] |
[
{
"msg_contents": "Hi all,\n\nIt occurred to me on the plane home that now that CLUSTER is fixed we may\nbe able to put pg_index.indisclustered to use. If CLUSTER was to set\nindisclustered to true when it clusters a heap according to the given\nindex, we could speed up sequential scans. There are two possible ways.\n\n1) Planner determines that a seqscan is appropriate *and* the retrieval is\nqualified by the key(s) of one of the relation's indexes\n2) Planner determines that the relation is clustered on disk according to\nthe index over the key(s) used to qualify the retrieval\n3) Planner sets an appropriate nodeTag for the retrieval (SeqScanCluster?)\n4) ExecProcNode() calls some new scan routine, ExecSeqScanCluster() ?\n5) ExecSeqScanCluster() calls ExecScan() with a new ExecScanAccessMtd (ie,\ndifferent from SeqNext) called SeqClusterNext\n6) SeqClusterNext() has all the heapgettup() logic with two\nexceptions: a) we find the first tuple more intelligently (instead of\nscanning from the first page) b) if we have found tuple(s) matching the\nScanKey when we encounter a non-matching tuple (via\nHeapTupleSatisfies() ?) we return a NULL'ed out tuple, terminating the\nscan\n\nAny reason this isn't possible? Any reason it couldn't dramatically speed\nup the performance of the type of query I've mentioned?\n\nGavin\n\n",
"msg_date": "Sun, 4 Aug 2002 11:43:31 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": true,
"msg_subject": "CLUSTER and indisclustered"
},
{
"msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> It occured to me on the plane home that now that CLUSTER is fixed we may\n> be able to put pg_index.indisclustered to use. If CLUSTER was to set\n> indisclustered to true when it clusters a heap according to the given\n> index, we could speed up sequantial scans.\n\nAFAICT you're assuming that the table is *exactly* ordered by the\nclustered attribute. While this is true at the instant CLUSTER\ncompletes, the exact ordering will be destroyed by the first insert or\nupdate :-(. I can't see much value in creating a whole new scan type\nthat's only usable on a perfectly-clustered table.\n\nThe existing approach to making the planner smart about clustered tables\nis to compute a physical-vs-logical-order-correlation statistic and use\nthat to adjust the estimated cost of indexscans. I believe this is a\nmore robust approach than considering a table to be \"clustered\" or \"not\nclustered\", since it can deal with the gradual degradation of clustered\norder over time. However, I will not make any great claims for the\nspecific equations currently used for this purpose --- they're surely in\nneed of improvement. Feel free to take a look and see if you have any\nideas. The collection of the statistic is in commands/analyze.c and the\nuse of it is in optimizer/path/costsize.c.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 03 Aug 2002 22:45:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered "
},
{
"msg_contents": "On Sat, 3 Aug 2002, Tom Lane wrote:\n\n> Gavin Sherry <swm@linuxworld.com.au> writes:\n> > It occured to me on the plane home that now that CLUSTER is fixed we may\n> > be able to put pg_index.indisclustered to use. If CLUSTER was to set\n> > indisclustered to true when it clusters a heap according to the given\n> > index, we could speed up sequantial scans.\n> \n> AFAICT you're assuming that the table is *exactly* ordered by the\n> clustered attribute. While this is true at the instant CLUSTER\n> completes, the exact ordering will be destroyed by the first insert or\n> update :-(. I can't see much value in creating a whole new scan type\n\nSorry, I meant to say that heap_insert() etc would need to set\nindisclustered to false.\n\nI do see some worth in this however. Naturally, in a situation where a\ndatabase is being modified very often this is of little value. However,\nfor applications focussed on analysing large amounts of static data this\ncould increase performance significantly. Once I get some time I will\nattempt to explore this further in `diff -c` format :-).\n\nGavin\n\n",
"msg_date": "Sun, 4 Aug 2002 12:55:55 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": true,
"msg_subject": "Re: CLUSTER and indisclustered "
},
{
"msg_contents": "Tom Lane wrote:\n> Gavin Sherry <swm@linuxworld.com.au> writes:\n> > It occured to me on the plane home that now that CLUSTER is fixed we may\n> > be able to put pg_index.indisclustered to use. If CLUSTER was to set\n> > indisclustered to true when it clusters a heap according to the given\n> > index, we could speed up sequantial scans.\n> \n> AFAICT you're assuming that the table is *exactly* ordered by the\n> clustered attribute. While this is true at the instant CLUSTER\n> completes, the exact ordering will be destroyed by the first insert or\n> update :-(. I can't see much value in creating a whole new scan type\n> that's only usable on a perfectly-clustered table.\n> \n> The existing approach to making the planner smart about clustered tables\n> is to compute a physical-vs-logical-order-correlation statistic and use\n> that to adjust the estimated cost of indexscans. I believe this is a\n> more robust approach than considering a table to be \"clustered\" or \"not\n> clustered\", since it can deal with the gradual degradation of clustered\n> order over time. However, I will not make any great claims for the\n> specific equations currently used for this purpose --- they're surely in\n> need of improvement. Feel free to take a look and see if you have any\n> ideas. The collection of the statistic is in commands/analyze.c and the\n> use of it is in optimizer/path/costsize.c.\n\nTom, should we be updating that flag after we CLUSTER instead of\nrequiring an ANALYZE after the CLUSTER?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 3 Aug 2002 22:55:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered"
},
{
"msg_contents": "Gavin Sherry wrote:\n> Hi all,\n> \n> It occured to me on the plane home that now that CLUSTER is fixed we may\n> be able to put pg_index.indisclustered to use. If CLUSTER was to set\n> indisclustered to true when it clusters a heap according to the given\n> index, we could speed up sequantial scans. There are two possible ways.\n> \n> 1) Planner determines that a seqscan is appropriate *and* the retrieval is\n> qualified by the key(s) of one of the relation's indexes\n> 2) Planner determines that the relation is clustered on disk according to\n> the index over the key(s) used to qualify the retrieval\n> 3) Planner sets an appropriate nodeTag for the retrieval (SeqScanCluster?)\n> 4) ExecProcNode() calls some new scan routine, ExecSeqScanCluster() ?\n> 5) ExecSeqScanCluster() calls ExecScan() with a new ExecScanAccessMtd (ie,\n> different from SeqNext) called SeqClusterNext\n> 6) SeqClusterNext() has all the heapgettup() logic with two\n> exceptions: a) we find the first tuple more intelligently (instead of\n> scanning from the first page) b) if we have found tuple(s) matching the\n> ScanKey when we encounter an non-matching tuple (via\n> HeapTupleSatisfies() ?) we return a NULL'ed out tuple, terminating the\n> scan\n\nGavin, is that a big win compared to just using the index and looping\nthrough the entries, knowing that the index matches are on the same\npage, and the heap matches are on the same page.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 3 Aug 2002 22:57:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered"
},
{
"msg_contents": "On Sat, 3 Aug 2002, Bruce Momjian wrote:\n\n> Gavin Sherry wrote:\n> > Hi all,\n> > \n> > It occured to me on the plane home that now that CLUSTER is fixed we may\n> > be able to put pg_index.indisclustered to use. If CLUSTER was to set\n> > indisclustered to true when it clusters a heap according to the given\n> > index, we could speed up sequantial scans. There are two possible ways.\n> > \n> > 1) Planner determines that a seqscan is appropriate *and* the retrieval is\n> > qualified by the key(s) of one of the relation's indexes\n> > 2) Planner determines that the relation is clustered on disk according to\n> > the index over the key(s) used to qualify the retrieval\n> > 3) Planner sets an appropriate nodeTag for the retrieval (SeqScanCluster?)\n> > 4) ExecProcNode() calls some new scan routine, ExecSeqScanCluster() ?\n> > 5) ExecSeqScanCluster() calls ExecScan() with a new ExecScanAccessMtd (ie,\n> > different from SeqNext) called SeqClusterNext\n> > 6) SeqClusterNext() has all the heapgettup() logic with two\n> > exceptions: a) we find the first tuple more intelligently (instead of\n> > scanning from the first page) b) if we have found tuple(s) matching the\n> > ScanKey when we encounter an non-matching tuple (via\n> > HeapTupleSatisfies() ?) we return a NULL'ed out tuple, terminating the\n> > scan\n> \n> Gavin, is that a big win compared to just using the index and looping\n> through the entries, knowing that the index matches are on the same\n> page, and the heap matches are on the same page.\n\nBruce,\n\nIt would cut out the index overhead. Besides at (1) (above) we would have\ndetermined that an index scan was too expensive and we would be using a\nSeqScan instead. This would just be faster, since a) we would locate the\ntuples more intelligently b) we wouldn't need to scan the whole heap once\nwe'd found all tuples matching the scan key.\n\nGavin\n\n",
"msg_date": "Sun, 4 Aug 2002 13:05:39 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": true,
"msg_subject": "Re: CLUSTER and indisclustered"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom, should we be updating that flag after we CLUSTER instead of\n> requiring an ANALYZE after the CLUSTER?\n\nCould do that I suppose, but I'm not super-excited about it. ANALYZE is\nquite cheap these days (especially in comparison to CLUSTER ;-)). I'd\nsettle for a note in the CLUSTER docs that recommends a subsequent\nANALYZE --- this seems no different from recommending ANALYZE after bulk\ndata load or other major update of a table.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 03 Aug 2002 23:20:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered "
},
{
"msg_contents": "Gavin Sherry wrote:\n> > Gavin, is that a big win compared to just using the index and looping\n> > through the entries, knowing that the index matches are on the same\n> > page, and the heap matches are on the same page.\n> \n> Bruce,\n> \n> It would cut out the index over head. Besides at (1) (above) we would have\n> determined that an index scan was too expensive and we would be using a\n> SeqScan instead. This would just be faster, since a) we would locate the\n> tuples more intelligently b) we wouldn't need to scan the whole heap once\n> we'd found all tuples matching the scan key.\n\nYes, but in a clustered table, an index scan is _never_ (?) more\nexpensive than a sequential scan, at least if the optimizer is working\ncorrectly. Index scans are slower only because they assume random heap\naccess, but with a clustered table, there is no random heap access. The\nindex takes to right to the spot to start.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 3 Aug 2002 23:21:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom, should we be updating that flag after we CLUSTER instead of\n> > requiring an ANALYZE after the CLUSTER?\n> \n> Could do that I suppose, but I'm not super-excited about it. ANALYZE is\n> quite cheap these days (especially in comparison to CLUSTER ;-)). I'd\n> settle for a note in the CLUSTER docs that recommends a subsequent\n> ANALYZE --- this seems no different from recommending ANALYZE after bulk\n> data load or other major update of a table.\n\nOK. I am sure it is not obvious to people to ANALYZE because the data\nin their table hasn't changed, just the ordering.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 3 Aug 2002 23:22:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered"
},
{
"msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> On Sat, 3 Aug 2002, Tom Lane wrote:\n>> AFAICT you're assuming that the table is *exactly* ordered by the\n>> clustered attribute. While this is true at the instant CLUSTER\n>> completes, the exact ordering will be destroyed by the first insert or\n>> update :-(. I can't see much value in creating a whole new scan type\n\n> Sorry, I meant to say that heap_insert() etc would need to set\n> indisclustered to false.\n\n<<itch>> You could do that, but only if you are prepared to invent\na mechanism that will instantly invalidate any existing query plans\nthat assume the clustered ordering is good.\n\nUp to now we've only allowed the planner to make decisions that impact\nperformace, not correctness of the result. I'm uncomfortable with the\nidea that a \"clusterscan\" plan could silently return wrong answers after\nsomeone else updates the table and doesn't tell us they did.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 03 Aug 2002 23:37:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered "
},
{
"msg_contents": "Gavin Sherry wrote:\n\n>On Sat, 3 Aug 2002, Bruce Momjian wrote:\n>\n>>Gavin Sherry wrote:\n>>\n>>>Hi all,\n>>>\n>>>It occured to me on the plane home that now that CLUSTER is fixed we may\n>>>be able to put pg_index.indisclustered to use. If CLUSTER was to set\n>>>indisclustered to true when it clusters a heap according to the given\n>>>index, we could speed up sequantial scans. There are two possible ways.\n>>>\n>>>1) Planner determines that a seqscan is appropriate *and* the retrieval is\n>>>qualified by the key(s) of one of the relation's indexes\n>>>2) Planner determines that the relation is clustered on disk according to\n>>>the index over the key(s) used to qualify the retrieval\n>>>3) Planner sets an appropriate nodeTag for the retrieval (SeqScanCluster?)\n>>>4) ExecProcNode() calls some new scan routine, ExecSeqScanCluster() ?\n>>>5) ExecSeqScanCluster() calls ExecScan() with a new ExecScanAccessMtd (ie,\n>>>different from SeqNext) called SeqClusterNext\n>>>6) SeqClusterNext() has all the heapgettup() logic with two\n>>>exceptions: a) we find the first tuple more intelligently (instead of\n>>>scanning from the first page) b) if we have found tuple(s) matching the\n>>>ScanKey when we encounter an non-matching tuple (via\n>>>HeapTupleSatisfies() ?) we return a NULL'ed out tuple, terminating the\n>>>scan\n>>>\n>>Gavin, is that a big win compared to just using the index and looping\n>>through the entries, knowing that the index matches are on the same\n>>page, and the heap matches are on the same page.\n>>\n>\n>Bruce,\n>\n>It would cut out the index over head. Besides at (1) (above) we would have\n>determined that an index scan was too expensive and we would be using a\n>SeqScan instead. 
This would just be faster, since a) we would locate the\n>tuples more intelligently b) we wouldn't need to scan the whole heap once\n>we'd found all tuples matching the scan key.\n>\n>Gavin\n>\n\n\nGavin and Bruce,\n\nI am not so sure index access in these cases is such an overhead - since \nthe clustered nature of the table means that many index elements will \npoint to table data that is located in a few pages.\n\n\nOk, this change would save you the initial access of the index structure \nitself - but isn't the usual killer for indexes the \"thrashing\" that \nhappens when the \"pointed to\" table data is spread over many pages?\n\nI did some tests on this a while ago (7.1 era) and discovered that for \na clustered table a sequential scan did not start to win until the \ntarget dataset was ~30% of the table itself.\nWhile the suggested change would of course mean that seq scans would do \nmuch better than they did in my tests, it would be interesting to see \nsome timings...\n\n\nregards\n\nMark\n\n\n",
"msg_date": "Sun, 04 Aug 2002 15:39:10 +1200",
"msg_from": "mark Kirkwood <markir@slithery.org>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered"
},
{
"msg_contents": "On Sun, 4 Aug 2002, mark Kirkwood wrote:\n\n> Ok, this change would save you the initial access of the index\n> structure itself - but isnt the usual killer for indexes is the\n> \"thrashing\" that happens when the \"pointed to\" table data is spread\n> over a many pages.\n\nYeah, no kidding on this one. I've reduced queries from 75 seconds\nto 0.6 seconds by clustering on the appropriate field.\n\nBut after doing some benchmarking of various sorts of random reads\nand writes, it occurred to me that there might be optimizations\nthat could help a lot with this sort of thing. What if, when we've\ngot an index block with a bunch of entries, instead of doing the\nreads in the order of the entries, we do them in the order of the\nblocks the entries point to? That would introduce a certain amount\nof \"sequentialness\" to the reads that the OS is not capable of\nintroducing (since it can't reschedule the reads you're doing, the\nway it could reschedule, say, random writes).\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 7 Aug 2002 11:31:32 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered"
},
{
"msg_contents": "On Wed, 7 Aug 2002, Curt Sampson wrote:\n\n> But after doing some benchmarking of various sorts of random reads\n> and writes, it occurred to me that there might be optimizations\n> that could help a lot with this sort of thing. What if, when we've\n> got an index block with a bunch of entries, instead of doing the\n> reads in the order of the entries, we do them in the order of the\n> blocks the entries point to? That would introduce a certain amount\n> of \"sequentialness\" to the reads that the OS is not capable of\n> introducing (since it can't reschedule the reads you're doing, the\n> way it could reschedule, say, random writes).\n\nThis sounds more or less like the method employed by Firebird as described\nby Ann Douglas to Tom at OSCON (correct me if I get this wrong).\n\nBasically, firebird populates a bitmap with entries the scan is interested\nin. The bitmap is populated in page order so that all entries on the same\nheap page can be fetched at once.\n\nThis is totally different to the way postgres does things and would\nrequire significant modification to the index access methods.\n\nGavin\n\n",
"msg_date": "Wed, 7 Aug 2002 13:07:38 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": true,
"msg_subject": "Re: CLUSTER and indisclustered"
},
{
"msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> But after doing some benchmarking of various sorts of random reads\n> and writes, it occurred to me that there might be optimizations\n> that could help a lot with this sort of thing. What if, when we've\n> got an index block with a bunch of entries, instead of doing the\n> reads in the order of the entries, we do them in the order of the\n> blocks the entries point to?\n\nI thought to myself \"didn't I just post something about that?\"\nand then realized it was on a different mailing list. Here ya go\n(and no, this is not the first time around on this list either...)\n\n\nI am currently thinking that bitmap indexes per se are not all that\ninteresting. What does interest me is bitmapped index lookup, which\ncame back into mind after hearing Ann Harrison describe how FireBird/\nInterBase does it.\n\nThe idea is that you don't scan the index and base table concurrently\nas we presently do it. Instead, you scan the index and make a list\nof the TIDs of the table tuples you need to visit. This list can\nbe conveniently represented as a sparse bitmap. After you've finished\nlooking at the index, you visit all the required table tuples *in\nphysical order* using the bitmap. This eliminates multiple fetches\nof the same heap page, and can possibly let you get some win from\nsequential access.\n\nOnce you have built this mechanism, you can then move on to using\nmultiple indexes in interesting ways: you can do several indexscans\nin one query and then AND or OR their bitmaps before doing the heap\nscan. This would allow, for example, \"WHERE a = foo and b = bar\"\nto be handled by ANDing results from separate indexes on the a and b\ncolumns, rather than having to choose only one index to use as we do\nnow.\n\nSome thoughts about implementation: FireBird's implementation seems\nto depend on an assumption about a fixed number of tuple pointers\nper page. 
We don't have that, but we could probably get away with\njust allocating BLCKSZ/sizeof(HeapTupleHeaderData) bits per page.\nAlso, the main downside of this approach is that the bitmap could\nget large --- but you could have some logic that causes you to fall\nback to plain sequential scan if you get too many index hits. (It's\ninteresting to think of this as lossy compression of the bitmap...\nwhich leads to the idea of only being fuzzy in limited areas of the\nbitmap, rather than losing all the information you have.)\n\nA possibly nasty issue is that lazy VACUUM has some assumptions in it\nabout indexscans holding pins on index pages --- that's what prevents\nit from removing heap tuples that a concurrent indexscan is just about\nto visit. It might be that there is no problem: even if lazy VACUUM\nremoves a heap tuple and someone else then installs a new tuple in that\nsame TID slot, you should be okay because the new tuple is too new to\npass your visibility test. But I'm not convinced this is safe.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Aug 2002 00:41:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered "
},
{
"msg_contents": "On Wed, 2002-08-07 at 10:12, Tom Lane wrote:\n> Curt Sampson <cjs@cynic.net> writes:\n> > On Wed, 7 Aug 2002, Tom Lane wrote:\n> >> Also, the main downside of this approach is that the bitmap could\n> >> get large --- but you could have some logic that causes you to fall\n> >> back to plain sequential scan if you get too many index hits.\n> \n> > Well, what I was thinking of, should the list of TIDs to fetch get too\n> > long, was just to break it down in to chunks.\n> \n> But then you lose the possibility of combining multiple indexes through\n> bitmap AND/OR steps, which seems quite interesting to me. If you've\n> visited only a part of each index then you can't apply that concept.\n\nWhen the tuples are small relative to pagesize, you may get some\n\"compression\" by saving just pages and not the actual tids in the the\nbitmap.\n\n-------------\nHannu\n",
"msg_date": "07 Aug 2002 09:46:29 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered"
},
{
"msg_contents": "On Wed, 7 Aug 2002, Tom Lane wrote:\n\n> I thought to myself \"didn't I just post something about that?\"\n> and then realized it was on a different mailing list. Here ya go\n> (and no, this is not the first time around on this list either...)\n\nWow. I'm glad to see you looking at this, because this feature would so\n*so* much for the performance of some of my queries, and really, really\nimpress my \"billion-row-database\" client.\n\n> The idea is that you don't scan the index and base table concurrently\n> as we presently do it. Instead, you scan the index and make a list\n> of the TIDs of the table tuples you need to visit.\n\nRight.\n\n> Also, the main downside of this approach is that the bitmap could\n> get large --- but you could have some logic that causes you to fall\n> back to plain sequential scan if you get too many index hits.\n\nWell, what I was thinking of, should the list of TIDs to fetch get too\nlong, was just to break it down in to chunks. If you want to limit to,\nsay, 1000 TIDs, and your index has 3000, just do the first 1000, then\nthe next 1000, then the last 1000. This would still result in much less\ndisk head movement and speed the query immensely.\n\n(BTW, I have verified this emperically during testing of random read vs.\nrandom write on a RAID controller. The writes were 5-10 times faster\nthan the reads because the controller was caching a number of writes and\nthen doing them in the best possible order, whereas the reads had to be\nsatisfied in the order they were submitted to the controller.)\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 7 Aug 2002 13:55:41 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered "
},
{
"msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> On Wed, 7 Aug 2002, Tom Lane wrote:\n>> Also, the main downside of this approach is that the bitmap could\n>> get large --- but you could have some logic that causes you to fall\n>> back to plain sequential scan if you get too many index hits.\n\n> Well, what I was thinking of, should the list of TIDs to fetch get too\n> long, was just to break it down in to chunks.\n\nBut then you lose the possibility of combining multiple indexes through\nbitmap AND/OR steps, which seems quite interesting to me. If you've\nvisited only a part of each index then you can't apply that concept.\n\nAnother point to keep in mind is that the bigger the bitmap gets, the\nless useful an indexscan is, by definition --- sooner or later you might\nas well fall back to a seqscan. So the idea of lossy compression of a\nlarge bitmap seems really ideal to me. In principle you could seqscan\nthe parts of the table where matching tuples are thick on the ground,\nand indexscan the parts where they ain't. Maybe this seems natural\nto me as an old JPEG campaigner, but if you don't see the logic I\nrecommend thinking about it a little ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Aug 2002 01:12:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered "
},
{
"msg_contents": "On Wed, 7 Aug 2002, Tom Lane wrote:\n\n> But then you lose the possibility of combining multiple indexes through\n> bitmap AND/OR steps, which seems quite interesting to me. If you've\n> visited only a part of each index then you can't apply that concept.\n\nRight. It'd be a shame to lose that, but a little is better than nothing\nat all, if one ends up being faced with that decision.\n\n> Another point to keep in mind is that the bigger the bitmap gets, the\n> less useful an indexscan is, by definition --- sooner or later you might\n> as well fall back to a seqscan.\n\nWell, yes, so long as you chose the correct values of \"big.\" I'd want\nthis to be able to optimize queries against a two billion row table\nabout 150 GB in size. And that might even get bigger in a few years.\n\n> Maybe this seems natural\n> to me as an old JPEG campaigner, but if you don't see the logic I\n> recommend thinking about it a little ...\n\nWell, photos are certainly not random, but database tables may be\nin essentially random order far more often. How much that applies,\nI'm not sure, since I don't really know a lot about this stuff.\nI'll take your word for it on what's best there.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 7 Aug 2002 14:30:02 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered "
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Now I remembered my original preference for page bitmaps (vs. tuple\n> bitmaps): one can't actually make good use of a bitmap of tuples because\n> there is no fixed tuples/page ratio and thus no way to quickly go from\n> bit position to actual tuple. You mention the same problem but propose a\n> different solution.\n\n> Using page bitmap, we will at least avoid fetching any unneeded pages -\n> essentially we will have a sequential scan over possibly interesting\n> pages.\n\nRight. One form of the \"lossy compression\" idea I suggested is to\nswitch from a per-tuple bitmap to a per-page bitmap once the bitmap gets\ntoo large to work with. Again, one could imagine doing that only in\ndenser areas of the bitmap.\n\n> But I guess that CLUSTER support for INSERT will not be touched for 7.3\n> as will real bitmap indexes ;)\n\nAll of this is far-future work I think. Adding a new scan type to the\nexecutor would probably be pretty localized, but the ramifications in\nthe planner could be extensive --- especially if you want to do plans\ninvolving ANDed or ORed bitmaps.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Aug 2002 09:26:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered "
},
{
"msg_contents": "On Wed, 2002-08-07 at 06:46, Hannu Krosing wrote:\n> On Wed, 2002-08-07 at 10:12, Tom Lane wrote:\n> > Curt Sampson <cjs@cynic.net> writes:\n> > > On Wed, 7 Aug 2002, Tom Lane wrote:\n> > >> Also, the main downside of this approach is that the bitmap could\n> > >> get large --- but you could have some logic that causes you to fall\n> > >> back to plain sequential scan if you get too many index hits.\n> > \n> > > Well, what I was thinking of, should the list of TIDs to fetch get too\n> > > long, was just to break it down in to chunks.\n> > \n> > But then you lose the possibility of combining multiple indexes through\n> > bitmap AND/OR steps, which seems quite interesting to me. If you've\n> > visited only a part of each index then you can't apply that concept.\n> \n> When the tuples are small relative to pagesize, you may get some\n> \"compression\" by saving just pages and not the actual tids in the the\n> bitmap.\n\nNow I remembered my original preference for page bitmaps (vs. tuple\nbitmaps): one can't actually make good use of a bitmap of tuples because\nthere is no fixed tuples/page ratio and thus no way to quickly go from\nbit position to actual tuple. You mention the same problem but propose a\ndifferent solution.\n\nUsing page bitmap, we will at least avoid fetching any unneeded pages -\nessentially we will have a sequential scan over possibly interesting\npages.\n\nIf we were to use page-bitmap index for something with only a few values\nlike booleans, some insert-time local clustering should be useful, so\nthat TRUEs and FALSEs end up on different pages.\n\nBut I guess that CLUSTER support for INSERT will not be touched for 7.3\nas will real bitmap indexes ;)\n\n---------------\nHannu\n\n",
"msg_date": "07 Aug 2002 15:29:26 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> On Wed, 2002-08-07 at 15:26, Tom Lane wrote:\n>> Right. One form of the \"lossy compression\" idea I suggested is to\n>> switch from a per-tuple bitmap to a per-page bitmap once the bitmap gets\n>> too large to work with. \n\n> If it is a real bitmap, should it not be easyeast to allocate at the\n> start ?\n\nBut it isn't a \"real bitmap\". That would be a really poor\nimplementation, both for space and speed --- do you really want to scan\nover a couple of megs of zeroes to find the few one-bits you care about,\nin the typical case? \"Bitmap\" is a convenient term because it describes\nthe abstract behavior we want, but the actual data structure will\nprobably be nontrivial. If I recall Ann's description correctly,\nFirebird's implementation uses run length coding of some kind (anyone\ncare to dig in their source and get all the details?). If we tried\nanything in the way of lossy compression then there'd be even more stuff\nlurking under the hood.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Aug 2002 10:26:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered "
},
{
"msg_contents": "On Wed, 2002-08-07 at 15:26, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > Now I remembered my original preference for page bitmaps (vs. tuple\n> > bitmaps): one can't actually make good use of a bitmap of tuples because\n> > there is no fixed tuples/page ratio and thus no way to quickly go from\n> > bit position to actual tuple. You mention the same problem but propose a\n> > different solution.\n> \n> > Using page bitmap, we will at least avoid fetching any unneeded pages -\n> > essentially we will have a sequential scan over possibly interesting\n> > pages.\n> \n> Right. One form of the \"lossy compression\" idea I suggested is to\n> switch from a per-tuple bitmap to a per-page bitmap once the bitmap gets\n> too large to work with. \n\nIf it is a real bitmap, should it not be easyeast to allocate at the\nstart ?\n\na page bitmap for a 100 000 000 tuple table with 10 tuples/page will be\nsized 10000000/8 = 1.25 MB, which does not look too big for me for that\namount of data (the data table itself would occupy 80 GB).\n\nEven having the bitmap of 16 bits/page (with the bits 0-14 meaning\ntuples 0-14 and bit 15 meaning \"seq scan the rest of page\") would\nconsume just 20 MB of _local_ memory, and would be quite justifyiable\nfor a query on a table that large.\n\nFor a real bitmap index the tuples-per-page should be a user-supplied\ntuning parameter.\n\n> Again, one could imagine doing that only in denser areas of the bitmap.\n\nI would hardly call the resulting structure \"a bitmap\" ;)\n\nAnd I'm not sure the overhead for a more complex structure would win us\nany additional performance for most cases.\n\n> > But I guess that CLUSTER support for INSERT will not be touched for 7.3\n> > as will real bitmap indexes ;)\n> \n> All of this is far-future work I think. 
\n\nAfter we do that we will probably be able claim support for\n\"datawarehousing\" ;)\n\n> Adding a new scan type to the\n> executor would probably be pretty localized, but the ramifications in\n> the planner could be extensive --- especially if you want to do plans\n> involving ANDed or ORed bitmaps.\n\nAlso going to \"smart inserter\" which can do local clustering on sets of\nreal bitmap indexes for INSERTS (and INSERT side of UPDATE) would\nprobably be a major change from our current \"stupid inserter\" ;)\n\nThis will not be needed for bitmap resolution higher than 1bit/page but\ndefault local clustering on bitmap indexes will probably buy us some\nextra performance. by avoiding data page fetches when such indexes are\nused.\n\nAN anyway the support for INSERT being aware of clustering will probably\ncome up sometime.\n\n------------\nHannu\n\n\n",
"msg_date": "07 Aug 2002 17:13:54 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered"
},
{
"msg_contents": "On Wed, 2002-08-07 at 04:31, Curt Sampson wrote:\n> On Sun, 4 Aug 2002, mark Kirkwood wrote:\n> \n> > Ok, this change would save you the initial access of the index\n> > structure itself - but isnt the usual killer for indexes is the\n> > \"thrashing\" that happens when the \"pointed to\" table data is spread\n> > over a many pages.\n> \n> Yeah, no kidding on this one. I've reduced queries from 75 seconds\n> to 0.6 seconds by clustering on the appropriate field.\n> \n> But after doing some benchmarking of various sorts of random reads\n> and writes, it occurred to me that there might be optimizations\n> that could help a lot with this sort of thing. What if, when we've\n> got an index block with a bunch of entries, instead of doing the\n> reads in the order of the entries, we do them in the order of the\n> blocks the entries point to? That would introduce a certain amount\n> of \"sequentialness\" to the reads that the OS is not capable of\n> introducing (since it can't reschedule the reads you're doing, the\n> way it could reschedule, say, random writes).\n>\n\nI guess this could be solved elegantly using threading - one thread\nscans index and pushes tids into a btree or some other sorted structure,\nwhile other thread loops continuously (or \"elevatorly\" back and forth)\nover that structure in tuple order and does the actual data reads. \n\nThis would have the added benefit of better utilising multiprocessor\ncomputers.\n\n---------------\nHannu\n\n",
"msg_date": "07 Aug 2002 17:22:15 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered"
},
{
"msg_contents": "On Wed, 2002-08-07 at 16:26, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > On Wed, 2002-08-07 at 15:26, Tom Lane wrote:\n> >> Right. One form of the \"lossy compression\" idea I suggested is to\n> >> switch from a per-tuple bitmap to a per-page bitmap once the bitmap gets\n> >> too large to work with. \n> \n> > If it is a real bitmap, should it not be easyeast to allocate at the\n> > start ?\n> \n> But it isn't a \"real bitmap\". That would be a really poor\n> implementation, both for space and speed --- do you really want to scan\n> over a couple of megs of zeroes to find the few one-bits you care about,\n> in the typical case?\n\nI guess that depends on data. The typical case should be somthing the\nstats process will find out so the optimiser can use it\n\nThe bitmap must be less than 1/48 (size of TID) full for best\nuncompressed \"active-tid-list\" to be smaller than plain bitmap. If there\nwere some structure above list then this ratio would be even higher.\n\nI have had good experience using \"compressed delta lists\", which will\nscale well ofer the whole \"fullness\" spectrum of bitmap, but this is for\nstorage, not for initial constructing of lists.\n\n> \"Bitmap\" is a convenient term because it describes\n> the abstract behavior we want, but the actual data structure will\n> probably be nontrivial. If I recall Ann's description correctly,\n> Firebird's implementation uses run length coding of some kind (anyone\n> care to dig in their source and get all the details?).\n\nPlain RLL is probably a good way to store it and for merging two or more\nbitmaps, but not as good for constructing it bit-by-bit. 
I guess the\nmost effective structure for updating is often still a plain bitmap\n(maybe not if it is very sparse and all of it does not fit in cache),\nfollowed by some kind of balanced tree (maybe rb-tree).\n\nIf the bitmap is relatively full then the plain bitmap is almost always\nthe most effective to update.\n\n> If we tried anything in the way of lossy compression then there'd\n> be even more stuff lurking under the hood.\n\nHaving three-valued (0,1,maybe) RLL-encoded \"tritmap\" would be a good\nway to represent lossy compression, and it would also be quite\nstraightforward to merge two of these using AND or OR. It may even be\npossible to easily construct it using a fixed-length b-tree and going\nfrom 1 to \"maybe\" for nodes that get too dense.\n\n---------------\nHannu\n\n",
"msg_date": "07 Aug 2002 18:24:30 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered"
},
{
"msg_contents": "Tom Lane dijo: \n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom, should we be updating that flag after we CLUSTER instead of\n> > requiring an ANALYZE after the CLUSTER?\n> \n> Could do that I suppose, but I'm not super-excited about it. ANALYZE is\n> quite cheap these days (especially in comparison to CLUSTER ;-)). I'd\n> settle for a note in the CLUSTER docs that recommends a subsequent\n> ANALYZE --- this seems no different from recommending ANALYZE after bulk\n> data load or other major update of a table.\n\nWhat if I [try to] extend the grammar to support an additional ANALYZE\nin CLUSTER, so that it analyzes the table automatically? Say\n\nCLUSTER <index> ON <table> [ANALYZE];\n\nOr maybe just do an analyze of the table automatically after the\nCLUSTERing.\n\nWhat does everybody think?\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Para tener mas hay que desear menos\"\n\n",
"msg_date": "Thu, 8 Aug 2002 20:57:23 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered "
},
{
"msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> What if I [try to] extend the grammar to support an additional ANALYZE\n> in CLUSTER, so that it analyzes the table automatically?\n\nI don't like this -- it seems like bloat. What's the advantage of\n\nCLUSTER foo ON bar ANALYZE;\n\nover\n\nCLUSTER foo ON bar;\nANALYZE;\n\n> Or maybe just do an analyze of the table automatically after the\n> CLUSTERing.\n\nHmmm... I don't really see the problem with adding a note in the docs\nsuggesting that users following a CLUSTER with an ANALYZE (of course,\nthat assumes that the CLUSTER will significantly change the ordering\nof the data in the table, which isn't always the case -- which is\nanother reason why make this automatic seems unwarranted, IMHO). It\nseems like you're looking for a solution to a non-existent problem.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "08 Aug 2002 21:04:03 -0400",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered"
},
{
"msg_contents": "Neil Conway dijo: \n\n> Alvaro Herrera <alvherre@atentus.com> writes:\n> > What if I [try to] extend the grammar to support an additional ANALYZE\n> > in CLUSTER, so that it analyzes the table automatically?\n> \n> I don't like this -- it seems like bloat.\n\nMaybe you are right.\n\n\n> > Or maybe just do an analyze of the table automatically after the\n> > CLUSTERing.\n> \n> Hmmm... I don't really see the problem with adding a note in the docs\n> suggesting that users following a CLUSTER with an ANALYZE (...).\n\nANALYZE is an inexpensive operation (compared to CLUSTER, anyway), so it\ncan't hurt to have it done automatically.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Linux transform� mi computadora, de una `m�quina para hacer cosas',\nen un aparato realmente entretenido, sobre el cual cada d�a aprendo\nalgo nuevo\" (Jaime Salinas)\n\n\n",
"msg_date": "Thu, 8 Aug 2002 22:03:02 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered"
},
{
"msg_contents": "> > > Or maybe just do an analyze of the table automatically after the\n> > > CLUSTERing.\n> >\n> > Hmmm... I don't really see the problem with adding a note in the docs\n> > suggesting that users following a CLUSTER with an ANALYZE (...).\n>\n> ANALYZE is an inexpensive operation (compared to CLUSTER, anyway), so it\n> can't hurt to have it done automatically.\n\nWell we have previously had discussions on the topic of adding analyze to\nthe end of dumps, etc. and the result has always been in favour of keeping\nthe command set orthogonal and not doing an automatic analyze...\n\nChris\n\n",
"msg_date": "Fri, 9 Aug 2002 10:15:45 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered"
},
{
"msg_contents": "Christopher Kings-Lynne dijo: \n\n> > > > Or maybe just do an analyze of the table automatically after the\n> > > > CLUSTERing.\n> \n> Well we have previously had discussions on the topic of adding analyze to\n> the end of dumps, etc. and the result has always been in favour of keeping\n> the command set orthogonal and not doing an automatic analyze...\n\nOh. Sorry for the noise.\n\nI'm trying to look at other things in the TODO so I stop pestering about\nCLUSTER.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Pensar que el espectro que vemos es ilusorio no lo despoja de espanto,\ns�lo le suma el nuevo terror de la locura\" (Perelandra, CSLewis)\n\n",
"msg_date": "Thu, 8 Aug 2002 22:21:08 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered"
},
{
"msg_contents": "> > Well we have previously had discussions on the topic of adding\n> analyze to\n> > the end of dumps, etc. and the result has always been in favour\n> of keeping\n> > the command set orthogonal and not doing an automatic analyze...\n>\n> Oh. Sorry for the noise.\n>\n> I'm trying to look at other things in the TODO so I stop pestering about\n> CLUSTER.\n\nAll I can say is - thanks for fixing CLUSTER. As soon as we upgrade to 7.3\nI'm going on a CLUSTERing spree :)\n\nChris\n\n",
"msg_date": "Fri, 9 Aug 2002 10:54:33 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered"
},
{
"msg_contents": "If you're looking for something very useful to work on, see if Gavin\nSherry(?) can post his old CREATE OR REPLACE VIEW code. I'm pretty sure he\n(or someone) said that he had an old patch, that needed to be synced with\nHEAD... This functionality is pretty essential for 7.3...\n\nChris\n\n> -----Original Message-----\n> From: Alvaro Herrera [mailto:alvherre@atentus.com]\n> Sent: Friday, 9 August 2002 10:21 AM\n> To: Christopher Kings-Lynne\n> Cc: Neil Conway; Tom Lane; Bruce Momjian; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] CLUSTER and indisclustered\n>\n>\n> Christopher Kings-Lynne dijo:\n>\n> > > > > Or maybe just do an analyze of the table automatically after the\n> > > > > CLUSTERing.\n> >\n> > Well we have previously had discussions on the topic of adding\n> analyze to\n> > the end of dumps, etc. and the result has always been in favour\n> of keeping\n> > the command set orthogonal and not doing an automatic analyze...\n>\n> Oh. Sorry for the noise.\n>\n> I'm trying to look at other things in the TODO so I stop pestering about\n> CLUSTER.\n>\n> --\n> Alvaro Herrera (<alvherre[a]atentus.com>)\n> \"Pensar que el espectro que vemos es ilusorio no lo despoja de espanto,\n> s�lo le suma el nuevo terror de la locura\" (Perelandra, CSLewis)\n>\n\n",
"msg_date": "Fri, 9 Aug 2002 10:57:13 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered"
},
{
"msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> Alvaro Herrera <alvherre@atentus.com> writes:\n>> What if I [try to] extend the grammar to support an additional ANALYZE\n>> in CLUSTER, so that it analyzes the table automatically?\n\n> I don't like this -- it seems like bloat.\n\nMy reaction exactly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Aug 2002 23:52:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered "
},
{
"msg_contents": "\nI wanted to comment on this bitmapped index discussion because I am\nhearing a lot about star joins, data warehousing, and bitmapped indexes\nrecently.\n\nIt seems we have several uses for bitmapped indexes:\n\n\tDo index lookups in sequential heap order\n\tAllow joining of bitmapped indexes to construct arbitrary indexes\n\nThere is a web page about \"star joins\" used a lot in data warehousing,\nwhere you don't know what queries are going to be required and what\nindexes to create:\n\n\thttp://www.dbdomain.com/a100397.htm\n\nThey show some sample queries, which is good. Here is some\ninteresting text:\n\n\tStar Transformation\n\n\tIf there are bitmap indexes on SALES_REP_ID, PRODUCT_ID, and\n\tDEPARTMENT_ID in the SALES table, then Oracle can resolve the query\n\tusing merges of the bitmap indexes.\n\t\n\tBecause Oracle can efficiently merge multiple bitmap indexes, you can \n\tcreate a single bitmap index on each of the foreign-key columns in the\n\tfact table rather than on every possible combination of columns. This\n\tlets you support all possible combinations of dimensions without\n\tcreating an unreasonable number of indexes.\n\nAdded to TODO:\n\t\n\t* Use bitmaps to fetch heap pages in sequential order [performance] \n\t* Use bitmaps to combine existing indexes [performance]\n\nand I will add some of these emails to TODO.detail/performance.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Curt Sampson <cjs@cynic.net> writes:\n> > But after doing some benchmarking of various sorts of random reads\n> > and writes, it occurred to me that there might be optimizations\n> > that could help a lot with this sort of thing. 
What if, when we've\n> > got an index block with a bunch of entries, instead of doing the\n> > reads in the order of the entries, we do them in the order of the\n> > blocks the entries point to?\n> \n> I thought to myself \"didn't I just post something about that?\"\n> and then realized it was on a different mailing list. Here ya go\n> (and no, this is not the first time around on this list either...)\n> \n> \n> I am currently thinking that bitmap indexes per se are not all that\n> interesting. What does interest me is bitmapped index lookup, which\n> came back into mind after hearing Ann Harrison describe how FireBird/\n> InterBase does it.\n> \n> The idea is that you don't scan the index and base table concurrently\n> as we presently do it. Instead, you scan the index and make a list\n> of the TIDs of the table tuples you need to visit. This list can\n> be conveniently represented as a sparse bitmap. After you've finished\n> looking at the index, you visit all the required table tuples *in\n> physical order* using the bitmap. This eliminates multiple fetches\n> of the same heap page, and can possibly let you get some win from\n> sequential access.\n> \n> Once you have built this mechanism, you can then move on to using\n> multiple indexes in interesting ways: you can do several indexscans\n> in one query and then AND or OR their bitmaps before doing the heap\n> scan. This would allow, for example, \"WHERE a = foo and b = bar\"\n> to be handled by ANDing results from separate indexes on the a and b\n> columns, rather than having to choose only one index to use as we do\n> now.\n> \n> Some thoughts about implementation: FireBird's implementation seems\n> to depend on an assumption about a fixed number of tuple pointers\n> per page. 
We don't have that, but we could probably get away with\n> just allocating BLCKSZ/sizeof(HeapTupleHeaderData) bits per page.\n> Also, the main downside of this approach is that the bitmap could\n> get large --- but you could have some logic that causes you to fall\n> back to plain sequential scan if you get too many index hits. (It's\n> interesting to think of this as lossy compression of the bitmap...\n> which leads to the idea of only being fuzzy in limited areas of the\n> bitmap, rather than losing all the information you have.)\n> \n> A possibly nasty issue is that lazy VACUUM has some assumptions in it\n> about indexscans holding pins on index pages --- that's what prevents\n> it from removing heap tuples that a concurrent indexscan is just about\n> to visit. It might be that there is no problem: even if lazy VACUUM\n> removes a heap tuple and someone else then installs a new tuple in that\n> same TID slot, you should be okay because the new tuple is too new to\n> pass your visibility test. But I'm not convinced this is safe.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 13 Aug 2002 00:25:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered"
},
{
"msg_contents": "On Tue, 2002-08-13 at 09:25, Bruce Momjian wrote:\n> \n> There is a web page about \"star joins\" used a lot in data warehousing,\n> where you don't know what queries are going to be required and what\n> indexes to create:\n> \n> \thttp://www.dbdomain.com/a100397.htm\n> \n> They show some sample queries, which is good. Here is some\n> interesting text:\n> \n> \tStar Transformation\n> \n> \tIf there are bitmap indexes on SALES_REP_ID, PRODUCT_ID, and\n> \tDEPARTMENT_ID in the SALES table, then Oracle can resolve the query\n> \tusing merges of the bitmap indexes.\n> \t\n> \tBecause Oracle can efficiently merge multiple bitmap indexes, you can \n> \tcreate a single bitmap index on each of the foreign-key columns in the\n> \tfact table rather than on every possible combination of columns.\n\nAnother way to achive the similar result would be using segmented hash\nindexes, where each column maps directly to some part of hash value.\n\n> This\n> \tlets you support all possible combinations of dimensions without\n> \tcreating an unreasonable number of indexes.\n\n-----------\nHannu\n\n",
"msg_date": "13 Aug 2002 09:27:03 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER and indisclustered"
}
] |
[
{
"msg_contents": "CVSROOT:\t/cvsroot\nModule name:\tpgsql-server\nChanges by:\tthomas@postgresql.org\t02/08/04 02:26:38\n\nModified files:\n\tsrc/backend/tcop: postgres.c \n\tsrc/backend/bootstrap: bootstrap.c \n\tsrc/backend/postmaster: postmaster.c \n\tsrc/bin/initdb : initdb.sh \n\tsrc/bin/pg_ctl : pg_ctl.sh \n\tsrc/include/access: xlog.h \n\nLog message:\n\tImplement WAL log location control using \"-X\" or PGXLOG.\n\n",
"msg_date": "Sun, 4 Aug 2002 02:26:38 -0400 (EDT)",
"msg_from": "thomas@postgresql.org (Thomas Lockhart)",
"msg_from_op": true,
"msg_subject": "pgsql-server/src backend/tcop/postgres.c backe ..."
},
{
"msg_contents": "Thomas Lockhart wrote:\n> CVSROOT:\t/cvsroot\n> Module name:\tpgsql-server\n> Changes by:\tthomas@postgresql.org\t02/08/04 02:26:38\n> \n> Modified files:\n> \tsrc/backend/tcop: postgres.c \n> \tsrc/backend/bootstrap: bootstrap.c \n> \tsrc/backend/postmaster: postmaster.c \n> \tsrc/bin/initdb : initdb.sh \n> \tsrc/bin/pg_ctl : pg_ctl.sh \n> \tsrc/include/access: xlog.h \n> \n> Log message:\n> \tImplement WAL log location control using \"-X\" or PGXLOG.\n\nWoh, I didn't think we had agreement on this. This populates the 'X'\nall over the system (postgres, postmaster, initdb, pg_ctl), while the\nsimple solution would be to add the flag only to initdb and use a\nsymlink from /data. I also think it is less error-prone because you\ncan't accidentally point to the wrong XLOG directory. In fact, you\nalmost have to use an environment variable unles you plan to specify -X\nfor all the commands. In my mind, PGDATA should take care of the whole\nthing for pointing to your data.\n\nIf you want to do it this way, I request a vote.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 4 Aug 2002 10:21:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.c backe ..."
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Thomas Lockhart wrote:\n>> Implement WAL log location control using \"-X\" or PGXLOG.\n\n> Woh, I didn't think we had agreement on this. This populates the 'X'\n> all over the system (postgres, postmaster, initdb, pg_ctl), while the\n> simple solution would be to add the flag only to initdb and use a\n> symlink from /data. I also think it is less error-prone because you\n> can't accidentally point to the wrong XLOG directory. In fact, you\n> almost have to use an environment variable unles you plan to specify -X\n> for all the commands. In my mind, PGDATA should take care of the whole\n> thing for pointing to your data.\n\n> If you want to do it this way, I request a vote.\n\nI have to vote a strong NO on this. What the patch essentially does is\nto decouple specification of the data directory ($PGDATA or -D) from the\nxlog directory ($PGXLOG or -X). This might seem nice and clean and\nsymmetrical, but in fact it has no conceivable use except for shooting\nyourself in the foot in a particularly nasty manner. The xlog *cannot*\nbe decoupled from the data directory --- there are pointers in\npg_control and in every single data page that depend on the state of\nxlog. Trying to make them look independent is just a recipe for\nbreaking your database by starting the postmaster with the wrong\ncombination of PGDATA and PGXLOG. And I'm not at all sure it'll be\npossible to recover after you do that: if the postmaster notices the\ndiscrepancy, it is likely to try to fix it by rolling forward from the\nlast checkpoint it can find in the mismatching xlog. Oops :-(\n\nI think the existing mechanism of using a symlink in the data directory\nwhen you want to move xlog is far safer and more reliable. I do not see\nwhat functionality is added by this patch that can possibly justify the\nhazards it creates.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 04 Aug 2002 13:14:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.c backe ... "
},
{
"msg_contents": "\nCan I propose a compromise? Thomas, can we have only initdb honor\nPGXLOG, just like it honors PGDATA? That way, admins can set PGXLOG and\ninitdb will take care of the symlinking for them, and we don't have to\npush PGXLOG/-X down into all the other server parts. How do people like\nthat?\n\nAs far as tablespaces, we have three ways of storing directory path\ninfo: environment variables, in the database, or in symlinks. If we\nchoose the first or last options, XLOG will have to follow that plan. \nIf we choose database, well there is no database during initdb so XLOG\nwould have a different system of specifying the location. I think\nhaving only initdb honor PGXLOG and using a symlink will give us maximum\nflexibility when we decide on table spaces.\n\nActually, as I remember, there was a strong vote that tablespaces where\ngoing to use symlinks for storage, at least in some way.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Thomas Lockhart wrote:\n> >> Implement WAL log location control using \"-X\" or PGXLOG.\n> \n> > Woh, I didn't think we had agreement on this. This populates the 'X'\n> > all over the system (postgres, postmaster, initdb, pg_ctl), while the\n> > simple solution would be to add the flag only to initdb and use a\n> > symlink from /data. I also think it is less error-prone because you\n> > can't accidentally point to the wrong XLOG directory. In fact, you\n> > almost have to use an environment variable unles you plan to specify -X\n> > for all the commands. In my mind, PGDATA should take care of the whole\n> > thing for pointing to your data.\n> \n> > If you want to do it this way, I request a vote.\n> \n> I have to vote a strong NO on this. What the patch essentially does is\n> to decouple specification of the data directory ($PGDATA or -D) from the\n> xlog directory ($PGXLOG or -X). 
This might seem nice and clean and\n> symmetrical, but in fact it has no conceivable use except for shooting\n> yourself in the foot in a particularly nasty manner. The xlog *cannot*\n> be decoupled from the data directory --- there are pointers in\n> pg_control and in every single data page that depend on the state of\n> xlog. Trying to make them look independent is just a recipe for\n> breaking your database by starting the postmaster with the wrong\n> combination of PGDATA and PGXLOG. And I'm not at all sure it'll be\n> possible to recover after you do that: if the postmaster notices the\n> discrepancy, it is likely to try to fix it by rolling forward from the\n> last checkpoint it can find in the mismatching xlog. Oops :-(\n> \n> I think the existing mechanism of using a symlink in the data directory\n> when you want to move xlog is far safer and more reliable. I do not see\n> what functionality is added by this patch that can possibly justify the\n> hazards it creates.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 4 Aug 2002 15:35:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.c"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Can I propose a compromise? Thomas, can we have only initdb honor\n> PGXLOG, just like it honors PGDATA? That way, admins can set PGXLOG and\n> initdb will take care of the symlinking for them, and we don't have to\n> push PGXLOG/-X down into all the other server parts.\n\nThat seems reasonable to me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 04 Aug 2002 17:04:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.c backe ... "
},
{
"msg_contents": "\nThomas, have you commented on the objections to this patch? If so, I\ndidn't see it.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Thomas Lockhart wrote:\n> >> Implement WAL log location control using \"-X\" or PGXLOG.\n> \n> > Woh, I didn't think we had agreement on this. This populates the 'X'\n> > all over the system (postgres, postmaster, initdb, pg_ctl), while the\n> > simple solution would be to add the flag only to initdb and use a\n> > symlink from /data. I also think it is less error-prone because you\n> > can't accidentally point to the wrong XLOG directory. In fact, you\n> > almost have to use an environment variable unles you plan to specify -X\n> > for all the commands. In my mind, PGDATA should take care of the whole\n> > thing for pointing to your data.\n> \n> > If you want to do it this way, I request a vote.\n> \n> I have to vote a strong NO on this. What the patch essentially does is\n> to decouple specification of the data directory ($PGDATA or -D) from the\n> xlog directory ($PGXLOG or -X). This might seem nice and clean and\n> symmetrical, but in fact it has no conceivable use except for shooting\n> yourself in the foot in a particularly nasty manner. The xlog *cannot*\n> be decoupled from the data directory --- there are pointers in\n> pg_control and in every single data page that depend on the state of\n> xlog. Trying to make them look independent is just a recipe for\n> breaking your database by starting the postmaster with the wrong\n> combination of PGDATA and PGXLOG. And I'm not at all sure it'll be\n> possible to recover after you do that: if the postmaster notices the\n> discrepancy, it is likely to try to fix it by rolling forward from the\n> last checkpoint it can find in the mismatching xlog. 
Oops :-(\n> \n> I think the existing mechanism of using a symlink in the data directory\n> when you want to move xlog is far safer and more reliable. I do not see\n> what functionality is added by this patch that can possibly justify the\n> hazards it creates.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Aug 2002 22:39:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.c"
},
{
"msg_contents": "> Thomas, have you commented on the objections to this patch? If so, I\n> didn't see it.\n\nYes, there was quite a long thread on this.\n\n - Thomas\n",
"msg_date": "Wed, 07 Aug 2002 07:30:25 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke "
},
{
"msg_contents": "\nThomas, would you remind me of the concusions because I thought everyone\ninvolved felt that it should be an initdb-only option, but I still see\nit in CVS.\n\n---------------------------------------------------------------------------\n\nThomas Lockhart wrote:\n> > Thomas, have you commented on the objections to this patch? If so, I\n> > didn't see it.\n> \n> Yes, there was quite a long thread on this.\n> \n> - Thomas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 9 Aug 2002 21:28:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "> Thomas, would you remind me of the concusions because I thought everyone\n> involved felt that it should be an initdb-only option, but I still see\n> it in CVS.\n\n?? \"Concussions\" as in brain bruises? ;)\n\nI'm not sure I understand the question. I assume that we are talking\nabout the WAL log location feature I implemented recently. It is an\ninitdb-only option, and defaults to the current behavior *exactly*.\n\nThe new feature is to allow an argument to initdb to locate the WAL file\nto another location. That location can be specified on the command line,\nor through an environment variable. Neither form precludes use of the\nother, and either form can be considered \"best practice\" depending on\nyour opinion of what that is.\n\nThe postmaster also recognizes the command line option and environment\nvariable. The only suggestion I got as an alternative involved soft\nlinks, which is not portable, which is not robust, and which is not used\nanywhere else in the system. If we moved toward relying on soft links\nfor distributing resources we will be moving in the wrong direction for\nmany reasons, some of which I've mentioned previously. GUC parameters\nwere also mentioned as a possibility, and the infrastructure does not\npreclude that at any time.\n\nI don't recall that there were very many folks \"involved\". There were\nseveral opinions, though most were from folks who were not thinking of\nimplementing disk management features. Some opinions dealt with details,\nand some seemed to deal with the wisdom of allowing anything other than\na \"one partition\" model of the database, which is nothing if not short\nsighted. Current default behavior is as first implemented, and the new\nfeature allows locating the WAL logs in another area. 
For the current\nstate of the art, that seems competitive with features found in other\ndatabase products, and an essential step in teaching PostgreSQL to work\nwith very large databases.\n\nI had thought to extend the capabilities to allow resource allocation\nfor individual tables and indices, which has *long* been identified as a\ndesired capability by folks who are managing large systems. It seemed\nreasonable to have done in time for 7.3. I'm rethinking that, not\nbecause it shouldn't happen, but because the process of discussing these\nissues has become so argumentative, divisive, impolite, and unpleasant.\nWhich is a shame imho...\n\n - Thomas\n",
"msg_date": "Fri, 09 Aug 2002 18:54:44 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> > Thomas, would you remind me of the concusions because I thought everyone\n> > involved felt that it should be an initdb-only option, but I still see\n> > it in CVS.\n> \n> ?? \"Concussions\" as in brain bruises? ;)\n\nUh, conclusions. Sorry. New keyboard desk in new house. :-)\n\n> I'm not sure I understand the question. I assume that we are talking\n> about the WAL log location feature I implemented recently. It is an\n> initdb-only option, and defaults to the current behavior *exactly*.\n\nYep. What bothers me is the clutter to the other commands that allow\nXLOG location specification when you would never really want to specify\nit except as part of initdb. I just see those extra flags as\ncruft/confusion.\n\nLook at pg_ctl:\n\n pg_ctl start [-w] [-D DATADIR] [-s] [-X PGXLOG] [-l FILENAME] [-o \"OPTIONS\"]\n\nWhich option doesn't make sense? -X. It is way beyond the\nfunctionality of the command.\n\n\n> The new feature is to allow an argument to initdb to locate the WAL file\n> to another location. That location can be specified on the command line,\n> or through an environment variable. Neither form precludes use of the\n> other, and either form can be considered \"best practice\" depending on\n> your opinion of what that is.\n> \n> The postmaster also recognizes the command line option and environment\n> variable. The only suggestion I got as an alternative involved soft\n> links, which is not portable, which is not robust, and which is not used\n> anywhere else in the system. If we moved toward relying on soft links\n> for distributing resources we will be moving in the wrong direction for\n> many reasons, some of which I've mentioned previously. GUC parameters\n> were also mentioned as a possibility, and the infrastructure does not\n> preclude that at any time.\n\nI don't think anyone agreed with your concerns about symlinks. 
If you\nwant to be careful, do the \"ln -s\" in initdb and exit on failure, and\ntell them not to use -X on that platform, though we use symlinks for\npostmaster/postgres identification, so I know the only OS that doesn't\nsupport symlinks is Netware, only because Netware folks just sent in a\npatch to add a -post flag to work around lack of symlinks. (I have\nasked for clarification from them.)\n\nI actually requested a vote, and got several people who wanted my\ncompromise (PGXLOG or initdb -X flag only), and I didn't see anyone who\nliked the addition of -X into non-initdb commands. Should I have a\nspecific vote? OK, three options:\n\n\t1) -X, PGXLOG in initdb, postmaster, postgres, pg_ctl\n\t2) -X, PGXLOG in initdb only\n\t3) nothing\n\nI remember a number of people liking 2, but we can vote again.\n\n> I don't recall that there were very many folks \"involved\". There were\n> several opinions, though most were from folks who were not thinking of\n> implementing disk management features. Some opinions dealt with details,\n> and some seemed to deal with the wisdom of allowing anything other than\n> a \"one partition\" model of the database, which is nothing if not short\n> sighted. Current default behavior is as first implemented, and the new\n> feature allows locating the WAL logs in another area. For the current\n> state of the art, that seems competitive with features found in other\n> database products, and an essential step in teaching PostgreSQL to work\n> with very large databases.\n> \n> I had thought to extend the capabilities to allow resource allocation\n> for individual tables and indices, which has *long* been identified as a\n> desired capability by folks who are managing large systems. It seemed\n> reasonable to have done in time for 7.3. 
I'm rethinking that, not\n> because it shouldn't happen, but because the process of discussing these\n> issues has become so argumentative, divisive, impolite, and unpleasant.\n> Which is a shame imho...\n\nI clearly want tablespaces, and it would be great for 7.3, and I don't\nthink it is a huge job. However, I think it will require symlinks to be\nusable, and you probably do not, so it may be an issue.\n\nAs for the argumentativeness, we do have folks with some strong\nopinions, and I guess on the PGXLOG issue, I am one of them. Maybe that\nis bad? Are people expressing themselves badly? If so, I would like to\nhear details either on list or privately so I can address them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 9 Aug 2002 22:17:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "On Fri, Aug 09, 2002 at 06:54:44PM -0700, Thomas Lockhart wrote:\n\n> I had thought to extend the capabilities to allow resource allocation\n> for individual tables and indices, which has *long* been identified as a\n> desired capability by folks who are managing large systems. \n\nWithout wishing to pour fuel on any fires, or advocate any\nimplementation, I can say for sure that this is very much a feature\nwe'd love to see.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Mon, 12 Aug 2002 10:31:29 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "\nOK, seeing as no one voted, and only Tom and I objected originally, we\nwill keep the code as Thomas has applied it, namely that PGXLOG/-X is\nrecognized by initdb, postmaster, postgres, and pg_ctl. There is no\nsymlink from the /data directory to the WAL location.\n\n\nThomas, you mentioned implementing table spaces. Are you planning to\nuse environment variables there too? You mentioned that Ingres's use of\nenvironment variables wasn't well implemented. How will you improve on\nit?\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> Thomas Lockhart wrote:\n> > > Thomas, would you remind me of the concusions because I thought everyone\n> > > involved felt that it should be an initdb-only option, but I still see\n> > > it in CVS.\n> > \n> > ?? \"Concussions\" as in brain bruises? ;)\n> \n> Uh, conclusions. Sorry. New keyboard desk in new house. :-)\n> \n> > I'm not sure I understand the question. I assume that we are talking\n> > about the WAL log location feature I implemented recently. It is an\n> > initdb-only option, and defaults to the current behavior *exactly*.\n> \n> Yep. What bothers me is the clutter to the other commands that allow\n> XLOG location specification when you would never really want to specify\n> it except as part of initdb. I just see those extra flags as\n> cruft/confusion.\n> \n> Look at pg_ctl:\n> \n> pg_ctl start [-w] [-D DATADIR] [-s] [-X PGXLOG] [-l FILENAME] [-o \"OPTIONS\"]\n> \n> Which option doesn't make sense? -X. It is way beyond the\n> functionality of the command.\n> \n> \n> > The new feature is to allow an argument to initdb to locate the WAL file\n> > to another location. That location can be specified on the command line,\n> > or through an environment variable. 
Neither form precludes use of the\n> > other, and either form can be considered \"best practice\" depending on\n> > your opinion of what that is.\n> > \n> > The postmaster also recognizes the command line option and environment\n> > variable. The only suggestion I got as an alternative involved soft\n> > links, which is not portable, which is not robust, and which is not used\n> > anywhere else in the system. If we moved toward relying on soft links\n> > for distributing resources we will be moving in the wrong direction for\n> > many reasons, some of which I've mentioned previously. GUC parameters\n> > were also mentioned as a possibility, and the infrastructure does not\n> > preclude that at any time.\n> \n> I don't think anyone agreed with your concerns about symlinks. If you\n> want to be careful, do the \"ln -s\" in initdb and exit on failure, and\n> tell them not to use -X on that platform, though we use symlinks for\n> postmaster/postgres identification, so I know the only OS that doesn't\n> support symlinks is Netware, only because Netware folks just sent in a\n> patch to add a -post flag to work around lack of symlinks. (I have\n> asked for clarification from them.)\n> \n> I actually requested a vote, and got several people who wanted my\n> compromise (PGXLOG or initdb -X flag only), and I didn't see anyone who\n> liked the addition of -X into non-initdb commands. Should I have a\n> specific vote? OK, three options:\n> \n> \t1) -X, PGXLOG in initdb, postmaster, postgres, pg_ctl\n> \t2) -X, PGXLOG in initdb only\n> \t3) nothing\n> \n> I remember a number of people liking 2, but we can vote again.\n> \n> > I don't recall that there were very many folks \"involved\". There were\n> > several opinions, though most were from folks who were not thinking of\n> > implementing disk management features. 
Some opinions dealt with details,\n> > and some seemed to deal with the wisdom of allowing anything other than\n> > a \"one partition\" model of the database, which is nothing if not short\n> > sighted. Current default behavior is as first implemented, and the new\n> > feature allows locating the WAL logs in another area. For the current\n> > state of the art, that seems competitive with features found in other\n> > database products, and an essential step in teaching PostgreSQL to work\n> > with very large databases.\n> > \n> > I had thought to extend the capabilities to allow resource allocation\n> > for individual tables and indices, which has *long* been identified as a\n> > desired capability by folks who are managing large systems. It seemed\n> > reasonable to have done in time for 7.3. I'm rethinking that, not\n> > because it shouldn't happen, but because the process of discussing these\n> > issues has become so argumentative, divisive, impolite, and unpleasant.\n> > Which is a shame imho...\n> \n> I clearly want tablespaces, and it would be great for 7.3, and I don't\n> think it is a huge job. However, I think it will require symlinks to be\n> usable, and you probably do not, so it may be an issue.\n> \n> As for the argumentativeness, we do have folks with some strong\n> opinions, and I guess on the PGXLOG issue, I am one of them. Maybe that\n> is bad? Are people expressing themselves badly? If so, I would like to\n> hear details either on list or privately so I can address them.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 12 Aug 2002 23:56:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, seeing as no one voted, and only Tom and I objected originally, we\n> will keep the code as Thomas has applied it, namely that PGXLOG/-X is\n> recognized by initdb, postmaster, postgres, and pg_ctl.\n\nWe will? It looks to me like Thomas lost the vote 2-to-1.\n\nUnless there are more votes, I'm going to *insist* that this code be\nchanged. It's dangerous and offers no offsetting benefit. XLOG\nlocation should be settable at initdb, noplace later.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Aug 2002 00:09:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, seeing as no one voted, and only Tom and I objected originally, we\n> > will keep the code as Thomas has applied it, namely that PGXLOG/-X is\n> > recognized by initdb, postmaster, postgres, and pg_ctl.\n> \n> We will? It looks to me like Thomas lost the vote 2-to-1.\n> \n> Unless there are more votes, I'm going to *insist* that this code be\n> changed. It's dangerous and offers no offsetting benefit. XLOG\n> location should be settable at initdb, noplace later.\n\nWell, you didn't vote again in my follow up email, so I thought you\ndidn't care anymore, and Thomas didn't reply to my email either. I am\nclearly concerned, as you are, but Thomas isn't, and no one else seems\nto care.\n\nCan two guys override another guy if he is doing the work? I usually\nlike to have a larger margin than that. I don't know what to do.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 13 Aug 2002 00:30:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "On Tue, 13 Aug 2002, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, seeing as no one voted, and only Tom and I objected originally, we\n> > will keep the code as Thomas has applied it, namely that PGXLOG/-X is\n> > recognized by initdb, postmaster, postgres, and pg_ctl.\n>\n> We will? It looks to me like Thomas lost the vote 2-to-1.\n>\n> Unless there are more votes, I'm going to *insist* that this code be\n> changed. It's dangerous and offers no offsetting benefit. XLOG\n> location should be settable at initdb, noplace later.\n\nOkay, I'm going to pop up here and side with Thomas ... I think ... I may\nhave missed some issues here, but, quite frankly, I hate symlinks, as I've\nseen them create more evil than good .. hardlinks is a different story ...\n\nAnd for that reason, I will side with Thomas ...\n\ninitdb should allow you to specify a separate location, which I don't\nthink anyone disagrees with ... but, what happens if, for some reason, I\nhave to move it to another location? I have to then dump/reload after\ndoing a new initdb?\n\nOne thought at the back of my mind is why not have something like a\n'PG_VERSION' for XLOG? Maybe something so simple as a text file in both\nthe data and xlog directory that just contains a timestamp from the\ninitdb? then, when you start up postmaster with a -X option, it compares\nthe two files and makes sure that they belong to each other?\n\nBruce, if I'm missing something, could you post a summary/scorecard for\npros-cons on this issue?\n\n",
"msg_date": "Tue, 13 Aug 2002 02:12:04 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "On Mon, 2002-08-12 at 23:09, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, seeing as no one voted, and only Tom and I objected originally, we\n> > will keep the code as Thomas has applied it, namely that PGXLOG/-X is\n> > recognized by initdb, postmaster, postgres, and pg_ctl.\n> \n> We will? It looks to me like Thomas lost the vote 2-to-1.\n> \n> Unless there are more votes, I'm going to *insist* that this code be\n> changed. It's dangerous and offers no offsetting benefit. XLOG\n> location should be settable at initdb, noplace later.\n> \n\n\nI think Tom is on to something here. I meant to ask but never got\naround to it. Why would anyone need to move the XLOG after you've\ninited the db?\n\nGreg",
"msg_date": "13 Aug 2002 00:12:48 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src"
},
{
"msg_contents": "On Tue, 13 Aug 2002, Bruce Momjian wrote:\n\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > OK, seeing as no one voted, and only Tom and I objected originally, we\n> > > will keep the code as Thomas has applied it, namely that PGXLOG/-X is\n> > > recognized by initdb, postmaster, postgres, and pg_ctl.\n> >\n> > We will? It looks to me like Thomas lost the vote 2-to-1.\n> >\n> > Unless there are more votes, I'm going to *insist* that this code be\n> > changed. It's dangerous and offers no offsetting benefit. XLOG\n> > location should be settable at initdb, noplace later.\n>\n> Well, you didn't vote again in my follow up email, so I thought you\n> didn't care anymore, and Thomas didn't reply to my email either. I am\n> clearly concerned, as you are, but Thomas isn't, and no one else seems\n> to care.\n\nk, why are you concerned? see my other message, but if the major concern\nis the xlog directory for a postmaster, then put in a consistency check\nfor when starting ...\n\nthen again, how many out there are running multiple instances of\npostmaster on their machine that would move xlog to a different location\nwithout realizing they did? I know in my case, we're working at reducing\nthe # of instances on our servers to 2 (internal vs external), but, our\nservers are all on a RAID5 array, so there is nowhere to really \"move\"\nxlog to other than its default location ... but I can see the usefulness of\ndoing so ...\n\nMyself, if I'm going to move something around, it will be after the server\nhas been running for a while and I've added in more drive space in order\nto correct a problem I didn't anticipate (namely, disk I/O) ... at that\npoint, I really don't want to have to dump/re-initdb/load just to move the\nxlog directory to that new drive ...\n\n",
"msg_date": "Tue, 13 Aug 2002 02:16:27 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "On 13 Aug 2002, Greg Copeland wrote:\n\n> On Mon, 2002-08-12 at 23:09, Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > OK, seeing as no one voted, and only Tom and I objected originally, we\n> > > will keep the code as Thomas has applied it, namely that PGXLOG/-X is\n> > > recognized by initdb, postmaster, postgres, and pg_ctl.\n> >\n> > We will? It looks to me like Thomas lost the vote 2-to-1.\n> >\n> > Unless there are more votes, I'm going to *insist* that this code be\n> > changed. It's dangerous and offers no offsetting benefit. XLOG\n> > location should be settable at initdb, noplace later.\n> >\n>\n>\n> I think Tom is on to something here. I meant to ask but never got\n> around to it. Why would anyone need to move the XLOG after you've\n> inited the db?\n\nI just determined that disk I/O is terrible, so want to move the XLOG over\nto a different file system that is currently totally idle ...\n\n",
"msg_date": "Tue, 13 Aug 2002 02:21:54 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> On Tue, 13 Aug 2002, Tom Lane wrote:\n> \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > OK, seeing as no one voted, and only Tom and I objected originally, we\n> > > will keep the code as Thomas has applied it, namely that PGXLOG/-X is\n> > > recognized by initdb, postmaster, postgres, and pg_ctl.\n> >\n> > We will? It looks to me like Thomas lost the vote 2-to-1.\n> >\n> > Unless there are more votes, I'm going to *insist* that this code be\n> > changed. It's dangerous and offers no offsetting benefit. XLOG\n> > location should be settable at initdb, noplace later.\n> \n> Okay, I'm going to pop up here and side with Thomas ... I think ... I may\n> have missed some issues here, but, quite frankly, I hate symlinks, as I've\n> seen it create more evil then good .. hardlinks is a different story ...\n\nOK, that agrees with Thomas's aversion to symlinks.\n\n> And for that reason, I will side with Thomas ...\n> \n> initdb should allow you to specify a seperate location, which I don't\n> think anyone disagrees with ... but, what happens if, for some reason, I\n> have to move it to another location? I have to then dump/reload after\n> doing a new initdb?\n\nIf you move pg_xlog, you have to create a symlink in /data that points\nto the new location. Initdb would do that automatically, but if you\nmove it after initdb, you would have to create the symlink yourself. \nWith Thomas's current code, you would add/change PGXLOG instead to point\nto the new location, rather than modify the symlink.\n\n> One thought at the back of my mind is why not have something like a\n> 'PG_VERSION' for XLOG? Maybe something so simple as a text file in both\n> the data and xlog directory that just contains a timestamp from the\n> initdb? then, when you startup postmaster with a -X option, it compares\n> the two files and makes sure that they belong to each other?\n\nUh, seems it could get messy, but, yea, that would work. 
It means\nadding a file to pg_xlog and /data and somehow matching them. My\nfeeling was that the symlink was unambiguous and allowed for fewer\nmistakes. I think that was Tom's opinion too.\n\n> Bruce, if I'm missing something, could you post a summary/scorecard for\n> pros-cons on this issue?\n\nOK, I tried. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 13 Aug 2002 01:23:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> > Well, you didn't vote again in my follow up email, so I thought you\n> > didn't care anymore, and Thomas didn't reply to my email either. I am\n> > clearly concerned, as you are, but Thomas isn't, and no one else seems\n> > to care.\n> \n> k, why are you concerned? see my other message, but if the major concern\n> is the xlog directory for a postmaster, then put in a consistency check\n> for when starting ...\n\n\nI really have two concerns; one, the ability to point to the wrong\nplace or to the default place accidentally if PGXLOG isn't set, and two,\nthe addition of PGXLOG/-X into postmaster, postgres, pg_ctl where it\nreally isn't adding any functionality and just adds bloat to the\ncommands arg list.\n\n> then again, how many out there are running multiple instances of\n> postmaster on their machine that would move xlog to a different location\n> without realizing they did? I know in my case, we're working at reducing\n> the # of instances on our servers to 2 (internal vs external), but, our\n> servers are all on a RAID5 array, so there is nowhere to really \"move\"\n> xlog to out then its default location ... but I can see the usefulness of\n> doing so ...\n> \n> Myself, if I'm going to move something around, it will be after the server\n> has been running for a while and I've added in more drive space in order\n> to correct a problem i didn't anticipate (namely, disk I/O) ... at that\n> point, I really don't wan tto have to dump/re-initdb/load just to move the\n> xlog directory to that new drive ...\n\nNo dump/reload required, but the creation of a _dreaded_ symlink is\nrequired.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 13 Aug 2002 01:26:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> > I think Tom is on to something here. I meant to ask but never got\n> > around to it. Why would anyone need to move the XLOG after you've\n> > inited the db?\n> \n> I just determined that disk I/O is terrible, so want to move the XLOG over\n> to a different file system that is currently totally idle ...\n\nYep, and you are going to do it using symlinks. Let us know how it\ngoes?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 13 Aug 2002 01:26:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src"
},
{
"msg_contents": "On Tue, 2002-08-13 at 00:16, Marc G. Fournier wrote:\n> \n> Myself, if I'm going to move something around, it will be after the server\n> has been running for a while and I've added in more drive space in order\n> to correct a problem i didn't anticipate (namely, disk I/O) ... at that\n> point, I really don't wan tto have to dump/re-initdb/load just to move the\n> xlog directory to that new drive ...\n> \n\nOkay, fair enough. But do we really need to have environment variables\nfor this? Sounds like we need a standalone utility which does the\nassociated magic in the database which moves the xlog and associated\ninternal pointers. Doing so would assume that all \"be\"s for the\ndatabase have been shut down or simply would not go into effect until\nrestarted. Is this feasible?\n\nFor something that would seemingly be infrequently used, creating\nenvironment variables would seemingly be rather error prone.\n\nRequiring soft links also doesn't strike me as a good portable idea\neither...not to mention I've been bitten by them before too.\n\nSign,\n\tGreg Copeland",
"msg_date": "13 Aug 2002 00:30:35 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src"
},
{
"msg_contents": "> If you move pg_xlog, you have to create a symlink in /data that points\n> to the new location. Initdb would do that automatically, but if you\n> move it after initdb, you would have to create the symlink yourself.\n> With Thomas's current code, you would add/change PGXLOG instead to point\n> to the new location, rather than modify the symlink.\n\nThere is no \"the symlink\", but of course that tinkering is in no way\nprecluded by the new code. Although some seem to like symlinks, others\n(including myself) see no good engineering practice in making them the\nonly foundation for distributing files across file systems.\n\nThe patches as-is follow existing PostgreSQL practice, have complete and\nperfect backward compatibility, and do not preclude changes in\nunderlying implementation in the future if those who are objecting\nchoose to do a complete and thorough job of meeting my objections to the\ncurrent counter-suggestions. As an example, two lines of code in initdb\nwould add \"the beloved symlink\" to $PGDATA, eliminating one objection\nthough (of course) one I don't support.\n\n> > One thought at the back of my mind is why not have something like a\n> > 'PG_VERSION' for XLOG? Maybe something so simple as a text file in both\n> > the data and xlog directory that just contains a timestamp from the\n> > initdb? then, when you startup postmaster with a -X option, it compares\n> > the two files and makes sure that they belong to each other?\n> Uh, seems it could get messy, but, yea, that would work. It means\nadding a file to pg_xlog and /data and somehow matching them. My\nfeeling was that the symlink was unambiguous and allowed for fewer\nmistakes. I think that was Tom's opinion too.\n\nIn the spirit of gratuitous overstatement, I'll point out again:\n
symlinks are evil. Any sense of a job well done is misplaced if our\nunderpinnings rely on them for distributing files across file systems.\nAs an ad hoc hack to work around current limitations they may have some\nutility.\n\nAnyway, istm that this is way too much discussion for a small extension\nof capability, and it has likely cost a table and index \"with location\"\nimplementation for the upcoming release just due to time wasted\ndiscussing it. Hope it was worth it :/\n\n - Thomas\n",
"msg_date": "Mon, 12 Aug 2002 23:09:14 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> We will? It looks to me like Thomas lost the vote 2-to-1.\n\n> Well, you didn't vote again in my follow up email,\n\nI thought my vote was obvious already ...\n\n> Can two guys override another guy if he is doing the work? I usually\n> like to have a larger margin than that. I don't know what to do.\n\nI'm not pleased about it either; I'd have preferred to see a few more\nopinions given (and I'm surprised that no one else bothered to weigh in;\nlack of opinions is usually not a problem for pghackers ;-)).\n\nBut I really seriously feel that this feature is a bad idea as presently\nimplemented. If necessary, I'll volunteer to change it the way I think\nit should be (viz, initdb can set up a symlink to a specified xlog\ndirectory; no change from previous behavior anywhere else).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Aug 2002 09:00:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke "
},
{
"msg_contents": "\n> But I really seriously feel that this feature is a bad idea as presently\n> implemented. If necessary, I'll volunteer to change it the way I think\n> it should be (viz, initdb can set up a symlink to a specified xlog\n> directory; no change from previous behavior anywhere else).\n\nNeither solution is a particularly good one.\n\nSymlinks seem to break all over the place (windows, novell, os/2),\nenvironment variables are clumsy, arguments are easily forgotten by a\nnew admin starting up the system manually without reading documentation\nfirst, and postgresql.conf changes are implemented via HUP (which we\ndon't want -- has to be a full restart?).\n\n\nI'm going to vote for a postgresql.conf entry similar to the LC_ vars that's\ninitialized by initdb BUT is configurable with a big warning above it\ndescribing what needs to be done when changing it.\n\n\n\n",
"msg_date": "13 Aug 2002 09:14:34 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n>> I think Tom is on to something here. I meant to ask but never got\n>> around to it. Why would anyone need to move the XLOG after you've\n>> inited the db?\n\n> I just determined that disk I/O is terrible, so want to move the XLOG over\n> to a different file system that is currently totally idle ...\n\nSure, needing to manually move the xlog directory is a plausible thing,\nbut *you can already do it*. The current procedure is\n\n1. shut down postmaster\n2. cp -p -r xlog directory to new location\n3. rm -rf old xlog directory\n4. ln -s new xlog directory to $PGDATA/xlog\n5. start postmaster\n\nWith the patch it's almost the same, but you can instead of (4) substitute\n\n(4a) Change PGXLOG environment variable or -X argument in start script.\n\nThat is *not* materially easier than an \"ln\" in my book. And it's\nfraught with all the risks we've come to know and not love over the\nyears: it's just way too easy to start a postmaster with the wrong set\nof environment variables. (Hand start vs start from boot script, etc,\netc, etc.) But this time the penalty for getting it wrong is, very\npossibly, irrecoverable corruption of your database.\n\nI see a serious downside to doing it this way, and not enough upside\nto justify taking the risk. We should continue to keep the \"where's the\nxlog\" information in the database directory itself. While a symlink\nisn't the only possible way to do that (a configuration-file item might\ndo instead), I just don't think it's a good idea to allow it to be\nspecified externally.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Aug 2002 09:15:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src "
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> In the spirit of gratutious overstatement, I'll point out again:\n> symlinks are evil.\n\nPlease justify that claim. They work really nicely in my experience...\nand I don't know of any modern Unix system that doesn't rely on them\n*heavily*.\n\nPossibly more to the point, I can assert \"environment variables are\nevil\" with at least as much foundation. We have seen many many reports\nof trouble from people who were bit by environment-variable problems\nwith Postgres. Do I need to trawl the archives for examples?\n\nHowever, as I just commented to Marc the real issue in my mind is that\nthe xlog needs to be solidly tied to the data directory, because we\ncan't risk starting a postmaster with the wrong combination. I do not\nthink that external specification of the xlog as a separate env-var or\npostmaster command-line arg gives the appropriate amount of safety.\nBut there's more than one way to record the xlog location in the data\ndirectory. If you don't like a symlink, what of putting it in\npostgresql.conf as a postmaster-start-time-only config option?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Aug 2002 09:24:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke "
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> One thought at the back of my mind is why not have something like a\n> 'PG_VERSION' for XLOG? Maybe something so simple as a text file in both\n> the data and xlog directory that just contains a timestamp from the\n> initdb? then, when you startup postmaster with a -X option, it compares\n> the two files and makes sure that they belong to each other?\n\nWhile that isn't a bad idea, it seems to me that it's adding mechanism\nto get around a problem that we don't need to have in the first place.\nThe only reason this risk exists is that the patch changes a monolithic\npostmaster option (-D) into two independent options (-D/-X) that in\nreality should never be independent.\n\nEssentially, you're proposing Kevlar shoes as a solution for the problem\nthat you want to walk around carrying a loaded gun aimed at your foot.\nThe shoes might be a good idea anyway, but the primary problem is\nelsewhere...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Aug 2002 09:43:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke "
},
{
"msg_contents": "On Tue, 2002-08-13 at 08:15, Tom Lane wrote:\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> >> I think Tom is on to something here. I meant to ask but never got\n> >> around to it. Why would anyone need to move the XLOG after you've\n> >> inited the db?\n> \n> > I just determined that disk I/O is terrible, so want to move the XLOG over\n> > to a different file system that is currently totally idle ...\n> \n> Sure, needing to manually move the xlog directory is a plausible thing,\n> but *you can already do it*. The current procedure is\n> \n> 1. shut down postmaster\n> 2. cp -p -r xlog directory to new location\n> 3. rm -rf old xlog directory\n> 4. ln -s new xlog directory to $PGDATA/xlog\n> 5. start postmaster\n> \n> With the patch it's almost the same, but you can instead of (4) substitute\n\nWhy not simply create a script which does this? Creation of \"movexlog\"\nor some such beast which anally checked everything it did. As options,\nyou could simply pass it the src and dest and let it take care of the\nrest.\n\nGreg",
"msg_date": "13 Aug 2002 08:48:45 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src"
},
{
"msg_contents": "On Tue, 2002-08-13 at 14:15, Tom Lane wrote:\n> 4. ln -s new xlog directory to $PGDATA/xlog\n \n> With the patch it's almost the same, but you can instead of (4) substitute\n> \n> (4a) Change PGXLOG environment variable or -X argument in start script.\n> \n> That is *not* materially easier than an \"ln\" in my book. And it's\n> fraught with all the risks we've come to know and not love over the\n> years: it's just way too easy to start a postmaster with the wrong set\n> of environment variables. (Hand start vs start from boot script, etc,\n> etc, etc.) But this time the penalty for getting it wrong is, very\n> possibly, irrecoverable corruption of your database.\n> \n> I see a serious downside to doing it this way, and not enough upside\n> to justify taking the risk. We should continue to keep the \"where's the\n> xlog\" information in the database directory itself. While a symlink\n> isn't the only possible way to do that (a configuration-file item might\n> do instead), I just don't think it's a good idea to allow it to be\n> specified externally.\n\nSince the xlog is so closely linked with the database, I would be\nunhappy for its location to be determined by a parameter in a file that\ncould be edited by an ignorant or careless administrator. Thomas does\nnot like symlinks. Equally I don't like the idea of an environment\nvariable, which is even more vulnerable to misuse.\n\nCould you not store the location of the xlog directory as an entry in\n$PGDATA/global/pg_control? The xlog is as closely tied in with the\ndatabase as is its locale, which is already stored in pg_control.\n\nTo let the directory be moved, there should then be a standalone program\nthat would shut down the server, copy the xlog directory to the new\nlocation and set its access permissions; on a successful copy, change\nthe control entry, delete the old xlog directory and finally restart the\nserver. 
Use of such a program would protect against other possible\nerrors, such as pointing two different databases to the same xlog.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Watch ye therefore, and pray always, that ye may be \n accounted worthy to escape all these things that shall\n come to pass, and to stand before the Son of man.\" \n Luke 21:36 \n\n",
"msg_date": "13 Aug 2002 14:49:51 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src"
},
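The manual procedure debated above — stop the server, relocate the xlog directory, and leave a symlink at $PGDATA/pg_xlog so the "where's the xlog" information stays inside the data directory — could be sketched roughly as below. This is an illustrative sketch only, not the tool Oliver proposes: the function name is invented, the server is assumed to be already shut down, and pg_control editing, permission handling, and restart are omitted.

```python
import os
import shutil

def move_xlog(pgdata, new_xlog):
    """Relocate pg_xlog out of the data directory and leave a symlink
    behind. The postmaster must be stopped before calling this.
    (Hypothetical helper; not part of PostgreSQL.)"""
    old_xlog = os.path.join(pgdata, "pg_xlog")
    # Copy first, so a failure partway through leaves the original intact.
    shutil.copytree(old_xlog, new_xlog)
    shutil.rmtree(old_xlog)
    # The symlink records the new location inside the data directory itself,
    # which is the safety property Tom argues for in this thread.
    os.symlink(new_xlog, old_xlog)
```

A real wrapper, as Oliver suggests, would also shut down and restart the postmaster and guard against pointing two databases at one xlog; those steps are deliberately left out of this sketch.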
{
"msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> Symlinks seem to break all over the place (windows, novell, os/2),\n\nThe portability argument carries little weight with me. Recent versions\nof Windows have symlinks. If anyone wants to run a PG installation on\na symlink-less platform, okay; they just won't have the option to move\nthe xlog directory. That's probably not the only functionality they\nlose by using such an inadequate filesystem...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Aug 2002 09:50:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke "
},
{
"msg_contents": "On Tue, 2002-08-13 at 14:24, Tom Lane wrote:\n...\n> But there's more than one way to record the xlog location in the data\n> directory. If you don't like a symlink, what of putting it in\n> postgresql.conf as a postmaster-start-time-only config option?\n\nPlease don't!\n\nThe Debian package at least provides a default postgresql.conf and it\nwill be all too easy for someone installing an updated package to let\nthe default file overwrite the existing configuration. That could be\ndisastrous.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Watch ye therefore, and pray always, that ye may be \n accounted worthy to escape all these things that shall\n come to pass, and to stand before the Son of man.\" \n Luke 21:36 \n\n",
"msg_date": "13 Aug 2002 14:59:45 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src"
},
{
"msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n> Could you not store the location of the xlog directory as an entry in\n> $PGDATA/global/pg_control?\n\nWe could do that *only* if we were to produce an xlog-moving program\nimmediately; otherwise we've regressed in functionality compared to\nprior releases.\n\nI do not think it's necessary to be quite that anal about tying the two\ndirectories together --- the manual symlinking procedure I described has\nbeen around for two releases now, and while doubtless not that many\npeople have actually done it, we've not heard any reports of failures.\nThe thing is that if the DBA has to do this himself, he is very well\naware that he's performing a critical procedure, and he's not likely\nto muck it up.\n\nI think that from a safety point of view either a symlink or a\nconfig-file entry are perfectly acceptable, and in general I prefer\nplain-text config files to those which are not. (Right now, pg_control\nis *not* a config file: there is not anything in it that you might want\nto edit in normal system maintenance. It should stay that way.)\n\nMarc's idea of matching signature files would be a better\nsafety-checking mechanism than just making the data directory's xlog\nlink hard to get at.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Aug 2002 10:00:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src "
},
{
"msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n> On Tue, 2002-08-13 at 14:24, Tom Lane wrote:\n>> But there's more than one way to record the xlog location in the data\n>> directory. If you don't like a symlink, what of putting it in\n>> postgresql.conf as a postmaster-start-time-only config option?\n\n> Please don't!\n\n> The Debian package at least provides a default postgresql.conf and it\n> will be all too easy for someone installing an updated package to let\n> the default file overwrite the existing configuration. That could be\n> disastrous.\n\nOuch. That's a mighty good point ... although if we were to implement\nMarc's idea of matching signature files, we'd certainly catch the error.\n\nIf we didn't, we'd need to use a separate, one-purpose config file that\njust records the xlog location. Curiously enough, that seems to me to\nbe exactly what a symlink does, except that the symlink is OS-level code\nrather than something we have to write for ourselves. So I'm back to\nthinking that a symlink is a perfectly respectable answer.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Aug 2002 10:06:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke "
},
{
"msg_contents": "On Tue, 2002-08-13 at 15:00, Tom Lane wrote:\n> Oliver Elphick <olly@lfix.co.uk> writes:\n> > Could you not store the location of the xlog directory as an entry in\n> > $PGDATA/global/pg_control?\n> \n> We could do that *only* if we were to produce an xlog-moving program\n> immediately; otherwise we've regressed in functionality compared to\n> prior releases.\n\nIf it doesn't have to edit pg_control (accepting your point below) it\ncan be a shell script - half an hour to write and test it. I'll do it\ntonight if you choose to go this way....\n \n...\n> I think that from a safety point of view either a symlink or a\n> config-file entry are perfectly acceptable, and in general I prefer\n> plain-text config files to those which are not. (Right now, pg_control\n> is *not* a config file: there is not anything in it that you might want\n> to edit in normal system maintenance. It should stay that way.)\n\nI suggested pg_control because it's already there. It could just as well\nbe a *private* configuration file containing the pathname. Just don't\nput it in with postgresql.conf. As a producer of a binary distribution,\nI don't want to deal with the consequences of people ignorantly changing\nit. I'm sure you remember those mails from people who said, \"I wanted\nto save space so I deleted this log file...\"\n\n> Marc's idea of matching signature files would be a better\n> safety-checking mechanism than just making the data directory's xlog\n> link hard to get at.\n\nWhen dealing with unknown numbers of package users, some of whom have\nonly just converted from being Windows users, I want to be defensive. I\ncannot afford to assume that administrators know what they are doing! 
I\nhave to try to pick up the pieces after those that don't.\n\nI would like to have Marc's safeguards as well.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Watch ye therefore, and pray always, that ye may be \n accounted worthy to escape all these things that shall\n come to pass, and to stand before the Son of man.\" \n Luke 21:36 \n\n",
"msg_date": "13 Aug 2002 15:18:48 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src"
},
{
"msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n>> Marc's idea of matching signature files would be a better\n>> safety-checking mechanism than just making the data directory's xlog\n>> link hard to get at.\n\n> I would like to have Marc's safeguards as well.\n\nYeah, I was lukewarm about that at first, but the more I think about it\nthe better it seems.\n\nThat does not change my opinion about the -X/PGXLOG switch though ---\nhaving a backup safety check is not an excuse for having a fundamentally\ninsecure set of startup options.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Aug 2002 10:38:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src "
},
{
"msg_contents": "\nYea, the problem with postgresql.conf is that we don't have any\nautomatic modifications of that file, and I don't think we want to start\njust to avoid symlinks.\n\nI personally like symlinks too. I use them all the time. What is the\nproblem with them, exactly? Can someone show me some commands that\ncause problems?\n\nAnd the problem with a separate file is that when the move pg_xlog, it\nisn't going to be obvious what they need to change to find the new\ndirectory. Of course, they could just create a symlink and leave the\nfile unchanged.\n\nAside from the arg bloat problem, the real danger is that someone is\ngoing to forget PGDATA and PGXLOG, try to start the postmaster, add -D\nfor PGDATA, then when they see that they need PGXLOG, they may just\ncreate data/pg_xlog as an empty directory and start the postmaster. \nThat is a very real possibility. I just tried it and it does complain\nabout the missing checkpoint records so maybe it isn't as bad as I\nthought, but still, it opens a place for error where none existed\nbefore.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Oliver Elphick <olly@lfix.co.uk> writes:\n> > On Tue, 2002-08-13 at 14:24, Tom Lane wrote:\n> >> But there's more than one way to record the xlog location in the data\n> >> directory. If you don't like a symlink, what of putting it in\n> >> postgresql.conf as a postmaster-start-time-only config option?\n> \n> > Please don't!\n> \n> > The Debian package at least provides a default postgresql.conf and it\n> > will be all too easy for someone installing an updated package to let\n> > the default file overwrite the existing configuration. That could be\n> > disastrous.\n> \n> Ouch. That's a mighty good point ... 
although if we were to implement\n> Marc's idea of matching signature files, we'd certainly catch the error.\n> \n> If we didn't, we'd need to use a separate, one-purpose config file that\n> just records the xlog location. Curiously enough, that seems to me to\n> be exactly what a symlink does, except that the symlink is OS-level code\n> rather than something we have to write for ourselves. So I'm back to\n> thinking that a symlink is a perfectly respectable answer.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 13 Aug 2002 10:55:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "On Mon, 12 Aug 2002, Thomas Lockhart wrote:\n\n> > If you move pg_xlog, you have to create a symlink in /data that points\n> > to the new location. Initdb would do that automatically, but if you\n> > move it after initdb, you would have to create the symlink yourself.\n> > With Thomas's current code, you would add/change PGXLOG instead to point\n> > to the new location, rather than modify the symlink.\n> \n> There is no \"the symlink\", but of course that tinkering is in no way\n> precluded by the new code. Although some seem to like symlinks, others\n> (including myself) see no good engineering practice in making them the\n> only foundation for distributing files across file systems.\n\nWhy? You often say you don't like them, but I have yet to see you say why \nyou don't like them.\n\n> The patches as-is follow existing PostgreSQL practice,\n\nusing environmental variables is a practice we should discontinue if \npossible, and use as little as possible. They ARE a security hole waiting \nto happen. \n\n> have complete and\n> perfect backward compatibility, and do not preclude changes in\n> underlying implementation in the future if those who are objecting\n> choose to do a complete and thorough job of meeting my objections to the\n> current counter-suggestions. As an example, two lines of code in initdb\n> would add \"the beloved symlink\" to $PGDATA, eliminating one objection\n> though (of course) one I don't support.\n> \n> > > One thought at the back of my mind is why not have something like a\n> > > 'PG_VERSION' for XLOG? Maybe something so simple as a text file in both\n> > > the data and xlog directory that just contains a timestamp from the\n> > > initdb? then, when you startup postmaster with a -X option, it compares\n> > > the two files and makes sure that they belong to each other?\n> > Uh, seems it could get messy, but, yea, that would work. It means\n> > adding a file to pg_xlog and /data and somehow matching them. 
My\n> > feeling was that the symlink was unambiguous and allowed for fewer\n> > mistakes. I think that was Tom's opinion too.\n> \n> In the spirit of gratutious overstatement, I'll point out again:\n> symlinks are evil. Any sense of a job well done is misplaced if our\n> underpinnings rely on them for distributing files across file systems.\n> As an ad hoc hack to work around current limitations they may have some\n> utility.\n\nWhy are symlinks evil? They exist on every major OS I know of, and they \nwork. They allow the user to quickly point the postgresql engine in \ndifferent places, and they are simple and easy to use. I found the use of \nenvironmental variables far more confusing when I first started using \npostgresql than symlinks. \n\nIn particular, which operating systems does Postgresql run don't have \nsymlink capability?\n\n> Anyway, istm that this is way too much discussion for a small extension\n> of capability, and it has likely cost a table and index \"with location\"\n> implementation for the upcoming release just due to time wasted\n> discussing it. Hope it was worth it :/\n\nWell, if it averts a security problem, or makes the database easier to use \nin the long run, then it probably was. It may seem like too much \ndiscussion for such a simple topic, but it's not.\n\nMy non-coding vote goes with Tom Lane on this. initdb can set pg_xlog, \nand if you need to change it, use symlinks. They're safe, secure, and \nthey just plain work. The only argument I can possibly think of against \nthe symlink boogie is if there is an os we run on that can't do symlinks. \nAnd then I'd still think it would belong in postgresql.conf, be set by \ninitdb, and not be an environmental variable.\n\nOf course that's just my opinion, I could be wrong (with apologies to \nDennis Miller)\n\nScott Marlowe\n\n",
"msg_date": "Tue, 13 Aug 2002 09:14:18 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "On Tue, 13 Aug 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> > > I think Tom is on to something here. I meant to ask but never got\n> > > around to it. Why would anyone need to move the XLOG after you've\n> > > inited the db?\n> >\n> > I just determined that disk I/O is terrible, so want to move the XLOG over\n> > to a different file system that is currently totally idle ...\n>\n> Yep, and you are going to do it using symlinks. Let us know how it\n> goes?\n\nThis was purely an fictional example ...\n\n\n",
"msg_date": "Tue, 13 Aug 2002 14:19:01 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src"
},
{
"msg_contents": "\nSounds good to me, but I have proven very unreliable in guessing others\nopinions on this issue.\n\n---------------------------------------------------------------------------\n\nPeter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > That does not change my opinion about the -X/PGXLOG switch though ---\n> > having a backup safety check is not an excuse for having a fundamentally\n> > insecure set of startup options.\n> \n> OK, so:\n> \n> 1. Leave -X option in initdb. Remove all other -X options.\n> \n> 2. Remove all uses of PGXLOG.\n> \n> 3. Symlink from PGDATA to desired location.\n> \n> 4. Implement pg_mvxlog to move xlog if server is shut down. (So no one\n> needs to know about 3.)\n> \n> In the future:\n> \n> Combine pg_mvxlog, pg_controldata, pg_resetxlog into pg_srvadm.\n> \n> Sounds good.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 13 Aug 2002 15:52:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src"
},
{
"msg_contents": "Tom Lane writes:\n\n> That does not change my opinion about the -X/PGXLOG switch though ---\n> having a backup safety check is not an excuse for having a fundamentally\n> insecure set of startup options.\n\nOK, so:\n\n1. Leave -X option in initdb. Remove all other -X options.\n\n2. Remove all uses of PGXLOG.\n\n3. Symlink from PGDATA to desired location.\n\n4. Implement pg_mvxlog to move xlog if server is shut down. (So no one\nneeds to know about 3.)\n\nIn the future:\n\nCombine pg_mvxlog, pg_controldata, pg_resetxlog into pg_srvadm.\n\nSounds good.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 13 Aug 2002 21:53:22 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src "
},
{
"msg_contents": "On Tue, 13 Aug 2002, Bruce Momjian wrote:\n\n> If you move pg_xlog, you have to create a symlink in /data that points\n> to the new location. Initdb would do that automatically, but if you\n> move it after initdb, you would have to create the symlink yourself.\n> With Thomas's current code, you would add/change PGXLOG instead to point\n> to the new location, rather than modify the symlink.\n\nRight, but if the use of PGXLOG/-X it make sense then tryin got document\n\"hey, if you move pg_xlog, go into this directory and change the symlink\nto point to hte new locaiton\" ... as I said, I do not like symlinks ...\n\n> Uh, seems it could get messy, but, yea, that would work. It means\n> adding a file to pg_xlog and /data and somehow matching them. My\n> feeling was that the symlink was unambiguous and allowed for fewer\n> mistakes. I think that was Tom's opinion too.\n\nHrmmm ... how about some sort of 'tag' that gets written to the xlog\nfile(s) themselves instead?\n\n",
"msg_date": "Tue, 13 Aug 2002 23:07:14 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "On Tue, 13 Aug 2002, scott.marlowe wrote:\n\n> My non-coding vote goes with Tom Lane on this. initdb can set pg_xlog,\n> and if you need to change it, use symlinks.\n\nI've not been following this thread, and thus I suppose I missed\nmy opportunity to vote, but just for the record I'm with the \"don't\nuse an environment variable\" crowd here, too. It's way, way to easy\nto start up with the wrong setting in your environment.\n\nThe log is part of the database. Therefore you should store the\ninformation on its location along with the rest of the database\ninformation. The idea is, you pass *one* piece of information to your\nprogram when you start it (in this case the database data directory\nlocation), and all of the rest of the information comes from there. Then\nyou have guaranteed consistency.\n\nHow the log location is stored within that area, I'm not so fussy\nabout. If a symlink is so terrible, why not put this information\nin the database config file?\n\nOh, and yes, it does need to be changable after an initdb. Say you\nstart out with only one disk on your system, but add a second disk\nlater, and want to move the log to that?\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Thu, 15 Aug 2002 12:41:26 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> ... just for the record I'm with the \"don't\n> use an environment variable\" crowd here, too. It's way, way to easy\n> to start up with the wrong setting in your environment.\n\nWhat he said ...\n\n> Oh, and yes, it does need to be changable after an initdb. Say you\n> start out with only one disk on your system, but add a second disk\n> later, and want to move the log to that?\n\nSure, there should be *a* way to do that. It does not have to be as\neasy as \"change an environment variable\". And in fact the primary\nobjection to this patch is exactly that it is *not* as easy as \"change\nan environment variable\" --- what you get if you just change your\nenvironment variable is not a moved xlog, but a broken database.\nPossibly an irredeemably broken database.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Aug 2002 23:57:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke "
},
{
"msg_contents": "\nI would like to know how to move this item forward.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Curt Sampson <cjs@cynic.net> writes:\n> > ... just for the record I'm with the \"don't\n> > use an environment variable\" crowd here, too. It's way, way to easy\n> > to start up with the wrong setting in your environment.\n> \n> What he said ...\n> \n> > Oh, and yes, it does need to be changable after an initdb. Say you\n> > start out with only one disk on your system, but add a second disk\n> > later, and want to move the log to that?\n> \n> Sure, there should be *a* way to do that. It does not have to be as\n> easy as \"change an environment variable\". And in fact the primary\n> objection to this patch is exactly that it is *not* as easy as \"change\n> an environment variable\" --- what you get if you just change your\n> environment variable is not a moved xlog, but a broken database.\n> Possibly an irredeemably broken database.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 15 Aug 2002 00:01:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "On Thu, 15 Aug 2002, Bruce Momjian wrote:\n\n> I would like to know how to move this item forward.\n\nRight now (i.e., in 7.2), the only two options we have for moving the\nlog file to a different spindle are mounting it on pg_xlog and using a\nsymlink. I doubt many people do the the former, and if they do they do\nnot need an option to init_db to move the logfile away from its default\nlocation.\n\nSo I propose we just continue to use the symlink method for the moment,\nuntil we agree on another way to store the log file location within the\ndata directory, and at that time we implement the code to do that.\n\nNote that if we don't move forward at all, we're still left in the symlink\nsituation, with the exception that you init_db, move the log directory and\ncreate the symlink by hand, and then start up the database. So this partial\nmove forward makes no difference to the symlink argument.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Thu, 15 Aug 2002 13:06:49 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "Curt Sampson wrote:\n> On Thu, 15 Aug 2002, Bruce Momjian wrote:\n> \n> > I would like to know how to move this item forward.\n> \n> Right now (i.e., in 7.2), the only two options we have for moving the\n> log file to a different spindle are mounting it on pg_xlog and using a\n> symlink. I doubt many people do the the former, and if they do they do\n> not need an option to init_db to move the logfile away from its default\n> location.\n> \n> So I propose we just continue to use the symlink method for the moment,\n> until we agree on another way to store the log file location within the\n> data directory, and at that time we implement the code to do that.\n> \n> Note that if we don't move forward at all, we're still left in the symlink\n> situation, with the exception that you init_db, move the log directory and\n> create the symlink by hand, and then start up the database. So this partial\n> move forward makes no difference to the symlink argument.\n\nPart of the reason we can't \"just continue to use the symlink method\" is\nthat the PGXLOG environment variable situation is currently in CVS\nbeyond initdb and in postmaster, postgres, and pg_ctl, so we do have to\ndo something before 7.3 or we will have new environment variable\nhandling in all those commands, and I don't think we have agreement on\nthat.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 16 Aug 2002 11:54:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "On Fri, 16 Aug 2002, Bruce Momjian wrote:\n\n> Part of the reason we can't \"just continue to use the symlink method\" is\n> that the PGXLOG environment variable situation is currently in CVS\n> beyond initdb and in postmaster, postgres, and pg_ctl, so we do have to\n> do something before 7.3 or we will have new environment variable\n> handling in all those commands, and I don't think we have agreement on\n> that.\n\nWell, let's take it out, then, and use the symlink instead. It may\nbe in CVS now, but it's never been in a release, so there should\nbe no problem with removing it.\n\nI think we've got some pretty strong opinions here that distributing\nconfiguration information amongst multiple environment variables\nis a Bad Idea.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Sat, 17 Aug 2002 15:30:01 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "\nOK, with two people now asking to have the patch removed, and with no\ncomment from Thomas, I have removed the patch. This removes XLogDir\nenvironment variable, and -X postmaster/postgres/initdb/pg_ctl flag.\n\nI have also removed the code that dynamically sized xlogdir.\n\nI will post the patch to patches, and keep the patch here in case it is\nneeded later.\n\n---------------------------------------------------------------------------\n\nCurt Sampson wrote:\n> On Fri, 16 Aug 2002, Bruce Momjian wrote:\n> \n> > Part of the reason we can't \"just continue to use the symlink method\" is\n> > that the PGXLOG environment variable situation is currently in CVS\n> > beyond initdb and in postmaster, postgres, and pg_ctl, so we do have to\n> > do something before 7.3 or we will have new environment variable\n> > handling in all those commands, and I don't think we have agreement on\n> > that.\n> \n> Well, let's take it out, then, and use the symlink instead. It may\n> be in CVS now, but it's never been in a release, so there should\n> be no problem with removing it.\n> \n> I think we've got some pretty strong opinions here that distributing\n> configuration information amongst multiple environment variables\n> is a Bad Idea.\n> \n> cjs\n> -- \n> Curt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n> Don't you know, in this new Dark Age, we're all light. --XTC\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 17 Aug 2002 11:13:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, with two people now asking to have the patch removed, and with no\n> comment from Thomas, I have removed the patch. This removes XLogDir\n> environment variable, and -X postmaster/postgres/initdb/pg_ctl flag.\n\nI thought we intended to keep the -X switch for initdb (only), and have\nit make a symlink.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Aug 2002 11:27:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, with two people now asking to have the patch removed, and with no\n> > comment from Thomas, I have removed the patch. This removes XLogDir\n> > environment variable, and -X postmaster/postgres/initdb/pg_ctl flag.\n> \n> I thought we intended to keep the -X switch for initdb (only), and have\n> it make a symlink.\n\nThe majority of the patch wasn't needed, so rather than muck it up, I\njust backed it all out. If we want that, and I think we do, someone\nshould implent it as a separate patch that people can review. All the\nwork is going to be done in initdb anyway.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 17 Aug 2002 12:21:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "> OK, with two people now asking to have the patch removed, and with no\n> comment from Thomas, I have removed the patch. This removes XLogDir\n> environment variable, and -X postmaster/postgres/initdb/pg_ctl flag.\n> I have also removed the code that dynamically sized xlogdir.\n\n... Back in town...\n\nSorry to hear that this is the way it turned out. It is a bad precedent\nimho, and I see no way forward on my interest in this area. Hopefully\nsomeone else will pick it up; perhaps one of those so vehemently against\nthe details of this?\n\n - Thomas\n",
"msg_date": "Mon, 19 Aug 2002 09:24:21 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "\nYes, perhaps a bad precedent. I have very rarely done this. I do have\nthe patch here if anyone wants to use it. My guess is if someone\nimplements it, it will be done only in initdb, and use symlinks, which\nyou and Marc don't like, so we may be best leaving it undone for 7.3 and\nreturning with a clear slate in 7.4.\n\n---------------------------------------------------------------------------\n\nThomas Lockhart wrote:\n> > OK, with two people now asking to have the patch removed, and with no\n> > comment from Thomas, I have removed the patch. This removes XLogDir\n> > environment variable, and -X postmaster/postgres/initdb/pg_ctl flag.\n> > I have also removed the code that dynamically sized xlogdir.\n> \n> ... Back in town...\n> \n> Sorry to hear that this is the way it turned out. It is a bad precedent\n> imho, and I see no way forward on my interest in this area. Hopefully\n> someone else will pick it up; perhaps one of those so vehemently against\n> the details of this?\n> \n> - Thomas\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 19 Aug 2002 22:27:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Sorry to hear that this is the way it turned out. It is a bad precedent\n> imho, and I see no way forward on my interest in this area. Hopefully\n> someone else will pick it up; perhaps one of those so vehemently against\n> the details of this?\n\nI said I would be willing to make initdb create a symlink given a -X\nswitch; if you don't want to pick it up then I will do that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Aug 2002 22:45:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src backend/tcop/postgres.cbacke "
}
] |
[
{
"msg_contents": "CVSROOT:\t/cvsroot\nModule name:\tpgsql-server\nChanges by:\tthomas@postgresql.org\t02/08/04 02:44:47\n\nModified files:\n\tsrc/include/utils: timestamp.h \n\nLog message:\n\tDefine macros for handling typmod manipulation for date/time types.\n\tShould be more robust than all of that brute-force inline code.\n\tRename macros for masking and typmod manipulation to put TIMESTAMP_\n\tor INTERVAL_ in front of the macro name, to reduce the possibility\n\tof name space collisions.\n\nModified files:\n\tsrc/backend/utils/adt: date.c datetime.c format_type.c \n\t nabstime.c timestamp.c varlena.c \n\nLog message:\n\tAdd guard code to protect from buffer overruns on long date/time input\n\tstrings. Should go back in and look at doing this a bit more elegantly\n\tand (hopefully) cheaper. Probably not too bad anyway, but it seems a\n\tshame to scan the strings twice: once for length for this buffer overrun\n\tprotection, and once to parse the line.\n\tRemove use of pow() in date/time handling; was already gone from everything\n\t*but* the time data types.\n\tDefine macros for handling typmod manipulation for date/time types.\n\tShould be more robust than all of that brute-force inline code.\n\tRename macros for masking and typmod manipulation to put TIMESTAMP_\n\tor INTERVAL_ in front of the macro name, to reduce the possibility\n\tof name space collisions.\n\n",
"msg_date": "Sun, 4 Aug 2002 02:44:47 -0400 (EDT)",
"msg_from": "thomas@postgresql.org (Thomas Lockhart)",
"msg_from_op": true,
"msg_subject": "pgsql-server/src include/utils/timestamp.h bac ..."
},
{
"msg_contents": "thomas@postgresql.org (Thomas Lockhart) writes:\n\n> Log message:\n> \tAdd guard code to protect from buffer overruns on long date/time input\n> \tstrings. Should go back in and look at doing this a bit more elegantly\n> \tand (hopefully) cheaper. Probably not too bad anyway, but it seems a\n> \tshame to scan the strings twice: once for length for this buffer overrun\n> \tprotection, and once to parse the line.\n\nAre these changes available for 7.2, too? There is at least a DoS\npotential lurking here. :-(\n\n-- \nFlorian Weimer \t Weimer@CERT.Uni-Stuttgart.DE\nUniversity of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\nRUS-CERT fax +49-711-685-5898\n",
"msg_date": "Sun, 04 Aug 2002 19:53:17 +0200",
"msg_from": "Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src include/utils/timestamp.h bac ..."
},
{
"msg_contents": "Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE> writes:\n> thomas@postgresql.org (Thomas Lockhart) writes:\n> > Log message:\n> > \tAdd guard code to protect from buffer overruns on long date/time input\n> > \tstrings. Should go back in and look at doing this a bit more elegantly\n> > \tand (hopefully) cheaper. Probably not too bad anyway, but it seems a\n> > \tshame to scan the strings twice: once for length for this buffer overrun\n> > \tprotection, and once to parse the line.\n> \n> Are these changes available for 7.2, too? There is at least a DoS\n> potential lurking here. :-(\n\nThomas can correct me if I'm mistaken, but I believe these changes apply\nto the new integer datetime code Thomas wrote earlier in the 7.3\ndevelopment cycle -- i.e. there's no bug present in 7.2, or earlier CVS\ncode when compiled without --enable-integer-datetimes.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "04 Aug 2002 18:45:46 -0400",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src include/utils/timestamp.h bac ..."
},
{
"msg_contents": "...\n> Thomas can correct me if I'm mistaken, but I believe these changes apply\n> to the new integer datetime code Thomas wrote earlier in the 7.3\n> development cycle -- i.e. there's no bug present in 7.2, or earlier CVS\n> code when compiled without --enable-integer-datetimes.\n\nActually, it is probably an issue for the earlier stuff too, but the\ninteger value reading seems to have different sensitivities to really\nlong strings which is the symptom that was noticed just recently.\n\nThe same technique for guarding would work fine for 7.2 also.\n\n - Thomas\n",
"msg_date": "Sun, 04 Aug 2002 16:03:10 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src include/utils/timestamp.h bac ..."
},
{
"msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n\n> Thomas can correct me if I'm mistaken, but I believe these changes apply\n> to the new integer datetime code\n\nNo, it's possible to crash the backend in 7.2, too.\n\n-- \nFlorian Weimer \t Weimer@CERT.Uni-Stuttgart.DE\nUniversity of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\nRUS-CERT fax +49-711-685-5898\n",
"msg_date": "Mon, 05 Aug 2002 08:50:05 +0200",
"msg_from": "Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql-server/src"
},
{
"msg_contents": "Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE> writes:\n\n> Neil Conway <nconway@klamath.dyndns.org> writes:\n>\n>> Thomas can correct me if I'm mistaken, but I believe these changes apply\n>> to the new integer datetime code\n>\n> No, it's possible to crash the backend in 7.2, too.\n\nAnd 7.2.1, of course.\n\nLet me ask again: Do you plan to address this in an update for 7.2.1?\n\n-- \nFlorian Weimer \t Weimer@CERT.Uni-Stuttgart.DE\nUniversity of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\nRUS-CERT fax +49-711-685-5898\n",
"msg_date": "Fri, 09 Aug 2002 21:50:04 +0200",
"msg_from": "Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE>",
"msg_from_op": false,
"msg_subject": "[SECURITY] DoS attack on backend possible (was: Re:\n\t[COMMITTERS] pgsql-server/src)"
},
{
"msg_contents": "Hi Florian,\n\nIs it possible to crash a 7.2.1 backend without having an entry in the\npg_hba.conf file?\n\ni.e. Is every PostgreSQL 7.2.1 installation around vulnerable to a\nremote DoS (or worse) from any user anywhere, at this moment in time?\n\nRegards and best wishes,\n\nJustin Clift\n\n\nFlorian Weimer wrote:\n> \n> Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE> writes:\n> \n> > Neil Conway <nconway@klamath.dyndns.org> writes:\n> >\n> >> Thomas can correct me if I'm mistaken, but I believe these changes apply\n> >> to the new integer datetime code\n> >\n> > No, it's possible to crash the backend in 7.2, too.\n> \n> And 7.2.1, of course.\n> \n> Let me ask again: Do you plan to address this in an update for 7.2.1?\n> \n> --\n> Florian Weimer Weimer@CERT.Uni-Stuttgart.DE\n> University of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\n> RUS-CERT fax +49-711-685-5898\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Sat, 10 Aug 2002 05:59:45 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re: "
},
{
"msg_contents": "Justin Clift <justin@postgresql.org> writes:\n\n> Is it possible to crash a 7.2.1 backend without having an entry in the\n> pg_hba.conf file?\n\nNo, but think of web applications and things like that. The web\nfrontend might pass in a date string which crashes the server backend.\nSince the crash can be triggered by mere data, an attacker does not\nhave to be able to send specific SQL statements to the server.\n\n-- \nFlorian Weimer \t Weimer@CERT.Uni-Stuttgart.DE\nUniversity of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\nRUS-CERT fax +49-711-685-5898\n",
"msg_date": "Sun, 11 Aug 2002 17:00:40 +0200",
"msg_from": "Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re: "
},
{
"msg_contents": "Hi Florian,\n\nAm I understanding this right:\n\n - A PostgreSQL 7.2.1 server can be crashed if it gets passed certain\ndate values which would be accepted by standard \"front end\" parsing? \nSo, a web application layer can request a date from a user, do standard\nintegrity checks (like looking for weird characters and formatting\nhacks) on the date given, then use the date as part of a SQL query, and\nPostgreSQL will die?\n\n?\n\nRegards and best wishes,\n\nJustin Clift\n\n\nFlorian Weimer wrote:\n> \n> Justin Clift <justin@postgresql.org> writes:\n> \n> > Is it possible to crash a 7.2.1 backend without having an entry in the\n> > pg_hba.conf file?\n> \n> No, but think of web applications and things like that. The web\n> frontend might pass in a date string which crashes the server backend.\n> Since the crash can be triggered by mere data, an attacker does not\n> have to be able to send specific SQL statements to the server.\n> \n> --\n> Florian Weimer Weimer@CERT.Uni-Stuttgart.DE\n> University of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\n> RUS-CERT fax +49-711-685-5898\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Mon, 12 Aug 2002 02:26:56 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re: "
},
{
"msg_contents": "Justin Clift <justin@postgresql.org> writes:\n> Am I understanding this right:\n> - A PostgreSQL 7.2.1 server can be crashed if it gets passed certain\n> date values which would be accepted by standard \"front end\" parsing? \n\nAFAIK it's a buffer overrun issue, so anything that looks like a\nreasonable date would *not* cause the problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 11 Aug 2002 13:09:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re: "
},
{
"msg_contents": "Justin Clift <justin@postgresql.org> writes:\n\n> - A PostgreSQL 7.2.1 server can be crashed if it gets passed certain\n> date values which would be accepted by standard \"front end\" parsing? \n> So, a web application layer can request a date from a user, do standard\n> integrity checks (like looking for weird characters and formatting\n> hacks) on the date given, then use the date as part of a SQL query, and\n> PostgreSQL will die?\n\nIt depends on the checking. If you just check that the date consists\nof digits (and a few additional characters), it's possible to crash\nthe server.\n\n-- \nFlorian Weimer \t Weimer@CERT.Uni-Stuttgart.DE\nUniversity of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\nRUS-CERT fax +49-711-685-5898\n",
"msg_date": "Sun, 11 Aug 2002 19:17:20 +0200",
"msg_from": "Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re: "
},
{
"msg_contents": "Hi Florian,\n\nVery hard call.\n\nIf this was even a \"fringe case\" whereby even only a few places that are\ndoing \"the right thing\" would be compromisable, then we should probably\ngo for a 7.2.2. Even if it's only 7.2.1 with this one bug fix.\n\nHowever, it sounds like this bug is really only going to affect those\nplaces which aren't correctly implementing *proper*, *decent* input\nvalidation, and are then passing this not-properly-checked value\nstraight into a SQL string for execution by the server.\n\nDoing that (not input checking properly) is a brain damaged concept all\nby itself. :(\n\nIs this scenario of not properly checking the input the only way\nPostgreSQL could be crashed by this bug In Real Life?\n\nHaving said this, is this what 7.2.2 here would require doing:\n\n- Create an archive of 7.2.1+bugfix, and call it 7.2.2, gzip, md5, etc,\nas appropriate, put on site\n- Update CVS appropriately\n- Create a new press release for 7.2.2, spread that appropriately too\n- Add an entry to the main website\n\nI reckon the only reason for making a 7.2.2 for this would be to help\nensure newbie (or very tired) coders don't get their servers taken out\nby clueful malicious types.\n\nRegards and best wishes,\n\nJustin Clift\n\n\nFlorian Weimer wrote:\n> \n> Justin Clift <justin@postgresql.org> writes:\n> \n> > - A PostgreSQL 7.2.1 server can be crashed if it gets passed certain\n> > date values which would be accepted by standard \"front end\" parsing?\n> > So, a web application layer can request a date from a user, do standard\n> > integrity checks (like looking for weird characters and formatting\n> > hacks) on the date given, then use the date as part of a SQL query, and\n> > PostgreSQL will die?\n> \n> It depends on the checking. 
If you just check that the date consists\n> of digits (and a few additional characters), it's possible to crash\n> the server.\n> \n> --\n> Florian Weimer Weimer@CERT.Uni-Stuttgart.DE\n> University of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\n> RUS-CERT fax +49-711-685-5898\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Mon, 12 Aug 2002 04:24:15 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re: "
},
{
"msg_contents": "> Justin Clift <justin@postgresql.org> writes:\n> > Am I understanding this right:\n> > - A PostgreSQL 7.2.1 server can be crashed if it gets passed certain\n> > date values which would be accepted by standard \"front end\" parsing?\n>\n> AFAIK it's a buffer overrun issue, so anything that looks like a\n> reasonable date would *not* cause the problem.\n\nStill, I believe this should require a 7.2.2 release. Imagine a university\ndatabase server for a course for example - the students would just crash it\nall the time.\n\nChris\n\n",
"msg_date": "Mon, 12 Aug 2002 10:25:18 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re: "
},
{
"msg_contents": "Hi Chris,\n\nChristopher Kings-Lynne wrote:\n> \n<snip> \n> Still, I believe this should require a 7.2.2 release. Imagine a university\n> database server for a course for example - the students would just crash it\n> all the time.\n\nHey yep, good point.\n\nIs this the only way that we know of non postgresql-superusers to be\nable to take out the server other than by extremely non-optimal,\nresource wasting queries?\n\nIf we release a 7.2.2 because of this, can we be pretty sure we have a\n\"no known vulnerabilities\" release, or are there other small holes which\nshould be fixed too?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> Chris\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Mon, 12 Aug 2002 12:31:56 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re:"
},
{
"msg_contents": "> Hey yep, good point.\n>\n> Is this the only way that we know of non postgresql-superusers to be\n> able to take out the server other than by extremely non-optimal,\n> resource wasting queries?\n>\n> If we release a 7.2.2 because of this, can we be pretty sure we have a\n> \"no known vulnerabilities\" release, or are there other small holes which\n> should be fixed too?\n\nWhat about that \"select cash_out(2) crashes because of opaque\" entry in the\nTODO? That really needs to be fixed.\n\nI was talking to a CS lecturer about switching to postgres from oracle when\n7.3 comes out and all he said was \"how easily is it hacked?\". He says their\nsystems are the most constantly bombarded in universities. What could I\nsay? That any unprivileged user can just go 'select cash_out(2)' to DOS the\nbackend?\n\nChris\n\n",
"msg_date": "Mon, 12 Aug 2002 10:37:42 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re:"
},
{
"msg_contents": "On Mon, 12 Aug 2002, Justin Clift wrote:\n\n> Hi Chris,\n> \n> Christopher Kings-Lynne wrote:\n> > \n> <snip> \n> > Still, I believe this should require a 7.2.2 release. Imagine a university\n> > database server for a course for example - the students would just crash it\n> > all the time.\n> \n> Hey yep, good point.\n> \n> Is this the only way that we know of non postgresql-superusers to be\n> able to take out the server other than by extremely non-optimal,\n> resource wasting queries?\n> \n\nCheck the TODO:\n\nYou are now connected as new user s.\ntemplate1=> select cash_out(2);\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!> \\q\n[swm@laptop a]$ bin/psql template1\npsql: could not connect to server: Connection refused\n Is the server running locally and accepting\n connections on Unix domain socket \"/tmp/.s.PGSQL.3987\"?\n[swm@laptop a]$\n\n---\n\nGavin\n\n",
"msg_date": "Mon, 12 Aug 2002 12:41:15 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re:"
},
{
"msg_contents": "\nYea, I added that TODO entry, and I am embarrased that a single cash_out\ncall could crash the backend. I thought about not making this public\nknowledge, but making it public hasn't marshalled any forces to fix it\nso maybe I was wrong to put it on TODO.\n\n---------------------------------------------------------------------------\n\nGavin Sherry wrote:\n> On Mon, 12 Aug 2002, Justin Clift wrote:\n> \n> > Hi Chris,\n> > \n> > Christopher Kings-Lynne wrote:\n> > > \n> > <snip> \n> > > Still, I believe this should require a 7.2.2 release. Imagine a university\n> > > database server for a course for example - the students would just crash it\n> > > all the time.\n> > \n> > Hey yep, good point.\n> > \n> > Is this the only way that we know of non postgresql-superusers to be\n> > able to take out the server other than by extremely non-optimal,\n> > resource wasting queries?\n> > \n> \n> Check the TODO:\n> \n> You are now connected as new user s.\n> template1=> select cash_out(2);\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> !> \\q\n> [swm@laptop a]$ bin/psql template1\n> psql: could not connect to server: Connection refused\n> Is the server running locally and accepting\n> connections on Unix domain socket \"/tmp/.s.PGSQL.3987\"?\n> [swm@laptop a]$\n> \n> ---\n> \n> Gavin\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 12 Aug 2002 01:09:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re:"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> \n> > Hey yep, good point.\n> >\n> > Is this the only way that we know of non postgresql-superusers to be\n> > able to take out the server other than by extremely non-optimal,\n> > resource wasting queries?\n> >\n> > If we release a 7.2.2 because of this, can we be pretty sure we have a\n> > \"no known vulnerabilities\" release, or are there other small holes which\n> > should be fixed too?\n> \n> What about that \"select cash_out(2) crashes because of opaque\" entry in the\n> TODO? That really needs to be fixed.\n> \n> I was talking to a CS lecturer about switching to postgres from oracle when\n> 7.3 comes out and all he said was \"how easily is it hacked?\". He says their\n> systems are the most constantly bombarded in universities. What could I\n> say? That any unprivileged user can just go 'select cash_out(2)' to DOS the\n> backend?\n\nIf he's using Oracle already, he ought to check out:\n\nhttp://www.cert.org/advisories/CA-2002-08.html\n\nI'd still think it would be a good policy to make a security release.\nHowever, without user resource limits in PostgreSQL, anyone can make a\nmachine useless with a query like:\n\nSELECT * \nFROM pg_class a, pg_class b, pg_class c, pg_class d, pg_class e, ... ;\n\nMike Mascari\nmascarm@mascari.com\n",
"msg_date": "Mon, 12 Aug 2002 03:17:56 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re:"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Justin Clift <justin@postgresql.org> writes:\n>> Am I understanding this right:\n>> - A PostgreSQL 7.2.1 server can be crashed if it gets passed certain\n>> date values which would be accepted by standard \"front end\" parsing? \n>\n> AFAIK it's a buffer overrun issue, so anything that looks like a\n> reasonable date would *not* cause the problem.\n\nYes, but if you just check that the date given by the user matches the\nregular expression \"[0-9]+-[0-9]+-[0-9]+\", it's still possible to\ncrash the backend.\n\n-- \nFlorian Weimer \t Weimer@CERT.Uni-Stuttgart.DE\nUniversity of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\nRUS-CERT fax +49-711-685-5898\n",
"msg_date": "Mon, 12 Aug 2002 10:19:19 +0200",
"msg_from": "Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re:"
},
{
"msg_contents": "Mike Mascari <mascarm@mascari.com> writes:\n\n> I'd still think it would be a good policy to make a security release.\n> However, without user resource limits in PostgreSQL, anyone can make a\n> machine useless with a query like:\n>\n> SELECT * \n> FROM pg_class a, pg_class b, pg_class c, pg_class d, pg_class e, ... ;\n\nBut this requires to be able to send arbitrary SQL commands; just\nfeeding a specially crafted date string usually does not.\n\n-- \nFlorian Weimer \t Weimer@CERT.Uni-Stuttgart.DE\nUniversity of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\nRUS-CERT fax +49-711-685-5898\n",
"msg_date": "Mon, 12 Aug 2002 10:23:29 +0200",
"msg_from": "Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re:"
},
{
"msg_contents": "On Mon, 12 Aug 2002, Florian Weimer wrote:\n\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> \n> > Justin Clift <justin@postgresql.org> writes:\n> >> Am I understanding this right:\n> >> - A PostgreSQL 7.2.1 server can be crashed if it gets passed certain\n> >> date values which would be accepted by standard \"front end\" parsing? \n> >\n> > AFAIK it's a buffer overrun issue, so anything that looks like a\n> > reasonable date would *not* cause the problem.\n> \n> Yes, but if you just check that the date given by the user matches the\n> regular expression \"[0-9]+-[0-9]+-[0-9]+\", it's still possible to\n> crash the backend.\n\nFlorian,\n\nAnyone who is using that regular expression in an attempt to validate a\nuser supplied date is already in trouble.\n\nGavin\n\n",
"msg_date": "Mon, 12 Aug 2002 18:27:27 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re:"
},
{
"msg_contents": "Well, if it's a buffer overrun, there is certainly potential for risks\nwell beyond that of simply crashing the \"be\". It's certainly possible\nthat a simple bug in one cgi script or web site could allow someone to\nexecute code on the database host because of this bug. Assuming they\nare running the \"be\" as \"postgres\" or some other seemingly harmless \nuser, it's still possible that complete destruction of any and all\ndatabases which are hosted and accessible by this user can be utterly\ndestroyed or miscellaneously corrupted.\n\nBuffer over runs should be treated with the up most urgency and\nrespect. IMO, any known buffer overrun is worthy of an emergency fix\nand corresponding advisory.\n\nGreg Copeland\n\n\nOn Sun, 2002-08-11 at 12:09, Tom Lane wrote:\n> Justin Clift <justin@postgresql.org> writes:\n> > Am I understanding this right:\n> > - A PostgreSQL 7.2.1 server can be crashed if it gets passed certain\n> > date values which would be accepted by standard \"front end\" parsing? \n> \n> AFAIK it's a buffer overrun issue, so anything that looks like a\n> reasonable date would *not* cause the problem.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html",
"msg_date": "12 Aug 2002 08:24:16 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re:"
},
{
"msg_contents": "Greg Copeland <greg@CopelandConsulting.Net> writes:\n\n> Well, if it's a buffer overrun, there is certainly potential for risks\n> well beyond that of simply crashing the \"be\".\n\nIt's a buffer overrun, but the data has to pass through the date/time\nparser in the backend, so it's not entirely obvious how you can\nexploit this to run arbitrary code.\n\n-- \nFlorian Weimer \t Weimer@CERT.Uni-Stuttgart.DE\nUniversity of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\nRUS-CERT fax +49-711-685-5898\n",
"msg_date": "Mon, 12 Aug 2002 15:48:10 +0200",
"msg_from": "Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re:"
},
{
"msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n\n>> Yes, but if you just check that the date given by the user matches the\n>> regular expression \"[0-9]+-[0-9]+-[0-9]+\", it's still possible to\n>> crash the backend.\n\n> Anyone who is using that regular expression in an attempt to validate a\n> user supplied date is already in trouble.\n\nI don't understand why extremely strict syntax checks are necessary.\nThe database has to parse it again anyway, and if you can't rely on\nthe database to get this simple parsing right, will it store your\ndata? Such a reasoning doesn't seem to be too far-fetched to me\n\nI would probably impose a length limit in the frontend that uses the\ndatabase, but the PostgreSQL documentation does not state that this is\na requirement (because the parsers in the backend are so fragile).\n\n-- \nFlorian Weimer \t Weimer@CERT.Uni-Stuttgart.DE\nUniversity of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\nRUS-CERT fax +49-711-685-5898\n",
"msg_date": "Mon, 12 Aug 2002 15:51:35 +0200",
"msg_from": "Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re:"
},
{
"msg_contents": "On Mon, 12 Aug 2002, Florian Weimer wrote:\n\n> Gavin Sherry <swm@linuxworld.com.au> writes:\n> \n> >> Yes, but if you just check that the date given by the user matches the\n> >> regular expression \"[0-9]+-[0-9]+-[0-9]+\", it's still possible to\n> >> crash the backend.\n> \n> > Anyone who is using that regular expression in an attempt to validate a\n> > user supplied date is already in trouble.\n> \n> I don't understand why extremely strict syntax checks are necessary.\n> The database has to parse it again anyway, and if you can't rely on\n> the database to get this simple parsing right, will it store your\n> data? Such a reasoning doesn't seem to be too far-fetched to me\n\nWhy attempt to validate the user data at all if you're going to do a bad\njob of it? Moreover, 'rely on the database to get this ... right': what\nkind of security principle is that? For someone interested in security,\nyou've just broken the most important principle.\n\nAs to your other point -- that this bug in the data/time code actually\n*reflects* the quality and reliability of the database itself -- you've\nreally gone too far. The best software has bugs. The reason that no one is\njumping up and down making releases and giving you a medal is that (1) it\nis still questionable as to whether or not this bug exists in 7.2.1 (2) it\ndoes not appear to be exploitable (3) it could only be used to cause a\ndenial of service by an authorised user (4) it is common practise for\ndatabase application developers to validate user input and if they don't\nthey have bigger problems than a potential DoS on their hands.\n\nGavin\n\n",
"msg_date": "Tue, 13 Aug 2002 00:15:01 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re:"
},
{
"msg_contents": "Gavin Sherry wrote:\n\n> As to your other point -- that this bug in the data/time code actually\n> *reflects* the quality and reliability of the database itself -- you've\n> really gone too far. The best software has bugs.\n\nFor example, in the current version of Oracle 9i, if a client (say \nSQL*Plus) is running on a linux box and talking to Oracle running on a \nSolaris box, executes the following:\n\ncreate table foo(i integer primary key, bar blob);\n\n... then later does ...\n\nupdate foo set bar=empty_blob() where i = <some key value>\n\nThe Oracle server on Solaris crashes. *the whole thing* BANG! \nShot-to-the-head-dead. Not the user's client - the server.\n\nThis means that any user with the right to update a single table with a \nblob can crash Oracle at will.\n\nWhat does this say about Oracle's overall reliability?\n\nAs Gavin says all software has bugs. Most of PG's bugs are far less \nspectacular than the Oracle bug I mention here.\n\nOverall I rate PG and Oracle as being about equivalent in terms of bugs.\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Mon, 12 Aug 2002 07:26:44 -0700",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re:"
},
{
"msg_contents": "Weimer@CERT.Uni-Stuttgart.DE (Florian Weimer) wrote in \nnews:8765yg2niw.fsf@CERT.Uni-Stuttgart.DE:\n\n> Gavin Sherry <swm@linuxworld.com.au> writes:\n> \n>>> Yes, but if you just check that the date given by the user matches the\n>>> regular expression \"[0-9]+-[0-9]+-[0-9]+\", it's still possible to\n>>> crash the backend.\n> \n>> Anyone who is using that regular expression in an attempt to validate a\n>> user supplied date is already in trouble.\n> \n> I don't understand why extremely strict syntax checks are necessary.\n> The database has to parse it again anyway, and if you can't rely on\n> the database to get this simple parsing right, will it store your\n> data? Such a reasoning doesn't seem to be too far-fetched to me\n\nI believe this is often referred to as the layered onion approach to \nsecurity; besides that, what constitutes extremely strict syntax checking is \nsomewhat subjective. What about checking the input for backslash, quote, \nand double quote (\\'\")? If you are not taking care of those in input then \ncrashing the backend is going to be the least of your worries. I think \nthere needs to be some level of checking before the input is blindly passed \nto the backend for parsing. Typically the input for an individual field \nwouldn't be more than ~255 characters, unless you are dealing with TEXT or \nlo's. I don't consider adding a length check to the usual \\'\" check to be \nextreme... but perhaps just as necessary?\n\n",
"msg_date": "Mon, 12 Aug 2002 18:46:58 +0000 (UTC)",
"msg_from": "ngpg@grymmjack.com",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re:"
},
{
"msg_contents": "Hi,\n\n-- ngpg@grymmjack.com wrote:\n\n> What about checking the input for backslash, quote, \n> and double quote (\\'\")? If you are not taking care of those in input\n> then crashing the backend is going to be the least of your worries. \n\nWith Perl and *using placeholders and bind values*, the application\ndeveloper does not have to worry about this. So, usually I don't check the\nvalues in my applications (e.g. if only values between 1 and 5 are\nallowed and under normal circumstances only these are possible); it's the\ntask of the database (check constraint). \n\n\nCiao\n Alvar\n\n\n-- \n** ODEM is nominated for the poldi Award! http://www.poldiaward.de/\n** http://www.poldiaward.de/index.php?display=detail&cat=audi&item=24\n** http://odem.org/\n** More projects: http://alvar.a-blast.org/\n\n\n",
"msg_date": "Sun, 18 Aug 2002 13:55:21 +0200",
"msg_from": "Alvar Freude <alvar@a-blast.org>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible (was: Re:"
},
{
"msg_contents": "Alvar Freude <alvar@a-blast.org> writes:\n\n>> What about checking the input for backslash, quote, \n>> and double quote (\\'\")? If you are not taking care of those in input\n>> then crashing the backend is going to be the least of your worries. \n>\n> with Perl and *using placeholders and bind values*, the application\n> developer has not to worry about this. So, usually I don't check the\n> values in my applications (e.g. if only values between 1 and 5 are\n> allowed and under normal circumstances only these are possible), it's the\n> task of the database (check constraint). \n\nThat's the idea. It's the job of the database to guarantee data\nintegrity.\n\nObviously, the PostgreSQL developers disagree. If I've got to do all\nchecking in the application anyway, I can almost use MySQL\ninstead. ;-)\n\n-- \nFlorian Weimer \t Weimer@CERT.Uni-Stuttgart.DE\nUniversity of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\nRUS-CERT fax +49-711-685-5898\n",
"msg_date": "Mon, 19 Aug 2002 18:59:00 +0200",
"msg_from": "Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible"
},
{
"msg_contents": "Hi Florian,\n\nYou guys *definitely* write scarey code.\n\n:-(\n\nRegards and best wishes,\n\nJustin Clift\n\n\nFlorian Weimer wrote:\n> \n> Alvar Freude <alvar@a-blast.org> writes:\n> \n> >> What about checking the input for backslash, quote,\n> >> and double quote (\\'\")? If you are not taking care of those in input\n> >> then crashing the backend is going to be the least of your worries.\n> >\n> > with Perl and *using placeholders and bind values*, the application\n> > developer has not to worry about this. So, usually I don't check the\n> > values in my applications (e.g. if only values between 1 and 5 are\n> > allowed and under normal circumstances only these are possible), it's the\n> > task of the database (check constraint).\n> \n> That's the idea. It's the job of the database to guarantee data\n> integrety.\n> \n> Obviously, the PostgreSQL developers disagree. If I've got to do all\n> checking in the application anyway, I can almost use MySQL\n> instead. ;-)\n> \n> --\n> Florian Weimer Weimer@CERT.Uni-Stuttgart.DE\n> University of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\n> RUS-CERT fax +49-711-685-5898\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Tue, 20 Aug 2002 03:07:30 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible"
},
{
"msg_contents": "Justin Clift <justin@postgresql.org> writes:\n\n> You guys *definitely* write scarey code.\n\nYes, indeed. My code has a lot of unnecessary and error-prone input\nvalidation checks because I don't trust the PostgreSQL parser.\n\nThat's scary. You don't trust your database that it processes a\nsimple text string, yet you still believe that it keeps all the data\nyou store, although this involves much more complex data structures\nand algorithms.\n\nWhat a strange asymmetry!\n\n-- \nFlorian Weimer \t Weimer@CERT.Uni-Stuttgart.DE\nUniversity of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\nRUS-CERT fax +49-711-685-5898\n",
"msg_date": "Mon, 19 Aug 2002 19:14:18 +0200",
"msg_from": "Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible"
},
{
"msg_contents": "On Mon, 2002-08-19 at 13:14, Florian Weimer wrote:\n> Justin Clift <justin@postgresql.org> writes:\n> \n> > You guys *definitely* write scarey code.\n> \n> Yes, indeed. My code has a lot of unnecessary and error-prone input\n> validation checks because I don't trust the PostgreSQL parser.\n\nBah.. Check the datatype is close and send it in.\n\nWould be much easier to capture database errors if you didn't have to\nbase all error matches on regular expressions (error codes will be\nnice).\n\n",
"msg_date": "19 Aug 2002 13:17:54 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible"
},
{
"msg_contents": "Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE> writes:\n> That's the idea. It's the job of the database to guarantee data\n> integrety.\n\n> Obviously, the PostgreSQL developers disagree.\n\nLook: it's an acknowledged bug and it's fixed in current sources.\nThe disagreement is over whether this single bug is sufficient reason\nto force issuance of a 7.2.2 release. Given that we are within a couple\nof weeks of going beta for 7.3, the previous decision not to issue a\n7.2.2 release will stand, unless something *much* worse than this pops\nup.\n\nSaying or implying that the developers don't care about data integrity\ndoes not enhance your credibility.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Aug 2002 14:33:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible "
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Saying or implying that the developers don't care about data integrity\n> does not enhance your credibility.\n\nSorry, my fault. Indeed, I didn't check carefully whether the people\nwho go a bit too far in downplaying the problem at hand are in fact\nPostgreSQL developers.\n\n-- \nFlorian Weimer \t Weimer@CERT.Uni-Stuttgart.DE\nUniversity of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\nRUS-CERT fax +49-711-685-5898\n",
"msg_date": "Mon, 19 Aug 2002 20:38:23 +0200",
"msg_from": "Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible"
},
{
"msg_contents": "Weimer@CERT.Uni-Stuttgart.DE (Florian Weimer) wrote\n> Alvar Freude <alvar@a-blast.org> writes:\n> \n>>> What about checking the input for backslash, quote, \n>>> and double quote (\\'\")? If you are not taking care of those in\n>>> input then crashing the backend is going to be the least of your\n>>> worries. \n>>\n>> with Perl and *using placeholders and bind values*, the application\n>> developer has not to worry about this. So, usually I don't check the\n>> values in my applications (e.g. if only values between 1 and 5 are\n>> allowed and under normal circumstances only these are possible), it's\n>> the task of the database (check constraint). \n> \n> That's the idea. It's the job of the database to guarantee data\n> integrety.\n> \n> Obviously, the PostgreSQL developers disagree. If I've got to do all\n> checking in the application anyway, I can almost use MySQL\n> instead. ;-)\n> \n\nPerhaps I did not express myself very well.\nIf you are going to be passing any user input to the database, you \nmust/should validate it in some manner before blindly passing it to the db.\nThe db can and should guarantee data integrity, but the database cannot \nread your mind when it comes to how you structure your queries.\n\n$input = \"user'name\";\nINSERT INTO db (name) VALUES ('$input');\n\nwill fail because the ' in the input needs to be escaped with a \nbackslash. At some point this has to happen, because\n\nINSERT INTO db (name) VALUES ('user'name');\n\nis not a valid query.\n\nThe other thing is I think you are stretching the \"db integrity \nchecking\" argument a little too far. It's the db's responsibility to make \nsure only valid data is stored, but it's not the db's responsibility to \ndirectly interact with your end users -- this is the job of your \napplication and interface. If you insert a new record and there is a \nconstraint violation, how is your application supposed to know what \nillegal value(s) is/are causing it? How are you supposed to convey the \nproper information to your user to get the input you are looking for?\n\nBesides all that, and I don't mean to insult you, but you're just plain \nstupid if you blindly pass user-inputted data to your db. For that \nmatter, you're stupid if you blindly accept user input in any programming \nwithout checking it at some level.\n",
"msg_date": "Mon, 19 Aug 2002 20:54:53 +0000 (UTC)",
"msg_from": "ngpg@grymmjack.com",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible"
},
{
"msg_contents": "On Mon, 19 Aug 2002 ngpg@grymmjack.com wrote:\n\n> $input = \"user'name\";\n> INSERT INTO db (name) VALUES ('$input');\n>\n> will fail because the ' in the input needs to be escaped with a\n> backslash.\n\nIt will fail because you're doing this a very, very, very bad way.\nWhy rewrite this kind of stuff when the vendor has already made\ncorrect code available?\n\n PreparedStatement stmt = connection.prepareStatement(\n\t\"INSERT INTO db (name) VALUES (?)\");\n stmt.setString(\"user'name\");\n stmt.execute();\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Tue, 20 Aug 2002 20:29:11 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible"
},
{
"msg_contents": "cjs@cynic.net (Curt Sampson) wrote in\n> On Mon, 19 Aug 2002 ngpg@grymmjack.com wrote:\n> \n>> $input = \"user'name\";\n>> INSERT INTO db (name) VALUES ('$input');\n>>\n>> will fail because the ' in the input needs to be escaped with a\n>> backslash.\n> \n> It will fail because you're doing this a very, very, very bad way.\n> Why rewrite this kind of stuff when the vendor has already made\n> correct code available?\n> \n> PreparedStatement stmt = connection.prepareStatement(\n> \"INSERT INTO db (name) VALUES (?)\");\n> stmt.setString(\"user'name\");\n> stmt.execute();\n> \n> cjs\n\nCurt:\nI am not doing it this way, I am trying to point out that doing it without \n\"doing something\" (whether it be using preparedstatement or WHATEVER), is, \nas you say, very very very bad (I am agreeing with you). I am further \nsaying that whatever it is you do, you should also be doing some other \nsimple validation, like the length of the inputs, because most inputs wont \nbe over 255 chars before being prepared. This is just an example, but you \nshould do whatever validation would apply to you (and this is probably true \ncoding for any user input whether it involves a db or not). I am just \nsaying this is good practice in my opinion and had these people that \nbrought up the issue in the first place were doing it, then pgsql's \nshortcomings would not have been as severe a problem. Things I am not \nsaying are: its ok for pgsql to have this DoS problem; its the frontends \nresponsibility to maintain data integrity not the backend.\n",
"msg_date": "Tue, 20 Aug 2002 13:08:26 +0000 (UTC)",
"msg_from": "ngpg@grymmjack.com",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible"
},
{
"msg_contents": "ngpg@grymmjack.com writes:\n\n> if you are going to be passing any user input to the database, you \n> must/should validate in some manner before blindly passing it to the db.\n> The db can and should guarantee data integrity, but the database cannot \n> read your mind when it comes to how you structure your queries.\n\n[example of SQL injection attack deleted]\n\nThis is not the problem at hand. SQL injection attacks can be avoided\neasily. Bugs in the conversion of strings to internal PostgreSQL\nobjects are a different matter, though, and usually, devastating\neffects cannot be avoided by (reasonably complex) checks in the\nfrontend.\n\n-- \nFlorian Weimer \t Weimer@CERT.Uni-Stuttgart.DE\nUniversity of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\nRUS-CERT fax +49-711-685-5898\n",
"msg_date": "Wed, 21 Aug 2002 14:34:46 +0200",
"msg_from": "Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE>",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible"
},
{
"msg_contents": "Weimer@CERT.Uni-Stuttgart.DE (Florian Weimer) wrote \n\n> ngpg@grymmjack.com writes:\n> \n>> if you are going to be passing any user input to the database, you \n>> must/should validate in some manner before blindly passing it to the db.\n>> The db can and should guarantee data integrity, but the database cannot \n>> read your mind when it comes to how you structure your queries.\n> \n> [example of SQL injection attack deleted]\n> \n> This is not the problem at hand. SQL injection attacks can be avoided\n> easily. Bugs in the conversion of strings to internal PostgreSQL\n> objects are a different matter, though, and usually, devastating\n> effects cannot be avoided by (reasonably complex) checks in the\n> frontend.\n> \n\nYeah, I wasn't aware that adding an if(strlen($input) > SOME_REASONABLE_MAX) \ncheck was complex. The SQL injection attack was just an(other) example of why \nyou do not simply forward user input to the backend. All I was trying to \npoint out is that most of these buffer overflows in the backend can be \navoided just as easily as the SQL injection attack.\n",
"msg_date": "Wed, 21 Aug 2002 21:28:52 +0000 (UTC)",
"msg_from": "ngpg@grymmjack.com",
"msg_from_op": false,
"msg_subject": "Re: [SECURITY] DoS attack on backend possible"
}
] |
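The thread above keeps circling around two complementary defenses: driver-level placeholders (so quoting is never hand-rolled, per Curt's PreparedStatement example) and a cheap application-side length check (per ngpg's suggestion). A minimal sketch of both, using Python's stdlib sqlite3 purely as a stand-in backend; the table name and 255-character limit are illustrative assumptions, not anything from the thread's actual code:

```python
import sqlite3

MAX_FIELD_LEN = 255  # illustrative app-side limit, as suggested in the thread


def insert_name(conn, name):
    # Layered checks: reject oversized input before it ever reaches the
    # backend parser...
    if len(name) > MAX_FIELD_LEN:
        raise ValueError("input too long")
    # ...and let the driver bind the value, so embedded quotes need no
    # manual escaping at all.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
insert_name(conn, "user'name")  # the quote that breaks naive string pasting
row = conn.execute("SELECT name FROM users").fetchone()
print(row[0])  # user'name
```

The same shape works with any DB-API driver (e.g. a PostgreSQL one), since the placeholder binding happens in the driver rather than by string interpolation.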
[
{
"msg_contents": "I've committed changes to do the following:\n\no Fix buffer overrun possibilities in date/time handling\no Handle fixed-length char and bit literals\no Implement IS OF type predicate\no Define macros to manipulate date/time typmod values\no Map hex string literals to bit string type (may change later)\no Map CREATE TABLE/OF to inheritance. May change later\no Implement WAL log file location support using \"-X\" and PGXLOG\n\n - Thomas\n",
"msg_date": "Sun, 04 Aug 2002 00:02:25 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Set 'o patches"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> I've committed changes to do the following:\n> o Fix buffer overrun possibilities in date/time handling\n> o Handle fixed-length char and bit literals\n> o Implement IS OF type predicate\n> o Define macros to manipulate date/time typmod values\n> o Map hex string literals to bit string type (may change later)\n> o Map CREATE TABLE/OF to inheritance. May change later\n> o Implement WAL log file location support using \"-X\" and PGXLOG\n\nWould it be out of line to question the fact that none of these commit\nmessages showed any documentation updates?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 04 Aug 2002 03:12:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Set 'o patches "
},
{
"msg_contents": "...\n> Would it be out of line to question the fact that none of these commit\n> messages showed any documentation updates?\n\nNot at all :)\n\nI needed to dump the patches into the tree since with the recent changes\nto CVS I'm at risk of losing the ability to work and to generate patches\nfor the work I've already done.\n\nI'll try to work on docs when things settle down, but am likely to be\ndelayed at least a few days.\n\n - Thomas\n",
"msg_date": "Sun, 04 Aug 2002 00:42:06 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Re: Set 'o patches"
}
] |
[
{
"msg_contents": "I think it's that XLog stuff:\n\ngcc -pipe -O -g -Wall -Wmissing-prototypes -Wmissing-declarations -R/home/c\nhriskl/local/lib -export-dynamic access/SUBSYS.o bootstrap/SUBSYS.o\ncatalog/SUBSYS.o parser/SUBSYS.o commands/SUBSYS.o executor/SUBSYS.o\nlib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o\nport/SUBSYS.o postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o\nstorage/SUBSYS.o tcop/SUBSYS.o\nutils/SUBSYS.o -lz -lreadline -lcrypt -lcompat -lm -lutil -o postgres\ntcop/SUBSYS.o: In function `PostgresMain':\n/home/chriskl/pgsql-head/src/backend/tcop/postgres.c:1550: undefined\nreference to `XLogDir'\ngmake[2]: *** [postgres] Error 1\ngmake[2]: Leaving directory `/home/chriskl/pgsql-head/src/backend'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/chriskl/pgsql-head/src'\ngmake: *** [all] Error 2\n\nChris\n\n\n",
"msg_date": "Sun, 4 Aug 2002 17:56:53 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Did someone break CVS?"
},
{
"msg_contents": "> I think it's that XLog stuff:\n\nDid you try a make clean? Did you update all of your directories? I\ndon't see a problem here, and I don't see any uncommitted files...\n\n - Thomas\n",
"msg_date": "Sun, 04 Aug 2002 07:48:22 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Did someone break CVS?"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> I think it's that XLog stuff:\n> Did you try a make clean? Did you update all of your directories? I\n> don't see a problem here, and I don't see any uncommitted files...\n\nI see the same failure in a fresh-this-morning CVS checkout: the final\nbackend link step fails with\n\ntcop/SUBSYS.o: In function `PostgresMain':\n/home/tgl/pgsql/src/backend/tcop/postgres.c:1550: undefined reference to `XLogDir'\n\nAre you not building with --enable-cassert, perhaps? The fault seems to\nbe due to \"Assert(strlen(XLogDir) > 0)\". Also, there is still an extern\nfor XLogDir in access/xlog.h, which is bogus if it's going to be static.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 04 Aug 2002 12:54:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Did someone break CVS? "
},
{
"msg_contents": "> >> I think it's that XLog stuff:\n> Are you not building with --enable-cassert, perhaps? The fault seems to\n> be due to \"Assert(strlen(XLogDir) > 0)\". Also, there is still an extern\n> for XLogDir in access/xlog.h, which is bogus if it's going to be static.\n\nI've updated xlog.h and xlog.c to fix the problem.\n\nI've removed the arbitrary upper limit on the allowed size of the\ndirectory path at the same time, though it was a reasonably large limit\nso would not likely be a problem in practice.\n\nSometimes I compile with assert checking on, and sometimes not; didn't\ncheck this one both ways.\n\n - Thomas\n",
"msg_date": "Sun, 04 Aug 2002 18:27:36 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Did someone break CVS?"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> I've removed the arbitrary upper limit on the allowed size of the\n> directory path at the same time,\n\nOh? Have you fixed *every* place that constrains the length limit of\nan XLOG-derived file name? Just making XLogDir dynamically allocated\nwill not improve matters, but IMHO make 'em worse because failures will\noccur on-the-fly instead of at startup.\n\nIn general I think that removing MAX_PG_PATH limits is a rather\npointless exercise --- no one has yet complained that MAX_PG_PATH is too\nsmall.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 04 Aug 2002 22:08:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Did someone break CVS? "
},
{
"msg_contents": "> In general I think that removing MAX_PG_PATH limits is a rather\n> pointless exercise --- no one has yet complained that MAX_PG_PATH is too\n> small.\n\nI just mentioned it in passing; that wasn't the point of the changes.\n\nIs there a design pattern that would ask us to enforce that length\nlimit? If so, I'd be happy to do so; if not, it doesn't much matter...\n\n - Thomas\n",
"msg_date": "Sun, 04 Aug 2002 19:23:11 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Did someone break CVS?"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Is there a design pattern that would ask us to enforce that length\n> limit? If so, I'd be happy to do so; if not, it doesn't much matter...\n\nWell, the issue is that the backend is just full of code like\n\n char tmppath[MAXPGPATH];\n\n snprintf(tmppath, MAXPGPATH, \"%s/xlogtemp.%d\",\n XLogDir, (int) getpid());\n\nI suppose we could run around and try to replace every single such\noccurrence with dynamically-sized buffers, but it seems hardly worth the\ntrouble --- and if you want a positive argument, I'd prefer not to\nintroduce another potential source of elogs (namely out-of-memory)\ninto code segments that run as critical sections, as some of the xlog\nmanipulation code does. Any elog there becomes a database panic. Is\nit worth taking such a risk to eliminate a limit that *no one* has ever\ncomplained about?\n\nIt would actually be better to limit XLogDir to MAXPGPATH minus a couple\ndozen characters, to ensure that filenames formed in the style above\ncannot overflow their buffer variables.\n\nBTW: was there anything in that patch that ensured XLogDir would be\nan absolute path? A relative path is guaranteed not to work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Aug 2002 10:58:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Did someone break CVS? "
},
{
"msg_contents": "> Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> Is there a design pattern that would ask us to enforce that length\n>> limit? If so, I'd be happy to do so; if not, it doesn't much matter...\n> \n> Well, the issue is that the backend is just full of code like\n> \n> char tmppath[MAXPGPATH];\n> \n> snprintf(tmppath, MAXPGPATH, \"%s/xlogtemp.%d\",\n> XLogDir, (int) getpid());\n> \n> I suppose we could run around and try to replace every single such\n> occurrence with dynamically-sized buffers, but it seems hardly worth the\n> trouble --- and if you want a positive argument, I'd prefer not to\n> introduce another potential source of elogs (namely out-of-memory)\n> into code segments that run as critical sections, as some of the xlog\n> manipulation code does. Any elog there becomes a database panic. Is\n> it worth taking such a risk to eliminate a limit that *no one* has ever\n> complained about?\n\nIf that one person did exist, would it not be possible for them to just \nincrease the value of MAXPGPATH and recompile?\n",
"msg_date": "Mon, 5 Aug 2002 15:26:54 +0000 (UTC)",
"msg_from": "ngpg@grymmjack.com",
"msg_from_op": false,
"msg_subject": "Re: Did someone break CVS?"
}
] |
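Tom's suggestion at the end of the thread -- cap XLogDir at MAXPGPATH minus a couple dozen characters so the snprintf'd filenames can never overflow -- can be sketched outside C. The constants here are assumptions for illustration (MAXPGPATH is taken as 1024, matching the PostgreSQL sources of this era; the 32-byte headroom is arbitrary):

```python
MAXPGPATH = 1024      # assumed buffer size, as in the PostgreSQL sources
SUFFIX_HEADROOM = 32  # illustrative room for a "/xlogtemp.<pid>" suffix


def check_xlog_dir(path):
    """Mimic a startup-time check: fail early instead of truncating later."""
    if len(path) > MAXPGPATH - SUFFIX_HEADROOM:
        raise ValueError("XLOG directory path too long")
    return path


def temp_path(xlog_dir, pid):
    # The C code formats "%s/xlogtemp.%d" into a MAXPGPATH-sized buffer;
    # given the startup check above, the result can never exceed MAXPGPATH.
    name = "%s/xlogtemp.%d" % (xlog_dir, pid)
    assert len(name) <= MAXPGPATH
    return name


print(temp_path(check_xlog_dir("/var/lib/pgsql/xlog"), 1234))
# /var/lib/pgsql/xlog/xlogtemp.1234
```

The point of the design is exactly the one argued above: one bounds check at startup, rather than dynamic allocation (and a possible out-of-memory elog) inside critical sections.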
[
{
"msg_contents": "Hi,\n\nI've recently observed some strange behavior using postgresql 7.2.1 on a \nRH-7.3 system. I've got a service for fetching rss feeds from the web. \nThese feeds are stored into a text column. Here's how the table looks like:\n\n Column | Type | Modifiers\n------------------+--------------------------+-----------\nchannel_id | integer |\ndataurl | character varying(1000) |\n...\n...\nlatest_feed | text |\nfeedidx | txtidx |\nIndexes: hellofeedidx\n\n(feedidx uses tsearch and indexes the faulty column, namely latest_feed).\n\nThis is not on a production system. After running this service for a \nwhile (almost a week). I get messages like:\n\n# select * from rss_channels;\nERROR: missing chunk number 1 for toast value 797979\n\nIf I try to delete the culprit row. I get:\n# delete from rss_channels where channel_id=116;\nERROR: simple_heap_delete: tuple already updated by self\n\nI'm running a full vacuum daily:\nsu - postgres -c 'vacuumdb --full --analyze --quiet --all'\n\nand a simple vacuum once every hour:\nsu - postgres -c 'vacuumdb --analyze --quiet --all'\n\nHere's how I configure and run PG:\n./configure --enable-multibyte=UNICODE --with-tcl --with-perl \n--enable-syslog\n\nsu -l postgres -c 'export LANG=C; /usr/local/pgsql/bin/postmaster -N 200 \n-B 1000 -o \"-S 2000\" -S -D /usr/local/pgsql/data '\n\n\nNote that I cannot do a pg_dump on the rss_channels table.\n\nAny idea of what might be the problem?\n\nBest wishes,\nNeophytos\n\n",
"msg_date": "Mon, 05 Aug 2002 00:15:41 +0300",
"msg_from": "Neophytos Demetriou <k2pts@cytanet.com.cy>",
"msg_from_op": true,
"msg_subject": "Error: missing chunk number ..."
},
{
"msg_contents": "Neophytos Demetriou <k2pts@cytanet.com.cy> writes:\n> # select * from rss_channels;\n> ERROR: missing chunk number 1 for toast value 797979\n\n> If I try to delete the culprit row. I get:\n> # delete from rss_channels where channel_id=116;\n> ERROR: simple_heap_delete: tuple already updated by self\n\nBizarre. Evidently there's something broken about the TOAST data for\nyour table. I am thinking that the toast table's index might be\ncorrupt, in which case you could probably recover by reindexing that\nindex. But first it would be nice to see if we can figure out exactly\nwhat happened --- is this the result of a software bug, or a hardware\nglitch? Would you be willing to let someone poke around in your\ndatabase, or perhaps if the DB is not too large, tar it all up to\nsend to someone for analysis?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 04 Aug 2002 18:02:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Error: missing chunk number ... "
},
{
"msg_contents": ">\n>\n>But first it would be nice to see if we can figure out exactly\n>what happened --- is this the result of a software bug, or a hardware\n>glitch? Would you be willing to let someone poke around in your\n>database, or perhaps if the DB is not too large, tar it all up to\n>send to someone for analysis?\n>\n\nSure. Who should I send the tarball to (~19MB the whole data/ directory)?\n\nBtw, vacuumdb does not work either (it used to work though before this \nproblem occurred):\nERROR: missing chunk number 1 for toast value 797979\nvacuumdb: vacuum xxxxx-db failed\n\nBest wishes,\nNeophytos\n\n",
"msg_date": "Mon, 05 Aug 2002 08:49:59 +0300",
"msg_from": "Neophytos Demetriou <k2pts@cytanet.com.cy>",
"msg_from_op": true,
"msg_subject": "Re: Error: missing chunk number ..."
},
{
"msg_contents": "On Mon, 5 Aug 2002, Neophytos Demetriou wrote:\n\n> >\n> >\n> >Bizarre. Evidently there's something broken about the TOAST data for\n> >your table. I am thinking that the toast table's index might be\n> >corrupt, in which case you could probably recover by reindexing that\n> >index.\n> >\n> \n> That's it! Doing the following fixed the problem:\n> \n> ./oid2name -d xxxxx-db -t rss_channels\n> Oid of table rss_channels from database \"xxxxx-db\":\n> _______________________________\n> 796690 = rss_channels\n> \n> backend> reindex index pg_toast_796690_idx;\n> \n> I still don't know what caused this but I'll wait a bit and see how it \n> goes -- this was not the first time I had this problem. In the past \n> (couple of weeks), I used to drop and recreate the table to resolve this.\n> \n\nIf you can, run memtest86 on the machine for 24 hours. Probably a hardware\nmemory problem.\n\nGavin\n\n",
"msg_date": "Mon, 5 Aug 2002 19:10:22 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Error: missing chunk number ..."
},
{
"msg_contents": ">\n>\n>Bizarre. Evidently there's something broken about the TOAST data for\n>your table. I am thinking that the toast table's index might be\n>corrupt, in which case you could probably recover by reindexing that\n>index.\n>\n\nThat's it! Doing the following fixed the problem:\n\n ./oid2name -d xxxxx-db -t rss_channels\nOid of table rss_channels from database \"xxxxx-db\":\n_______________________________\n796690 = rss_channels\n\nbackend> reindex index pg_toast_796690_idx;\n\nI still don't know what caused this but I'll wait a bit and see how it \ngoes -- this was not the first time I had this problem. In the past \n(couple of weeks), I used to drop and recreate the table to resolve this.\n\nThanks for the help.\n\nBest wishes,\nNeophytos\n\n",
"msg_date": "Mon, 05 Aug 2002 12:21:18 +0300",
"msg_from": "Neophytos Demetriou <k2pts@cytanet.com.cy>",
"msg_from_op": true,
"msg_subject": "Re: Error: missing chunk number ..."
},
{
"msg_contents": ">\n>\n>If you can, run memtest86 on the machine for 24 hours. Probably a hardware\n>memory problem.\n>\n\nAfter running memtest86 for two hours, it revealed nothing. I'm gonna \ntry it again this evening and let it run until tomorrow morning.\n\nBest regards,\nNeophytos\n\n",
"msg_date": "Mon, 05 Aug 2002 16:21:19 +0300",
"msg_from": "Neophytos Demetriou <k2pts@cytanet.com.cy>",
"msg_from_op": true,
"msg_subject": "Re: Error: missing chunk number ..."
},
{
"msg_contents": "Neophytos Demetriou <k2pts@cytanet.com.cy> writes:\n> Sure. Who should I send the tarball to (~19MB the whole data/ directory)?\n\nMe. Please shut down the postmaster before you tar up the data/\ndirectory.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Aug 2002 09:44:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Error: missing chunk number ... "
},
{
"msg_contents": ">\n>\n>>Sure. Who should I send the tarball to (~19MB the whole data/ directory)?\n>>\n>\n>Me. Please shut down the postmaster before you tar up the data/\n>directory.\n>\n\nOk, sent. Note that it occured again after fixing it this morning.\n\nBest wishes,\nNeophytos\n\n",
"msg_date": "Mon, 05 Aug 2002 18:34:54 +0300",
"msg_from": "Neophytos Demetriou <k2pts@cytanet.com.cy>",
"msg_from_op": true,
"msg_subject": "Re: Error: missing chunk number ..."
},
{
"msg_contents": "On Mon, 5 Aug 2002, Neophytos Demetriou wrote:\n\n> >\n> >\n> >If you can, run memtest86 on the machine for 24 hours. Probably a hardware\n> >memory problem.\n> >\n> \n> After running memtest86 for two hours, it revealed nothing. I'm gonna \n> try it again this evening and let it run until tomorrow morning.\n\nAlso check your drive subsystem for bad blocks. man badblocks in linux, \nnot sure what program for other Oses. Postgresql is good, but it can't \nmake up for bad drives or memory. :-)\n\n",
"msg_date": "Mon, 5 Aug 2002 11:01:37 -0600 (MDT)",
"msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: Error: missing chunk number ..."
}
] |
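The recovery in the thread above combines the table's OID (found with oid2name) with the pg_toast_<oid>_idx naming convention to pick the index to reindex. A minimal sketch of that lookup, assuming only the naming convention shown in the thread (the helper names are hypothetical):

```python
def toast_index_name(table_oid: int) -> str:
    # TOAST index naming convention as seen in the thread:
    # table OID 796690 -> pg_toast_796690_idx
    return f"pg_toast_{table_oid}_idx"

def reindex_statement(table_oid: int) -> str:
    # The command issued from the standalone backend above.
    return f"reindex index {toast_index_name(table_oid)};"

print(reindex_statement(796690))  # prints: reindex index pg_toast_796690_idx;
```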
[
{
"msg_contents": "On Mon, 2002-08-05 at 07:26, Gene Selkov, Jr. wrote:\n> Hi Everybody!\n> \n> I'm sorry I dropped out for so long -- was switching jobs and was on\n> the verge of deportation for a while. Still not entirely back to\n> normal, but can raise my head and look around.\n> \n> The first thing I discovered in the current version (7.2.1) -- as well\n> as in 7.1.3 -- seems to be an old problem with the hash am. It's\n> clustering too much. \n\n...\n\n> The quality of the hash function can be a factor here, but probably\n> not the major one. I was able to jack my limit up to over 3.7M rows by\n> reversing the order of bytes in hashvarlena() -- I made the pointer go\n> down instead of up. That spread the hash values more sparsely, but it\n> failed with the same message when I fed it with more than 4M rows.\n> \n> I saw Tom answer a similar question a year ago, by saying that the\n> hash access method is poorly supported and that there is no advantage\n> to using it. I am not sure about the former, but the latter is not\n> entirely true: we saw at least 20% gain in performance when we\n> switched from btree to hash, and my boss considers 20% a big enough\n> improvement. Besides, he knows the database theory and he is a\n> long-time BerkelyDB user,\n\nAs BerkelyDB came into being by splitting index methods out of an early\nversion of Postgres, it should still have some similar structure left,\nso one possibility is to check what they are doing to not be that bad.\n\nHave you tried to index your dataset into a BerkelyDB database ?\n\n> and in his world, hash is greatly superior\n> to btree, so he is wondering why are the postgres implementations so\n> close. Besides, it's a tough challenge to explain it to a Libertarian\n> that he'd better not do something.\n\n-------------\nHannu\n\n",
"msg_date": "05 Aug 2002 05:49:21 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: HASH: Out of overflow pages. Out of luck"
},
{
"msg_contents": "Hi Everybody!\n\nI'm sorry I dropped out for so long -- was switching jobs and was on\nthe verge of deportation for a while. Still not entirely back to\nnormal, but can raise my head and look around.\n\nThe first thing I discovered in the current version (7.2.1) -- as well\nas in 7.1.3 -- seems to be an old problem with the hash am. It's\nclustering too much. \n\nThe data I'm trying to index is of the type text, so it uses hashvarlena(). The values\nare something like this:\n\nREC00014\nREC00015\n....\nREC02463\nRBS00001\nRBS00002\n....\nRBS11021\n....\n\nIt's like several very long sequences with multiple gaps. \n\nWith the existing hashvarlena(), I can index about 2M rows, but not\nmuch more -- it comes back with the message about the overflow pages.\n\nhashvarlena() responds to this input in a piecewise linear fashion. That is,\nthe succession 'REC00010' .. 'REC00019' results in a linear order\n1346868191 .. 1346868200. The next ten hash values will also be sequential, \nbut in a different range. \n\nThe quality of the hash function can be a factor here, but probably\nnot the major one. I was able to jack my limit up to over 3.7M rows by\nreversing the order of bytes in hashvarlena() -- I made the pointer go\ndown instead of up. That spread the hash values more sparsely, but it\nfailed with the same message when I fed it with more than 4M rows.\n\nI saw Tom answer a similar question a year ago, by saying that the\nhash access method is poorly supported and that there is no advantage\nto using it. I am not sure about the former, but the latter is not\nentirely true: we saw at least 20% gain in performance when we\nswitched from btree to hash, and my boss considers 20% a big enough\nimprovement. Besides, he knows the database theory and he is a\nlong-time BerkeleyDB user, and in his world, hash is greatly superior\nto btree, so he is wondering why the postgres implementations are so\nclose. Besides, it's a tough challenge to explain it to a Libertarian\nthat he'd better not do something.\n\nI guess we can make such people happy by either fixing hash, or by\nmaking btree very much worse -- whichever is easier :)\n\nSeriously, though, if anybody wants to look at the problem, I saved\nthe set of keys that caused it as\n\nhttp://home.xnet.com/~selkovjr/tmp/prot.gz\n\nAlso, I heard that other database systems have special access methods\nfor sequences, that address this issue, and I heard as well that\nwithout an option to use an arbitrary hash function instead of a\nsingle built-in, such problems are bound to happen with particular\ndata sets. How true is that? If a better access method exists, what is\nit?\n\nThank you,\n\nGene\n\n\n",
"msg_date": "Sun, 04 Aug 2002 21:26:16 -0500",
"msg_from": "\"Gene Selkov, Jr.\" <selkovjr@xnet.com>",
"msg_from_op": false,
"msg_subject": "HASH: Out of overflow pages. Out of luck"
},
{
"msg_contents": "> I saw Tom answer a similar question a year ago, by saying that the\n> hash access method is poorly supported and that there is no advantage\n> to using it. I am not sure about the former, but the latter is not\n> entirely true: we saw at least 20% gain in performance when we\n> switched from btree to hash, and my boss considers 20% a big enough\n> improvement. Besides, he knows the database theory and he is a\n> long-time BerkelyDB user, and in his world, hash is greatly superior\n> to btree, so he is wondering why are the postgres implementations so\n> close. Besides, it's a tough challenge to explain it to a Libertarian\n> that he'd better not do something.\n>\n> I guess we can make such people happy by either fixing hash, or by\n> making btree very much worse -- whichever is easier :)\n\nCool. I'm sure that making btree much worse is definitely within my\nability - I'll submit a patch shortly with new pg_bench results.\n\nChris\n\n",
"msg_date": "Mon, 5 Aug 2002 10:36:57 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: HASH: Out of overflow pages. Out of luck"
},
{
"msg_contents": "\"Gene Selkov, Jr.\" <selkovjr@xnet.com> writes:\n> I saw Tom answer a similar question a year ago, by saying that the\n> hash access method is poorly supported and that there is no advantage\n> to using it. I am not sure about the former, but the latter is not\n> entirely true: we saw at least 20% gain in performance when we\n> switched from btree to hash, and my boss considers 20% a big enough\n> improvement. Besides, he knows the database theory and he is a\n> long-time BerkelyDB user, and in his world, hash is greatly superior\n> to btree, so he is wondering why are the postgres implementations so\n> close. Besides, it's a tough challenge to explain it to a Libertarian\n> that he'd better not do something.\n\nHey, if he wants to fix the hash code, more power to him ;-). Patches\nwill be gladly accepted.\n\nThe real problem with the PG hash index code is that approximately zero\neffort has been put into it since the code left Berkeley, while quite\na lot of work has been put into the btree code. Thus, for most purposes\nthe btree index type leaves hash in the dust, no matter what theoretical\nconcerns may say.\n\nIf you or he would like to expend the effort to bring hash indexing up\nto speed, I'll surely not stand in your way. But be advised that\nthere's a lot of work to be done there (concurrency issues and WAL\nsupport being at the top of my list) ... are you sure you believe that\nhash is worth the effort?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 04 Aug 2002 23:13:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: HASH: Out of overflow pages. Out of luck "
},
{
"msg_contents": "\n> From: Hannu Krosing <hannu@tm.ee>\n> \n> As BerkelyDB came into being by splitting index methods out of an early\n> version of Postgres, it should still have some similar structure left,\n> so one possibility is to check what they are doing to not be that bad.\n> \n> Have you tried to index your dataset into a BerkelyDB database ?\n\nYes, it works fine with BerkeleyDB. I looked at both code bases and I was\nstupefied by their complexity. Even if there is a similar structure,\nit must be very well disguised. Some of the data structures resemble\neach other's counterparts; the only piece that is exactly the same\nis one of the five BerkeleyDB hash functions. \n\nThe only useful experiment that I feel I am capable of making is\ntrying their __ham_hash5() function, which they claim is generally\nbetter than the other four, for most purposes. But they warn in their\ncomments that there is no such thing as \"a hash function\" -- there\nmust be one for each purpose.\n\nSo another experiment I might try is writing an adapter for a\nuser-supplied hash -- that might help in figuring out the role of the\nhash function in bin overflows. That should be easy enough to do, but\nfixing or re-writing the access method itself -- I'm sorry: the level \nof complexity scares me. Appears like a couple man-months \n(those Mythical Man-Months :).\n\n--Gene\n",
"msg_date": "Wed, 07 Aug 2002 00:41:04 -0500",
"msg_from": "selkovjr@xnet.com",
"msg_from_op": false,
"msg_subject": "Re: HASH: Out of overflow pages. Out of luck "
},
{
"msg_contents": "On Wed, Aug 07, 2002 at 12:41:04AM -0500, selkovjr@xnet.com wrote:\n> Some of the data structures resemble each other's counterparts;\n> the only piece that is exactly the same as one of the five\n> BerkelyDB's hash functions. \n\nFYI, the development version of PostgreSQL uses a completely\ndifferent (and higher quality) hash function.\n\n> The only useful experiment that I feel I am capable of making is\n> trying their __ham_hash5() function, with they claim is generally\n> better than the other four, for most purposes.\n\nI'm skeptical that changing the hash function would make a significant\ndifference to the usability of hash indexes. At the very least, the\nreproducible deadlocks under concurrent access need to be fixed, as well\nas a host of other issues.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Wed, 7 Aug 2002 10:46:58 -0400",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: HASH: Out of overflow pages. Out of luck"
}
] |
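Gene's observation that hashvarlena() maps the succession 'REC00010' .. 'REC00019' onto consecutive hash values can be reproduced with a toy model. Neither function below is PostgreSQL's actual code: the first stands in for a poorly mixing byte-combining hash, the second (32-bit FNV-1a) for a hash with better avalanche behavior:

```python
def weak_hash(key: str) -> int:
    # Byte-combining without mixing: keys that differ only in the
    # last character land on adjacent hash values.
    h = 0
    for b in key.encode():
        h = (h * 31 + b) & 0xFFFFFFFF
    return h

def fnv1a(key: str) -> int:
    # 32-bit FNV-1a: xor-then-multiply per byte scrambles the output.
    h = 2166136261
    for b in key.encode():
        h = ((h ^ b) * 16777619) & 0xFFFFFFFF
    return h

keys = [f"REC{n:05d}" for n in range(10, 20)]  # REC00010 .. REC00019
weak_gaps = [(weak_hash(b) - weak_hash(a)) & 0xFFFFFFFF
             for a, b in zip(keys, keys[1:])]
fnv_gaps = [(fnv1a(b) - fnv1a(a)) & 0xFFFFFFFF
            for a, b in zip(keys, keys[1:])]
print(weak_gaps)  # every gap is exactly 1 -- the piecewise linear behavior
```

Sequential hash values pile such keys into neighboring buckets, which is consistent with the overflow-page failure reported for this data set.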
[
{
"msg_contents": "\nGot it, I think ... If you download 'repository' instead of 'pgsql', it\nshould grab everything required, so that you can use CVS locally ...\n\nIt looks like it's working as expected over here, so that I can 'setenv\nCVSROOT' to this new location, and it will use the modules file properly\n... please confirm for me that this is the case on your end ...\n\n\nOn Sun, 4 Aug 2002, Marc G. Fournier wrote:\n\n>\n> Ah, okay, this is something I'm going to fix today ... CVSup is known to\n> be broken, due to the changes ... what I'm going to do is add a\n> 'repository' target to the available CVSup targets, that will pull down\n> both CVSROOT *and* everything else, so that 'cvs co pgsql' does work again\n> ... should be in place later this aft ...\n>\n>\n> On Sun, 4 Aug 2002, Joe Conway wrote:\n>\n> > Marc G. Fournier wrote:\n> > > On Sat, 3 Aug 2002, Joe Conway wrote:\n> > > Okay, I've tested both using the regular CVS and the anoncvs servers, and\n> > > both co pgsql just fine ... can you email me (off list) the errors you are\n> > > seeing, as well as your CVSROOT?\n> >\n> > I use cvsup to sync my local CVS server with the main PostgreSQL CVS. A\n> > few people mentioned to delete the old source tree, so I did. What I'm\n> > getting now is a CVSROOT/pgsql-server created. When I want to locally\n> > check out pgsql, if I type `cvs co pgsql` I get:\n> > cvs checkout: cannot find module `pgsql' - ignored\n> > which I think is more-or-less expected since there is now no \"pgsql\"\n> > project under CVSROOT, but \"pgsql-server\" instead.\n> >\n> > I *can* do `cvs co pgsql-server` and get a pgsql-server tree, which\n> > *looks* just like pgsql used too, as far as I can tell.\n> >\n> > I've attached my cvsup script which I got from the linked cvsup\n> > instructions before the link went bad ;-)\n> >\n> > I can live OK with things like this, but I didn't think this was the\n> > intended state of affairs. I have done an anonymous cvs checkout from\n> > postgresql.org and it works just as advertised.\n> >\n> > Thanks for your help,\n> >\n> > Joe\n> >\n>\n>\n\n",
"msg_date": "Sun, 4 Aug 2002 22:18:15 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: cvs changes and broken links"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> Got it, I think ... If you download 'repository' instead of 'pgsql', it\n> should grab everything required, so that you can use CVS locally ...\n> \n> I looks like its working as expected over here, so that I can 'setenv\n> CVSROOT' to this new location, and it will use the modules file properly\n> ... please confirm for me that this is the case on your end ...\n> \n\nYup, appears that did it.\n\nThanks!\n\nJoe\n\n\n",
"msg_date": "Sun, 04 Aug 2002 18:23:54 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: cvs changes and broken links"
},
{
"msg_contents": "On Sun, 4 Aug 2002, Joe Conway wrote:\n\n> Marc G. Fournier wrote:\n> > Got it, I think ... If you download 'repository' instead of 'pgsql', it\n> > should grab everything required, so that you can use CVS locally ...\n> >\n> > I looks like its working as expected over here, so that I can 'setenv\n> > CVSROOT' to this new location, and it will use the modules file properly\n> > ... please confirm for me that this is the case on your end ...\n> >\n>\n> Yup, appears that did it.\n>\n> Thanks!\n\nGreat, thanks for the update ...\n\n\n",
"msg_date": "Sun, 4 Aug 2002 23:01:33 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: cvs changes and broken links"
}
] |
[
{
"msg_contents": "Could we overload \"ALTER TABLE/DROP COLUMN oid\" to allow someone to change a\ntable to be WITHOUT OIDs at runtime?\n\nChris\n\n",
"msg_date": "Mon, 5 Aug 2002 13:27:58 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Wacky OID idea"
},
{
"msg_contents": "Christopher Kings-Lynne dijo: \n\n> Could we overload \"ALTER TABLE/DROP COLUMN oid\" to allow someone to change a\n> table to be WITHOUT OIDs at runtime?\n\nCreate new filenode, copy tuples over, change relhasoids. Seems easy.\nAm I missing something?\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"No es bueno caminar con un hombre muerto\"\n\n",
"msg_date": "Mon, 5 Aug 2002 01:51:22 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: Wacky OID idea"
},
{
"msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> Christopher Kings-Lynne dijo: \n> > Could we overload \"ALTER TABLE/DROP COLUMN oid\" to allow someone to change a\n> > table to be WITHOUT OIDs at runtime?\n\nI don't think it would be easy to do this without rewriting the table,\nas Alvaro suggested. And if you're going to give this DROP COLUMN\nvariant totally different behavior from any other form of DROP COLUMN,\nISTM that it doesn't belong with DROP COLUMN.\n\nThat said, being able to remove the OIDs from a table would be fairly\nuseful, IMHO.\n\n> Create new filenode, copy tuples over, change relhasoids. Seems easy.\n> Am I missing something?\n\nYes -- DROP COLUMN currently doesn't require that the entire table be\nre-written, it just modifies some meta-data.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "05 Aug 2002 03:27:15 -0400",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: Wacky OID idea"
},
{
"msg_contents": "> I don't think it would be easy to do this without rewriting the table,\n> as Alvaro suggested. And if you're going to give this DROP COLUMN\n> variant totally different behavior from any other form of DROP COLUMN,\n> ISTM that it doesn't belong with DROP COLUMN.\n\nEspecially if we start saving the 4 bytes that a NULL oid takes up in a\ntable tuple on disk.\n\nPerhaps:\n\nALTER TABLE tab SET WITHOUT OIDS;\n\nI think the reverse operation would really be impossible...? Unless we run\nthrough the entire table and insert an oid for each row from the oid\ncounter?\n\n> That said, being able to remove the OIDs from a table would be fairly\n> useful, IMHO.\n\nHell yeah.\n\nBy the way - I'm not saying I'll be implementing this any time soon!\n\nChris\n\n",
"msg_date": "Mon, 5 Aug 2002 15:36:19 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Wacky OID idea"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> I don't think it would be easy to do this without rewriting the table,\n>> as Alvaro suggested. And if you're going to give this DROP COLUMN\n>> variant totally different behavior from any other form of DROP COLUMN,\n>> ISTM that it doesn't belong with DROP COLUMN.\n\n> Especially if we start saving the 4 bytes that a NULL oid takes up in a\n> table tuple on disk.\n\nIt's only difficult because of the recent changes to tuple header\nformat. I still feel that it's a bad idea not to be using a t_infomask\nbit to indicate whether an OID field is present or not in a particular\ntuple. If we did that, then it'd be possible to implement DROP OID by\njust zapping the pg_attribute entry and unsetting relhasoids in\npg_class. As with dropping a plain column, you'd expect the space to be\nreclaimed over time not instantaneously.\n\n> I think the reverse operation would really be impossible...? Unless we run\n> through the entire table and insert an oid for each row from the oid\n> counter?\n\nCheck.\n\n> By the way - I'm not saying I'll be implementing this any time soon!\n\nIf the infomask change happens then it'd just be a few more lines of\ncode, at least for the DROP case which seems the more useful.\n\nI've refrained from touching the tuple-header issues until Manfred\nreturns from vacation and can defend himself ;-) ... but that stuff\ndefinitely needs to get addressed in the next week or two.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Aug 2002 10:06:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Wacky OID idea "
},
{
"msg_contents": "\nAdded to TODO:\n\n\t o Add ALTER TABLE tab SET WITHOUT OIDS \n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> > I don't think it would be easy to do this without rewriting the table,\n> > as Alvaro suggested. And if you're going to give this DROP COLUMN\n> > variant totally different behavior from any other form of DROP COLUMN,\n> > ISTM that it doesn't belong with DROP COLUMN.\n> \n> Especially if we start saving the 4 bytes that a NULL oid takes up in a\n> table tuple on disk.\n> \n> Perhaps:\n> \n> ALTER TABLE tab SET WITHOUT OIDS;\n> \n> I think the reverse operation would really be impossible...? Unless we run\n> through the entire table and insert an oid for each row from the oid\n> counter?\n> \n> > That said, being able to remove the OIDs from a table would be fairly\n> > useful, IMHO.\n> \n> Hell yeah.\n> \n> By the way - I'm not saying I'll be implementing this any time soon!\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 00:57:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Wacky OID idea"
}
] |
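Tom's t_infomask suggestion can be sketched in miniature: reserve one bit of the per-tuple flag word to say whether the tuple carries an OID field, so SET WITHOUT OIDS only has to flip catalog metadata while old tuples keep their bit until rewritten. The bit value below is illustrative, not PostgreSQL's actual header layout:

```python
HEAP_HASOID = 0x0008  # hypothetical bit position in t_infomask

def tuple_has_oid(t_infomask: int) -> bool:
    # Readers test the per-tuple flag rather than relying on
    # pg_class.relhasoids alone.
    return bool(t_infomask & HEAP_HASOID)

def write_without_oid(t_infomask: int) -> int:
    # Tuples written after the catalog change simply omit the bit,
    # so the space is reclaimed over time, not instantaneously.
    return t_infomask & ~HEAP_HASOID
```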
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 05 August 2002 04:56\n> To: Joe Conway\n> Cc: Bruce Momjian; Thomas Lockhart; Neil Conway; PostgreSQL Hackers\n> Subject: Re: [HACKERS] FUNC_MAX_ARGS benchmarks \n> \n> \n> Joe Conway <mail@joeconway.com> writes:\n> > These are all with FUNC_MAX_ARGS = 16.\n> \n> > #define NAMEDATALEN 32\n> > 2.7M /opt/data/pgsql/data/base/1\n> \n> > #define NAMEDATALEN 64\n> > 3.0M /opt/data/pgsql/data/base/1\n> \n> > #define NAMEDATALEN 128\n> > 3.8M /opt/data/pgsql/data/base/1\n> \n> Based on Joe's numbers, I'm kind of thinking that we should \n> go for FUNC_MAX_ARGS=32 and NAMEDATALEN=64 as defaults in 7.3.\n> \n> Although NAMEDATALEN=128 would be needed for full SQL \n> compliance, the space penalty seems severe. I'm thinking we \n> should back off until someone wants to do the legwork needed \n> to make the name type be truly variable-length.\n> \n> Comments?\n\nIn Joe's last test he had only about 2Mb growth per db (I guess this\nwould not be the case had he used the name type in some of his tables).\nI would rather lose a measly few Mb and be standards compliant myself.\n\n$0.02\n\nRegards, Dave.\n",
"msg_date": "Mon, 5 Aug 2002 08:20:58 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: FUNC_MAX_ARGS benchmarks "
}
] |
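The sizes being traded off above follow from name being a fixed-width type: each name column occupies NAMEDATALEN bytes no matter how short the identifier is. A back-of-the-envelope sketch of the per-identifier waste (assuming one byte reserved for a terminator):

```python
def name_overhead(identifier: str, namedatalen: int) -> int:
    # Bytes left unused when a short identifier sits in a
    # fixed-width name field.
    return namedatalen - (len(identifier) + 1)

# A typical 8-character identifier wastes far more at NAMEDATALEN=128:
for n in (32, 64, 128):
    print(n, name_overhead("pg_class", n))
```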
[
{
"msg_contents": "Hello everyone,\n\nFinally I've got PostgreSQL 7.2.1 running on NetWare.\n\n\nNow I have two questions:\n\n1. Where can I find the latest version 7.3 of PostgreSQL?\nI want to check where I have to make changes and\n\n2. Who is the best person to contact for submitting patches?\nI would like to add the changes I've made to the PostgreSQL\nmain source tree.\n\n\nAnd very important: PostgreSQL is really great. I hope\nI can contribute more to this wonderful project in the\nfuture.\n\n\nUlrich\n----------------------------------\n This e-mail is virus scanned\n Diese e-mail ist virusgeprueft\n\n",
"msg_date": "Mon, 05 Aug 2002 17:25:21 +0200",
"msg_from": "\"Ulrich Neumann\" <U_Neumann@gne.de>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 7.2.1 on NetWare"
},
{
"msg_contents": "> 1. Where can I find the latest version 7.3 of PostgreSQL?\n> I want to check where I have to make changes and\n\nYou will need to download it via CVS. Do this on the command line:\n\ncvs -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot checkout\npgsql\n\nThis will create a subdirectory called 'pgsql' with the complete source in\nit.\n\nTo compile it, type 'make' or 'gmake' in the pgsql directory. You will need\nBison installed to compile from cvs, and gmake if it's not a linux system.\n\n> 2. Who is the best person to contact for submitting patches.\n> I would like to add the changes I've made to PostgreSQL\n> main source tree.\n\nTo generate an acceptable patch, go:\n\ncvs diff -c > patchname.txt\n\nFrom the pgsql directory. This will generate a patch of the entire cvs\ntree. You can specify a single filename if that's all you want to diff.\nYou should then email the patch to pgsql-patches@postgresql.org along with\nan explanation of the patch and a documentation patch if it changes\nsomething that's user visible.\n\nLooking forward to seeing a Netware port...\n\nCheers,\n\nChris\n\n",
"msg_date": "Tue, 6 Aug 2002 09:49:04 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 7.2.1 on NetWare"
}
] |
[
{
"msg_contents": "Hi all,\n\nI'd love to submit a patch for this, but I can't get my head round the\nintricacies of the build system.\n\nI'm building CVS from earlier today, and get the following errors:\n\nmake -C mb SUBSYS.o\nmake[4]: Entering directory\n`/home/jgray/postgres/pgbuild2/src/backend/utils/mb'make[5]: Entering\ndirectory\n`/home/jgray/postgres/pgbuild2/src/backend/utils/mb/conversion_procs'\nmake[6]: Entering directory\n`/home/jgray/postgres/pgbuild2/src/backend/utils/mb/conversion_procs/utf8_and_ascii'\nMakefile:11: ../proc.mk: No such file or directory\nmake[6]: *** No rule to make target `../proc.mk'. Stop.\n\nThe file proc.mk is not found in the source tree, which I presume is\nthe problem.\n\nI can't just link across proc.mk because it seems to have other\ndependencies. Also, I note in proc.mk at line 19:\n\ninclude $(top_builddir)/src/Makefile.shlib\n\nAs the prep_buildtree script doesn't copy Makefile.shlib into the\nbuildtree, I assume this should really read:\n\ninclude $(top_srcdir)/src/Makefile.shlib\n\n--not that this helps either. I now get:\n\nmake[6]: Entering directory\n`/home/jgray/postgres/pgbuild2/src/backend/utils/mb/conversion_procs/utf8_and_ascii'\nmake[6]: *** No rule to make target `utf8_and_ascii.o', needed by\n`libutf8_and_ascii.so.0.0'. Stop.\n\nAny suggestions (bearing in mind I am not a make expert at all!)?\n\n\nRegards\n\nJohn\n(configure options: --enable-debug --enable-depend\n--enable-integer-datetimes --with-pgport=5433)\nGNU make: 3.79.1\n\n\n-- \nJohn Gray\t\nAzuli IT\t\nwww.azuli.co.uk\t\n\n\n",
"msg_date": "05 Aug 2002 18:21:54 +0100",
"msg_from": "John Gray <jgray@azuli.co.uk>",
"msg_from_op": true,
"msg_subject": "Current CVS build fails in src/backend/utils/mb/conversion_procs\n\t(VPATH)"
}
] |
[
{
"msg_contents": "There is one thing I really miss when working with PostgreSQL.\nIn many cases I have to optimize complex queries (8-10 tables, views and \ntables, ...).\nI can define whether indexes are used or not, but it would be nice to be \nable to turn indexes on certain tables off temporarily.\n\nAt OSCON, Gavin Sherry mentioned that it should not be hard to add \nthis to PostgreSQL. I guess we'd need an internal list which sets the \ncosts of an index to infinite in order to make it unused.\n\nThis would make many things so much easier and so much faster.\nI agree with Tom when he says that improving the optimizer would be \nbetter, but this would be an immediate, easy-to-achieve help for many \npeople out there.\n\nCould anybody implement that? I am not familiar enough with Postgres \ninternals to implement it myself.\n\n\tRegards,\n\t\tHans\n\n",
"msg_date": "Mon, 05 Aug 2002 19:25:05 +0200",
"msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <hs@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Feature Request: Optimizer tuning"
}
] |
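The "set the cost of an index to infinite" idea above reduces to a one-line change in cost-based plan selection: the planner keeps picking the cheapest path, and a disabled path priced at infinity can never win. A minimal sketch (the names are hypothetical, not the actual planner API):

```python
import math

def cheapest_path(path_costs: dict, disabled: set) -> str:
    # Pick the lowest-cost access path; disabled paths are priced
    # at infinity so they drop out of contention without being removed.
    def effective_cost(name: str) -> float:
        return math.inf if name in disabled else path_costs[name]
    return min(path_costs, key=effective_cost)

paths = {"seqscan": 120.0, "idx_customers_pkey": 4.5}
print(cheapest_path(paths, set()))                   # idx_customers_pkey
print(cheapest_path(paths, {"idx_customers_pkey"}))  # seqscan
```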
[
{
"msg_contents": "So far as I recall, no one's really taken up the challenge of deciding\nhow psql's various \\d commands should work in the presence of schemas.\nHere's a straw-man proposal:\n\n1. A wildcardable pattern must consist of either \"namepattern\" or\n\"namepattern.namepattern\". In the first case we match against all names\nvisible in the current search path. In the second case, we consider all\nnames matching the second part of the pattern within all schemas\nmatching the first part, without regard to search path visibility.\n(For the moment, anyway, patterns containing more than one dot are an\nerror.)\n\n2. I'd like to switch over to using explicit wildcard characters.\nThere are presently some cases where psql assumes an implicit \"*\" at the\nend of a name pattern, but I find this surprising. Seems like it would\nbe more consistent if foo meant foo, and you had to write \"foo*\" to get\na wildcard search.\n\n3. As for the specific wildcard characters, I propose accepting \"*\"\nand \"?\" with the same meanings as in common shell filename globbing.\nThis could be extended to include character classes (eg, [0-9]) if\nanyone feels like it. Following shell practice rather than (say)\nregexp or LIKE rules avoids problems with dot and underscore, two\ncharacters that we definitely don't want to be pattern match characters\nin this context.\n\n4. The wildcard characters \"*\" and \"?\" are problematic for \\do\n(display operators), since they are valid characters in operator names.\nI can see three possible answers to this:\n A. Don't do any wildcarding in operator searches.\n B. Treat \"*\" and \"?\" as wildcards, and expect the user to quote\n\tthem with backslashes if he wants to use them as regular\n\tcharacters in an operator search.\n C. Treat \"*\" and \"?\" as regular characters in operator search,\n\tand let \"\\*\" and \"\\?\" be the wildcards in this context.\nA is the current behavior but lacks functionality. C might be the most\nconvenient once you got used to it, but I suspect people will find it\ntoo confusing. So I'm leaning to B.\n\nComments, better ideas?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Aug 2002 13:26:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Proposal for psql wildcarding behavior w/schemas"
},
{
"msg_contents": "Tom Lane wrote:\n> 1. A wildcardable pattern must consist of either \"namepattern\" or\n> \"namepattern.namepattern\". In the first case we match against all names\n> visible in the current search path. In the second case, we consider all\n> names matching the second part of the pattern within all schemas\n> matching the first part, without regard to search path visibility.\n> (For the moment, anyway, patterns containing more than one dot are an\n> error.)\n\nI like this.\n\n> 2. I'd like to switch over to using explicit wildcard characters.\n> There are presently some cases where psql assumes an implicit \"*\" at the\n> end of a name pattern, but I find this surprising. Seems like it would\n> be more consistent if foo meant foo, and you had to write \"foo*\" to get\n> a wildcard search.\n\nAgree\n\n> \n> 3. As for the specific wildcard characters, I propose accepting \"*\"\n> and \"?\" with the same meanings as in common shell filename globbing.\n> This could be extended to include character classes (eg, [0-9]) if\n> anyone feels like it. Following shell practice rather than (say)\n> regexp or LIKE rules avoids problems with dot and underscore, two\n> characters that we definitely don't want to be pattern match characters\n> in this context.\n\nAgree again\n\n\n> \n> 4. The wildcard characters \"*\" and \"?\" are problematic for \\do\n> (display operators), since they are valid characters in operator names.\n> I can see three possible answers to this:\n> A. Don't do any wildcarding in operator searches.\n> B. Treat \"*\" and \"?\" as wildcards, and expect the user to quote\n> \tthem with backslashes if he wants to use them as regular\n> \tcharacters in an operator search.\n> C. Treat \"*\" and \"?\" as regular characters in operator search,\n> \tand let \"\\*\" and \"\\?\" be the wildcards in this context.\n> A is the current behavior but lacks functionality. C might be the most\n> convenient once you got used to it, but I suspect people will find it\n> too confusing. So I'm leaning to B.\n\nI would definitely vote for B.\n\nJoe\n",
"msg_date": "Mon, 05 Aug 2002 10:34:43 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for psql wildcarding behavior w/schemas"
},
{
"msg_contents": "I said:\n> So far as I recall, no one's really taken up the challenge of deciding\n> how psql's various \\d commands should work in the presence of schemas.\n> Here's a straw-man proposal:\n\nIt occurs to me that I wasn't thinking about the effects of\ndouble-quoted identifiers. Should dot, star, and question mark\nbe taken as non-special characters if they're inside double quotes?\n(Probably.) Does that mean that we don't need backslash-oriented\nescaping conventions? (Maybe; would people expect 'em anyway?)\nAny other implications I missed? (Very likely.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Aug 2002 23:08:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for psql wildcarding behavior w/schemas "
},
{
"msg_contents": "Tom Lane wrote:\n> I said:\n> > So far as I recall, no one's really taken up the challenge of deciding\n> > how psql's various \\d commands should work in the presence of schemas.\n> > Here's a straw-man proposal:\n> \n> It occurs to me that I wasn't thinking about the effects of\n> double-quoted identifiers. Should dot, star, and question mark\n> be taken as non-special characters if they're inside double quotes?\n> (Probably.) Does that mean that we don't need backslash-oriented\n> escaping conventions? (Maybe; would people expect 'em anyway?)\n> Any other implications I missed? (Very likely.)\n\nUh, if we follow the shell rules, quote-star-quote means star has no\nspecial meaning:\n\n\t$ echo \"*\"\n\t*\n\n\t$ echo \\*\n\t*\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Aug 2002 01:15:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for psql wildcarding behavior w/schemas"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Uh, if we follow the shell rules, quote-star-quote means star has no\n> special meaning:\n\nInteresting analogy. We can't take it too far, because the shell quote\nrules don't agree with SQL:\n\n$ echo \"aaa\"\"zzz\"\naaazzz\n\nUnder SQL rules the produced identifier would be aaa\"zzz. Still, this\nprovides some ammunition for not processing wildcard characters that\nare within quotes.\n\n> \t$ echo \\*\n> \t*\n\nThat analogy says we need to accept both quote and backslash quoting.\nNot sure about this. Again, SQL doesn't quite agree with the shell\nabout how these interact. For example:\n\negression=# select \"foo\\bar\";\nERROR: Attribute \"foo\\bar\" not found\nregression=# \\q\n$ echo \"foo\\bar\"\nfoar <--- \\b went to backspace\n\nSo backslash isn't special within quotes according to SQL, but it\nis according to the shell.\n\nI still like \"use the shell wildcards\" as a rough design principle,\nbut the devil is in the details ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Aug 2002 01:24:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for psql wildcarding behavior w/schemas "
},
{
"msg_contents": "Tom Lane dijo: \n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Uh, if we follow the shell rules, quote-star-quote means star has no\n> > special meaning:\n> \n> Interesting analogy. We can't take it too far, because the shell quote\n> rules don't agree with SQL:\n[...]\n\n> $ echo \"foo\\bar\"\n> foar <--- \\b went to backspace\n> \n> So backslash isn't special within quotes according to SQL, but it\n> is according to the shell.\n\nNote that GNU echo has actually two different behaviours:\n\n$ echo \"a\\bb\"\na\\bb\n$ echo -e \"a\\bb\"\nb\n\nAlso note that since the backslash is between quotes you are not actually\ntesting shell behaviour but echo(1) behaviour. bash(1) and tcsh(1) both\nsay\n\n$ echo a\\bb\nabb\n\nThe shell will interpret anything that is outside quotes and leave\nanything inside quotes alone, but of course you already knew that. It's\necho that's interpreting further the backslashed string. In that light,\nI'd say * should be left alone (no special behaviour) if between quotes.\n\nMy 10 chilean pesos.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Porque Kim no hacia nada, pero, eso si,\ncon extraordinario exito\" (\"Kim\", Kipling)\n\n",
"msg_date": "Tue, 6 Aug 2002 01:36:51 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for psql wildcarding behavior w/schemas "
},
{
"msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> Also note that since the backslash is between quotes you are not actually\n> testing shell behaviour but echo(1) behaviour.\n\nDuh. Time to go to bed ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Aug 2002 01:42:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for psql wildcarding behavior w/schemas "
},
{
"msg_contents": "On Tue, 6 Aug 2002, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@atentus.com> writes:\n> > Also note that since the backslash is between quotes you are not actually\n> > testing shell behaviour but echo(1) behaviour.\n> \n> Duh. Time to go to bed ;-)\n\n\nHmm...that's not how I've always understood shell quoting, at least for bash:\n\n~$ aa=3\n~$ perl -e 'print join(\",\",@ARGV), \"\\n\";' \"1 $aa 2 3\" 4 5 6\n1 3 2 3,4,5,6\n~$\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n",
"msg_date": "Tue, 6 Aug 2002 10:26:52 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for psql wildcarding behavior w/schemas"
},
{
"msg_contents": "Nigel J. Andrews dijo: \n\n> Hmm...that's not how I've always understood shell quoting, at least for bash:\n> \n> ~$ aa=3\n> ~$ perl -e 'print join(\",\",@ARGV), \"\\n\";' \"1 $aa 2 3\" 4 5 6\n> 1 3 2 3,4,5,6\n> ~$\n\nWhat's the difference? What your example is saying basically is that\nthe shell is treating the \"1 $aa 2 3\" as a single parameter (i.e. spaces\ndo not have the usual parameter-separating behaviour), _but_ variables\nare interpreted. Using '' prevents variable substitution, so \n\n> ~$ perl -e 'print join(\",\",@ARGV), \"\\n\";' '1 $aa 2 3' 4 5 6\n\nshould give\n1 $aa 2 3,4,5,6\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\nFOO MANE PADME HUM\n\n",
"msg_date": "Tue, 6 Aug 2002 11:53:52 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for psql wildcarding behavior w/schemas"
},
{
"msg_contents": "On Tue, 6 Aug 2002, Alvaro Herrera wrote:\n\n> Nigel J. Andrews dijo: \n> \n> > Hmm...that's not how I've always understood shell quoting, at least for bash:\n> > \n> > ~$ aa=3\n> > ~$ perl -e 'print join(\",\",@ARGV), \"\\n\";' \"1 $aa 2 3\" 4 5 6\n> > 1 3 2 3,4,5,6\n> > ~$\n> \n> What's the difference? What your example is saying basically is that\n> the shell is treating the \"1 $aa 2 3\" as a single parameter (i.e. spaces\n> do not have the usual parameter-separating behaviour), _but_ variables\n> are interpreted. Using '' prevents variable substitution, so \n> \n> > ~$ perl -e 'print join(\",\",@ARGV), \"\\n\";' '1 $aa 2 3' 4 5 6\n> \n> should give\n> 1 $aa 2 3,4,5,6\n\n\nOops, I've just realised the original was about glob expansion whereas I was\nlooking at other special characters.\n\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n",
"msg_date": "Tue, 6 Aug 2002 16:58:46 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for psql wildcarding behavior w/schemas"
},
{
"msg_contents": "Tom Lane writes:\n\n> 1. A wildcardable pattern must consist of either \"namepattern\" or\n> \"namepattern.namepattern\".\n\nRegarding the use of quotes: Would\n\n\\d \"foo.bar\"\n\nshow the table \"foo.bar\", whereas\n\n\\d \"foo\".\"bar\"\n\nwould show the table \"bar\" in schema \"foo\"?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 8 Aug 2002 21:32:56 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for psql wildcarding behavior w/schemas"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> 1. A wildcardable pattern must consist of either \"namepattern\" or\n>> \"namepattern.namepattern\".\n\n> Regarding the use of quotes: Would\n\n> \\d \"foo.bar\"\n\n> show the table \"foo.bar\", whereas\n\n> \\d \"foo\".\"bar\"\n\n> would show the table \"bar\" in schema \"foo\"?\n\nThat'd be my interpretation of what it should do. Okay with you?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Aug 2002 15:49:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for psql wildcarding behavior w/schemas "
}
] |
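The rules the thread above converges on — shell-style `*` and `?` are wildcards outside double quotes, everything inside quotes is literal, and a doubled quote is SQL's escape for one literal quote — can be pinned down as a small pattern-to-regex translator. The sketch below only illustrates those proposed rules; it is not psql's actual implementation, the function name is made up, and schema-dot splitting and backslash escapes are deliberately left out:

```python
import re

def pattern_to_regex(pattern: str) -> str:
    """Translate a shell-style \\d name pattern into an anchored regex.

    Outside double quotes, '*' matches any run of characters and '?'
    matches exactly one; inside double quotes everything is literal,
    and "" stands for one literal double quote, SQL-style.
    """
    out = []
    in_quotes = False
    i = 0
    while i < len(pattern):
        c = pattern[i]
        if c == '"':
            if in_quotes and pattern[i + 1:i + 2] == '"':
                out.append('"')           # SQL escape: "" -> literal "
                i += 2
                continue
            in_quotes = not in_quotes     # toggle quoted mode
            i += 1
            continue
        if not in_quotes and c == '*':
            out.append('.*')
        elif not in_quotes and c == '?':
            out.append('.')
        else:
            out.append(re.escape(c))      # everything else is literal
        i += 1
    return '^' + ''.join(out) + '$'

# 'foo' means foo; 'foo*' is the explicit wildcard (proposal point 2)
assert not re.match(pattern_to_regex('foo'), 'foobar')
assert re.match(pattern_to_regex('foo*'), 'foobar')
# quoted, '*' is inert -- mirroring the `echo "*"` shell analogy
assert re.match(pattern_to_regex('"foo*"'), 'foo*')
```

Extending this to `namepattern.namepattern` would just mean splitting on unquoted dots before translating each half.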
[
{
"msg_contents": "Patch for current CVS. It add test of lca() to ltree test suite.\n\n\n-- \nTeodor Sigaev\nteodor@stack.net",
"msg_date": "Mon, 05 Aug 2002 21:30:18 +0400",
"msg_from": "Teodor Sigaev <teodor@stack.net>",
"msg_from_op": true,
"msg_subject": "Please, apply patch for ltree"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nTeodor Sigaev wrote:\n> Patch for current CVS. It add test of lca() to ltree test suite.\n> \n> \n> -- \n> Teodor Sigaev\n> teodor@stack.net\n> \n\n[ application/gzip is not supported, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 5 Aug 2002 13:51:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Please, apply patch for ltree"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nTeodor Sigaev wrote:\n> Patch for current CVS. It add test of lca() to ltree test suite.\n> \n> \n> -- \n> Teodor Sigaev\n> teodor@stack.net\n> \n\n[ application/gzip is not supported, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Aug 2002 01:35:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Please, apply patch for ltree"
}
] |
[
{
"msg_contents": "Hello\n\nplease add the attached files to src/backend/port/dynloader\n\nThank you\n\nUlrich\n\n----------------------------------\n This e-mail is virus scanned\n Diese e-mail ist virusgeprueft",
"msg_date": "Mon, 05 Aug 2002 20:03:54 +0200",
"msg_from": "\"Ulrich Neumann\" <U_Neumann@gne.de>",
"msg_from_op": true,
"msg_subject": "Please add these files"
}
] |
[
{
"msg_contents": "I upgraded PostgreSQL to 7.2.1 from a 7.2beta (yeah, I know). One of my\nusers requested plperl, so I got it to createlang, but it SIGSEGV's on\nany simple perl. \n\nThe compile gives:\n\nplperl_installdir='$(DESTDIR)/usr/local/pgsql/lib' \\\n/usr/bin/perl Makefile.PL INC='-I. -I../../../src/include -I/usr/local/include'\nWriting Makefile for plperl\ngmake -f Makefile all VPATH=\ngmake[1]: Entering directory `/home/ler/pg-prod/postgresql-7.2.1/src/pl/plperl'\ncc -c -I. -I../../../src/include -I/usr/local/include -I/usr/local/include -O -DVERSION=\\\"0.10\\\" -DXS_VERSION=\\\"0.10\\\" -Kpic -I/usr/local/lib/perl5/5.6.1/i386-svr5/CORE plperl.c\nUX:acomp: WARNING: \"/usr/local/lib/perl5/5.6.1/i386-svr5/CORE/perl.h\", line 469: macro redefined: USE_LOCALE\nUX:acomp: WARNING: \"/usr/local/lib/perl5/5.6.1/i386-svr5/CORE/perl.h\", line 2155: macro redefined: DEBUG\nUX:acomp: WARNING: \"plperl.c\", line 244: syntax error: empty declaration\ncc -c -I. -I../../../src/include -I/usr/local/include -I/usr/local/include -O -DVERSION=\\\"0.10\\\" -DXS_VERSION=\\\"0.10\\\" -Kpic -I/usr/local/lib/perl5/5.6.1/i386-svr5/CORE eloglvl.c\n/usr/bin/perl -I/usr/local/lib/perl5/5.6.1/i386-svr5 -I/usr/local/lib/perl5/5.6.1 /usr/local/lib/perl5/5.6.1/ExtUtils/xsubpp -typemap /usr/local/lib/perl5/5.6.1/ExtUtils/typemap SPI.xs > SPI.c\ncc -c -I. 
-I../../../src/include -I/usr/local/include -I/usr/local/include -O -DVERSION=\\\"0.10\\\" -DXS_VERSION=\\\"0.10\\\" -Kpic -I/usr/local/lib/perl5/5.6.1/i386-svr5/CORE SPI.c\nUX:acomp: WARNING: \"/usr/local/lib/perl5/5.6.1/i386-svr5/CORE/perl.h\", line 469: macro redefined: USE_LOCALE\nUX:acomp: WARNING: \"/usr/local/lib/perl5/5.6.1/i386-svr5/CORE/perl.h\", line 2155: macro redefined: DEBUG\nUX:acomp: WARNING: \"SPI.c\", line 73: end-of-loop code not reached\nUX:acomp: WARNING: \"SPI.c\", line 88: end-of-loop code not reached\nUX:acomp: WARNING: \"SPI.c\", line 103: end-of-loop code not reached\nUX:acomp: WARNING: \"SPI.c\", line 118: end-of-loop code not reached\nUX:acomp: WARNING: \"SPI.c\", line 133: end-of-loop code not reached\nRunning Mkbootstrap for plperl ()\nchmod 644 plperl.bs\nrm -f blib/arch/auto/plperl/plperl.so\nLD_RUN_PATH=\"\" cc -G -Wl,-Bexport -L/usr/local/lib -L/usr/gnu/lib plperl.o eloglvl.o SPI.o -L/usr/local/lib -L/usr/gnu/lib /usr/local/lib/perl5/5.6.1/i386-svr5/auto/DynaLoader/DynaLoader.a -L/usr/local/lib/perl5/5.6.1/i386-svr5/CORE -lperl -lsocket -lnsl -ldl -lld -lm -lcrypt -lutil -o blib/arch/auto/plperl/plperl.so \nchmod 755 blib/arch/auto/plperl/plperl.so\ncp plperl.bs blib/arch/auto/plperl/plperl.bs\nchmod 644 blib/arch/auto/plperl/plperl.bs\ngmake[1]: Leaving directory `/home/ler/pg-prod/postgresql-7.2.1/src/pl/plperl'\n\nand any run gives the following backtrace:\n$ debug -ic -c core.16991 /usr/local/pgsql/bin/postgres\nWarning: No debugging information in /usr/local/pgsql/bin/postgres\nCore image of postgres (process p1) created\nCORE FILE [Perl_hv_fetch in hv.c]\nSIGNALED 11 (segv code[SEGV_MAPERR] address[0x20202020]) in p1\n 0xa7cc306e (Perl_hv_fetc+126:) cmpl $0,(%eax)\ndebug> stack\nStack Trace for p1, Program postgres\n*[0] Perl_hv_fetch(0x8431d54, 0xbfffd090, 0x4, 0x2, 0xbfffd090,\n0xa7d217d8, 0x2) [0xa7cc306e]\n [1] Perl_gv_fetchpv(presumed: 0xa7d217d8, 0x2, 0xb) [0xa7c81675]\n [2] S_init_main_stash(presumed: 0xbfffd4a4, 
0xa7d34924, 0) \n[0xa7c7c8df]\n [3] S_parse_body(0, 0xa7d3ab70) [0xa7c79556]\n [4] perl_parse(presumed: 0x8431c08, 0xa7d3ab70, 0x3) [0xa7c79370]\n [5] plperl_init_interp(presumed: 0xa7d3e25c, 0x8427a7c, 0xa7d3a77d) \n[0xa7d3a711]\n [6] plperl_init_all(presumed: 0, 0x8427cb4, 0x8427a7c) [0xa7d3a689]\n [7] plperl_call_handler(presumed: 0xbfffd51c, 0x8427a7c, 0x8427cb4) \n[0xa7d3a778]\n [8] ExecMakeFunctionResult(0x8427cb4, 0, 0x8427a7c, 0xbfffd7ab,\n0xbfffd610) [0x80e813d]\n [9] ExecEvalFunc(presumed: 0x8427834, 0x8427a7c, 0xbfffd7ab) \n[0x80e8276]\n [10] ExecEvalExpr(presumed: 0x8427834, 0x8427a7c, 0xbfffd7ab) \n[0x80e7000]\n [11] ExecTargetList(presumed: 0x84278b8, 0x1, 0x8427ac8) \n[0x80e8a83]\n [12] ExecProject(presumed: 0x84278d4, 0xbfffd80c, 0x842792c) \n[0x80e8db8]\n [13] ExecResult(presumed: 0x842792c, 0x84279d4, 0x84279d4) \n[0x80eedcc]\n [14] ExecProcNode(presumed: 0x842792c, 0, 0x842792c) [0x80e6b3d]\n [15] ExecutePlan(presumed: 0x84279d4, 0x842792c, 0x1) [0x80e4d06]\n [16] ExecutorRun(0x84279b8, 0x84279d4, 0x3, 0) [0x80e5387]\n [17] ProcessQuery(presumed: 0x841943c, 0x842792c, 0x2) [0x8140d02]\n [18] pg_exec_query_string(0x8419108, 0x2, 0x83cf7a4, 0x8419108) \n[0x813f354]\n [19] PostgresMain(0x4, 0xbfffdb38, 0x83cb8e9) [0x8140117]\n [20] DoBackend(0x83cb7b8, 0x8378aa8) [0x811f3e3]\n [21] BackendStartup(presumed: 0x83cb7b8, 0x8317384, 0xffff1538) \n[0x811eaa6]\n [22] ServerLoop(presumed: 0xbfffee04, 0x1f, 0x8378a10) [0x811e8aa]\n [23] PostmasterMain(presumed: 0x2, 0x8378a10, 0xbfffedf8) \n[0x811da7a]\n [24] main(0x2, 0xbfffee04, 0xbfffee10) [0x80f9b58]\n [25] _start() [0x8067e6c]\ndebug> \n\nAny ideas? \n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "05 Aug 2002 17:01:03 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "PL/Perl?"
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> I upgraded PostgreSQL to 7.2.1 from a 7.2beta (yeah, I know). One of my\n> users requested plperl, so I got it to createlang, but it SIGSEGV's on\n> any simple perl. \n\nI was seeing the same with perl 5.6.1 and PG 7.2.* on HPUX 10.20.\nHowever, I have just verified that perl 5.8.0 works okay with PG CVS tip\n(not much testing, but it handles a simple plperl function). Could you\nsee whether 5.8.0 plays any nicer on your setup?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 04 Sep 2002 18:54:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/Perl? "
},
{
"msg_contents": "On Wed, 2002-09-04 at 17:54, Tom Lane wrote:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > I upgraded PostgreSQL to 7.2.1 from a 7.2beta (yeah, I know). One of my\n> > users requested plperl, so I got it to createlang, but it SIGSEGV's on\n> > any simple perl. \n> \n> I was seeing the same with perl 5.6.1 and PG 7.2.* on HPUX 10.20.\n> However, I have just verified that perl 5.8.0 works okay with PG CVS tip\n> (not much testing, but it handles a simple plperl function). Could you\n> see whether 5.8.0 plays any nicer on your setup?\nNeed to check with my user, I'll let ya know.\n\nLER\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "04 Sep 2002 19:41:48 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: PL/Perl?"
},
{
"msg_contents": "On Wed, 2002-09-04 at 19:41, Larry Rosenman wrote:\n> On Wed, 2002-09-04 at 17:54, Tom Lane wrote:\n> > Larry Rosenman <ler@lerctr.org> writes:\n> > > I upgraded PostgreSQL to 7.2.1 from a 7.2beta (yeah, I know). One of my\n> > > users requested plperl, so I got it to createlang, but it SIGSEGV's on\n> > > any simple perl. \n> > \n> > I was seeing the same with perl 5.6.1 and PG 7.2.* on HPUX 10.20.\n> > However, I have just verified that perl 5.8.0 works okay with PG CVS tip\n> > (not much testing, but it handles a simple plperl function). Could you\n> > see whether 5.8.0 plays any nicer on your setup?\n> Need to check with my user, I'll let ya know.\n> \nWell, I tried to install 5.8.0 on my 8.0.1 (beta) system, and blew cc up\nwith an internal compiler error. I'll have to wait for Caldera to fix\nthat. Sorry.....\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "05 Sep 2002 15:16:38 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: PL/Perl?"
},
{
"msg_contents": "On Thu, 2002-09-05 at 15:16, Larry Rosenman wrote:\n> On Wed, 2002-09-04 at 19:41, Larry Rosenman wrote:\n> > On Wed, 2002-09-04 at 17:54, Tom Lane wrote:\n> > > Larry Rosenman <ler@lerctr.org> writes:\n> > > > I upgraded PostgreSQL to 7.2.1 from a 7.2beta (yeah, I know). One of my\n> > > > users requested plperl, so I got it to createlang, but it SIGSEGV's on\n> > > > any simple perl. \n> > > \n> > > I was seeing the same with perl 5.6.1 and PG 7.2.* on HPUX 10.20.\n> > > However, I have just verified that perl 5.8.0 works okay with PG CVS tip\n> > > (not much testing, but it handles a simple plperl function). Could you\n> > > see whether 5.8.0 plays any nicer on your setup?\n> > Need to check with my user, I'll let ya know.\n> > \n> Well, I tried to install 5.8.0 on my 8.0.1 (beta) system, and blew cc up\n> with an internal compiler error. I'll have to wait for Caldera to fix\n> that. Sorry.....\nWell, this system has GCC as well, and GCC groked PERL 5.8.0, and, TA\nDA, PL/PERL works with 5.8.0...\n\nTHanks all...\n> \n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "05 Sep 2002 20:35:33 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: PL/Perl?"
},
{
"msg_contents": "Well I spent half a day and half a night, pgsql compiles ok for\nme. However I'm still figuring why I have majodormo probs...\n\nSo I went back for now.\n\nRegards\nOn 5 Sep 2002, Larry Rosenman wrote:\n\n> Date: 05 Sep 2002 15:16:38 -0500\n> From: Larry Rosenman <ler@lerctr.org>\n> To: Tom Lane <tgl@sss.pgh.pa.us>, pgsql-hackers@postgresql.org,\n> Olivier PRENANT <ohp@pyrenet.fr>\n> Subject: Re: [HACKERS] PL/Perl?\n> \n> On Wed, 2002-09-04 at 19:41, Larry Rosenman wrote:\n> > On Wed, 2002-09-04 at 17:54, Tom Lane wrote:\n> > > Larry Rosenman <ler@lerctr.org> writes:\n> > > > I upgraded PostgreSQL to 7.2.1 from a 7.2beta (yeah, I know). One of my\n> > > > users requested plperl, so I got it to createlang, but it SIGSEGV's on\n> > > > any simple perl. \n> > > \n> > > I was seeing the same with perl 5.6.1 and PG 7.2.* on HPUX 10.20.\n> > > However, I have just verified that perl 5.8.0 works okay with PG CVS tip\n> > > (not much testing, but it handles a simple plperl function). Could you\n> > > see whether 5.8.0 plays any nicer on your setup?\n> > Need to check with my user, I'll let ya know.\n> > \n> Well, I tried to install 5.8.0 on my 8.0.1 (beta) system, and blew cc up\n> with an internal compiler error. I'll have to wait for Caldera to fix\n> that. Sorry.....\n> \n> \n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n",
"msg_date": "Fri, 6 Sep 2002 16:06:41 +0200 (MET DST)",
"msg_from": "Olivier PRENANT <ohp@pyrenet.fr>",
"msg_from_op": false,
"msg_subject": "Re: PL/Perl?"
}
] |
[
{
"msg_contents": "I am wondering if the contrib module tsearch supports reqular expressions\nand\nif it has any kind of indexing over the stored text data like for instance\na suffix trie?\n\n\tTnks\n\tAmancio\n\n\n\n",
"msg_date": "Mon, 5 Aug 2002 17:46:43 -0700",
"msg_from": "\"Amancio Hasty, Jr\" <amanciohasty@attbi.com>",
"msg_from_op": true,
"msg_subject": "tsearch -- regular expressions?"
},
{
"msg_contents": "On Mon, 5 Aug 2002, Amancio Hasty, Jr wrote:\n\n> I am wondering if the contrib module tsearch supports reqular expressions\n\ncurrently no. I'd imagine we could add support of LIKE to tsearch but\nwithout index support\n\n> and\n> if it has any kind of indexing over the stored text data like for instance\n> a suffix trie?\n\nno, suffix trie is not a balanced tree, while GiST is a height balanced.\n\n\n>\n> \tTnks\n> \tAmancio\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 6 Aug 2002 21:24:55 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: tsearch -- regular expressions?"
},
{
"msg_contents": "Curious , what are your thoughts on:\n\nThe String B-Tree: A New Data Structure for\nString Search in External Memory\nhttp://citeseer.nj.nec.com/ferragina98string.html\n\nor\n\nSuffix Binary Search Trees And Suffix Arrays\nhttp://citeseer.nj.nec.com/irving01suffix.html\n\n\n\tTnks\n\tAmancio\n\n\n-----Original Message-----\nFrom: Oleg Bartunov [mailto:oleg@sai.msu.su]\nSent: Tuesday, August 06, 2002 11:25 AM\nTo: Amancio Hasty, Jr\nCc: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] tsearch -- regular expressions?\n\n\nOn Mon, 5 Aug 2002, Amancio Hasty, Jr wrote:\n\n> I am wondering if the contrib module tsearch supports reqular expressions\n\ncurrently no. I'd imagine we could add support of LIKE to tsearch but\nwithout index support\n\n> and\n> if it has any kind of indexing over the stored text data like for instance\n> a suffix trie?\n\nno, suffix trie is not a balanced tree, while GiST is a height balanced.\n\n\n>\n> \tTnks\n> \tAmancio\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\n",
"msg_date": "Tue, 6 Aug 2002 12:00:12 -0700",
"msg_from": "\"Amancio Hasty, Jr\" <amanciohasty@attbi.com>",
"msg_from_op": true,
"msg_subject": "Re: tsearch -- regular expressions?"
},
{
"msg_contents": "On Tue, 6 Aug 2002, Amancio Hasty, Jr wrote:\n\n> Curious , what are your thoughts on:\n>\n> The String B-Tree: A New Data Structure for\n> String Search in External Memory\n> http://citeseer.nj.nec.com/ferragina98string.html\n\nI have this paper in my archive. Don't know if I'll have a time\nto study. We're looking, of course, for effective ADT for searching\nbut we're limited by GiST framework, because it's the only way to\nimplement something new into postgres without touching the core.\n\nIf you feel you're ready to invest your time and brains into\nfull text searhing you're welcome !\n\n>\n> or\n>\n> Suffix Binary Search Trees And Suffix Arrays\n> http://citeseer.nj.nec.com/irving01suffix.html\n>\n>\n> \tTnks\n> \tAmancio\n>\n>\n> -----Original Message-----\n> From: Oleg Bartunov [mailto:oleg@sai.msu.su]\n> Sent: Tuesday, August 06, 2002 11:25 AM\n> To: Amancio Hasty, Jr\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] tsearch -- regular expressions?\n>\n>\n> On Mon, 5 Aug 2002, Amancio Hasty, Jr wrote:\n>\n> > I am wondering if the contrib module tsearch supports reqular expressions\n>\n> currently no. 
I'd imagine we could add support of LIKE to tsearch but\n> without index support\n>\n> > and\n> > if it has any kind of indexing over the stored text data like for instance\n> > a suffix trie?\n>\n> no, suffix trie is not a balanced tree, while GiST is a height balanced.\n>\n>\n> >\n> > \tTnks\n> > \tAmancio\n> >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 7 Aug 2002 15:57:33 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: tsearch -- regular expressions?"
}
] |
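For readers following the papers cited above: the suffix array at the heart of them is small enough to sketch. The version below is deliberately naive — O(n² log n) construction, in-memory only — so what it leaves out is exactly the contribution of the cited work (building and searching such structures efficiently, including in external memory); the names are illustrative only:

```python
def suffix_array(s: str) -> list:
    """Indices of all suffixes of s in lexicographic order (naive build)."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def contains(s: str, sa: list, pat: str) -> bool:
    """Substring search: binary-search sa for a suffix starting with pat."""
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        # compare pat against the same-length prefix of the mid suffix
        if s[sa[mid]:sa[mid] + len(pat)] < pat:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(sa) and s[sa[lo]:sa[lo] + len(pat)] == pat

text = "postgresql"
sa = suffix_array(text)
assert contains(text, sa, "gres")        # substring found via the array
assert not contains(text, sa, "mysql")   # absent substring rejected
```

A suffix trie answers the same query in O(|pat|) time, but — as noted in the thread — it is not a height-balanced structure of the kind the GiST framework requires.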
[
{
"msg_contents": "Hey Peter,\n\nI notice that here:\n\nhttp://candle.pha.pa.us/main/writings/pgsql/sgml/ddl-alter.html\n\nYou mention that you cannot remove a column. This is no longer true as of\nlast friday.\n\nAnd here:\n\nhttp://candle.pha.pa.us/main/writings/pgsql/sgml/ddl-depend.html\n\nYou say:\n\n\"All drop commands in PostgreSQL support specifying CASCADE. Of course\"\n\nWhich isn't technically true for the ALTER TABLE/DROP NOT NULL statement..\n\nChris\n\n",
"msg_date": "Tue, 6 Aug 2002 10:27:50 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "New manual chapters"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> You say:\n> \"All drop commands in PostgreSQL support specifying CASCADE. Of course\"\n> Which isn't technically true for the ALTER TABLE/DROP NOT NULL statement..\n\nCome to think of it, that's probably a bug: you should not be able to\nDROP NOT NULL on a column that's part of a PRIMARY KEY. Unless you \ncascade to remove the primary key, that is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Aug 2002 22:56:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New manual chapters "
},
{
"msg_contents": "> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > You say:\n> > \"All drop commands in PostgreSQL support specifying CASCADE. Of course\"\n> > Which isn't technically true for the ALTER TABLE/DROP NOT NULL\n> statement..\n>\n> Come to think of it, that's probably a bug: you should not be able to\n> DROP NOT NULL on a column that's part of a PRIMARY KEY. Unless you\n> cascade to remove the primary key, that is.\n\nI did ask you about this before, Tom :)\n\nThe DROP NOT NULL code I submitted will not allow you to drop a not null on\na column that participates in a primary key. I was very careful about that.\nSo basically, it's a restrict-only implementation. Although it would be\nfairly easy I guess to make it support cascade and restrict keywords...?\nPerhaps not thru the dependency mechanism, but it can be done explicitly.\n\nChris\n\n",
"msg_date": "Tue, 6 Aug 2002 11:06:04 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: New manual chapters "
},
{
"msg_contents": "On Tue, 2002-08-06 at 09:58, Christopher Kings-Lynne wrote:\n> \n> Anyone have any ideas on how we could implement changing column type. We'd\n> have to move all dependencies and any object that refers to the column by\n> its attnum, etc... I guess we could steal a lot of the renameatt() code...\n\nAs discussed some time ago, introducing attlognum would help here a lot\nif you want the changed column not to hop to the end of column list for\n\"SELECT * \" . Or are attlognum changes done already ?\n\n-----------\nHannu\n\n",
"msg_date": "06 Aug 2002 08:36:23 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: New manual chapters"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> Come to think of it, that's probably a bug: you should not be able to\n>> DROP NOT NULL on a column that's part of a PRIMARY KEY. Unless you\n>> cascade to remove the primary key, that is.\n\n> I did ask you about this before, Tom :)\n\n> The DROP NOT NULL code I submitted will not allow you to drop a not null on\n> a column that participates in a primary key. I was very careful about that.\n\nDuh, so you were.\n\n> So basically, it's a restrict-only implementation. Although it would be\n> fairly easy I guess to make it support cascade and restrict keywords...?\n> Perhaps not thru the dependency mechanism, but it can be done explicitly.\n\nYeah, I doubt it's worth trying to force NOT NULL into the dependency\nmechanism for this. Do you feel like trying to do it \"by hand\"? It\ndoesn't seem like a very important issue to me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Aug 2002 23:54:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New manual chapters "
},
{
"msg_contents": "On Tue, 2002-08-06 at 11:02, Christopher Kings-Lynne wrote:\n> > But we *have* drop column.\n> \n> I know that....I think I must have been on crazy pills when I mentioned it\n> in regards to DROP COLUMN...\n> \n> > If we don't do attlognum for 7.3, there's little point in doing it at\n> > all. By the time 7.4 comes out, clients that formerly expected a\n> > consecutive series of attnums will have found some way to cope.\n> \n> I think that what Hannu is saying is that if we implemented changing column\n> type with a drop and an add, all queries that used to go 'select * from'\n> would return the modified column at the end of the list of columns instead\n> of in its original position.\n\nThere is at least one type of change that can be safely done in-place,\nnamely making variable length types longer (varchar(2) --> varchar(5)).\n\nMaking them shorter would be doable in-place, but then there are two\nways to go - \n\n1. we check that the data fits the new length\n\n2. i/o funtions would enforce length\n\nthe 2. way has an unexpected behaviour if we first make a field shorter\nand then longer again.\n\n> > I'm not sure that I feel any strong sense of urgency about this ---\n> > 7.3 will break clients that examine the system catalogs in many ways,\n> > and this doesn't seem like the nastiest of 'em.\n\nIt's not about inspecting system catalogs by clients, it is the change\nin select * after a column type change (say from float to numeric(15,2))\nif done by \"add column + update + drop column + rename column\" if we had\nlognums, we could do \"move column\" as a final step.\n\nUsing the above scenario we already can do \"alter table alter column\ntype\" manually, so I'd suggest that implementing attlognum (move column)\ngives more additional functionality than putting in a frontend to do\nmove column manually.\n\nThe \"drop column + rename column + move column\" could also be rolled\ninto one \"replace column\" command ;)\n\n-----------\nHannu\n\n",
"msg_date": "06 Aug 2002 09:23:28 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: New manual chapters"
},
{
"msg_contents": "> > So basically, it's a restrict-only implementation. Although it would be\n> > fairly easy I guess to make it support cascade and restrict keywords...?\n> > Perhaps not thru the dependency mechanism, but it can be done\n> explicitly.\n>\n> Yeah, I doubt it's worth trying to force NOT NULL into the dependency\n> mechanism for this. Do you feel like trying to do it \"by hand\"? It\n> doesn't seem like a very important issue to me.\n\nWell it'd be nice from a consistency point of view. And it wouldn't be too\nhard. I don't think it's essential for 7.3, as it would be perfectly\nbackwards compatible. I'll add it to my TODO list, right after changing\ncolumn type :)\n\nAnyone have any ideas on how we could implement changing column type. We'd\nhave to move all dependencies and any object that refers to the column by\nits attnum, etc... I guess we could steal a lot of the renameatt() code...\n\nChris\n\n",
"msg_date": "Tue, 6 Aug 2002 12:58:28 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: New manual chapters "
},
{
"msg_contents": "> As discussed some time ago, introducing attlognum would help here a lot\n> if you want the changed column not to hop to the end of column list for\n> \"SELECT * \" . Or are attlognum changes done already ?\n\nArg. That's something I didn't think of. No, attlognums aren't done. I\nthink having drop column is more important than the above concern tho. I'll\nadd it to my TODO.\n\nChris\n\n",
"msg_date": "Tue, 6 Aug 2002 13:46:23 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: New manual chapters"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Arg. That's something I didn't think of. No, attlognums aren't done. I\n> think having drop column is more important than the above concern tho. I'll\n> add it to my TODO.\n\nBut we *have* drop column.\n\nIf we don't do attlognum for 7.3, there's little point in doing it at\nall. By the time 7.4 comes out, clients that formerly expected a\nconsecutive series of attnums will have found some way to cope.\n\nI'm not sure that I feel any strong sense of urgency about this ---\n7.3 will break clients that examine the system catalogs in many ways,\nand this doesn't seem like the nastiest of 'em.\n\nI just wanted to point out that \"we'll do it later\" isn't a profitable\nattitude towards attlognum. Either it's done by the end of August,\nor it never gets done at all.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Aug 2002 01:54:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New manual chapters "
},
{
"msg_contents": "> But we *have* drop column.\n\nI know that....I think I must have been on crazy pills when I mentioned it\nin regards to DROP COLUMN...\n\n> If we don't do attlognum for 7.3, there's little point in doing it at\n> all. By the time 7.4 comes out, clients that formerly expected a\n> consecutive series of attnums will have found some way to cope.\n\nI think that what Hannu is saying is that if we implemented changing column\ntype with a drop and an add, all queries that used to go 'select * from'\nwould return the modified column at the end of the list of columns instead\nof in its original position.\n\n> I'm not sure that I feel any strong sense of urgency about this ---\n> 7.3 will break clients that examine the system catalogs in many ways,\n> and this doesn't seem like the nastiest of 'em.\n>\n> I just wanted to point out that \"we'll do it later\" isn't a profitable\n> attitude towards attlognum. Either it's done by the end of August,\n> or it never gets done at all.\n\nOk - don't know if I'll be able to tho.\n\nChris\n\n",
"msg_date": "Tue, 6 Aug 2002 14:02:53 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: New manual chapters "
}
] |
[
{
"msg_contents": "I have added SQL99's CONVERT() function. docs and regression tests\nalso updated. Our own convert() functions can also be used. Example\nusage of CONVERT():\n\nconvert('PostgreSQL' using iso8859_1_to_utf8)\n\nwill return 'PostgreSQL' in UTF-8 encoding. See \"String Functions and\nOperators\" section of Users's guide for more details and currently\navailable (predefined) conversions.\n\nI believe remaining work for CONVERSION stuffs is some conversions for\ncyrillic and win874/1250/1251/1256 encodings.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 06 Aug 2002 14:55:04 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "SQL99 CONVERT() function"
},
{
"msg_contents": "> I believe remaining work for CONVERSION stuffs is some conversions for\n> cyrillic and win874/1250/1251/1256 encodings.\n\nOops. I forgot the cygwin (and AIX) issue. Need to address it before\nbeta freeze...\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 06 Aug 2002 15:57:35 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: SQL99 CONVERT() function"
},
{
"msg_contents": "On Tue, Aug 06, 2002 at 02:55:04PM +0900, Tatsuo Ishii wrote:\n> I have added SQL99's CONVERT() function. docs and regression tests\n> also updated. Our own convert() functions can also be used. Example\n> usage of CONVERT():\n> \n> convert('PostgreSQL' using iso8859_1_to_utf8)\n ^^^^^\n What is it? Is it really in standard? Sorry, but it seems\n strange. What 'ISO8859-1' as name?\n\n CAST( int_as_char ) ? :-)\n\n .... CONVERT('PostgreSQL' USING 'ISO8859-1' TO 'UTF8')\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Tue, 6 Aug 2002 09:54:17 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: SQL99 CONVERT() function"
},
{
"msg_contents": "> On Tue, Aug 06, 2002 at 02:55:04PM +0900, Tatsuo Ishii wrote:\n> > I have added SQL99's CONVERT() function. docs and regression tests\n> > also updated. Our own convert() functions can also be used. Example\n> > usage of CONVERT():\n> > \n> > convert('PostgreSQL' using iso8859_1_to_utf8)\n> ^^^^^\n> What is it? Is it really in standard?\n\nIt's a \"conversion name\".\n\n>From SQL99:\n CONVERT <left paren> <character value expression>\n USING <form-of-use conversion name> <right paren>\n\n> Sorry, but it seems\n> strange. What 'ISO8859-1' as name?\n\nSure, you can use '-' instead of '_' if you don't mind quoting it\nwith \"(i.e. delimited identifier).\n\nconvert('PostgreSQL' using \"iso8859-1-to-utf8\")\n\nI'm sure people don't like that way...\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 06 Aug 2002 17:19:50 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: SQL99 CONVERT() function"
},
{
"msg_contents": "On Tue, Aug 06, 2002 at 05:19:50PM +0900, Tatsuo Ishii wrote:\n> > On Tue, Aug 06, 2002 at 02:55:04PM +0900, Tatsuo Ishii wrote:\n> > > I have added SQL99's CONVERT() function. docs and regression tests\n> > > also updated. Our own convert() functions can also be used. Example\n> > > usage of CONVERT():\n> > > \n> > > convert('PostgreSQL' using iso8859_1_to_utf8)\n> > ^^^^^\n> > What is it? Is it really in standard?\n> \n> It's a \"conversion name\".\n> \n> >From SQL99:\n> CONVERT <left paren> <character value expression>\n> USING <form-of-use conversion name> <right paren>\n\n Ah.. conversion name. I was thinking it are names of encodings. \n It's right.\n \n Karel\n \n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Tue, 6 Aug 2002 10:49:59 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: SQL99 CONVERT() function"
},
{
"msg_contents": "Tatsuo Ishii writes:\n\n> > Sorry, but it seems\n> > strange. What 'ISO8859-1' as name?\n>\n> Sure, you can use '-' instead of '_' if you don't mind quoting it\n> with \"(i.e. delimited identifier).\n>\n> convert('PostgreSQL' using \"iso8859-1-to-utf8\")\n>\n> I'm sure people don't like that way...\n\nI suggest using the official IANA names and replace all the non-identifier\ncharacters by underscores and all upper-case letters with lower-case.\nSo it would be iso_8859_1_to_utf_8. That way it's almost as pretty but a\nlot more predictable.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 6 Aug 2002 23:18:18 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL99 CONVERT() function"
},
{
"msg_contents": "> I suggest using the official IANA names and replace all the non-identifier\n> characters by underscores and all upper-case letters with lower-case.\n> So it would be iso_8859_1_to_utf_8. That way it's almost as pretty but a\n> lot more predictable.\n\nSounds reasonable. I'll look into this. However I have to examin each\nencodings carefully. Because:\n\n(1) some encodings are not listed IANA (e.g. TCVN, WIN874...)\n\n(2) some offcial IANA names seem not appropriate\n (e.g. Extended_UNIX_Code_Packed_Format_for_Japanese)\n:\n:\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 07 Aug 2002 21:56:08 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: SQL99 CONVERT() function"
},
{
"msg_contents": "Hello,\n\nThe attached patch adds CONVERSION stuff for cyrillic and\nwin874/1250/1251/1256 encodings.\n\nThank you.\n\nFrom: Tatsuo Ishii <t-ishii@sra.co.jp>\nSubject: [HACKERS] SQL99 CONVERT() function\nDate: Tue, 06 Aug 2002 14:55:04 +0900 (JST)\nMessage-ID: <20020806.145504.35027319.t-ishii@sra.co.jp>\n\n> I have added SQL99's CONVERT() function. docs and regression tests\n> also updated. Our own convert() functions can also be used. Example\n> usage of CONVERT():\n> \n> convert('PostgreSQL' using iso8859_1_to_utf8)\n> \n> will return 'PostgreSQL' in UTF-8 encoding. See \"String Functions and\n> Operators\" section of Users's guide for more details and currently\n> available (predefined) conversions.\n> \n> I believe remaining work for CONVERSION stuffs is some conversions for\n> cyrillic and win874/1250/1251/1256 encodings.\n> --\n> Tatsuo Ishii\n\n-------------------\n Kaori Inaba \n i-kaori@sra.co.jp",
"msg_date": "Tue, 13 Aug 2002 16:00:58 +0900 (JST)",
"msg_from": "Kaori Inaba <i-kaori@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SQL99 CONVERT() function"
},
{
"msg_contents": "> The attached patch adds CONVERSION stuff for cyrillic and\n> win874/1250/1251/1256 encodings.\n\nThanks. I'll take care of this.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 13 Aug 2002 16:06:05 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SQL99 CONVERT() function"
},
{
"msg_contents": "> > The attached patch adds CONVERSION stuff for cyrillic and\n> > win874/1250/1251/1256 encodings.\n> \n> Thanks. I'll take care of this.\n\nDone. Documents and regression tests have been updated also. I think\nnow we have implemented all encoding conversions for 7.3 release.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 14 Aug 2002 11:52:24 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SQL99 CONVERT() function"
},
{
"msg_contents": "> > I suggest using the official IANA names and replace all the non-identifier\n> > characters by underscores and all upper-case letters with lower-case.\n> > So it would be iso_8859_1_to_utf_8. That way it's almost as pretty but a\n> > lot more predictable.\n> \n> Sounds reasonable. I'll look into this. However I have to examin each\n> encodings carefully. Because:\n> \n> (1) some encodings are not listed IANA (e.g. TCVN, WIN874...)\n> \n> (2) some offcial IANA names seem not appropriate\n> (e.g. Extended_UNIX_Code_Packed_Format_for_Japanese)\n> :\n> :\n\nDone. See current doc (user's guide \"6.4. String Functions and\nOperators\" Table 6-7 \"Available conversion names\") how I changed the\nconversion names.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 14 Aug 2002 14:31:12 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: SQL99 CONVERT() function"
},
{
"msg_contents": "Tatsuo Ishii writes:\n\n> Done. See current doc (user's guide \"6.4. String Functions and\n> Operators\" Table 6-7 \"Available conversion names\") how I changed the\n> conversion names.\n\nWhat guideline did you follow? For example, should koi8r be koi8_r? Or\nshould winXXX be win_XXX?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n\n",
"msg_date": "Tue, 20 Aug 2002 23:22:45 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL99 CONVERT() function"
}
] |
[
{
"msg_contents": "****************************** WARNING *******************************\nThis message has been scanned by MDaemon/DKAV and was found to contain\ninfected attachment(s). Please review the list below.\n\nAttachment Virus name Action taken\n----------------------------------------------------------------------\ncf297827475.att Exploit.IFrame.FileDownloadRemoved\nBULL..exe ??? Removed\n\n\n**********************************************************************",
"msg_date": "Tue, 6 Aug 2002 01:57:23 -0400 (EDT)",
"msg_from": "lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "From GROUPE BULL."
}
] |
[
{
"msg_contents": "pgman wrote:\n> Peter Eisentraut wrote:\n> > Bruce Momjian writes:\n> > \n> > > OK, I have attached a patch for testing. Sample output is:\n> > >\n> > > \t$ sql -U guest test\n> > > \tpsql: FATAL: user \"test.guest\" does not exist\n> > > \t$ createuser test.guest\n> > \n> > I will object to any scheme that makes any characters in the user name\n> > magic. Two reasons: First, do it right, make a separate column.\n> > Second, several tools use URI syntax to specify data sources. This will\n> > break any feature that relies on being able to put special characters into\n> > the user name.\n> > \n> > The right solution to having database-local user names is putting extra\n> > information into pg_shadow regarding which database this user applies to.\n> > It could be an array or some separate \"authentication domain\" thing.\n> \n> OK, if you object, you can say goodbye to this feature for 7.3. I can\n> supply the patch to Marc and anyone else who wants it but I am not\n> inclined nor convinced we need that level of work for this feature.\n> \n> So we end up with nothing.\n\nI have given this some thought. Peter's objection was that he objects\nto any change that \"makes any characters in the user name magic\".\n\nI don't think my patch does that. If you don't enable the feature,\neverything works just the same. If you turn it on, it unconditionally\nprefixes the username with the database name and a period. You can\nstill have periods in the username. The code doesn't check for any\nperiods in the username passed to the backend.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Aug 2002 02:43:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "DB-local usernames"
},
{
"msg_contents": "On Tue, 2002-08-06 at 08:43, Bruce Momjian wrote:\n> I have given this some thought. Peter's objection was that he objects\n> to any change that \"makes any characters in the user name magic\".\n> \n> I don't think my patch does that. If you don't enable the feature,\n> everything works just the same. If you turn it on, it unconditionally\n> prefixes the username with the database name and a period. You can\n> still have periods in the username. The code doesn't check for any\n> periods in the username passed to the backend.\n\nwhat about :\n\n[hannu@taru hannu]$ createdb this.is.legal.database.name\nCREATE DATABASE\n[hannu@taru hannu]$ psql this.is.legal.database.name\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\nthis.is.legal.database.name=# \n\n---------------\nHannu\n\n",
"msg_date": "06 Aug 2002 10:10:02 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: DB-local usernames"
},
{
"msg_contents": "\nOh, well backend sees the user as this.is.legal.database.name.user.\n\nThe only case I can see a problem would be you have my.db.name and\ndb.user as legal _and_ my.db and name.db.user as legal. That is clearly\na problem becuase they appear the same when logging in.\n\nCan anyone think of a way to get this to work _without_ pushing the\ncomplexity into the user administration commands? That is what is\npreventing me from creating a separate field in pg_shadow.\n\n---------------------------------------------------------------------------\n\nHannu Krosing wrote:\n> On Tue, 2002-08-06 at 08:43, Bruce Momjian wrote:\n> > I have given this some thought. Peter's objection was that he objects\n> > to any change that \"makes any characters in the user name magic\".\n> > \n> > I don't think my patch does that. If you don't enable the feature,\n> > everything works just the same. If you turn it on, it unconditionally\n> > prefixes the username with the database name and a period. You can\n> > still have periods in the username. The code doesn't check for any\n> > periods in the username passed to the backend.\n> \n> what about :\n> \n> [hannu@taru hannu]$ createdb this.is.legal.database.name\n> CREATE DATABASE\n> [hannu@taru hannu]$ psql this.is.legal.database.name\n> Welcome to psql, the PostgreSQL interactive terminal.\n> \n> Type: \\copyright for distribution terms\n> \\h for help with SQL commands\n> \\? for help on internal slash commands\n> \\g or terminate with semicolon to execute query\n> \\q to quit\n> \n> this.is.legal.database.name=# \n> \n> ---------------\n> Hannu\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Aug 2002 08:20:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: DB-local usernames"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Can anyone think of a way to get this to work _without_ pushing the\n> complexity into the user administration commands? That is what is\n> preventing me from creating a separate field in pg_shadow.\n\nI'd definitely prefer not to do that. We have not really thought\nthrough the implications. The original idea here was a quick-and-dirty,\neasily ignored, optional feature to support per-database user name\nassignment. Turning it into something more will require a lot of design\nwork that we haven't done, and IMHO don't have time for before 7.3.\n\nBTW, I still prefer \"user@dbname\" to \"dbname.user\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Aug 2002 10:04:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DB-local usernames "
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Tatsuo Ishii [mailto:t-ishii@sra.co.jp] \n> Sent: 06 August 2002 07:58\n> To: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] SQL99 CONVERT() function\n> \n> \n> > I believe remaining work for CONVERSION stuffs is some \n> conversions for \n> > cyrillic and win874/1250/1251/1256 encodings.\n> \n> Oops. I forgot the cygwin (and AIX) issue. Need to address it \n> before beta freeze...\n\nYes please, it's halted pgAdmin development right now :-(\n\nRegards, Dave.\n",
"msg_date": "Tue, 6 Aug 2002 08:18:37 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: SQL99 CONVERT() function"
},
{
"msg_contents": "> > Oops. I forgot the cygwin (and AIX) issue. Need to address it \n> > before beta freeze...\n> \n> Yes please, it's halted pgAdmin development right now :-(\n\nI need to create the cygwin environment first. Sorry for the\ninconvenience.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 06 Aug 2002 16:24:50 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: SQL99 CONVERT() function"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Tatsuo Ishii [mailto:t-ishii@sra.co.jp] \n> Sent: 06 August 2002 08:25\n> To: Dave Page\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] SQL99 CONVERT() function\n> \n> \n> > > Oops. I forgot the cygwin (and AIX) issue. Need to address it\n> > > before beta freeze...\n> > \n> > Yes please, it's halted pgAdmin development right now :-(\n> \n> I need to create the cygwin environment first. Sorry for the \n> inconvenience.\n\nIf there's anything I can do to help just let me know. Unfortunately I'm\nnot particuarly wise in the ways of gmake...\n\nRegards, Dave.\n",
"msg_date": "Tue, 6 Aug 2002 08:35:21 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: SQL99 CONVERT() function"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nWe need someone who's a member of the Australian Unix Users Group to\nstep forward and say Hi about now.\n\n:)\n\nThe AUUG is having a conference soon, and at this conference they're\nhaving an \"Australian Open Source Awards\" night. I personally feel we\nshould nominate Christopher Kings-Lynne\" for the \"Technology\" category,\nas he has been doing a *very* large amount of significant coding on\nPostgreSQL, both recently and over time:\n\nhttp://www.auug.org.au/aosa/aosa2002.html\n\nBut... only AUUG members can nominate. So, if you're an AUUG member and\nwouldn't mind helping out with this, would you be able to get in contact\nwith me?\n\nNominations close on the 12th August, which is only 1 week away, so we\nshould probably hurry a bit.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Tue, 06 Aug 2002 18:07:38 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "[Fwd: Who here is an AUUG member? (Australian Unix Users Group)]"
},
{
"msg_contents": "> Hi everyone,\n>\n> We need someone who's a member of the Australian Unix Users Group to\n> step forward and say Hi about now.\n>\n> :)\n>\n> The AUUG is having a conference soon, and at this conference they're\n> having an \"Australian Open Source Awards\" night. I personally feel we\n> should nominate Christopher Kings-Lynne\" for the \"Technology\" category,\n> as he has been doing a *very* large amount of significant coding on\n> PostgreSQL, both recently and over time:\n\nWell - a lot of coding for an Australian contributor at least :)\n\nI couldn't have done it without Tom's & Bruce's guidance and Hiroshi's\noriginal inspiration.\n\nBut hey - it would look nice on my resume :)\n\nChris\n\n",
"msg_date": "Tue, 6 Aug 2002 16:10:20 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: Who here is an AUUG member? (Australian Unix Users Group)]"
}
] |
[
{
"msg_contents": "OK,\n\nThis is HEAD after a gmake clean && gmake:\n\ngmake[4]: Entering directory\n`/home/chriskl/pgsql-head/src/backend/access/heap'\ngcc -pipe -O -g -Wall -Wmissing-prototypes -Wmissing-declarations -I../../..\n/../src/include -c -o heapam.o heapam.c -MMD\nheapam.c: In function `heap_insert':\nheapam.c:1158: structure has no member named `rd_istemp'\nheapam.c: In function `heap_delete':\nheapam.c:1341: structure has no member named `rd_istemp'\nheapam.c: In function `heap_update':\nheapam.c:1677: structure has no member named `rd_istemp'\nheapam.c: In function `log_heap_clean':\nheapam.c:1952: structure has no member named `rd_istemp'\nheapam.c: In function `log_heap_update':\nheapam.c:2006: structure has no member named `rd_istemp'\ngmake[4]: *** [heapam.o] Error 1\ngmake[4]: Leaving directory\n`/home/chriskl/pgsql-head/src/backend/access/heap'\ngmake[3]: *** [heap-recursive] Error 2\ngmake[3]: Leaving directory `/home/chriskl/pgsql-head/src/backend/access'\ngmake[2]: *** [access-recursive] Error 2\ngmake[2]: Leaving directory `/home/chriskl/pgsql-head/src/backend'\ngmake[1]: *** [install] Error 2\ngmake[1]: Leaving directory `/home/chriskl/pgsql-head/src'\ngmake: *** [install] Error 2\n\nI suspect this is related to Tom's recent patch?\n\nChris\n\n",
"msg_date": "Tue, 6 Aug 2002 17:01:33 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "CVS broken again?"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> heapam.c: In function `heap_insert':\n> heapam.c:1158: structure has no member named `rd_istemp'\n\n> I suspect this is related to Tom's recent patch?\n\nIt looks like you have an updated heapam.c and not an updated\ninclude/utils/rel.h ... perhaps you cvs update'd at just the wrong time,\nand got only part of the patch?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Aug 2002 09:57:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CVS broken again? "
}
] |
[
{
"msg_contents": "Hi all,\n\nAttached is a small patch to scan.l for consideration. It hands\nyyerror() the position in the query string of the token which caused a\nparse error. It is not even close to an implementation of error handling\na-la SQL99 but it certainly makes debugging complicated queries easier.\n\nI've done some testing and it appears to hit the offending token pretty\naccurately.\n\nCan anyone find a way to break this? If not, I'd love to see it in 7.3.\n\nGavin",
"msg_date": "Tue, 6 Aug 2002 22:18:00 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": true,
"msg_subject": "Better handling of parse errors"
},
{
"msg_contents": "\nCan we see some sample output?\n\n---------------------------------------------------------------------------\n\nGavin Sherry wrote:\n> Hi all,\n> \n> Attached is a small patch to scan.l for consideration. It hands\n> yyerror() the position in the query string of the token which caused a\n> parse error. It is not even close to an implementation of error handling\n> a-la SQL99 but it certainly makes debugging complicated queries easier.\n> \n> I've done some testing and it appears to hit the offending token pretty\n> accurately.\n> \n> Can anyone find a way to break this? If not, I'd love to see it in 7.3.\n> \n> Gavin\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Aug 2002 08:23:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Better handling of parse errors"
},
{
"msg_contents": "On Tue, 6 Aug 2002, Bruce Momjian wrote:\n\n> \n> Can we see some sample output?\n\ntemplate1=# select * frum pg_class;\nERROR: parser: parse error at or near \"frum\" at character 10\ntemplate1=# select relname from pg_class a excepr select relname from\npg_class limit a;\nERROR: parser: parse error at or near \"excepr\" at character 32\ntemplate1=# create table a (desc int);\nERROR: parser: parse error at or near \"desc\" at character 17\n\nGavin\n\n> \n> ---------------------------------------------------------------------------\n> \n> Gavin Sherry wrote:\n> > Hi all,\n> > \n> > Attached is a small patch to scan.l for consideration. It hands\n> > yyerror() the position in the query string of the token which caused a\n> > parse error. It is not even close to an implementation of error handling\n> > a-la SQL99 but it certainly makes debugging complicated queries easier.\n> > \n> > I've done some testing and it appears to hit the offending token pretty\n> > accurately.\n> > \n> > Can anyone find a way to break this? If not, I'd love to see it in 7.3.\n> > \n> > Gavin\n> \n> Content-Description: \n> \n> [ Attachment, skipping... ]\n> \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> > \n> > http://archives.postgresql.org\n> \n> \n\n",
"msg_date": "Tue, 6 Aug 2002 22:27:59 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Better handling of parse errors"
},
{
"msg_contents": "On Tue, 6 Aug 2002, Bruce Momjian wrote:\n\n> \n> Can we see some sample output?\n\nOne other important one:\n\ntemplate1=# select\ntemplate1-# *\ntemplate1-#\ntemplate1-# frum\ntemplate1-# pg_class;\nERROR: parser: parse error at or near \"frum\" at character 10\n\nNote that it counts non-printable characters. This could be both good and\nbad.\n\nGavin\n\n\n",
"msg_date": "Tue, 6 Aug 2002 22:39:18 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Better handling of parse errors"
},
{
"msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> Attached is a small patch to scan.l for consideration. It hands\n> yyerror() the position in the query string of the token which caused a\n> parse error.\n\nIsn't that the hard way to do it? Seems like you could just subtract\nscanbuf from the error pointer, instead of adding overhead to the basic\nlex loop.\n\nThings that need to be decided (and documented somewhere):\n\nIs the number an offset (counted from 0) or an index (counted from 1)?\nWhich end of the token does it point at? Can the message be phrased\nso as to make it reasonably clear what the number is?\n\n\nA related change I'd been meaning to make is to get it to say\n\tparse error at end of input\nwhen that's the case, rather than the rather useless\n\tparse error at or near \"\"\nthat you get now. I'd still be inclined to just say \"end of input\"\nin that case, and not bother with a character count.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Aug 2002 11:01:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Better handling of parse errors "
},
{
"msg_contents": "\nGavin, have you answered these issues brought up about the patch?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Gavin Sherry <swm@linuxworld.com.au> writes:\n> > Attached is a small patch to scan.l for consideration. It hands\n> > yyerror() the position in the query string of the token which caused a\n> > parse error.\n> \n> Isn't that the hard way to do it? Seems like you could just subtract\n> scanbuf from the error pointer, instead of adding overhead to the basic\n> lex loop.\n> \n> Things that need to be decided (and documented somewhere):\n> \n> Is the number an offset (counted from 0) or an index (counted from 1)?\n> Which end of the token does it point at? Can the message be phrased\n> so as to make it reasonably clear what the number is?\n> \n> \n> A related change I'd been meaning to make is to get it to say\n> \tparse error at end of input\n> when that's the case, rather than the rather useless\n> \tparse error at or near \"\"\n> that you get now. I'd still be inclined to just say \"end of input\"\n> in that case, and not bother with a character count.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 01:09:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Better handling of parse errors"
},
{
"msg_contents": "Bruce,\n\nI was working on this on the train this morning. There are a few\npossibilities I'd like to look at before I submit another patch.\n\nBefore I do, there is one important question that I didn't raise when I\nsubmitted the first patch (which was only a proof of concept kind of\npatch). Namely: do we want to modify every 7.2 error message? Many people\nhave written error message parsers into their applications to make up for\nthe fact that Postgres doesn't use error codes or issue 'standardised'\nerror messages.\n\nIs this going to annoy people too much? Should it be delayed until we have\nSQL99 diagnostics/error codes?\n\nGavin\n\nOn Wed, 14 Aug 2002, Bruce Momjian wrote:\n\n> \n> Gavin, have you answered these issues brought up about the patch?\n> \n> ---------------------------------------------------------------------------\n> \n> Tom Lane wrote:\n> > Gavin Sherry <swm@linuxworld.com.au> writes:\n> > > Attached is a small patch to scan.l for consideration. It hands\n> > > yyerror() the position in the query string of the token which caused a\n> > > parse error.\n> > \n> > Isn't that the hard way to do it? Seems like you could just subtract\n> > scanbuf from the error pointer, instead of adding overhead to the basic\n> > lex loop.\n> > \n> > Things that need to be decided (and documented somewhere):\n> > \n> > Is the number an offset (counted from 0) or an index (counted from 1)?\n> > Which end of the token does it point at? Can the message be phrased\n> > so as to make it reasonably clear what the number is?\n> > \n> > \n> > A related change I'd been meaning to make is to get it to say\n> > \tparse error at end of input\n> > when that's the case, rather than the rather useless\n> > \tparse error at or near \"\"\n> > that you get now. 
I'd still be inclined to just say \"end of input\"\n> > in that case, and not bother with a character count.\n> > \n> > \t\t\tregards, tom lane\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> > \n> \n> \n\n",
"msg_date": "Wed, 14 Aug 2002 15:19:54 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Better handling of parse errors"
},
{
"msg_contents": "\nI don't think anyone will mind, but you can throw a message to 'general'\njust to be sure.\n\n---------------------------------------------------------------------------\n\nGavin Sherry wrote:\n> Bruce,\n> \n> I was working on this on the train this morning. There are a few\n> possibilities I'd like to look at before I submit another patch.\n> \n> Before I do, there is one important question that I didn't raise when I\n> submitted the first patch (which was only a proof of concept kind of\n> patch). Namely: do we want to modify every 7.2 error message? Many people\n> have written error message parsers into their applications to make up for\n> the fact that Postgres doesn't use error codes or issue 'standardised'\n> error messages.\n> \n> Is this going to annoy people too much? Should it be delayed until we have\n> SQL99 diagnostics/error codes?\n> \n> Gavin\n> \n> On Wed, 14 Aug 2002, Bruce Momjian wrote:\n> \n> > \n> > Gavin, have you answered these issues brought up about the patch?\n> > \n> > ---------------------------------------------------------------------------\n> > \n> > Tom Lane wrote:\n> > > Gavin Sherry <swm@linuxworld.com.au> writes:\n> > > > Attached is a small patch to scan.l for consideration. It hands\n> > > > yyerror() the position in the query string of the token which caused a\n> > > > parse error.\n> > > \n> > > Isn't that the hard way to do it? Seems like you could just subtract\n> > > scanbuf from the error pointer, instead of adding overhead to the basic\n> > > lex loop.\n> > > \n> > > Things that need to be decided (and documented somewhere):\n> > > \n> > > Is the number an offset (counted from 0) or an index (counted from 1)?\n> > > Which end of the token does it point at? 
Can the message be phrased\n> > > so as to make it reasonably clear what the number is?\n> > > \n> > > \n> > > A related change I'd been meaning to make is to get it to say\n> > > \tparse error at end of input\n> > > when that's the case, rather than the rather useless\n> > > \tparse error at or near \"\"\n> > > that you get now. I'd still be inclined to just say \"end of input\"\n> > > in that case, and not bother with a character count.\n> > > \n> > > \t\t\tregards, tom lane\n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> > > \n> > \n> > \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 01:21:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Better handling of parse errors"
},
{
"msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> ... do we want to modify every 7.2 error message?\n\nNyet ... but I don't think tacking an offset onto the end of\n\"parse error at or near foo\" messages is likely to cause the\nsort of generalized havoc you suggest ...\n\nI'm on record as favoring a thorough redesign of the error-reporting\nfacility, and I hope to see that happen in conjunction with a frontend\nprotocol change in 7.4. But I think your proposed change in this patch\nis fairly harmless and could be done now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Aug 2002 01:44:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Better handling of parse errors "
},
{
"msg_contents": "On Wed, 14 Aug 2002, Tom Lane wrote:\n\n> Gavin Sherry <swm@linuxworld.com.au> writes:\n> > ... do we want to modify every 7.2 error message?\n> \n> Nyet ... but I don't think tacking an offset onto the end of\n> \"parse error at or near foo\" messages is likely to cause the\n> sort of generalized havoc you suggest ...\n\nIn that case, attached is a patch which locates the beginning of the\noffending token more efficiently (per your suggestion of using\nscanbuf). The new patch does the same as before:\n\ntemplate1=# select * frum pg_class;\nERROR: parser: parse error at or near \"frum\" at character 10\n\nIt also implements Tom's suggestion:\n\ntemplate1=# select * from pg_class where\\g\nERROR: parse: parse error at end of input\n\nGavin",
"msg_date": "Thu, 15 Aug 2002 00:30:36 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Better handling of parse errors "
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nGavin Sherry wrote:\n> On Wed, 14 Aug 2002, Tom Lane wrote:\n> \n> > Gavin Sherry <swm@linuxworld.com.au> writes:\n> > > ... do we want to modify every 7.2 error message?\n> > \n> > Nyet ... but I don't think tacking an offset onto the end of\n> > \"parse error at or near foo\" messages is likely to cause the\n> > sort of generalized havoc you suggest ...\n> \n> In that case, attached is a patch which locates the beginning of the\n> offending token more efficiently (per your suggestion of using\n> scanbuf). The new patch does the same as before:\n> \n> template1=# select * frum pg_class;\n> ERROR: parser: parse error at or near \"frum\" at character 10\n> \n> It also implement's Tom's suggestion:\n> \n> template1=# select * from pg_class where\\g\n> ERROR: parse: parse error at end of input\n> \n> Gavin\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 16 Aug 2002 00:51:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Better handling of parse errors"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nGavin Sherry wrote:\n> On Wed, 14 Aug 2002, Tom Lane wrote:\n> \n> > Gavin Sherry <swm@linuxworld.com.au> writes:\n> > > ... do we want to modify every 7.2 error message?\n> > \n> > Nyet ... but I don't think tacking an offset onto the end of\n> > \"parse error at or near foo\" messages is likely to cause the\n> > sort of generalized havoc you suggest ...\n> \n> In that case, attached is a patch which locates the beginning of the\n> offending token more efficiently (per your suggestion of using\n> scanbuf). The new patch does the same as before:\n> \n> template1=# select * frum pg_class;\n> ERROR: parser: parse error at or near \"frum\" at character 10\n> \n> It also implement's Tom's suggestion:\n> \n> template1=# select * from pg_class where\\g\n> ERROR: parse: parse error at end of input\n> \n> Gavin\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 17 Aug 2002 09:06:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Better handling of parse errors"
},
{
"msg_contents": "Gavin Sherry writes:\n\n> In that case, attached is a patch which locates the beginning of the\n> offending token more efficiently (per your suggestion of using\n> scanbuf).\n\nIn the regression tests there are a couple of cases that could be\nimproved:\n\nIn strings.sql:\n\n-- illegal string continuation syntax\nSELECT 'first line'\n' - next line' /* this comment is not allowed here */\n' - third line'\n AS \"Illegal comment within continuation\";\nERROR: parser: parse error at or near \"' - third line'\" at character 89\n\nCharacter 89 is the end of the \"third line\" line, but the parse error is\nat the beginning of that line.\n\nIn create_function_1.sql:\n\nCREATE FUNCTION test1 (int) RETURNS int LANGUAGE sql\n AS 'not even SQL';\nERROR: parser: parse error at or near \"not\" at character 1\n\nClearly confusing.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 18 Aug 2002 11:36:32 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Better handling of parse errors "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> In strings.sql:\n\n> -- illegal string continuation syntax\n> SELECT 'first line'\n> ' - next line' /* this comment is not allowed here */\n> ' - third line'\n> AS \"Illegal comment within continuation\";\n> ERROR: parser: parse error at or near \"' - third line'\" at character 89\n\n> Character 89 is the end of the \"third line\" line, but the parse error is\n> at the beginning of that line.\n\nThis is fixed as of my later commit.\n\n> In create_function_1.sql:\n\n> CREATE FUNCTION test1 (int) RETURNS int LANGUAGE sql\n> AS 'not even SQL';\n> ERROR: parser: parse error at or near \"not\" at character 1\n\n> Clearly confusing.\n\n\"Character 1\" is correct as of the context that the parser is working\nin, namely the function body. I don't think we can do much to change\nthat, but perhaps we could make the message read like\nERROR: parser: parse error at or near \"not\" at character 1 of function body\nThis would require giving the parser some sort of context-identifying\nstring to tack onto the message, but that doesn't seem too hard.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Aug 2002 12:53:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Better handling of parse errors "
},
{
"msg_contents": "On Sun, 18 Aug 2002, Tom Lane wrote:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > In strings.sql:\n> \n> > -- illegal string continuation syntax\n> > SELECT 'first line'\n> > ' - next line' /* this comment is not allowed here */\n> > ' - third line'\n> > AS \"Illegal comment within continuation\";\n> > ERROR: parser: parse error at or near \"' - third line'\" at character 89\n> \n> > Character 89 is the end of the \"third line\" line, but the parse error is\n> > at the beginning of that line.\n> \n> This is fixed as of my later commit.\n> \n> > In create_function_1.sql:\n> \n> > CREATE FUNCTION test1 (int) RETURNS int LANGUAGE sql\n> > AS 'not even SQL';\n> > ERROR: parser: parse error at or near \"not\" at character 1\n> \n> > Clearly confusing.\n> \n> \"Character 1\" is correct as of the context that the parser is working\n> in, namely the function body. I don't think we can do much to change\n> that, but perhaps we could make the message read like\n> ERROR: parser: parse error at or near \"not\" at character 1 of function body\n> This would require giving the parser some sort of context-identifying\n> string to tack onto the message, but that doesn't seem too hard.\n\nTom,\n\nReworking the code to take into account token_start seems to work.\n\n elog(ERROR, \"parser: %s at or near \\\"%s\\\" at character %i\",\n message,token_start ? token_start : yytext,\n token_start ? (unsigned int)(token_start - scanbuf + 1) :\n (unsigned int)(yytext - scanbuf + 1));\n\nI will submit a patch once I do some more testing to find other possible\nsituations where this plays up.\n\nGavin\n\n",
"msg_date": "Mon, 19 Aug 2002 03:05:40 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Better handling of parse errors "
},
{
"msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> Reworking the code to take into account token_start seems to work.\n\nYes, I did that last night ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Aug 2002 15:31:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Better handling of parse errors "
},
{
"msg_contents": "Tom Lane writes:\n\n> \"Character 1\" is correct as of the context that the parser is working\n> in, namely the function body. I don't think we can do much to change\n> that, but perhaps we could make the message read like\n> ERROR: parser: parse error at or near \"not\" at character 1 of function body\n\nThat would be better.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 18 Aug 2002 23:35:30 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Better handling of parse errors "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> \"Character 1\" is correct as of the context that the parser is working\n>> in, namely the function body. I don't think we can do much to change\n>> that, but perhaps we could make the message read like\n>> ERROR: parser: parse error at or near \"not\" at character 1 of function body\n\n> That would be better.\n\nAfter a quick look through the sources, it seems we could fairly easily\ndo this: callers of pg_parse_and_rewrite() and some related routines\ncould pass a string like \"SQL function body\", which would get plugged\ninto the parse-error message. Two issues though:\n\n* Is this okay from an internationalization point of view? We can\ngettext() the \"SQL function body\" string but I don't know if there\nare serious problems with pasting that into\n\tparse error at or near \"%s\" at character %d of %s\nOn the other hand I'm not comfortable with having the far-end caller\nsupply that whole string, either, since most of it is the lexer's\nresponsibility.\n\n* The natural thing to say in _SPI_execute's call is \"SPI query\",\nbut this will probably not go over big with plpgsql users, who will\nsee that and probably have no idea what SPI is. But I'm very\nloathe to change the SPI API so that plpgsql can pass down the\ncontext string --- that'll break existing user functions that use\nSPI. Do we want to uglify the SPI API to the extent of having\nparallel calls that just add a context string parameter? Is there\na better way?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Aug 2002 18:25:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Better handling of parse errors "
}
]
[
{
"msg_contents": "On Tue, 2002-08-06 at 14:03, Oleg Bartunov wrote:\n> make[4]: Entering directory /db1/pgsql/cvs/pgsql-server/src/backend/access/heap'\n> gcc -O2 -mpentiumpro -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o heapam.o heapam.c\n> heapam.c: In function \beap_insert':\n> heapam.c:1158: structure has no member named \u0012d_istemp'\n> heapam.c: In function \beap_delete':\n> heapam.c:1341: structure has no member named \u0012d_istemp'\n> heapam.c: In function \beap_update':\n> heapam.c:1677: structure has no member named \u0012d_istemp'\n> make[4]: *** [heapam.o] Error 1\n> make[4]: Leaving directory /db1/pgsql/cvs/pgsql-server/src/backend/access/heap'\n> make[3]: *** [heap-recursive] Error 2\n> make[3]: Leaving directory /db1/pgsql/cvs/pgsql-serve\n> \n\nI get this too. I think there may be a CVS problem of some sort. Through\nthe web interface, it's clear that src/include/utils/rel.h (the relevant\ninclude file) has been at version 1.61 (2002/08/06 02:36:35) for 10\nhours -and the last change to it added rd_istemp and rd_isnew.\n\nHowever, cvs log (for me, on anonymous CVS) still shows rel.h at 1.60.\nMeanwhile, src/backend/access/heap/heapam.c is at 1.144 (updated\n2002/08/06 02:36:33) \n\nIt seems that bits of updates are going missing somewhere -so it's not\nsurprising that it won't compile.\n\nRegards\n\nJohn\n\n\n-- \nJohn Gray\t\nAzuli IT\t\nwww.azuli.co.uk\t\n\n\n",
"msg_date": "06 Aug 2002 14:02:22 +0100",
"msg_from": "John Gray <jgray@azuli.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: CVS sources doesn't compiles"
},
{
"msg_contents": "make[4]: Entering directory /db1/pgsql/cvs/pgsql-server/src/backend/access/heap'\ngcc -O2 -mpentiumpro -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o heapam.o heapam.c\nheapam.c: In function \beap_insert':\nheapam.c:1158: structure has no member named \u0012d_istemp'\nheapam.c: In function \beap_delete':\nheapam.c:1341: structure has no member named \u0012d_istemp'\nheapam.c: In function \beap_update':\nheapam.c:1677: structure has no member named \u0012d_istemp'\nmake[4]: *** [heapam.o] Error 1\nmake[4]: Leaving directory /db1/pgsql/cvs/pgsql-server/src/backend/access/heap'\nmake[3]: *** [heap-recursive] Error 2\nmake[3]: Leaving directory /db1/pgsql/cvs/pgsql-serve\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 6 Aug 2002 16:03:51 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "CVS sources doesn't compiles"
},
{
"msg_contents": "John Gray <jgray@azuli.co.uk> writes:\n> On Tue, 2002-08-06 at 14:03, Oleg Bartunov wrote:\n>> make[4]: Entering directory /db1/pgsql/cvs/pgsql-server/src/backend/access/heap'\n>> gcc -O2 -mpentiumpro -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o heapam.o heapam.c\n>> heapam.c: In function \beap_insert':\n>> heapam.c:1158: structure has no member named \u0012d_istemp'\n>> heapam.c: In function \beap_delete':\n>> heapam.c:1341: structure has no member named \u0012d_istemp'\n>> heapam.c: In function \beap_update':\n>> heapam.c:1677: structure has no member named \u0012d_istemp'\n>> make[4]: *** [heapam.o] Error 1\n>> make[4]: Leaving directory /db1/pgsql/cvs/pgsql-server/src/backend/access/heap'\n\nControl-H? Control-R? You seem to have a corrupted copy of heapam.c.\nIf you move it out of the way and do a \"cvs update\", do you get a copy\nwith the identical errors?\n\nI can report that the master CVS server delivers a correct copy. If\nthere is a CVS problem then it's only on the anoncvs mirror ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Aug 2002 09:49:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CVS sources doesn't compiles "
},
{
"msg_contents": "On Tue, 2002-08-06 at 14:49, Tom Lane wrote:\n> John Gray <jgray@azuli.co.uk> writes:\n> > On Tue, 2002-08-06 at 14:03, Oleg Bartunov wrote:\n> >> make[4]: Entering directory /db1/pgsql/cvs/pgsql-server/src/backend/access/heap'\n> >> gcc -O2 -mpentiumpro -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o heapam.o heapam.c\n> >> heapam.c: In function \beap_insert':\n> >> heapam.c:1158: structure has no member named \u0012d_istemp'\n> >> heapam.c: In function \beap_delete':\n> >> heapam.c:1341: structure has no member named \u0012d_istemp'\n> >> heapam.c: In function \beap_update':\n> >> heapam.c:1677: structure has no member named \u0012d_istemp'\n> >> make[4]: *** [heapam.o] Error 1\n> >> make[4]: Leaving directory /db1/pgsql/cvs/pgsql-server/src/backend/access/heap'\n> \n> Control-H? Control-R? You seem to have a corrupted copy of heapam.c.\n> If you move it out of the way and do a \"cvs update\", do you get a copy\n> with the identical errors?\n> \n\nI should have checked what I was quoting first! The messages I get have\nno funny characters in -and the reason for the error is that rel.h has\nthe following (as you can see, it doesn't have rd_istemp replacing\nrd_myxactonly):\n\n/*\n * Here are the contents of a relation cache entry.\n */\n\ntypedef struct RelationData\n{\n File rd_fd; /* open file descriptor,\nor -1 if none */\n RelFileNode rd_node; /* file node (physical\nidentifier) */\n BlockNumber rd_nblocks; /* number of blocks in rel */\n BlockNumber rd_targblock; /* current insertion target\nblock, or\n *\nInvalidBlockNumber */\n int rd_refcnt; /* reference\ncount */\n bool rd_myxactonly; /* rel uses the local buffer mgr\n*/\n bool rd_isnailed; /* rel is nailed in cache */\n bool rd_indexfound; /* true if rd_indexlist is valid\n*/\n bool rd_uniqueindex; /* true if rel is a UNIQUE index\n*/\n\n[rest snipped]. This is version 1.60. Tom's patch produced 1.61. I can't\nget anonCVS to give me 1.61. 
(But annoyingly, it gives me Tom's updated\nheapam.c 1.144).\n\n> I can report that the master CVS server delivers a correct copy. If\n> there is a CVS problem then it's only on the anoncvs mirror ...\n> \n\nWell, that seems likely -as cvsweb reports the file OK. \n\nI wonder whether our CVS mirroring is sufficiently atomic? i.e. did we\nget an inconsistent snapshot because it was taken partway through a\npatch being applied. \n\nThis is clearly going to be a bit of a pain if it is a consequence of\nheavier development activity - not least because it consumes everyone's\ntime chasing imaginary bugs. I'm assuming that it will just be a\ntransient issue - but there has been no change in it for several hours,\nso presumably the mirroring is not run that often...\n\nRegards\n\nJohn\n\n\n-- \nJohn Gray\t\nAzuli IT\t\nwww.azuli.co.uk\t\n\n\n",
"msg_date": "06 Aug 2002 15:17:56 +0100",
"msg_from": "John Gray <jgray@azuli.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: CVS sources doesn't compiles"
},
{
"msg_contents": "\nTry it now ... rsync, for some reason, is dumping core, so I just switched\nto using CVSup to pull it down ... does that fix it?\n\nOn 6 Aug 2002, John Gray wrote:\n\n> On Tue, 2002-08-06 at 14:49, Tom Lane wrote:\n> > John Gray <jgray@azuli.co.uk> writes:\n> > > On Tue, 2002-08-06 at 14:03, Oleg Bartunov wrote:\n> > >> make[4]: Entering directory /db1/pgsql/cvs/pgsql-server/src/backend/access/heap'\n> > >> gcc -O2 -mpentiumpro -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o heapam.o heapam.c\n> > >> heapam.c: In function \beap_insert':\n> > >> heapam.c:1158: structure has no member named \u0012d_istemp'\n> > >> heapam.c: In function \beap_delete':\n> > >> heapam.c:1341: structure has no member named \u0012d_istemp'\n> > >> heapam.c: In function \beap_update':\n> > >> heapam.c:1677: structure has no member named \u0012d_istemp'\n> > >> make[4]: *** [heapam.o] Error 1\n> > >> make[4]: Leaving directory /db1/pgsql/cvs/pgsql-server/src/backend/access/heap'\n> >\n> > Control-H? Control-R? You seem to have a corrupted copy of heapam.c.\n> > If you move it out of the way and do a \"cvs update\", do you get a copy\n> > with the identical errors?\n> >\n>\n> I should have checked what I was quoting first! 
The messages I get have\n> no funny characters in -and the reason for the error is that rel.h has\n> the following (as you can see, it doesn't have rd_istemp replacing\n> rd_myxactonly):\n>\n> /*\n> * Here are the contents of a relation cache entry.\n> */\n>\n> typedef struct RelationData\n> {\n> File rd_fd; /* open file descriptor,\n> or -1 if none */\n> RelFileNode rd_node; /* file node (physical\n> identifier) */\n> BlockNumber rd_nblocks; /* number of blocks in rel */\n> BlockNumber rd_targblock; /* current insertion target\n> block, or\n> *\n> InvalidBlockNumber */\n> int rd_refcnt; /* reference\n> count */\n> bool rd_myxactonly; /* rel uses the local buffer mgr\n> */\n> bool rd_isnailed; /* rel is nailed in cache */\n> bool rd_indexfound; /* true if rd_indexlist is valid\n> */\n> bool rd_uniqueindex; /* true if rel is a UNIQUE index\n> */\n>\n> [rest snipped]. This is version 1.60. Tom's patch produced 1.61. I can't\n> get anonCVS to give me 1.61. (But annoyingly, it gives me Tom's updated\n> heapam.c 1.144).\n>\n> > I can report that the master CVS server delivers a correct copy. If\n> > there is a CVS problem then it's only on the anoncvs mirror ...\n> >\n>\n> Well, that seems likely -as cvsweb reports the file OK.\n>\n> I wonder whether our CVS mirroring is sufficiently atomic? i.e. did we\n> get an inconsistent snapshot because it was taken partway through a\n> patch being applied.\n>\n> This is clearly going to be a bit of a pain if it is a consequence of\n> heavier development activity - not least because it consumes everyone's\n> time chasing imaginary bugs. I'm assuming that it will just be a\n> transient issue - but there has been no change in it for several hours,\n> so presumably the mirroring is not run that often...\n>\n> Regards\n>\n> John\n>\n>\n> --\n> John Gray\n> Azuli IT\n> www.azuli.co.uk\n>\n>\n>\n\n",
"msg_date": "Tue, 6 Aug 2002 11:29:25 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: CVS sources doesn't compiles"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> Try it now ... rsync, for some reason, is dumping core, so I just switched\n> to using CVSup to pull it down ... does that fix it?\n\nYes. I had just finished pulling a checkout from the anoncvs server,\nand it was indeed out of sync. A cvs update now brings it back in sync.\n\nWonder why rsync stopped working?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Aug 2002 10:40:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CVS sources doesn't compiles "
},
{
"msg_contents": "On Tue, 6 Aug 2002, Tom Lane wrote:\n\n> John Gray <jgray@azuli.co.uk> writes:\n> > On Tue, 2002-08-06 at 14:03, Oleg Bartunov wrote:\n> >> make[4]: Entering directory /db1/pgsql/cvs/pgsql-server/src/backend/access/heap'\n> >> gcc -O2 -mpentiumpro -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o heapam.o heapam.c\n> >> heapam.c: In function \beap_insert':\n> >> heapam.c:1158: structure has no member named \u0012d_istemp'\n> >> heapam.c: In function \beap_delete':\n> >> heapam.c:1341: structure has no member named \u0012d_istemp'\n> >> heapam.c: In function \beap_update':\n> >> heapam.c:1677: structure has no member named \u0012d_istemp'\n> >> make[4]: *** [heapam.o] Error 1\n> >> make[4]: Leaving directory /db1/pgsql/cvs/pgsql-server/src/backend/access/heap'\n>\n> Control-H? Control-R? You seem to have a corrupted copy of heapam.c.\n> If you move it out of the way and do a \"cvs update\", do you get a copy\n> with the identical errors?\n\nsorry, it's cut-n-paste problem in editor (joe) I used to compose message.\nVi is smarter about this.\n\n>\n> I can report that the master CVS server delivers a correct copy. If\n> there is a CVS problem then it's only on the anoncvs mirror ...\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 6 Aug 2002 18:26:59 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: CVS sources doesn't compiles "
},
{
"msg_contents": "It works now\n\nOn Tue, 6 Aug 2002, Marc G. Fournier wrote:\n\n>\n> Try it now ... rsycn, for some reason, is dumping core, so I just swithed\n> to using CVSup to pull it down ... does that fix it?\n>\n> On 6 Aug 2002, John Gray wrote:\n>\n> > On Tue, 2002-08-06 at 14:49, Tom Lane wrote:\n> > > John Gray <jgray@azuli.co.uk> writes:\n> > > > On Tue, 2002-08-06 at 14:03, Oleg Bartunov wrote:\n> > > >> make[4]: Entering directory /db1/pgsql/cvs/pgsql-server/src/backend/access/heap'\n> > > >> gcc -O2 -mpentiumpro -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o heapam.o heapam.c\n> > > >> heapam.c: In function \beap_insert':\n> > > >> heapam.c:1158: structure has no member named \u0012d_istemp'\n> > > >> heapam.c: In function \beap_delete':\n> > > >> heapam.c:1341: structure has no member named \u0012d_istemp'\n> > > >> heapam.c: In function \beap_update':\n> > > >> heapam.c:1677: structure has no member named \u0012d_istemp'\n> > > >> make[4]: *** [heapam.o] Error 1\n> > > >> make[4]: Leaving directory /db1/pgsql/cvs/pgsql-server/src/backend/access/heap'\n> > >\n> > > Control-H? Control-R? You seem to have a corrupted copy of heapam.c.\n> > > If you move it out of the way and do a \"cvs update\", do you get a copy\n> > > with the identical errors?\n> > >\n> >\n> > I should have checked what I was quoting first! 
The messages I get have\n> > no funny characters in -and the reason for the error is that rel.h has\n> > the following (as you can see, it doesn't have rd_istemp replacing\n> > rd_myxactonly):\n> >\n> > /*\n> > * Here are the contents of a relation cache entry.\n> > */\n> >\n> > typedef struct RelationData\n> > {\n> > File rd_fd; /* open file descriptor,\n> > or -1 if none */\n> > RelFileNode rd_node; /* file node (physical\n> > identifier) */\n> > BlockNumber rd_nblocks; /* number of blocks in rel */\n> > BlockNumber rd_targblock; /* current insertion target\n> > block, or\n> > *\n> > InvalidBlockNumber */\n> > int rd_refcnt; /* reference\n> > count */\n> > bool rd_myxactonly; /* rel uses the local buffer mgr\n> > */\n> > bool rd_isnailed; /* rel is nailed in cache */\n> > bool rd_indexfound; /* true if rd_indexlist is valid\n> > */\n> > bool rd_uniqueindex; /* true if rel is a UNIQUE index\n> > */\n> >\n> > [rest snipped]. This is version 1.60. Tom's patch produced 1.61. I can't\n> > get anonCVS to give me 1.61. (But annoyingly, it gives me Tom's updated\n> > heapam.c 1.144).\n> >\n> > > I can report that the master CVS server delivers a correct copy. If\n> > > there is a CVS problem then it's only on the anoncvs mirror ...\n> > >\n> >\n> > Well, that seems likely -as cvsweb reports the file OK.\n> >\n> > I wonder whether our CVS mirroring is sufficiently atomic? i.e. did we\n> > get an inconsistent snapshot because it was taken partway through a\n> > patch being applied.\n> >\n> > This is clearly going to be a bit of a pain if it is a consequence of\n> > heavier development activity - not least because it consumes everyone's\n> > time chasing imaginary bugs. 
I'm assuming that it will just be a\n> > transient issue - but there has been no change in it for several hours,\n> > so presumably the mirroring is not run that often...\n> >\n> > Regards\n> >\n> > John\n> >\n> >\n> > --\n> > John Gray\n> > Azuli IT\n> > www.azuli.co.uk\n> >\n> >\n> >\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 6 Aug 2002 18:30:13 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: CVS sources doesn't compiles"
},
{
"msg_contents": "On Tue, 6 Aug 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > Try it now ... rsycn, for some reason, is dumping core, so I just swithed\n> > to using CVSup to pull it down ... does that fix it?\n>\n> Yes. I had just finished pulling a checkout from the anoncvs server,\n> and it was indeed out of sync. A cvs update now brings it back in sync.\n>\n> Wonder why rsync stopped working?\n\nNot sure, cause it works if I do it from the command line :(\n\nCVSup should be updating every hour, on the :59 (just like rsync was setup\nfor) ...\n\n\n",
"msg_date": "Tue, 6 Aug 2002 13:16:31 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: CVS sources doesn't compiles "
}
] |
[
{
"msg_contents": "Please, apply patch for contrib/intarray (current CVS)\n\nChanges:\n\nAugust 6, 2002\n 1. Reworked patch from Andrey Oktyabrski (ano@spider.ru) with\n functions: icount, sort, sort_asc, uniq, idx, subarray\n operations: #, +, -, |, &\n\nFUNCTIONS:\n\n int icount(int[]) - the number of elements in intarray\n int[] sort(int[], 'asc' | 'desc') - sort intarray\n int[] sort(int[]) - sort in ascending order\n int[] sort_asc(int[]),sort_desc(int[]) - shortcuts for sort\n int[] uniq(int[]) - returns unique elements\n int idx(int[], int item) - returns index of first intarray matching element\n to item, or '0' if matching failed.\n int[] subarray(int[],int START [, int LEN]) - returns part of intarray\n starting from element number START (from 1)\n and length LEN.\nOPERATIONS:\n\n int[] && int[] - overlap - returns TRUE if arrays has at least one common elements.\n int[] @ int[] - contains - returns TRUE if left array contains right array\n int[] ~ int[] - contained - returns TRUE if left array is contained in right array\n # int[] - return the number of elements in array\n int[] + int - push element to array ( add to end of array)\n int[] + int[] - merge of arrays (right array added to the end of left one)\n int[] - int - remove entries matched by right argument from array\n int[] - int[] - remove left array from right\n int[] | int - returns intarray - union of arguments\n int[] | int[] - returns intarray as a union of two arrays\n int[] & int[] - returns intersection of arrays\n\n\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83",
"msg_date": "Tue, 6 Aug 2002 21:14:37 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "apply patch for contrib/intarray (CVS)"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nOleg Bartunov wrote:\n> Please, apply patch for contrib/intarray (current CVS)\n> \n> Changes:\n> \n> August 6, 2002\n> 1. Reworked patch from Andrey Oktyabrski (ano@spider.ru) with\n> functions: icount, sort, sort_asc, uniq, idx, subarray\n> operations: #, +, -, |, &\n> \n> FUNCTIONS:\n> \n> int icount(int[]) - the number of elements in intarray\n> int[] sort(int[], 'asc' | 'desc') - sort intarray\n> int[] sort(int[]) - sort in ascending order\n> int[] sort_asc(int[]),sort_desc(int[]) - shortcuts for sort\n> int[] uniq(int[]) - returns unique elements\n> int idx(int[], int item) - returns index of first intarray matching element\n> to item, or '0' if matching failed.\n> int[] subarray(int[],int START [, int LEN]) - returns part of intarray\n> starting from element number START (from 1)\n> and length LEN.\n> OPERATIONS:\n> \n> int[] && int[] - overlap - returns TRUE if arrays has at least one common elements.\n> int[] @ int[] - contains - returns TRUE if left array contains right array\n> int[] ~ int[] - contained - returns TRUE if left array is contained in right array\n> # int[] - return the number of elements in array\n> int[] + int - push element to array ( add to end of array)\n> int[] + int[] - merge of arrays (right array added to the end of left one)\n> int[] - int - remove entries matched by right argument from array\n> int[] - int[] - remove left array from right\n> int[] | int - returns intarray - union of arguments\n> int[] | int[] - returns intarray as a union of two arrays\n> int[] & int[] - returns intersection of arrays\n> \n> \n> \n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of 
AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Aug 2002 21:29:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: apply patch for contrib/intarray (CVS)"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n\nOleg Bartunov wrote:\n> Please, apply patch for contrib/intarray (current CVS)\n> \n> Changes:\n> \n> August 6, 2002\n> 1. Reworked patch from Andrey Oktyabrski (ano@spider.ru) with\n> functions: icount, sort, sort_asc, uniq, idx, subarray\n> operations: #, +, -, |, &\n> \n> FUNCTIONS:\n> \n> int icount(int[]) - the number of elements in intarray\n> int[] sort(int[], 'asc' | 'desc') - sort intarray\n> int[] sort(int[]) - sort in ascending order\n> int[] sort_asc(int[]),sort_desc(int[]) - shortcuts for sort\n> int[] uniq(int[]) - returns unique elements\n> int idx(int[], int item) - returns index of first intarray matching element\n> to item, or '0' if matching failed.\n> int[] subarray(int[],int START [, int LEN]) - returns part of intarray\n> starting from element number START (from 1)\n> and length LEN.\n> OPERATIONS:\n> \n> int[] && int[] - overlap - returns TRUE if arrays has at least one common elements.\n> int[] @ int[] - contains - returns TRUE if left array contains right array\n> int[] ~ int[] - contained - returns TRUE if left array is contained in right array\n> # int[] - return the number of elements in array\n> int[] + int - push element to array ( add to end of array)\n> int[] + int[] - merge of arrays (right array added to the end of left one)\n> int[] - int - remove entries matched by right argument from array\n> int[] - int[] - remove left array from right\n> int[] | int - returns intarray - union of arguments\n> int[] | int[] - returns intarray as a union of two arrays\n> int[] & int[] - returns intersection of arrays\n> \n> \n> \n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: 
+007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 10 Aug 2002 16:38:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: apply patch for contrib/intarray (CVS)"
}
] |
[
{
"msg_contents": "Can we make the fact that the \"explicit\" inner join syntax also controls\nthe join order optional? It's pretty annoying, because that syntax is\nsupposed to be a convenience but with PostgreSQL it's a recipe to slow\ndown your applications.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 6 Aug 2002 22:26:14 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Join syntax and join order"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Can we make the fact that the \"explicit\" inner join syntax also controls\n> the join order optional?\n\nI'm certainly not wedded to it, but what's the implementation plan,\nand how will you make the control available to them that needs it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Aug 2002 23:55:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Join syntax and join order "
},
{
"msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> \n>>Can we make the fact that the \"explicit\" inner join syntax also controls\n>>the join order optional?\n> \n> \n> I'm certainly not wedded to it, but what's the implementation plan,\n> and how will you make the control available to them that needs it?\n\nHow do you do it now with the non-standard syntax? You can get some \ncontrol by use of subselects. In Oracle we use this trick in some cases \nbecause it has the nasty habit of applying PL/SQL functions to all rows \nof a relationship *before* qualification.\n\nThe problem with the current situtation is that someone who writes a \nstandard SQL92 query mixing inner and outer joins gets the benefit of \noptimizer decisions in Oracle 9i and other commercial databases that \nsupports the syntax, but not PostgreSQL.\n\nFor those of us who write applications designed to work on multiple \nRDBMSs this poses something of a problem. In our case not much of a \ncurrent one because we still support Oracle 8i which doesn't have SQL92 \njoin syntax, so we're forced to write different outer join queries for \nit and PG anyway. But eventually we'll drop Oracle 8i support and look \nto share queries, at least for new code.\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Wed, 07 Aug 2002 05:30:00 -0700",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": false,
"msg_subject": "Re: Join syntax and join orde"
}
] |
[
{
"msg_contents": "Hello all,\n\nwe have a fairly complicated query that we thought might run a bit \nfaster, so ran postgress under a profiler. We're running on Solaris 8 \nusing the Sun Forte 7 compilers and it's profiler tools that can collect \nexecution profiles on a clock based profile as well as various hardware \ncounter overflow profiles. The database is fairly small at 800MB in size.\n\nUsing the collected data and poking around a bit it looks like memcpy() \nis being called a lot of times. The total run time for the query is 223 \ncpu seconds of which 15.3 is in memcpy(). Of these 15 seconds, 6.8 \ncomes from LogicalTapeRead() and 4 from BufFileRead().\n\nNow here's the rub: LogicalTapeRead() calls ltsReadBlock() which calls \nBufFileRead() which calls BufFileLoadBuffer() which calls FileRead().\n\nFileRead() actually calls the read() system call which puts the relevant \nbytes into a buffer in user space (and involves a copy of the data). \n Working our way back up the call chain, BufFileRead() calls memcpy(), \nperforming another copy of the bytes into a second buffer. Going \nfurther back up the call chain we find that LogicalTapeRead() again \ncalls memcpy() copying the bytes a *third* time into another buffer.\n\n\nIs it possible that one of these copies can be removed? It seems to me \nthat one can be, but I'm only looking at the source for the first time \nand perhaps I'm missing something. Yes, I know, this is only 6% of the \ncpu time, but memory busses are relatively slow and this copying is \nprobably flushing some cpu cache as well.\n\n\n-- Alan\nstange@rentec.com\n\n",
"msg_date": "Tue, 06 Aug 2002 16:40:01 -0400",
"msg_from": "Alan Stange <stange@rentec.com>",
"msg_from_op": true,
"msg_subject": "excessive memcpy() calls?"
}
] |
[
{
"msg_contents": "I am implementing a trigger based database replicator.\nAll the updates run on the master database, and the\nmaster fires the trigger once there is any update and\nwrites the change to the slave database.\nI am new to pgsql and I'd like some suggestion where to\nstart.\n\nThanks. \n\nAngela\nQi Deng\n",
"msg_date": "Tue, 06 Aug 2002 17:35:41 -0400 (EDT)",
"msg_from": "qdeng@cbnco.com",
"msg_from_op": true,
"msg_subject": "How the "
},
{
"msg_contents": "Use one that's already written:\n\nhttp://gborg.postgresql.org/\n\nChris\n\n\nOn Tue, 6 Aug 2002 qdeng@cbnco.com wrote:\n\n> I am implementing a trigger based database replicator.\n> All the updates run on the master database, and the\n> master fires the trigger once there is any update and\n> writes the change to the slave database.\n> I am new to pgsql and I'd like some suggestion where to\n> start.\n>\n> Thanks.\n>\n> Angela\n> Qi Deng\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n",
"msg_date": "Thu, 8 Aug 2002 22:40:57 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: How the "
},
{
"msg_contents": "On Thu, Aug 08, 2002 at 10:40:57PM +0800, Christopher Kings-Lynne wrote:\n> Use one that's already written:\n> \n> http://gborg.postgresql.org/\n\nOr even \n\n$ cd contrib/rserv\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Thu, 8 Aug 2002 13:32:25 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: How the"
}
] |
[
{
"msg_contents": "I'm still getting ltree failures on 64bit freebsd:\n\nsed 's,MODULE_PATHNAME,$libdir/ltree,g' ltree.sql.in >ltree.sql\ngcc -pipe -O -g -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPI\nC -DLOWER_NODE -I. -I../../src/include -c -o ltree_io.o ltree_io.c -MMD\nltree_io.c: In function `ltree_in':\nltree_io.c:57: warning: int format, different type arg (arg 3)\nltree_io.c:63: warning: int format, different type arg (arg 4)\nltree_io.c:68: warning: int format, different type arg (arg 3)\nltree_io.c:78: warning: int format, different type arg (arg 4)\nltree_io.c: In function `lquery_in':\nltree_io.c:185: warning: int format, different type arg (arg 3)\nltree_io.c:193: warning: int format, different type arg (arg 3)\nltree_io.c:197: warning: int format, different type arg (arg 3)\nltree_io.c:202: warning: int format, different type arg (arg 3)\nltree_io.c:207: warning: int format, different type arg (arg 3)\nltree_io.c:217: warning: int format, different type arg (arg 4)\nltree_io.c:226: warning: int format, different type arg (arg 4)\nltree_io.c:231: warning: int format, different type arg (arg 3)\nltree_io.c:233: warning: int format, different type arg (arg 3)\nltree_io.c:243: warning: int format, different type arg (arg 3)\nltree_io.c:251: warning: int format, different type arg (arg 3)\nltree_io.c:260: warning: int format, different type arg (arg 3)\nltree_io.c:265: warning: int format, different type arg (arg 3)\nltree_io.c:273: warning: int format, different type arg (arg 3)\nltree_io.c:279: warning: int format, different type arg (arg 3)\nltree_io.c:296: warning: int format, different type arg (arg 4)\n\nI think it's to do with the printf % thingy used in the elog...\n\nChris\n\n",
"msg_date": "Wed, 7 Aug 2002 10:24:50 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "contrib/ltree"
},
{
"msg_contents": "The patch solves this problem, I hope...\n\n\nChristopher Kings-Lynne wrote:\n> I'm still getting ltree failures on 64bit freebsd:\n> \n> sed 's,MODULE_PATHNAME,$libdir/ltree,g' ltree.sql.in >ltree.sql\n> gcc -pipe -O -g -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPI\n> C -DLOWER_NODE -I. -I../../src/include -c -o ltree_io.o ltree_io.c -MMD\n> ltree_io.c: In function `ltree_in':\n> ltree_io.c:57: warning: int format, different type arg (arg 3)\n> ltree_io.c:63: warning: int format, different type arg (arg 4)\n> ltree_io.c:68: warning: int format, different type arg (arg 3)\n> ltree_io.c:78: warning: int format, different type arg (arg 4)\n> ltree_io.c: In function `lquery_in':\n> ltree_io.c:185: warning: int format, different type arg (arg 3)\n> ltree_io.c:193: warning: int format, different type arg (arg 3)\n> ltree_io.c:197: warning: int format, different type arg (arg 3)\n> ltree_io.c:202: warning: int format, different type arg (arg 3)\n> ltree_io.c:207: warning: int format, different type arg (arg 3)\n> ltree_io.c:217: warning: int format, different type arg (arg 4)\n> ltree_io.c:226: warning: int format, different type arg (arg 4)\n> ltree_io.c:231: warning: int format, different type arg (arg 3)\n> ltree_io.c:233: warning: int format, different type arg (arg 3)\n> ltree_io.c:243: warning: int format, different type arg (arg 3)\n> ltree_io.c:251: warning: int format, different type arg (arg 3)\n> ltree_io.c:260: warning: int format, different type arg (arg 3)\n> ltree_io.c:265: warning: int format, different type arg (arg 3)\n> ltree_io.c:273: warning: int format, different type arg (arg 3)\n> ltree_io.c:279: warning: int format, different type arg (arg 3)\n> ltree_io.c:296: warning: int format, different type arg (arg 4)\n> \n> I think it's to do with the printf % thingy used in the elog...\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe 
commands go to majordomo@postgresql.org\n> \n\n\n-- \nTeodor Sigaev\nteodor@stack.net",
"msg_date": "Wed, 07 Aug 2002 15:28:29 +0400",
"msg_from": "Teodor Sigaev <teodor@stack.net>",
"msg_from_op": false,
"msg_subject": "Re: contrib/ltree, pls, apply patch "
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nTeodor Sigaev wrote:\n> The patch solves this problem, I hope...\n> \n> \n> Christopher Kings-Lynne wrote:\n> > I'm still getting ltree failures on 64bit freebsd:\n> > \n> > sed 's,MODULE_PATHNAME,$libdir/ltree,g' ltree.sql.in >ltree.sql\n> > gcc -pipe -O -g -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPI\n> > C -DLOWER_NODE -I. -I../../src/include -c -o ltree_io.o ltree_io.c -MMD\n> > ltree_io.c: In function `ltree_in':\n> > ltree_io.c:57: warning: int format, different type arg (arg 3)\n> > ltree_io.c:63: warning: int format, different type arg (arg 4)\n> > ltree_io.c:68: warning: int format, different type arg (arg 3)\n> > ltree_io.c:78: warning: int format, different type arg (arg 4)\n> > ltree_io.c: In function `lquery_in':\n> > ltree_io.c:185: warning: int format, different type arg (arg 3)\n> > ltree_io.c:193: warning: int format, different type arg (arg 3)\n> > ltree_io.c:197: warning: int format, different type arg (arg 3)\n> > ltree_io.c:202: warning: int format, different type arg (arg 3)\n> > ltree_io.c:207: warning: int format, different type arg (arg 3)\n> > ltree_io.c:217: warning: int format, different type arg (arg 4)\n> > ltree_io.c:226: warning: int format, different type arg (arg 4)\n> > ltree_io.c:231: warning: int format, different type arg (arg 3)\n> > ltree_io.c:233: warning: int format, different type arg (arg 3)\n> > ltree_io.c:243: warning: int format, different type arg (arg 3)\n> > ltree_io.c:251: warning: int format, different type arg (arg 3)\n> > ltree_io.c:260: warning: int format, different type arg (arg 3)\n> > ltree_io.c:265: warning: int format, different type arg (arg 3)\n> > ltree_io.c:273: warning: int format, different type arg (arg 3)\n> > 
ltree_io.c:279: warning: int format, different type arg (arg 3)\n> > ltree_io.c:296: warning: int format, different type arg (arg 4)\n> > \n> > I think it's to do with the printf % thingy used in the elog...\n> > \n> > Chris\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> > \n> \n> \n> -- \n> Teodor Sigaev\n> teodor@stack.net\n> \n\n[ application/gzip is not supported, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 8 Aug 2002 19:26:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/ltree, pls, apply patch"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n\nTeodor Sigaev wrote:\n> The patch solves this problem, I hope...\n> \n> \n> Christopher Kings-Lynne wrote:\n> > I'm still getting ltree failures on 64bit freebsd:\n> > \n> > sed 's,MODULE_PATHNAME,$libdir/ltree,g' ltree.sql.in >ltree.sql\n> > gcc -pipe -O -g -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPI\n> > C -DLOWER_NODE -I. -I../../src/include -c -o ltree_io.o ltree_io.c -MMD\n> > ltree_io.c: In function `ltree_in':\n> > ltree_io.c:57: warning: int format, different type arg (arg 3)\n> > ltree_io.c:63: warning: int format, different type arg (arg 4)\n> > ltree_io.c:68: warning: int format, different type arg (arg 3)\n> > ltree_io.c:78: warning: int format, different type arg (arg 4)\n> > ltree_io.c: In function `lquery_in':\n> > ltree_io.c:185: warning: int format, different type arg (arg 3)\n> > ltree_io.c:193: warning: int format, different type arg (arg 3)\n> > ltree_io.c:197: warning: int format, different type arg (arg 3)\n> > ltree_io.c:202: warning: int format, different type arg (arg 3)\n> > ltree_io.c:207: warning: int format, different type arg (arg 3)\n> > ltree_io.c:217: warning: int format, different type arg (arg 4)\n> > ltree_io.c:226: warning: int format, different type arg (arg 4)\n> > ltree_io.c:231: warning: int format, different type arg (arg 3)\n> > ltree_io.c:233: warning: int format, different type arg (arg 3)\n> > ltree_io.c:243: warning: int format, different type arg (arg 3)\n> > ltree_io.c:251: warning: int format, different type arg (arg 3)\n> > ltree_io.c:260: warning: int format, different type arg (arg 3)\n> > ltree_io.c:265: warning: int format, different type arg (arg 3)\n> > ltree_io.c:273: warning: int format, different type arg (arg 3)\n> > ltree_io.c:279: warning: int format, different type arg (arg 3)\n> > ltree_io.c:296: warning: int format, different type arg (arg 4)\n> > \n> > I 
think it's to do with the printf % thingy used in the elog...\n> > \n> > Chris\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> > \n> \n> \n> -- \n> Teodor Sigaev\n> teodor@stack.net\n> \n\n[ application/gzip is not supported, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 10 Aug 2002 16:46:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/ltree, pls, apply patch"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \n> Sent: 07 August 2002 02:09\n> To: Peter Eisentraut\n> Cc: Marc G. Fournier; Ron Snyder; Neil Conway; PostgreSQL-development\n> Subject: Re: [HACKERS] Open 7.3 items\n> \n> \n> \n> It had such limited usefulness ('password' only, only \n> crypted-hashed passwords in the file) that it doesn't make \n> much sense to resurect it.\n> \n> To directly address your point, I don't think this new \n> feature will be used enough to add the capability to the user \n> admin commands.\n> \n> I know you object, so I am going to ask for a vote.\n> \n> OK, here is the request for vote. Do we want:\n> \n> \t1) the old secondary passwords re-added\n> \t2) the new prefixing of the database name to the \n> username when enabled\n> \t3) do nothing\n\n2 please. I have used secondary passwords in the past, but 2 would do\nthe same job and seems cleaner.\n\nRegards, Dave.\n",
"msg_date": "Wed, 7 Aug 2002 09:02:33 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Open 7.3 items"
}
] |
[
{
"msg_contents": "Since the encoding conversion now looks up database, it must be\ndone within a transaction. Currently the encoding conversion function\npg_client_to_server() is called in pq_getstr(), which may not be in a\ntransaction. So I would like to move the call to pg_client_to_server()\nwithin pg_exec_query_string(), after start_xact_command() gets called.\nComments?\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 07 Aug 2002 19:26:39 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "moving FE->BE encoding conversion"
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Since the encoding conversion now looks up database, it must be\n> done within a transaction.\n\nI still think that this is a fundamentally unworkable design. How will\nyou deal with reporting errors outside a transaction, or for that matter\ninside an already-failed transaction?\n\nISTM the conversion function *must* be kept as a persistent state\nvariable, with an initial default of \"no conversion\", and the actual\nsetting done via a SET command (which can do the lookup inside a\ntransaction). Then you can use the current conversion function without\nany assumptions about transaction state.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Aug 2002 09:44:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: moving FE->BE encoding conversion "
},
{
"msg_contents": "> I still think that this is a fundamentally unworkable design. How will\n> you deal with reporting errors outside a transaction, or for that matter\n> inside an already-failed transaction?\n\nIn those cases we could detect the state and could do no conversion.\n\n> ISTM the conversion function *must* be kept as a persistent state\n> variable, with an initial default of \"no conversion\", and the actual\n> setting done via a SET command (which can do the lookup inside a\n> transaction). Then you can use the current conversion function without\n> any assumptions about transaction state.\n\nAre you saying keeping conversion function's oids in memory after\ncurrent conversion is explicitly set by SET command or others? Even\nif we do that, fmgr might want to look up pg_proc to load the\nfunctions outside the transaction, no?\n\nOr are you saying we should load the functions when the SET command is\nexecuted? I'm not sure if OidFunctionCall could invoke the function\nwithout looking up pg_proc in this case.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 07 Aug 2002 23:42:37 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: moving FE->BE encoding conversion "
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Are you saying keeping conversion function's oids in memory after\n> current conversion is explicitly set by SET command or others? Even\n> if we do that, fmgr might want to look up pg_proc to load the\n> functions outside the transaction, no?\n> Or are you saying we should load the functions when the SET command is\n> executed? I'm not sure if OidFunctionCall could invoke the function\n> without looking up pg_proc in this case.\n\nOidFunctionCallN will not work, but I believe it would work to\nconstruct an FmgrInfo record during the SET operation, keep that, and\nuse FunctionCallN when you need to invoke the converter. This will\ndefinitely work for built-in and new-style dynamically loaded C\nfunctions, and that's probably all we need/want to support anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Aug 2002 11:05:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: moving FE->BE encoding conversion "
},
{
"msg_contents": "> OidFunctionCallN will not work, but I believe it would work to\n> construct an FmgrInfo record during the SET operation, keep that, and\n> use FunctionCallN when you need to invoke the converter. This will\n> definitely work for built-in and new-style dynamically loaded C\n> functions, and that's probably all we need/want to support anyway.\n\nOk, I have committed changes per your suggestion.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 08 Aug 2002 15:37:02 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: moving FE->BE encoding conversion "
}
] |
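The scheme Tom describes above — do the catalog lookup once inside the SET command, keep the resolved state (the FmgrInfo record), and later invoke it with FunctionCallN and no pg_proc access — can be illustrated outside the backend. The sketch below is a Python analogue; the mock catalog and the cached-callable names are illustrative stand-ins, not PostgreSQL APIs:

```python
# Analogue of building an FmgrInfo at SET time: resolve the conversion
# function once (inside a transaction, where catalog lookups are safe),
# then invoke the cached callable on the hot path with no lookup at all.

# Stand-in for pg_proc: conversion name -> callable (illustrative only).
_MOCK_CATALOG = {
    "no_conversion": lambda data: data,
    "latin1_to_utf8": lambda data: data.decode("latin-1").encode("utf-8"),
}

# Persistent state variable, initial default of "no conversion".
_cached_converter = _MOCK_CATALOG["no_conversion"]

def set_client_conversion(name):
    """Run by the SET command: the only place a lookup ever happens."""
    global _cached_converter
    _cached_converter = _MOCK_CATALOG[name]

def convert(data):
    """Hot path: safe in any transaction state, no catalog access."""
    return _cached_converter(data)
```

With the default in place, `convert()` passes bytes through unchanged; after `set_client_conversion("latin1_to_utf8")`, Latin-1 input such as `b"caf\xe9"` comes back recoded as the UTF-8 bytes `b"caf\xc3\xa9"`.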
[
{
"msg_contents": "On Mon, 05 Aug 2002 10:06:37 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>I've refrained from touching the tuple-header issues until Manfred\n>returns from vacation and can defend himself ;-)\nEn garde!  Seriously, I don't think I have to defend *myself*, because\nI'm not going to be attacked personally.  I'm ready to discuss,\ndefend, and/or drop my work ...\n\n.  Transaction and command ids, performance\n\nI offered to provide cheaper versions of GetCmin and GetCmax to be\nused by the tqual routines.  These versions would reduce additional CPU\nwork from two infomask compares to one.  Is this still considered an\nissue?\n\n.  Transaction and command ids, robustness\n\nI'm still of the opinion that putting *more* knowledge into the SetXxx\nmacros is the way to go.  The runaway INSERT bug could as well have\nbeen fixed by changing SetCmax to do nothing, if HEAP_XMAX_INVALID is\nset, and changing SetXmaxInvalid to set HEAP_XMAX_INVALID.  Likewise I\nwould change SetXmax to set HEAP_XMAX_INVALID, if xid ==\nInvalidTransactionId, or reset it, if != (not sure about this).  Same\nfor SetXminInvalid and SetXmin.\n\nFurther, I'll try to build a regression test using statement timeout\nto detect runaway INSERT/UPDATE (the so called \"halloween\" problem).\n\n.  Oids\n\nI was a bit surprised that these patches went in so smoothly, must be\ndue to the fact that there was a lot of work to do at that time.\nPersonally I feel that these changes are more dangerous than the\nXmin/Cid/Xmax patches; and I ask the hackers to review especially\npart 2, which contains changes to the executor and even to bootstrap.\n\n.  Oids, t_infomask\n\nThere has been no comment from a tool developer.\n\n.  Oids, heap_getsysattr\n\nWe thought that a TupleDesc parameter would have to be added for this\nfunction.  However, tests showed that heap_getsysattr is not called to\nget the oid, when the tuple doesn't have an oid:  \"ERROR:  Attribute\n'oid' not found\".\n\n. 
Oids, micro tuning\n\nThere are a few places where storing an oid in a local variable might\nbe a little faster than fetching it several times from a heap tuple\nheader.\n\n.  Overall performance\n\nIf Joe Conway can be talked into running OSDB benchmarks with old and\nnew heap tuple header format, I'll provide patches and instructions to\neasily switch between versions.  Or, Joe, can you tell me what I need\nto have and need to do to set up a benchmarking environment?\n\n.  CVS\n\nThere have been a lot of \"CVS broken\" messages in the past few days.\nWhen I tried\n	cvs -z3 log heapam.c\nI got\n| cvs server: failed to create lock directory for `/projects/cvsroot/pgsql/src/backend/access/heap' (/projects/cvsroot/pgsql/src/backend/access/heap/#cvs.lock): No such file or directory\n| cvs server: failed to obtain dir lock in repository `/projects/cvsroot/pgsql/src/backend/access/heap'\n| cvs [server aborted]: read lock failed - giving up\n\nIs this a temporary problem or did I miss any planned changes?\n\nServus\n Manfred\n",
"msg_date": "Wed, 07 Aug 2002 16:16:14 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": true,
"msg_subject": "Heap tuple header issues"
},
{
"msg_contents": "Manfred Koizar wrote:\n> . Overall performance\n> \n> If Joe Conway can be talked into running OSDB benchmarks with old and\n> new heap tuple header format, I'll provide patches and instructions to\n> easily switch between versions. Or, Joe, can you tell me, what I need\n> to have and need to do to set up a benchmarking environment?\n\nI just grabbed a copy and followed the instructions as found on \nsourceforge. Just search on OSDB. Get the source tarball and the 40MB \ntest dataset.\n\nI guess I could be talked into running some more tests, but they take a \nfew hours of run time for each iteration, during which I can't use my \ndev machine (at least if I don't want to affect the results), so it's a \nbit of a pain.\n\nWe should probably coordinate off-list and then just report back the \nresults.\n\nJoe\n\n",
"msg_date": "Wed, 07 Aug 2002 08:51:08 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Heap tuple header issues"
},
{
"msg_contents": "On Wed, 07 Aug 2002 08:51:08 -0700, Joe Conway <mail@joeconway.com>\nwrote:\n>I just grabbed a copy and followed the instructions as found on \n>sourceforge. Just search on OSDB. Get the source tarball and the 40MB \n>test dataset.\n\nNothing special for PG? That's good news. I imagined there was some\ntweaking necessary; don't remember where I got that impression from.\nDownload started.\n\n>I guess I could be talked into running some more tests\n\nIf I cannot get reliable results out of my slow old development\nmachine, I'll come back to you. Thanks.\n\nServus\n Manfred\n",
"msg_date": "Wed, 07 Aug 2002 18:21:06 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": true,
"msg_subject": "OSDB (was: Heap tuple header issues)"
},
{
"msg_contents": "Manfred Koizar wrote:\n> On Wed, 07 Aug 2002 08:51:08 -0700, Joe Conway <mail@joeconway.com>\n> wrote:\n> \n>>I just grabbed a copy and followed the instructions as found on \n>>sourceforge. Just search on OSDB. Get the source tarball and the 40MB \n>>test dataset.\n> \n> \n> Nothing special for PG? That's good news. I imagined there was some\n> tweaking necessary; don't remember where I got that impression from.\n> Download started.\n\nI only needed to do:\n ./configure --with-postgresql=/usr/local/pgsql\n make\n make install\n\nand then\n osdb-pg --datadir /opt/src/osdb/data --postgresql=no_hash_index\n\n> If I cannot get reliable results out of my slow old development\n> machine, I'll come back to you. Thanks.\n\nIIRC the test takes 2 to 3 hours on my development machine which is a \n1GHz Athlon, 512MB RAM, 7200 rpm IDE drives. If it becomes a problem for \nyou, let me know.\n\nJoe\n\n",
"msg_date": "Wed, 07 Aug 2002 09:35:19 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: OSDB (was: Heap tuple header issues)"
}
] |
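Manfred's proposed "smarter SetXxx" semantics from the start of this thread — SetXmax setting HEAP_XMAX_INVALID when handed InvalidTransactionId and clearing it otherwise, and SetCmax doing nothing while HEAP_XMAX_INVALID is set — can be sketched self-containedly. The flag value and field names below are illustrative, not the real htup.h definitions:

```python
# Sketch of the proposed SetXmax/SetCmax semantics: the setters keep
# t_infomask consistent on their own, so a bare store can no longer
# leave a tuple in the state behind the runaway INSERT ("halloween")
# bug.  Flag value and field names are illustrative, not htup.h's.

HEAP_XMAX_INVALID = 0x0800       # illustrative bit position
INVALID_TRANSACTION_ID = 0

class TupleHeader:
    def __init__(self):
        self.t_xmax = INVALID_TRANSACTION_ID
        self.t_cmax = 0
        self.t_infomask = HEAP_XMAX_INVALID   # fresh tuple: no deleter yet

def set_xmax(hdr, xid):
    # Store xid and keep the invalid bit in sync with it.
    hdr.t_xmax = xid
    if xid == INVALID_TRANSACTION_ID:
        hdr.t_infomask |= HEAP_XMAX_INVALID
    else:
        hdr.t_infomask &= ~HEAP_XMAX_INVALID

def set_cmax(hdr, cid):
    # Proposed guard: a cmax is meaningless without a valid xmax.
    if hdr.t_infomask & HEAP_XMAX_INVALID:
        return
    hdr.t_cmax = cid
```

On a fresh header `set_cmax()` is a no-op; only after `set_xmax()` installs a real xid does a command id stick, matching the "SetCmax does nothing if HEAP_XMAX_INVALID is set" fix described in the thread.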
[
{
"msg_contents": "\nJust letting everyone know that Rackspace swap'd out the RAM on the server\nthis afternoon, which appears to have cleared up the cascade of SegFaults\nthat the server was generating this aft ...\n\nAll appears well right now ...\n\n\n",
"msg_date": "Wed, 7 Aug 2002 19:46:52 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Today's Downtime ..."
},
{
"msg_contents": "On Wed, 7 Aug 2002, Marc G. Fournier wrote:\n\n>\n> Just letting everyone know that Rackspace swap'd out the RAM on the server\n> this afternoon, which appears to have cleared up the cascade of SegFaults\n> that the server was generating this aft ...\n>\n> All appears well right now ...\n\n'cept that the database still ain't talkin to us.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 7 Aug 2002 18:48:57 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Today's Downtime ..."
},
{
"msg_contents": "\n'tis now ...\n\nOn Wed, 7 Aug 2002, Vince Vielhaber wrote:\n\n> On Wed, 7 Aug 2002, Marc G. Fournier wrote:\n>\n> >\n> > Just letting everyone know that Rackspace swap'd out the RAM on the server\n> > this afternoon, which appears to have cleared up the cascade of SegFaults\n> > that the server was generating this aft ...\n> >\n> > All appears well right now ...\n>\n> 'cept that the database still ain't talkin to us.\n>\n> Vince.\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n>\n>\n>\n>\n\n",
"msg_date": "Wed, 7 Aug 2002 19:51:43 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Today's Downtime ..."
},
{
"msg_contents": "On Wed, 7 Aug 2002, Marc G. Fournier wrote:\n\n>\n> 'tis now ...\n\ntanks!\n\n>\n> On Wed, 7 Aug 2002, Vince Vielhaber wrote:\n>\n> > On Wed, 7 Aug 2002, Marc G. Fournier wrote:\n> >\n> > >\n> > > Just letting everyone know that Rackspace swap'd out the RAM on the server\n> > > this afternoon, which appears to have cleared up the cascade of SegFaults\n> > > that the server was generating this aft ...\n> > >\n> > > All appears well right now ...\n> >\n> > 'cept that the database still ain't talkin to us.\n> >\n> > Vince.\n> > --\n> > ==========================================================================\n> > Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> > 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> > Online Campground Directory http://www.camping-usa.com\n> > Online Giftshop Superstore http://www.cloudninegifts.com\n> > ==========================================================================\n> >\n> >\n> >\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 7 Aug 2002 18:52:39 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Today's Downtime ..."
}
] |
[
{
"msg_contents": "Okay, I read\nhttp://archives.postgresql.org/pgsql-bugs/2002-06/msg00086.php and never\nsaw a fix offered up.  Since I'm gearing up to use Postgres and Python\nsoon, I figured I'd try my hand at getting this sucker addressed. \nApologies if this has already been plugged.  I looked in the archives\nand never saw a response.\n\nAt any rate, I must admit I don't think I fully understand the\nimplications of some of the changes I made even though they appear to be\nstraight forward.  We all know the devil is in the details.  Anyone more\nknowledgeable is requested to review my changes.  :(\n\nI also updated the advanced.py script in a somewhat nonsensical fashion\nto make use of an int8 field in an effort to test this change.  It seems\nto run okay; however, this is by no means an exhaustive test.  So,\nit's possible that a bumpy road may lie ahead for some.  On the other\nhand...overflows (hopefully) previously lurked (long -> int conversion).\n\nThis is my first submission.  Please be kind if I submitted to the wrong\nlist.  ;)\n\nThank you,\n\tGreg Copeland",
"msg_date": "07 Aug 2002 21:55:09 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "python patch"
},
{
"msg_contents": "Hi Greg,\n\nIf you're looking at the Python code, do you feel like trying to submit a\npatch to make it respect the new 'attisdropped' attribute of the\n'pg_attribute' catalog.  This is a flag that indicates that a column is\ndropped and I notice that Python accesses the pg_attribute relation, and\nprobably needs to skip over attisdropped columns.\n\nOh yeah, you'd have to be working with CVS postgres to do this...\n\nJust a thought...no pressure :)\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Greg Copeland\n> Sent: Thursday, 8 August 2002 10:55 AM\n> To: PostgresSQL Hackers Mailing List\n> Subject: [HACKERS] python patch\n>\n>\n> Okay, I read\n> http://archives.postgresql.org/pgsql-bugs/2002-06/msg00086.php and never\n> saw a fix offered up.  Since I'm gearing up to use Postgres and Python\n> soon, I figured I'd have a hand at trying to get this sucker addressed.\n> Apologies if this has already been plugged.  I looked in the archives\n> and never saw a response.\n>\n> At any rate, I must admit I don't think I fully understand the\n> implications of some of the changes I made even though they appear to be\n> straight forward.  We all know the devil is in the details.  Anyone more\n> knowledgeable is requested to review my changes.  :(\n>\n> I also updated the advanced.py script in a somewhat nonsensical fashion\n> to make use of an int8 field in an effort to test this change.  It seems\n> to run okay, however, this is by no means an all exhaustive test.  So,\n> it's possible that a bumpy road may lay ahead for some.  On the other\n> hand...overflows (hopefully) previously lurked (long -> int conversion).\n>\n> This is my first submission.  Please be kind if I submitted to the wrong\n> list.  ;)\n>\n> Thank you,\n> \tGreg Copeland\n>\n>\n",
"msg_date": "Thu, 8 Aug 2002 11:01:23 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: python patch"
},
{
"msg_contents": "I don't have a problem looking into it but I can't promise I can get it\nright. My python skills are fairly good...my postgres internal skills\nare still sub-par IMO.\n\nFrom a cursory review, if attisdropped is true then the attribute/column\nshould be ignored/skipped?! Seems pretty dang straight forward.\n\nI'll have a look at it and see what I can come up with.\n\nFYI, I'm currently working off of anonymous CVS. The patch I submitted\nwas against CVS, current within the last couple of hours.\n\nGreg\n\n\n\nOn Wed, 2002-08-07 at 22:01, Christopher Kings-Lynne wrote:\n> Hi Greg,\n> \n> If you're looking at the Python code, do you feel like trying to submit a\n> patch to make it respec the new 'attisdropped' attribute of the\n> 'pg_attribute' catalog. This is a flag that indicates that a column is\n> dropped and I notice that Python accesses the pg_attribute relation, and\n> probably needs to skip over attisdropped columns.\n> \n> Oh yeah, you'd have to be working with CVS postgres to do this...\n> \n> Just a thought...no pressure :)\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Greg Copeland\n> > Sent: Thursday, 8 August 2002 10:55 AM\n> > To: PostgresSQL Hackers Mailing List\n> > Subject: [HACKERS] python patch\n> >\n> >\n> > Okay, I read\n> > http://archives.postgresql.org/pgsql-bugs/2002-06/msg00086.php and never\n> > saw a fix offered up. Since I'm gearing up to use Postgres and Python\n> > soon, I figured I'd have a hand at trying to get this sucker addressed.\n> > Apologies if this has already been plugged. I looked in the archives\n> > and never saw a response.\n> >\n> > At any rate, I must admit I don't think I fully understand the\n> > implications of some of the changes I made even though they appear to be\n> > straight forward. We all know the devil is in the details. 
Anyone more\n> > knowledgeable is requested to review my changes. :(\n> >\n> > I also updated the advanced.py script in a somewhat nonsensical fashion\n> > to make use of an int8 field in an effort to test this change. It seems\n> > to run okay, however, this is by no means an all exhaustive test. So,\n> > it's possible that a bumpy road may lay ahead for some. On the other\n> > hand...overflows (hopefully) previously lurked (long -> int conversion).\n> >\n> > This is my first submission. Please be kind if I submitted to the wrong\n> > list. ;)\n> >\n> > Thank you,\n> > \tGreg Copeland\n> >\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html",
"msg_date": "07 Aug 2002 22:10:59 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Re: python patch"
},
{
"msg_contents": "> I don't have a problem looking into it but I can't promise I can get it\n> right. My python skills are fairly good...my postgres internal skills\n> are still sub-par IMO.\n> \n> From a cursory review, if attisdropped is true then the attribute/column\n> should be ignored/skipped?! Seems pretty dang straight forward.\n\nBasically, yep. Just grep the source code for pg_attribute most likely...\n\nI'm interested in knowing what it uses pg_attribute for as well...?\n\nChris\n\n",
"msg_date": "Thu, 8 Aug 2002 11:16:34 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: python patch"
},
{
"msg_contents": "Well, that certainly appeared to be very straight forward.  pg.py and\nsyscat.py scripts were both modified.  pg.py uses it to cache a list of\npks (which it seemingly does for every db connection) and various\nattributes.  syscat uses it to walk the list of system tables and\nqueries the various attributes from these tables.\n\nIn both cases, it seemingly makes sense to apply what you've requested.\n\nPlease find attached the requested patch below.\n\nGreg\n\n\nOn Wed, 2002-08-07 at 22:16, Christopher Kings-Lynne wrote:\n> > I don't have a problem looking into it but I can't promise I can get it\n> > right.  My python skills are fairly good...my postgres internal skills\n> > are still sub-par IMO.\n> > \n> > From a cursory review, if attisdropped is true then the attribute/column\n> > should be ignored/skipped?!  Seems pretty dang straight forward.\n> \n> Basically, yep.  Just grep the source code for pg_attribute most likely...\n> \n> I'm interested in knowing what it uses pg_attribute for as well...?\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly",
"msg_date": "07 Aug 2002 22:35:50 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Re: python patch"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nGreg Copeland wrote:\n\nChecking application/pgp-signature: FAILURE\n-- Start of PGP signed section.\n> Okay, I read\n> http://archives.postgresql.org/pgsql-bugs/2002-06/msg00086.php and never\n> saw a fix offered up. Since I'm gearing up to use Postgres and Python\n> soon, I figured I'd have a hand at trying to get this sucker addressed. \n> Apologies if this has already been plugged. I looked in the archives\n> and never saw a response.\n> \n> At any rate, I must admit I don't think I fully understand the\n> implications of some of the changes I made even though they appear to be\n> straight forward. We all know the devil is in the details. Anyone more\n> knowledgeable is requested to review my changes. :(\n> \n> I also updated the advanced.py script in a somewhat nonsensical fashion\n> to make use of an int8 field in an effort to test this change. It seems\n> to run okay, however, this is by no means an all exhaustive test. So,\n> it's possible that a bumpy road may lay ahead for some. On the other\n> hand...overflows (hopefully) previously lurked (long -> int conversion).\n> \n> This is my first submission. Please be kind if I submitted to the wrong\n> list. ;)\n> \n> Thank you,\n> \tGreg Copeland\n> \n\n[ text/x-diff is unsupported, treating like TEXT/PLAIN ]\n\n> ? lib_pgmodule.so.0.0\n> ? postgres-python.patch\n> ? 
tutorial/advanced.pyc\n> Index: pgmodule.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql-server/src/interfaces/python/pgmodule.c,v\n> retrieving revision 1.38\n> diff -u -r1.38 pgmodule.c\n> --- pgmodule.c\t2002/03/29 07:45:39\t1.38\n> +++ pgmodule.c\t2002/08/08 02:46:12\n> @@ -289,23 +289,26 @@\n> \t\t{\n> \t\t\tcase INT2OID:\n> \t\t\tcase INT4OID:\n> -\t\t\tcase INT8OID:\n> \t\t\tcase OIDOID:\n> \t\t\t\ttyp[j] = 1;\n> \t\t\t\tbreak;\n> \n> +\t\t\tcase INT8OID:\n> +\t\t\t\ttyp[j] = 2;\n> +\t\t\t\tbreak;\n> +\n> \t\t\tcase FLOAT4OID:\n> \t\t\tcase FLOAT8OID:\n> \t\t\tcase NUMERICOID:\n> -\t\t\t\ttyp[j] = 2;\n> +\t\t\t\ttyp[j] = 3;\n> \t\t\t\tbreak;\n> \n> \t\t\tcase CASHOID:\n> -\t\t\t\ttyp[j] = 3;\n> +\t\t\t\ttyp[j] = 4;\n> \t\t\t\tbreak;\n> \n> \t\t\tdefault:\n> -\t\t\t\ttyp[j] = 4;\n> +\t\t\t\ttyp[j] = 5;\n> \t\t\t\tbreak;\n> \t\t}\n> \t}\n> @@ -1797,23 +1800,26 @@\n> \t\t{\n> \t\t\tcase INT2OID:\n> \t\t\tcase INT4OID:\n> -\t\t\tcase INT8OID:\n> \t\t\tcase OIDOID:\n> \t\t\t\ttyp[j] = 1;\n> \t\t\t\tbreak;\n> \n> +\t\t\tcase INT8OID:\n> +\t\t\t\ttyp[j] = 2;\n> +\t\t\t\tbreak;\n> +\n> \t\t\tcase FLOAT4OID:\n> \t\t\tcase FLOAT8OID:\n> \t\t\tcase NUMERICOID:\n> -\t\t\t\ttyp[j] = 2;\n> +\t\t\t\ttyp[j] = 3;\n> \t\t\t\tbreak;\n> \n> \t\t\tcase CASHOID:\n> -\t\t\t\ttyp[j] = 3;\n> +\t\t\t\ttyp[j] = 4;\n> \t\t\t\tbreak;\n> \n> \t\t\tdefault:\n> -\t\t\t\ttyp[j] = 4;\n> +\t\t\t\ttyp[j] = 5;\n> \t\t\t\tbreak;\n> \t\t}\n> \t}\n> @@ -1846,10 +1852,14 @@\n> \t\t\t\t\t\tbreak;\n> \n> \t\t\t\t\tcase 2:\n> -\t\t\t\t\t\tval = PyFloat_FromDouble(strtod(s, NULL));\n> +\t\t\t\t\t\tval = PyLong_FromLong(strtol(s, NULL, 10));\n> \t\t\t\t\t\tbreak;\n> \n> \t\t\t\t\tcase 3:\n> +\t\t\t\t\t\tval = PyFloat_FromDouble(strtod(s, NULL));\n> +\t\t\t\t\t\tbreak;\n> +\n> +\t\t\t\t\tcase 4:\n> \t\t\t\t\t\t{\n> \t\t\t\t\t\t\tint\t\t\tmult = 1;\n> \n> @@ -1946,11 +1956,14 @@\n> \t\t{\n> \t\t\tcase INT2OID:\n> \t\t\tcase INT4OID:\n> 
-\t\t\tcase INT8OID:\n> \t\t\tcase OIDOID:\n> \t\t\t\ttyp[j] = 1;\n> \t\t\t\tbreak;\n> \n> +\t\t\tcase INT8OID:\n> +\t\t\t\ttyp[j] = 2;\n> +\t\t\t\tbreak;\n> +\n> \t\t\tcase FLOAT4OID:\n> \t\t\tcase FLOAT8OID:\n> \t\t\tcase NUMERICOID:\n> @@ -1995,10 +2008,14 @@\n> \t\t\t\t\t\tbreak;\n> \n> \t\t\t\t\tcase 2:\n> -\t\t\t\t\t\tval = PyFloat_FromDouble(strtod(s, NULL));\n> +\t\t\t\t\t\tval = PyLong_FromLong(strtol(s, NULL, 10));\n> \t\t\t\t\t\tbreak;\n> \n> \t\t\t\t\tcase 3:\n> +\t\t\t\t\t\tval = PyFloat_FromDouble(strtod(s, NULL));\n> +\t\t\t\t\t\tbreak;\n> +\n> +\t\t\t\t\tcase 4:\n> \t\t\t\t\t\t{\n> \t\t\t\t\t\t\tint\t\t\tmult = 1;\n> \n> Index: tutorial/advanced.py\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql-server/src/interfaces/python/tutorial/advanced.py,v\n> retrieving revision 1.5\n> diff -u -r1.5 advanced.py\n> --- tutorial/advanced.py\t2000/10/02 03:46:24\t1.5\n> +++ tutorial/advanced.py\t2002/08/08 02:46:12\n> @@ -109,11 +109,13 @@\n> \tprint \"CREATE TABLE sal_emp (\"\n> \tprint \" name text,\"\n> \tprint \" pay_by_quarter int4[],\"\n> +\tprint \" pay_by_extra_quarter int8[],\"\n> \tprint \" schedule text[][]\"\n> \tprint \")\"\n> \tpgcnx.query(\"\"\"CREATE TABLE sal_emp (\n> name text,\n> pay_by_quarter int4[],\n> + pay_by_extra_quarter int8[],\n> schedule text[][])\"\"\")\n> \twait_key()\n> \tprint\n> @@ -123,18 +125,22 @@\n> \tprint \"INSERT INTO sal_emp VALUES (\"\n> \tprint \" 'Bill',\"\n> \tprint \" '{10000,10000,10000,10000}',\"\n> +\tprint \" '{9223372036854775800,9223372036854775800,9223372036854775800}',\"\n> \tprint \" '{{\\\"meeting\\\", \\\"lunch\\\"}, {}}')\"\n> \tprint\n> \tprint \"INSERT INTO sal_emp VALUES (\"\n> \tprint \" 'Carol',\"\n> \tprint \" '{20000,25000,25000,25000}',\"\n> +\tprint \" '{9223372036854775807,9223372036854775807,9223372036854775807}',\"\n> \tprint \" '{{\\\"talk\\\", \\\"consult\\\"}, {\\\"meeting\\\"}}')\"\n> \tprint\n> \tpgcnx.query(\"\"\"INSERT INTO 
sal_emp VALUES (\n> 'Bill', '{10000,10000,10000,10000}',\n> +\t'{9223372036854775800,9223372036854775800,9223372036854775800}',\n> '{{\\\"meeting\\\", \\\"lunch\\\"}, {}}')\"\"\")\n> \tpgcnx.query(\"\"\"INSERT INTO sal_emp VALUES (\n> 'Carol', '{20000,25000,25000,25000}',\n> +\t'{9223372036854775807,9223372036854775807,9223372036854775807}',\n> '{{\\\"talk\\\", \\\"consult\\\"}, {\\\"meeting\\\"}}')\"\"\")\n> \twait_key()\n> \tprint\n> @@ -148,11 +154,25 @@\n> \tprint pgcnx.query(\"\"\"SELECT name FROM sal_emp WHERE\n> sal_emp.pay_by_quarter[1] <> sal_emp.pay_by_quarter[2]\"\"\")\n> \tprint\n> +\tprint pgcnx.query(\"\"\"SELECT name FROM sal_emp WHERE\n> + sal_emp.pay_by_extra_quarter[1] <> sal_emp.pay_by_extra_quarter[2]\"\"\")\n> +\tprint\n> \tprint \"-- retrieve third quarter pay of all employees\"\n> \tprint \n> \tprint \"SELECT sal_emp.pay_by_quarter[3] FROM sal_emp\"\n> \tprint\n> \tprint pgcnx.query(\"SELECT sal_emp.pay_by_quarter[3] FROM sal_emp\")\n> +\tprint\n> +\tprint \"-- retrieve third quarter extra pay of all employees\"\n> +\tprint \n> +\tprint \"SELECT sal_emp.pay_by_extra_quarter[3] FROM sal_emp\"\n> +\tprint pgcnx.query(\"SELECT sal_emp.pay_by_extra_quarter[3] FROM sal_emp\")\n> +\tprint \n> +\tprint \"-- retrieve first two quarters of extra quarter pay of all employees\"\n> +\tprint \n> +\tprint \"SELECT sal_emp.pay_by_extra_quarter[1:2] FROM sal_emp\"\n> +\tprint\n> +\tprint pgcnx.query(\"SELECT sal_emp.pay_by_extra_quarter[1:2] FROM sal_emp\")\n> \tprint\n> \tprint \"-- select subarrays\"\n> \tprint \n-- End of PGP section, PGP failed!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 11 Aug 2002 01:13:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: python patch"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nGreg Copeland wrote:\n\nChecking application/pgp-signature: FAILURE\n-- Start of PGP signed section.\n> Well, that certainly appeared to be very straight forward. pg.py and\n> syscat.py scripts were both modified. pg.py uses it to cache a list of\n> pks (which is seemingly does for every db connection) and various\n> attributes. syscat uses it to walk the list of system tables and\n> queries the various attributes from these tables.\n> \n> In both cases, it seemingly makes sense to apply what you've requested.\n> \n> Please find attached the quested patch below.\n> \n> Greg\n> \n> \n> On Wed, 2002-08-07 at 22:16, Christopher Kings-Lynne wrote:\n> > > I don't have a problem looking into it but I can't promise I can get it\n> > > right. My python skills are fairly good...my postgres internal skills\n> > > are still sub-par IMO.\n> > > \n> > > From a cursory review, if attisdropped is true then the attribute/column\n> > > should be ignored/skipped?! Seems pretty dang straight forward.\n> > \n> > Basically, yep. 
Just grep the source code for pg_attribute most likely...\n> > \n> > I'm interested in knowing what it uses pg_attribute for as well...?\n> > \n> > Chris\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> \n\n[ text/x-patch is unsupported, treating like TEXT/PLAIN ]\n\n> Index: pg.py\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql-server/src/interfaces/python/pg.py,v\n> retrieving revision 1.9\n> diff -u -r1.9 pg.py\n> --- pg.py\t2002/03/19 13:20:52\t1.9\n> +++ pg.py\t2002/08/08 03:29:48\n> @@ -69,7 +69,8 @@\n> \t\t\t\t\t\tWHERE pg_class.oid = pg_attribute.attrelid AND\n> \t\t\t\t\t\t\tpg_class.oid = pg_index.indrelid AND\n> \t\t\t\t\t\t\tpg_index.indkey[0] = pg_attribute.attnum AND \n> -\t\t\t\t\t\t\tpg_index.indisprimary = 't'\"\"\").getresult():\n> +\t\t\t\t\t\t\tpg_index.indisprimary = 't' AND\n> +\t\t\t\t\t\t\tpg_attribute.attisdropped = 'f'\"\"\").getresult():\n> \t\t\tself.__pkeys__[rel] = att\n> \n> \t# wrap query for debugging\n> @@ -111,7 +112,8 @@\n> \t\t\t\t\tWHERE pg_class.relname = '%s' AND\n> \t\t\t\t\t\tpg_attribute.attnum > 0 AND\n> \t\t\t\t\t\tpg_attribute.attrelid = pg_class.oid AND\n> -\t\t\t\t\t\tpg_attribute.atttypid = pg_type.oid\"\"\"\n> +\t\t\t\t\t\tpg_attribute.atttypid = pg_type.oid AND\n> +\t\t\t\t\t\tpg_attribute.attisdropped = 'f'\"\"\"\n> \n> \t\tl = {}\n> \t\tfor attname, typname in self.db.query(query % cl).getresult():\n> Index: tutorial/syscat.py\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql-server/src/interfaces/python/tutorial/syscat.py,v\n> retrieving revision 1.7\n> diff -u -r1.7 syscat.py\n> --- tutorial/syscat.py\t2002/05/03 14:21:38\t1.7\n> +++ 
tutorial/syscat.py\t2002/08/08 03:29:48\n> @@ -37,7 +37,7 @@\n> \t\tFROM pg_class bc, pg_class ic, pg_index i, pg_attribute a\n> \t\tWHERE i.indrelid = bc.oid AND i.indexrelid = bc.oid\n> \t\t\t\tAND i.indkey[0] = a.attnum AND a.attrelid = bc.oid\n> -\t\t\t\tAND i.indproc = '0'::oid\n> +\t\t\t\tAND i.indproc = '0'::oid AND a.attisdropped = 'f'\n> \t\tORDER BY class_name, index_name, attname\"\"\")\n> \treturn result\n> \n> @@ -48,6 +48,7 @@\n> \t\tWHERE c.relkind = 'r' and c.relname !~ '^pg_'\n> \t\t\tAND c.relname !~ '^Inv' and a.attnum > 0\n> \t\t\tAND a.attrelid = c.oid and a.atttypid = t.oid\n> + AND a.attisdropped = 'f'\n> \t\t\tORDER BY relname, attname\"\"\")\n> \treturn result\n> \n-- End of PGP section, PGP failed!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 11 Aug 2002 01:13:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: python patch"
},
{
"msg_contents": "I wouldn't apply this _just_ yet Bruce as I'm not certain all the changes\nare necessary... I intend to look into it but I haven't had the time yet\n(sorry Greg!)\n\nChris\n\n\nOn Sun, 11 Aug 2002, Bruce Momjian wrote:\n\n>\n> Your patch has been added to the PostgreSQL unapplied patches list at:\n>\n> \thttp://candle.pha.pa.us/cgi-bin/pgpatches\n>\n> I will try to apply it within the next 48 hours.\n>\n> ---------------------------------------------------------------------------\n>\n>\n> Greg Copeland wrote:\n>\n Checking application/pgp-signature: FAILURE\n> -- Start of PGP signed section.\n> > Well, that certainly appeared to be very straight forward. pg.py and\n> > syscat.py scripts were both modified. pg.py uses it to cache a list of\n> > pks (which is seemingly does for every db connection) and various\n> > attributes. syscat uses it to walk the list of system tables and\n> > queries the various attributes from these tables.\n> >\n> > In both cases, it seemingly makes sense to apply what you've requested.\n> >\n> > Please find attached the quested patch below.\n> >\n> > Greg\n> >\n> >\n> > On Wed, 2002-08-07 at 22:16, Christopher Kings-Lynne wrote:\n> > > > I don't have a problem looking into it but I can't promise I can get it\n> > > > right. My python skills are fairly good...my postgres internal skills\n> > > > are still sub-par IMO.\n> > > >\n> > > > From a cursory review, if attisdropped is true then the attribute/column\n> > > > should be ignored/skipped?! Seems pretty dang straight forward.\n> > >\n> > > Basically, yep. 
Just grep the source code for pg_attribute most likely...\n> > >\n> > > I'm interested in knowing what it uses pg_attribute for as well...?\n> > >\n> > > Chris\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > subscribe-nomail command to majordomo@postgresql.org so that your\n> > > message can get through to the mailing list cleanly\n> >\n>\n> [ text/x-patch is unsupported, treating like TEXT/PLAIN ]\n>\n> > Index: pg.py\n> > ===================================================================\n> > RCS file: /projects/cvsroot/pgsql-server/src/interfaces/python/pg.py,v\n> > retrieving revision 1.9\n> > diff -u -r1.9 pg.py\n> > --- pg.py\t2002/03/19 13:20:52\t1.9\n> > +++ pg.py\t2002/08/08 03:29:48\n> > @@ -69,7 +69,8 @@\n> > \t\t\t\t\t\tWHERE pg_class.oid = pg_attribute.attrelid AND\n> > \t\t\t\t\t\t\tpg_class.oid = pg_index.indrelid AND\n> > \t\t\t\t\t\t\tpg_index.indkey[0] = pg_attribute.attnum AND\n> > -\t\t\t\t\t\t\tpg_index.indisprimary = 't'\"\"\").getresult():\n> > +\t\t\t\t\t\t\tpg_index.indisprimary = 't' AND\n> > +\t\t\t\t\t\t\tpg_attribute.attisdropped = 'f'\"\"\").getresult():\n> > \t\t\tself.__pkeys__[rel] = att\n> >\n> > \t# wrap query for debugging\n> > @@ -111,7 +112,8 @@\n> > \t\t\t\t\tWHERE pg_class.relname = '%s' AND\n> > \t\t\t\t\t\tpg_attribute.attnum > 0 AND\n> > \t\t\t\t\t\tpg_attribute.attrelid = pg_class.oid AND\n> > -\t\t\t\t\t\tpg_attribute.atttypid = pg_type.oid\"\"\"\n> > +\t\t\t\t\t\tpg_attribute.atttypid = pg_type.oid AND\n> > +\t\t\t\t\t\tpg_attribute.attisdropped = 'f'\"\"\"\n> >\n> > \t\tl = {}\n> > \t\tfor attname, typname in self.db.query(query % cl).getresult():\n> > Index: tutorial/syscat.py\n> > ===================================================================\n> > RCS file: /projects/cvsroot/pgsql-server/src/interfaces/python/tutorial/syscat.py,v\n> > retrieving revision 1.7\n> > diff -u -r1.7 
syscat.py\n> > --- tutorial/syscat.py\t2002/05/03 14:21:38\t1.7\n> > +++ tutorial/syscat.py\t2002/08/08 03:29:48\n> > @@ -37,7 +37,7 @@\n> > \t\tFROM pg_class bc, pg_class ic, pg_index i, pg_attribute a\n> > \t\tWHERE i.indrelid = bc.oid AND i.indexrelid = bc.oid\n> > \t\t\t\tAND i.indkey[0] = a.attnum AND a.attrelid = bc.oid\n> > -\t\t\t\tAND i.indproc = '0'::oid\n> > +\t\t\t\tAND i.indproc = '0'::oid AND a.attisdropped = 'f'\n> > \t\tORDER BY class_name, index_name, attname\"\"\")\n> > \treturn result\n> >\n> > @@ -48,6 +48,7 @@\n> > \t\tWHERE c.relkind = 'r' and c.relname !~ '^pg_'\n> > \t\t\tAND c.relname !~ '^Inv' and a.attnum > 0\n> > \t\t\tAND a.attrelid = c.oid and a.atttypid = t.oid\n> > + AND a.attisdropped = 'f'\n> > \t\t\tORDER BY relname, attname\"\"\")\n> > \treturn result\n> >\n> -- End of PGP section, PGP failed!\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n>\n\n",
"msg_date": "Sun, 11 Aug 2002 17:43:28 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: python patch"
},
{
"msg_contents": "Not a problem. I would rather them be correct.\n\nWorth noting that the first patch is what attempts to fix the long ->\nint overflow issue. The second patch attempts to resolve \"attisdropped\"\ncolumn use issues with the python scripts. The third patch addresses\nissues generated by the implicate to explicate use of \"cascade\".\n\nI assume your reservations are only with the second patch and not the\nfirst and third patches?\n\nGreg\n\n\nOn Sun, 2002-08-11 at 04:43, Christopher Kings-Lynne wrote:\n> I wouldn't apply this _just_ yet Bruce as I'm not certain all the changes\n> are necessary... I intend to look into it but I haven't had the time yet\n> (sorry Greg!)\n> \n> Chris\n> \n> \n> On Sun, 11 Aug 2002, Bruce Momjian wrote:\n> \n> >\n> > Your patch has been added to the PostgreSQL unapplied patches list at:\n> >\n> > \thttp://candle.pha.pa.us/cgi-bin/pgpatches\n> >\n> > I will try to apply it within the next 48 hours.\n> >\n> > ---------------------------------------------------------------------------\n> >\n> >\n> > Greg Copeland wrote:\n> >\n> Checking application/pgp-signature: FAILURE\n> > -- Start of PGP signed section.\n> > > Well, that certainly appeared to be very straight forward. pg.py and\n> > > syscat.py scripts were both modified. pg.py uses it to cache a list of\n> > > pks (which is seemingly does for every db connection) and various\n> > > attributes. syscat uses it to walk the list of system tables and\n> > > queries the various attributes from these tables.\n> > >\n> > > In both cases, it seemingly makes sense to apply what you've requested.\n> > >\n> > > Please find attached the quested patch below.\n> > >\n> > > Greg\n> > >\n> > >\n> > > On Wed, 2002-08-07 at 22:16, Christopher Kings-Lynne wrote:\n> > > > > I don't have a problem looking into it but I can't promise I can get it\n> > > > > right. 
My python skills are fairly good...my postgres internal skills\n> > > > > are still sub-par IMO.\n> > > > >\n> > > > > From a cursory review, if attisdropped is true then the attribute/column\n> > > > > should be ignored/skipped?! Seems pretty dang straight forward.\n> > > >\n> > > > Basically, yep. Just grep the source code for pg_attribute most likely...\n> > > >\n> > > > I'm interested in knowing what it uses pg_attribute for as well...?\n> > > >\n> > > > Chris\n> > > >\n> > > >\n> > > > ---------------------------(end of broadcast)---------------------------\n> > > > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > > subscribe-nomail command to majordomo@postgresql.org so that your\n> > > > message can get through to the mailing list cleanly\n> > >\n> >\n> > [ text/x-patch is unsupported, treating like TEXT/PLAIN ]\n> >\n> > > Index: pg.py\n> > > ===================================================================\n> > > RCS file: /projects/cvsroot/pgsql-server/src/interfaces/python/pg.py,v\n> > > retrieving revision 1.9\n> > > diff -u -r1.9 pg.py\n> > > --- pg.py\t2002/03/19 13:20:52\t1.9\n> > > +++ pg.py\t2002/08/08 03:29:48\n> > > @@ -69,7 +69,8 @@\n> > > \t\t\t\t\t\tWHERE pg_class.oid = pg_attribute.attrelid AND\n> > > \t\t\t\t\t\t\tpg_class.oid = pg_index.indrelid AND\n> > > \t\t\t\t\t\t\tpg_index.indkey[0] = pg_attribute.attnum AND\n> > > -\t\t\t\t\t\t\tpg_index.indisprimary = 't'\"\"\").getresult():\n> > > +\t\t\t\t\t\t\tpg_index.indisprimary = 't' AND\n> > > +\t\t\t\t\t\t\tpg_attribute.attisdropped = 'f'\"\"\").getresult():\n> > > \t\t\tself.__pkeys__[rel] = att\n> > >\n> > > \t# wrap query for debugging\n> > > @@ -111,7 +112,8 @@\n> > > \t\t\t\t\tWHERE pg_class.relname = '%s' AND\n> > > \t\t\t\t\t\tpg_attribute.attnum > 0 AND\n> > > \t\t\t\t\t\tpg_attribute.attrelid = pg_class.oid AND\n> > > -\t\t\t\t\t\tpg_attribute.atttypid = pg_type.oid\"\"\"\n> > > +\t\t\t\t\t\tpg_attribute.atttypid = pg_type.oid AND\n> > > 
+\t\t\t\t\t\tpg_attribute.attisdropped = 'f'\"\"\"\n> > >\n> > > \t\tl = {}\n> > > \t\tfor attname, typname in self.db.query(query % cl).getresult():\n> > > Index: tutorial/syscat.py\n> > > ===================================================================\n> > > RCS file: /projects/cvsroot/pgsql-server/src/interfaces/python/tutorial/syscat.py,v\n> > > retrieving revision 1.7\n> > > diff -u -r1.7 syscat.py\n> > > --- tutorial/syscat.py\t2002/05/03 14:21:38\t1.7\n> > > +++ tutorial/syscat.py\t2002/08/08 03:29:48\n> > > @@ -37,7 +37,7 @@\n> > > \t\tFROM pg_class bc, pg_class ic, pg_index i, pg_attribute a\n> > > \t\tWHERE i.indrelid = bc.oid AND i.indexrelid = bc.oid\n> > > \t\t\t\tAND i.indkey[0] = a.attnum AND a.attrelid = bc.oid\n> > > -\t\t\t\tAND i.indproc = '0'::oid\n> > > +\t\t\t\tAND i.indproc = '0'::oid AND a.attisdropped = 'f'\n> > > \t\tORDER BY class_name, index_name, attname\"\"\")\n> > > \treturn result\n> > >\n> > > @@ -48,6 +48,7 @@\n> > > \t\tWHERE c.relkind = 'r' and c.relname !~ '^pg_'\n> > > \t\t\tAND c.relname !~ '^Inv' and a.attnum > 0\n> > > \t\t\tAND a.attrelid = c.oid and a.atttypid = t.oid\n> > > + AND a.attisdropped = 'f'\n> > > \t\t\tORDER BY relname, attname\"\"\")\n> > > \treturn result\n> > >\n> > -- End of PGP section, PGP failed!\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 359-1001\n> > + If your life is a hard drive, | 13 Roberts Road\n> > + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> >\n>",
"msg_date": "11 Aug 2002 09:36:51 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Re: python patch"
},
{
"msg_contents": "\nOK, great to have people reviewing them. I will hold on all the python\npatches until I hear back from Christopher:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\n---------------------------------------------------------------------------\n\n\nGreg Copeland wrote:\n\nChecking application/pgp-signature: FAILURE\n-- Start of PGP signed section.\n> Not a problem. I would rather them be correct.\n> \n> Worth noting that the first patch is what attempts to fix the long ->\n> int overflow issue. The second patch attempts to resolve \"attisdropped\"\n> column use issues with the python scripts. The third patch addresses\n> issues generated by the implicate to explicate use of \"cascade\".\n> \n> I assume your reservations are only with the second patch and not the\n> first and third patches?\n> \n> Greg\n> \n> \n> On Sun, 2002-08-11 at 04:43, Christopher Kings-Lynne wrote:\n> > I wouldn't apply this _just_ yet Bruce as I'm not certain all the changes\n> > are necessary... I intend to look into it but I haven't had the time yet\n> > (sorry Greg!)\n> > \n> > Chris\n> > \n> > \n> > On Sun, 11 Aug 2002, Bruce Momjian wrote:\n> > \n> > >\n> > > Your patch has been added to the PostgreSQL unapplied patches list at:\n> > >\n> > > \thttp://candle.pha.pa.us/cgi-bin/pgpatches\n> > >\n> > > I will try to apply it within the next 48 hours.\n> > >\n> > > ---------------------------------------------------------------------------\n> > >\n> > >\n> > > Greg Copeland wrote:\n> > >\n> > Checking application/pgp-signature: FAILURE\n> > > -- Start of PGP signed section.\n> > > > Well, that certainly appeared to be very straight forward. pg.py and\n> > > > syscat.py scripts were both modified. pg.py uses it to cache a list of\n> > > > pks (which is seemingly does for every db connection) and various\n> > > > attributes. 
syscat uses it to walk the list of system tables and\n> > > > queries the various attributes from these tables.\n> > > >\n> > > > In both cases, it seemingly makes sense to apply what you've requested.\n> > > >\n> > > > Please find attached the quested patch below.\n> > > >\n> > > > Greg\n> > > >\n> > > >\n> > > > On Wed, 2002-08-07 at 22:16, Christopher Kings-Lynne wrote:\n> > > > > > I don't have a problem looking into it but I can't promise I can get it\n> > > > > > right. My python skills are fairly good...my postgres internal skills\n> > > > > > are still sub-par IMO.\n> > > > > >\n> > > > > > From a cursory review, if attisdropped is true then the attribute/column\n> > > > > > should be ignored/skipped?! Seems pretty dang straight forward.\n> > > > >\n> > > > > Basically, yep. Just grep the source code for pg_attribute most likely...\n> > > > >\n> > > > > I'm interested in knowing what it uses pg_attribute for as well...?\n> > > > >\n> > > > > Chris\n> > > > >\n> > > > >\n> > > > > ---------------------------(end of broadcast)---------------------------\n> > > > > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > > > subscribe-nomail command to majordomo@postgresql.org so that your\n> > > > > message can get through to the mailing list cleanly\n> > > >\n> > >\n> > > [ text/x-patch is unsupported, treating like TEXT/PLAIN ]\n> > >\n> > > > Index: pg.py\n> > > > ===================================================================\n> > > > RCS file: /projects/cvsroot/pgsql-server/src/interfaces/python/pg.py,v\n> > > > retrieving revision 1.9\n> > > > diff -u -r1.9 pg.py\n> > > > --- pg.py\t2002/03/19 13:20:52\t1.9\n> > > > +++ pg.py\t2002/08/08 03:29:48\n> > > > @@ -69,7 +69,8 @@\n> > > > \t\t\t\t\t\tWHERE pg_class.oid = pg_attribute.attrelid AND\n> > > > \t\t\t\t\t\t\tpg_class.oid = pg_index.indrelid AND\n> > > > \t\t\t\t\t\t\tpg_index.indkey[0] = pg_attribute.attnum AND\n> > > > -\t\t\t\t\t\t\tpg_index.indisprimary = 
't'\"\"\").getresult():\n> > > > +\t\t\t\t\t\t\tpg_index.indisprimary = 't' AND\n> > > > +\t\t\t\t\t\t\tpg_attribute.attisdropped = 'f'\"\"\").getresult():\n> > > > \t\t\tself.__pkeys__[rel] = att\n> > > >\n> > > > \t# wrap query for debugging\n> > > > @@ -111,7 +112,8 @@\n> > > > \t\t\t\t\tWHERE pg_class.relname = '%s' AND\n> > > > \t\t\t\t\t\tpg_attribute.attnum > 0 AND\n> > > > \t\t\t\t\t\tpg_attribute.attrelid = pg_class.oid AND\n> > > > -\t\t\t\t\t\tpg_attribute.atttypid = pg_type.oid\"\"\"\n> > > > +\t\t\t\t\t\tpg_attribute.atttypid = pg_type.oid AND\n> > > > +\t\t\t\t\t\tpg_attribute.attisdropped = 'f'\"\"\"\n> > > >\n> > > > \t\tl = {}\n> > > > \t\tfor attname, typname in self.db.query(query % cl).getresult():\n> > > > Index: tutorial/syscat.py\n> > > > ===================================================================\n> > > > RCS file: /projects/cvsroot/pgsql-server/src/interfaces/python/tutorial/syscat.py,v\n> > > > retrieving revision 1.7\n> > > > diff -u -r1.7 syscat.py\n> > > > --- tutorial/syscat.py\t2002/05/03 14:21:38\t1.7\n> > > > +++ tutorial/syscat.py\t2002/08/08 03:29:48\n> > > > @@ -37,7 +37,7 @@\n> > > > \t\tFROM pg_class bc, pg_class ic, pg_index i, pg_attribute a\n> > > > \t\tWHERE i.indrelid = bc.oid AND i.indexrelid = bc.oid\n> > > > \t\t\t\tAND i.indkey[0] = a.attnum AND a.attrelid = bc.oid\n> > > > -\t\t\t\tAND i.indproc = '0'::oid\n> > > > +\t\t\t\tAND i.indproc = '0'::oid AND a.attisdropped = 'f'\n> > > > \t\tORDER BY class_name, index_name, attname\"\"\")\n> > > > \treturn result\n> > > >\n> > > > @@ -48,6 +48,7 @@\n> > > > \t\tWHERE c.relkind = 'r' and c.relname !~ '^pg_'\n> > > > \t\t\tAND c.relname !~ '^Inv' and a.attnum > 0\n> > > > \t\t\tAND a.attrelid = c.oid and a.atttypid = t.oid\n> > > > + AND a.attisdropped = 'f'\n> > > > \t\t\tORDER BY relname, attname\"\"\")\n> > > > \treturn result\n> > > >\n> > > -- End of PGP section, PGP failed!\n> > >\n> > > --\n> > > Bruce Momjian | http://candle.pha.pa.us\n> > > 
pgman@candle.pha.pa.us | (610) 359-1001\n> > > + If your life is a hard drive, | 13 Roberts Road\n> > > + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> > >\n> > \n> \n-- End of PGP section, PGP failed!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 11 Aug 2002 15:29:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: python patch"
},
{
"msg_contents": "> Not a problem. I would rather them be correct.\n>\n> Worth noting that the first patch is what attempts to fix the long ->\n> int overflow issue. The second patch attempts to resolve \"attisdropped\"\n> column use issues with the python scripts. The third patch addresses\n> issues generated by the implicate to explicate use of \"cascade\".\n>\n> I assume your reservations are only with the second patch and not the\n> first and third patches?\n\nCorrect. I'm pretty sure you don't need to exclude attisdropped from the\nprimary key list because all it's doing is finding the column that a primary\nkey is over and that should never be over a dropped column. I can't\nremember what you said the second query did?\n\nChris\n\n",
"msg_date": "Mon, 12 Aug 2002 10:15:16 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: python patch"
},
{
"msg_contents": "On Sun, 2002-08-11 at 21:15, Christopher Kings-Lynne wrote:\n> > Not a problem. I would rather them be correct.\n> >\n> > Worth noting that the first patch is what attempts to fix the long ->\n> > int overflow issue. The second patch attempts to resolve \"attisdropped\"\n> > column use issues with the python scripts. The third patch addresses\n> > issues generated by the implicate to explicate use of \"cascade\".\n> >\n> > I assume your reservations are only with the second patch and not the\n> > first and third patches?\n> \n> Correct. I'm pretty sure you don't need to exclude attisdropped from the\n> primary key list because all it's doing is finding the column that a primary\n> key is over and that should never be over a dropped column. I can't\n> remember what you said the second query did?\n\n\nHmmm. Sounds okay but I'm just not sure that holds true (as I\npreviously stated, I'm ignorant on the topic). Obviously I'll defer to\nyou on this.\n\nHere's the queries and what they do:\n\n\nFrom pg.py:\nUsed to locate primary keys -- or so the comment says. 
It does create a\ndictionary of keys and attribute values for each returned row so I\nassume it really is attempting to do something of the like.\n\nSELECT pg_class.relname, pg_attribute.attname \nFROM pg_class, pg_attribute, pg_index \nWHERE pg_class.oid = pg_attribute.attrelid AND \n\tpg_class.oid = pg_index.indrelid AND \n\tpg_index.indkey[0] = pg_attribute.attnum AND \n\tpg_index.indisprimary = 't' AND \n\tpg_attribute.attisdropped = 'f' ;\n\nSo, everyone is in agreement that any attribute which is indexed as a\nprimary key will never be able to have attisdropped = 't'?\n\nAccording to the code:\nSELECT pg_attribute.attname, pg_type.typname\nFROM pg_class, pg_attribute, pg_type\nWHERE pg_class.relname = '%s' AND\n\tpg_attribute.attnum > 0 AND\n\tpg_attribute.attrelid = pg_class.oid AND\n\tpg_attribute.atttypid = pg_type.oid AND\n\tpg_attribute.attisdropped = 'f' ;\n\nis used to obtain all attributes (column names) and their types for a\ngiven table ('%s').  It then attempts to build a column/type cache.  I'm\nassuming that this really does need to be there.  Please correct\naccordingly.\n\n\nFrom syscat.py:\nSELECT bc.relname AS class_name,\n\tic.relname AS index_name, a.attname\nFROM pg_class bc, pg_class ic, pg_index i, pg_attribute a\nWHERE i.indrelid = bc.oid AND i.indexrelid = bc.oid\n\tAND i.indkey[0] = a.attnum AND a.attrelid = bc.oid\n\tAND i.indproc = '0'::oid AND a.attisdropped = 'f'\n\tORDER BY class_name, index_name, attname ;\n\nAccording to the nearby documentation, it's supposed to be fetching a\nlist of \"all simple indices\".  If that's the case, is it safe to assume\nthat any indexed column will never have attisdropped = 't'?  If so, we\ncan remove that check from the file as well.  Worth pointing out, this\nis from syscat.py, which is sample source and not used as actual\ninterface. 
So, worst case, it would appear to be redundant in nature\nwith no harm done.\n\nThis should conclude the patched items offered in the second patch.\n\nWhat ya think?\n\nThanks,\n\tGreg",
"msg_date": "12 Aug 2002 17:40:03 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Re: python patch"
},
{
"msg_contents": "All of that said, the cost of the check is so small it may save someones\nass some day when they have a corrupted catalog and the below\nassumptions are no longer true.\n\nOn Mon, 2002-08-12 at 18:40, Greg Copeland wrote:\n> On Sun, 2002-08-11 at 21:15, Christopher Kings-Lynne wrote:\n> > > Not a problem. I would rather them be correct.\n> > >\n> > > Worth noting that the first patch is what attempts to fix the long ->\n> > > int overflow issue. The second patch attempts to resolve \"attisdropped\"\n> > > column use issues with the python scripts. The third patch addresses\n> > > issues generated by the implicate to explicate use of \"cascade\".\n> > >\n> > > I assume your reservations are only with the second patch and not the\n> > > first and third patches?\n> > \n> > Correct. I'm pretty sure you don't need to exclude attisdropped from the\n> > primary key list because all it's doing is finding the column that a primary\n> > key is over and that should never be over a dropped column. I can't\n> > remember what you said the second query did?\n> \n> \n> Hmmm. Sounds okay but I'm just not sure that holds true (as I\n> previously stated, I'm ignorant on the topic). Obviously I'll defer to\n> you on this.\n> \n> Here's the queries and what they do:\n> \n> \n> >From pg.py:\n> Used to locate primary keys -- or so the comment says. 
It does create a\n> dictionary of keys and attribute values for each returned row so I\n> assume it really is attempting to do something of the like.\n> \n> SELECT pg_class.relname, pg_attribute.attname \n> FROM pg_class, pg_attribute, pg_index \n> WHERE pg_class.oid = pg_attribute.attrelid AND \n> \tpg_class.oid = pg_index.indrelid AND \n> \tpg_index.indkey[0] = pg_attribute.attnum AND \n> \tpg_index.indisprimary = 't' AND \n> \tpg_attribute.attisdropped = 'f' ;\n> \n> So, everyone is in agreement that any attribute which is indexed as a\n> primary key will never be able to have attisdtopped = 't'?\n> \n> According to the code:\n> SELECT pg_attribute.attname, pg_type.typname\n> FROM pg_class, pg_attribute, pg_type\n> WHERE pg_class.relname = '%s' AND\n> \tpg_attribute.attnum > 0 AND\n> \tpg_attribute.attrelid = pg_class.oid AND\n> \tpg_attribute.atttypid = pg_type.oid AND\n> \tpg_attribute.attisdropped = 'f' ;\n> \n> is used to obtain all attributes (column names) and their types for a\n> given table ('%s'). It then attempts to build a column/type cache. I'm\n> assuming that this really does need to be there. Please correct\n> accordingly.\n> \n> \n> >From syscat.py:\n> SELECT bc.relname AS class_name,\n> \tic.relname AS index_name, a.attname\n> FROM pg_class bc, pg_class ic, pg_index i, pg_attribute a\n> WHERE i.indrelid = bc.oid AND i.indexrelid = bc.oid\n> \tAND i.indkey[0] = a.attnum AND a.attrelid = bc.oid\n> \tAND i.indproc = '0'::oid AND a.attisdropped = 'f'\n> \tORDER BY class_name, index_name, attname ;\n> \n> According to the nearby documentation, it's supposed to be fetching a\n> list of \"all simple indicies\". If that's the case, is it safe to assume\n> that any indexed column will never have attisdropped = 't'? If so, we\n> can remove that check from the file as well. Worth pointing out, this\n> is from syscat.py, which is sample source and not used as actual\n> interface. 
So, worse case, it would appear to be redundant in nature\n> with no harm done.\n> \n> This should conclude the patched items offered in the second patch.\n> \n> What ya think?\n> \n> Thanks,\n> \tGreg\n> \n> \n\n\n",
"msg_date": "12 Aug 2002 19:33:29 -0400",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: python patch"
},
{
"msg_contents": "Well, I tend to agree with that. Overall, I can't say that I see bad\nthings coming out of accepting the patch as is. It's not exactly\ncausing an extra join or other wise a significant waste of resources. \nAt worst, it appears to be ambiguous. Since Christopher has not offered\nany additional follow up, can we assume that he agrees? In not, please\nlet me know and I'll resubmit patch #2.\n\nIn the mean time, patches #1 and #3 should be good to go. Bruce, feel\nfree to apply those whenever time allows.\n\nThanks,\n\tGreg Copeland\n\n\nOn Mon, 2002-08-12 at 18:33, Rod Taylor wrote:\n> All of that said, the cost of the check is so small it may save someones\n> ass some day when they have a corrupted catalog and the below\n> assumptions are no longer true.\n> \n> On Mon, 2002-08-12 at 18:40, Greg Copeland wrote:\n> > On Sun, 2002-08-11 at 21:15, Christopher Kings-Lynne wrote:\n> > > > Not a problem. I would rather them be correct.\n> > > >\n> > > > Worth noting that the first patch is what attempts to fix the long ->\n> > > > int overflow issue. The second patch attempts to resolve \"attisdropped\"\n> > > > column use issues with the python scripts. The third patch addresses\n> > > > issues generated by the implicate to explicate use of \"cascade\".\n> > > >\n> > > > I assume your reservations are only with the second patch and not the\n> > > > first and third patches?\n> > > \n> > > Correct. I'm pretty sure you don't need to exclude attisdropped from the\n> > > primary key list because all it's doing is finding the column that a primary\n> > > key is over and that should never be over a dropped column. I can't\n> > > remember what you said the second query did?\n> > \n> > \n> > Hmmm. Sounds okay but I'm just not sure that holds true (as I\n> > previously stated, I'm ignorant on the topic). 
Obviously I'll defer to\n> > you on this.\n> > \n> > Here's the queries and what they do:\n> > \n> > \n> > >From pg.py:\n> > Used to locate primary keys -- or so the comment says. It does create a\n> > dictionary of keys and attribute values for each returned row so I\n> > assume it really is attempting to do something of the like.\n> > \n> > SELECT pg_class.relname, pg_attribute.attname \n> > FROM pg_class, pg_attribute, pg_index \n> > WHERE pg_class.oid = pg_attribute.attrelid AND \n> > \tpg_class.oid = pg_index.indrelid AND \n> > \tpg_index.indkey[0] = pg_attribute.attnum AND \n> > \tpg_index.indisprimary = 't' AND \n> > \tpg_attribute.attisdropped = 'f' ;\n> > \n> > So, everyone is in agreement that any attribute which is indexed as a\n> > primary key will never be able to have attisdtopped = 't'?\n> > \n> > According to the code:\n> > SELECT pg_attribute.attname, pg_type.typname\n> > FROM pg_class, pg_attribute, pg_type\n> > WHERE pg_class.relname = '%s' AND\n> > \tpg_attribute.attnum > 0 AND\n> > \tpg_attribute.attrelid = pg_class.oid AND\n> > \tpg_attribute.atttypid = pg_type.oid AND\n> > \tpg_attribute.attisdropped = 'f' ;\n> > \n> > is used to obtain all attributes (column names) and their types for a\n> > given table ('%s'). It then attempts to build a column/type cache. I'm\n> > assuming that this really does need to be there. Please correct\n> > accordingly.\n> > \n> > \n> > >From syscat.py:\n> > SELECT bc.relname AS class_name,\n> > \tic.relname AS index_name, a.attname\n> > FROM pg_class bc, pg_class ic, pg_index i, pg_attribute a\n> > WHERE i.indrelid = bc.oid AND i.indexrelid = bc.oid\n> > \tAND i.indkey[0] = a.attnum AND a.attrelid = bc.oid\n> > \tAND i.indproc = '0'::oid AND a.attisdropped = 'f'\n> > \tORDER BY class_name, index_name, attname ;\n> > \n> > According to the nearby documentation, it's supposed to be fetching a\n> > list of \"all simple indicies\". 
If that's the case, is it safe to assume\n> > that any indexed column will never have attisdropped = 't'? If so, we\n> > can remove that check from the file as well. Worth pointing out, this\n> > is from syscat.py, which is sample source and not used as actual\n> > interface. So, worse case, it would appear to be redundant in nature\n> > with no harm done.\n> > \n> > This should conclude the patched items offered in the second patch.\n> > \n> > What ya think?\n> > \n> > Thanks,\n> > \tGreg\n> > \n> > \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org",
"msg_date": "14 Aug 2002 22:08:53 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Re: python patch"
},
{
"msg_contents": "Yep - alright, just commit it I guess.\n\nChris\n\n> -----Original Message-----\n> From: Greg Copeland [mailto:greg@copelandconsulting.net]\n> Sent: Thursday, 15 August 2002 11:09 AM\n> To: Rod Taylor\n> Cc: Christopher Kings-Lynne; Bruce Momjian; PostgresSQL Hackers Mailing\n> List\n> Subject: Re: [HACKERS] python patch\n>\n>\n> Well, I tend to agree with that. Overall, I can't say that I see bad\n> things coming out of accepting the patch as is. It's not exactly\n> causing an extra join or other wise a significant waste of resources.\n> At worst, it appears to be ambiguous. Since Christopher has not offered\n> any additional follow up, can we assume that he agrees? In not, please\n> let me know and I'll resubmit patch #2.\n>\n> In the mean time, patches #1 and #3 should be good to go. Bruce, feel\n> free to apply those whenever time allows.\n>\n> Thanks,\n> \tGreg Copeland\n>\n>\n> On Mon, 2002-08-12 at 18:33, Rod Taylor wrote:\n> > All of that said, the cost of the check is so small it may save someones\n> > ass some day when they have a corrupted catalog and the below\n> > assumptions are no longer true.\n> >\n> > On Mon, 2002-08-12 at 18:40, Greg Copeland wrote:\n> > > On Sun, 2002-08-11 at 21:15, Christopher Kings-Lynne wrote:\n> > > > > Not a problem. I would rather them be correct.\n> > > > >\n> > > > > Worth noting that the first patch is what attempts to fix\n> the long ->\n> > > > > int overflow issue. The second patch attempts to resolve\n> \"attisdropped\"\n> > > > > column use issues with the python scripts. The third\n> patch addresses\n> > > > > issues generated by the implicate to explicate use of \"cascade\".\n> > > > >\n> > > > > I assume your reservations are only with the second patch\n> and not the\n> > > > > first and third patches?\n> > > >\n> > > > Correct. 
I'm pretty sure you don't need to exclude\n> attisdropped from the\n> > > > primary key list because all it's doing is finding the\n> column that a primary\n> > > > key is over and that should never be over a dropped column. I can't\n> > > > remember what you said the second query did?\n> > >\n> > >\n> > > Hmmm. Sounds okay but I'm just not sure that holds true (as I\n> > > previously stated, I'm ignorant on the topic). Obviously\n> I'll defer to\n> > > you on this.\n> > >\n> > > Here's the queries and what they do:\n> > >\n> > >\n> > > >From pg.py:\n> > > Used to locate primary keys -- or so the comment says. It\n> does create a\n> > > dictionary of keys and attribute values for each returned row so I\n> > > assume it really is attempting to do something of the like.\n> > >\n> > > SELECT pg_class.relname, pg_attribute.attname\n> > > FROM pg_class, pg_attribute, pg_index\n> > > WHERE pg_class.oid = pg_attribute.attrelid AND\n> > > \tpg_class.oid = pg_index.indrelid AND\n> > > \tpg_index.indkey[0] = pg_attribute.attnum AND\n> > > \tpg_index.indisprimary = 't' AND\n> > > \tpg_attribute.attisdropped = 'f' ;\n> > >\n> > > So, everyone is in agreement that any attribute which is indexed as a\n> > > primary key will never be able to have attisdtopped = 't'?\n> > >\n> > > According to the code:\n> > > SELECT pg_attribute.attname, pg_type.typname\n> > > FROM pg_class, pg_attribute, pg_type\n> > > WHERE pg_class.relname = '%s' AND\n> > > \tpg_attribute.attnum > 0 AND\n> > > \tpg_attribute.attrelid = pg_class.oid AND\n> > > \tpg_attribute.atttypid = pg_type.oid AND\n> > > \tpg_attribute.attisdropped = 'f' ;\n> > >\n> > > is used to obtain all attributes (column names) and their types for a\n> > > given table ('%s'). It then attempts to build a column/type\n> cache. I'm\n> > > assuming that this really does need to be there. 
Please correct\n> > > accordingly.\n> > >\n> > >\n> > > >From syscat.py:\n> > > SELECT bc.relname AS class_name,\n> > > \tic.relname AS index_name, a.attname\n> > > FROM pg_class bc, pg_class ic, pg_index i, pg_attribute a\n> > > WHERE i.indrelid = bc.oid AND i.indexrelid = bc.oid\n> > > \tAND i.indkey[0] = a.attnum AND a.attrelid = bc.oid\n> > > \tAND i.indproc = '0'::oid AND a.attisdropped = 'f'\n> > > \tORDER BY class_name, index_name, attname ;\n> > >\n> > > According to the nearby documentation, it's supposed to be fetching a\n> > > list of \"all simple indicies\". If that's the case, is it\n> safe to assume\n> > > that any indexed column will never have attisdropped = 't'? If so, we\n> > > can remove that check from the file as well. Worth pointing out, this\n> > > is from syscat.py, which is sample source and not used as actual\n> > > interface. So, worse case, it would appear to be redundant in nature\n> > > with no harm done.\n> > >\n> > > This should conclude the patched items offered in the second patch.\n> > >\n> > > What ya think?\n> > >\n> > > Thanks,\n> > > \tGreg\n> > >\n> > >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n>\n\n",
"msg_date": "Thu, 15 Aug 2002 11:15:07 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: python patch"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n\nGreg Copeland wrote:\n\nChecking application/pgp-signature: FAILURE\n-- Start of PGP signed section.\n> Okay, I read\n> http://archives.postgresql.org/pgsql-bugs/2002-06/msg00086.php and never\n> saw a fix offered up. Since I'm gearing up to use Postgres and Python\n> soon, I figured I'd have a hand at trying to get this sucker addressed. \n> Apologies if this has already been plugged. I looked in the archives\n> and never saw a response.\n> \n> At any rate, I must admit I don't think I fully understand the\n> implications of some of the changes I made even though they appear to be\n> straight forward. We all know the devil is in the details. Anyone more\n> knowledgeable is requested to review my changes. :(\n> \n> I also updated the advanced.py script in a somewhat nonsensical fashion\n> to make use of an int8 field in an effort to test this change. It seems\n> to run okay, however, this is by no means an all exhaustive test. So,\n> it's possible that a bumpy road may lay ahead for some. On the other\n> hand...overflows (hopefully) previously lurked (long -> int conversion).\n> \n> This is my first submission. Please be kind if I submitted to the wrong\n> list. ;)\n> \n> Thank you,\n> \tGreg Copeland\n> \n\n[ text/x-diff is unsupported, treating like TEXT/PLAIN ]\n\n> ? lib_pgmodule.so.0.0\n> ? postgres-python.patch\n> ? 
tutorial/advanced.pyc\n> Index: pgmodule.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql-server/src/interfaces/python/pgmodule.c,v\n> retrieving revision 1.38\n> diff -u -r1.38 pgmodule.c\n> --- pgmodule.c\t2002/03/29 07:45:39\t1.38\n> +++ pgmodule.c\t2002/08/08 02:46:12\n> @@ -289,23 +289,26 @@\n> \t\t{\n> \t\t\tcase INT2OID:\n> \t\t\tcase INT4OID:\n> -\t\t\tcase INT8OID:\n> \t\t\tcase OIDOID:\n> \t\t\t\ttyp[j] = 1;\n> \t\t\t\tbreak;\n> \n> +\t\t\tcase INT8OID:\n> +\t\t\t\ttyp[j] = 2;\n> +\t\t\t\tbreak;\n> +\n> \t\t\tcase FLOAT4OID:\n> \t\t\tcase FLOAT8OID:\n> \t\t\tcase NUMERICOID:\n> -\t\t\t\ttyp[j] = 2;\n> +\t\t\t\ttyp[j] = 3;\n> \t\t\t\tbreak;\n> \n> \t\t\tcase CASHOID:\n> -\t\t\t\ttyp[j] = 3;\n> +\t\t\t\ttyp[j] = 4;\n> \t\t\t\tbreak;\n> \n> \t\t\tdefault:\n> -\t\t\t\ttyp[j] = 4;\n> +\t\t\t\ttyp[j] = 5;\n> \t\t\t\tbreak;\n> \t\t}\n> \t}\n> @@ -1797,23 +1800,26 @@\n> \t\t{\n> \t\t\tcase INT2OID:\n> \t\t\tcase INT4OID:\n> -\t\t\tcase INT8OID:\n> \t\t\tcase OIDOID:\n> \t\t\t\ttyp[j] = 1;\n> \t\t\t\tbreak;\n> \n> +\t\t\tcase INT8OID:\n> +\t\t\t\ttyp[j] = 2;\n> +\t\t\t\tbreak;\n> +\n> \t\t\tcase FLOAT4OID:\n> \t\t\tcase FLOAT8OID:\n> \t\t\tcase NUMERICOID:\n> -\t\t\t\ttyp[j] = 2;\n> +\t\t\t\ttyp[j] = 3;\n> \t\t\t\tbreak;\n> \n> \t\t\tcase CASHOID:\n> -\t\t\t\ttyp[j] = 3;\n> +\t\t\t\ttyp[j] = 4;\n> \t\t\t\tbreak;\n> \n> \t\t\tdefault:\n> -\t\t\t\ttyp[j] = 4;\n> +\t\t\t\ttyp[j] = 5;\n> \t\t\t\tbreak;\n> \t\t}\n> \t}\n> @@ -1846,10 +1852,14 @@\n> \t\t\t\t\t\tbreak;\n> \n> \t\t\t\t\tcase 2:\n> -\t\t\t\t\t\tval = PyFloat_FromDouble(strtod(s, NULL));\n> +\t\t\t\t\t\tval = PyLong_FromLong(strtol(s, NULL, 10));\n> \t\t\t\t\t\tbreak;\n> \n> \t\t\t\t\tcase 3:\n> +\t\t\t\t\t\tval = PyFloat_FromDouble(strtod(s, NULL));\n> +\t\t\t\t\t\tbreak;\n> +\n> +\t\t\t\t\tcase 4:\n> \t\t\t\t\t\t{\n> \t\t\t\t\t\t\tint\t\t\tmult = 1;\n> \n> @@ -1946,11 +1956,14 @@\n> \t\t{\n> \t\t\tcase INT2OID:\n> \t\t\tcase INT4OID:\n> 
-\t\t\tcase INT8OID:\n> \t\t\tcase OIDOID:\n> \t\t\t\ttyp[j] = 1;\n> \t\t\t\tbreak;\n> \n> +\t\t\tcase INT8OID:\n> +\t\t\t\ttyp[j] = 2;\n> +\t\t\t\tbreak;\n> +\n> \t\t\tcase FLOAT4OID:\n> \t\t\tcase FLOAT8OID:\n> \t\t\tcase NUMERICOID:\n> @@ -1995,10 +2008,14 @@\n> \t\t\t\t\t\tbreak;\n> \n> \t\t\t\t\tcase 2:\n> -\t\t\t\t\t\tval = PyFloat_FromDouble(strtod(s, NULL));\n> +\t\t\t\t\t\tval = PyLong_FromLong(strtol(s, NULL, 10));\n> \t\t\t\t\t\tbreak;\n> \n> \t\t\t\t\tcase 3:\n> +\t\t\t\t\t\tval = PyFloat_FromDouble(strtod(s, NULL));\n> +\t\t\t\t\t\tbreak;\n> +\n> +\t\t\t\t\tcase 4:\n> \t\t\t\t\t\t{\n> \t\t\t\t\t\t\tint\t\t\tmult = 1;\n> \n> Index: tutorial/advanced.py\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql-server/src/interfaces/python/tutorial/advanced.py,v\n> retrieving revision 1.5\n> diff -u -r1.5 advanced.py\n> --- tutorial/advanced.py\t2000/10/02 03:46:24\t1.5\n> +++ tutorial/advanced.py\t2002/08/08 02:46:12\n> @@ -109,11 +109,13 @@\n> \tprint \"CREATE TABLE sal_emp (\"\n> \tprint \" name text,\"\n> \tprint \" pay_by_quarter int4[],\"\n> +\tprint \" pay_by_extra_quarter int8[],\"\n> \tprint \" schedule text[][]\"\n> \tprint \")\"\n> \tpgcnx.query(\"\"\"CREATE TABLE sal_emp (\n> name text,\n> pay_by_quarter int4[],\n> + pay_by_extra_quarter int8[],\n> schedule text[][])\"\"\")\n> \twait_key()\n> \tprint\n> @@ -123,18 +125,22 @@\n> \tprint \"INSERT INTO sal_emp VALUES (\"\n> \tprint \" 'Bill',\"\n> \tprint \" '{10000,10000,10000,10000}',\"\n> +\tprint \" '{9223372036854775800,9223372036854775800,9223372036854775800}',\"\n> \tprint \" '{{\\\"meeting\\\", \\\"lunch\\\"}, {}}')\"\n> \tprint\n> \tprint \"INSERT INTO sal_emp VALUES (\"\n> \tprint \" 'Carol',\"\n> \tprint \" '{20000,25000,25000,25000}',\"\n> +\tprint \" '{9223372036854775807,9223372036854775807,9223372036854775807}',\"\n> \tprint \" '{{\\\"talk\\\", \\\"consult\\\"}, {\\\"meeting\\\"}}')\"\n> \tprint\n> \tpgcnx.query(\"\"\"INSERT INTO 
sal_emp VALUES (\n> 'Bill', '{10000,10000,10000,10000}',\n> +\t'{9223372036854775800,9223372036854775800,9223372036854775800}',\n> '{{\\\"meeting\\\", \\\"lunch\\\"}, {}}')\"\"\")\n> \tpgcnx.query(\"\"\"INSERT INTO sal_emp VALUES (\n> 'Carol', '{20000,25000,25000,25000}',\n> +\t'{9223372036854775807,9223372036854775807,9223372036854775807}',\n> '{{\\\"talk\\\", \\\"consult\\\"}, {\\\"meeting\\\"}}')\"\"\")\n> \twait_key()\n> \tprint\n> @@ -148,11 +154,25 @@\n> \tprint pgcnx.query(\"\"\"SELECT name FROM sal_emp WHERE\n> sal_emp.pay_by_quarter[1] <> sal_emp.pay_by_quarter[2]\"\"\")\n> \tprint\n> +\tprint pgcnx.query(\"\"\"SELECT name FROM sal_emp WHERE\n> + sal_emp.pay_by_extra_quarter[1] <> sal_emp.pay_by_extra_quarter[2]\"\"\")\n> +\tprint\n> \tprint \"-- retrieve third quarter pay of all employees\"\n> \tprint \n> \tprint \"SELECT sal_emp.pay_by_quarter[3] FROM sal_emp\"\n> \tprint\n> \tprint pgcnx.query(\"SELECT sal_emp.pay_by_quarter[3] FROM sal_emp\")\n> +\tprint\n> +\tprint \"-- retrieve third quarter extra pay of all employees\"\n> +\tprint \n> +\tprint \"SELECT sal_emp.pay_by_extra_quarter[3] FROM sal_emp\"\n> +\tprint pgcnx.query(\"SELECT sal_emp.pay_by_extra_quarter[3] FROM sal_emp\")\n> +\tprint \n> +\tprint \"-- retrieve first two quarters of extra quarter pay of all employees\"\n> +\tprint \n> +\tprint \"SELECT sal_emp.pay_by_extra_quarter[1:2] FROM sal_emp\"\n> +\tprint\n> +\tprint pgcnx.query(\"SELECT sal_emp.pay_by_extra_quarter[1:2] FROM sal_emp\")\n> \tprint\n> \tprint \"-- select subarrays\"\n> \tprint \n-- End of PGP section, PGP failed!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 23:31:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: python patch"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n\nGreg Copeland wrote:\n\nChecking application/pgp-signature: FAILURE\n-- Start of PGP signed section.\n> Well, that certainly appeared to be very straight forward. pg.py and\n> syscat.py scripts were both modified. pg.py uses it to cache a list of\n> pks (which is seemingly does for every db connection) and various\n> attributes. syscat uses it to walk the list of system tables and\n> queries the various attributes from these tables.\n> \n> In both cases, it seemingly makes sense to apply what you've requested.\n> \n> Please find attached the quested patch below.\n> \n> Greg\n> \n> \n> On Wed, 2002-08-07 at 22:16, Christopher Kings-Lynne wrote:\n> > > I don't have a problem looking into it but I can't promise I can get it\n> > > right. My python skills are fairly good...my postgres internal skills\n> > > are still sub-par IMO.\n> > > \n> > > From a cursory review, if attisdropped is true then the attribute/column\n> > > should be ignored/skipped?! Seems pretty dang straight forward.\n> > \n> > Basically, yep. 
Just grep the source code for pg_attribute most likely...\n> > \n> > I'm interested in knowing what it uses pg_attribute for as well...?\n> > \n> > Chris\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> \n\n[ text/x-patch is unsupported, treating like TEXT/PLAIN ]\n\n> Index: pg.py\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql-server/src/interfaces/python/pg.py,v\n> retrieving revision 1.9\n> diff -u -r1.9 pg.py\n> --- pg.py\t2002/03/19 13:20:52\t1.9\n> +++ pg.py\t2002/08/08 03:29:48\n> @@ -69,7 +69,8 @@\n> \t\t\t\t\t\tWHERE pg_class.oid = pg_attribute.attrelid AND\n> \t\t\t\t\t\t\tpg_class.oid = pg_index.indrelid AND\n> \t\t\t\t\t\t\tpg_index.indkey[0] = pg_attribute.attnum AND \n> -\t\t\t\t\t\t\tpg_index.indisprimary = 't'\"\"\").getresult():\n> +\t\t\t\t\t\t\tpg_index.indisprimary = 't' AND\n> +\t\t\t\t\t\t\tpg_attribute.attisdropped = 'f'\"\"\").getresult():\n> \t\t\tself.__pkeys__[rel] = att\n> \n> \t# wrap query for debugging\n> @@ -111,7 +112,8 @@\n> \t\t\t\t\tWHERE pg_class.relname = '%s' AND\n> \t\t\t\t\t\tpg_attribute.attnum > 0 AND\n> \t\t\t\t\t\tpg_attribute.attrelid = pg_class.oid AND\n> -\t\t\t\t\t\tpg_attribute.atttypid = pg_type.oid\"\"\"\n> +\t\t\t\t\t\tpg_attribute.atttypid = pg_type.oid AND\n> +\t\t\t\t\t\tpg_attribute.attisdropped = 'f'\"\"\"\n> \n> \t\tl = {}\n> \t\tfor attname, typname in self.db.query(query % cl).getresult():\n> Index: tutorial/syscat.py\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql-server/src/interfaces/python/tutorial/syscat.py,v\n> retrieving revision 1.7\n> diff -u -r1.7 syscat.py\n> --- tutorial/syscat.py\t2002/05/03 14:21:38\t1.7\n> +++ 
tutorial/syscat.py\t2002/08/08 03:29:48\n> @@ -37,7 +37,7 @@\n> \t\tFROM pg_class bc, pg_class ic, pg_index i, pg_attribute a\n> \t\tWHERE i.indrelid = bc.oid AND i.indexrelid = bc.oid\n> \t\t\t\tAND i.indkey[0] = a.attnum AND a.attrelid = bc.oid\n> -\t\t\t\tAND i.indproc = '0'::oid\n> +\t\t\t\tAND i.indproc = '0'::oid AND a.attisdropped = 'f'\n> \t\tORDER BY class_name, index_name, attname\"\"\")\n> \treturn result\n> \n> @@ -48,6 +48,7 @@\n> \t\tWHERE c.relkind = 'r' and c.relname !~ '^pg_'\n> \t\t\tAND c.relname !~ '^Inv' and a.attnum > 0\n> \t\t\tAND a.attrelid = c.oid and a.atttypid = t.oid\n> + AND a.attisdropped = 'f'\n> \t\t\tORDER BY relname, attname\"\"\")\n> \treturn result\n> \n-- End of PGP section, PGP failed!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 23:32:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: python patch"
},
{
"msg_contents": "\nOK, I have applied all three of Greg's python patches.\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> Yep - alright, just commit it I guess.\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: Greg Copeland [mailto:greg@copelandconsulting.net]\n> > Sent: Thursday, 15 August 2002 11:09 AM\n> > To: Rod Taylor\n> > Cc: Christopher Kings-Lynne; Bruce Momjian; PostgresSQL Hackers Mailing\n> > List\n> > Subject: Re: [HACKERS] python patch\n> >\n> >\n> > Well, I tend to agree with that. Overall, I can't say that I see bad\n> > things coming out of accepting the patch as is. It's not exactly\n> > causing an extra join or other wise a significant waste of resources.\n> > At worst, it appears to be ambiguous. Since Christopher has not offered\n> > any additional follow up, can we assume that he agrees? In not, please\n> > let me know and I'll resubmit patch #2.\n> >\n> > In the mean time, patches #1 and #3 should be good to go. Bruce, feel\n> > free to apply those whenever time allows.\n> >\n> > Thanks,\n> > \tGreg Copeland\n> >\n> >\n> > On Mon, 2002-08-12 at 18:33, Rod Taylor wrote:\n> > > All of that said, the cost of the check is so small it may save someones\n> > > ass some day when they have a corrupted catalog and the below\n> > > assumptions are no longer true.\n> > >\n> > > On Mon, 2002-08-12 at 18:40, Greg Copeland wrote:\n> > > > On Sun, 2002-08-11 at 21:15, Christopher Kings-Lynne wrote:\n> > > > > > Not a problem. I would rather them be correct.\n> > > > > >\n> > > > > > Worth noting that the first patch is what attempts to fix\n> > the long ->\n> > > > > > int overflow issue. The second patch attempts to resolve\n> > \"attisdropped\"\n> > > > > > column use issues with the python scripts. 
The third\n> > patch addresses\n> > > > > > issues generated by the implicate to explicate use of \"cascade\".\n> > > > > >\n> > > > > > I assume your reservations are only with the second patch\n> > and not the\n> > > > > > first and third patches?\n> > > > >\n> > > > > Correct. I'm pretty sure you don't need to exclude\n> > attisdropped from the\n> > > > > primary key list because all it's doing is finding the\n> > column that a primary\n> > > > > key is over and that should never be over a dropped column. I can't\n> > > > > remember what you said the second query did?\n> > > >\n> > > >\n> > > > Hmmm. Sounds okay but I'm just not sure that holds true (as I\n> > > > previously stated, I'm ignorant on the topic). Obviously\n> > I'll defer to\n> > > > you on this.\n> > > >\n> > > > Here's the queries and what they do:\n> > > >\n> > > >\n> > > > >From pg.py:\n> > > > Used to locate primary keys -- or so the comment says. It\n> > does create a\n> > > > dictionary of keys and attribute values for each returned row so I\n> > > > assume it really is attempting to do something of the like.\n> > > >\n> > > > SELECT pg_class.relname, pg_attribute.attname\n> > > > FROM pg_class, pg_attribute, pg_index\n> > > > WHERE pg_class.oid = pg_attribute.attrelid AND\n> > > > \tpg_class.oid = pg_index.indrelid AND\n> > > > \tpg_index.indkey[0] = pg_attribute.attnum AND\n> > > > \tpg_index.indisprimary = 't' AND\n> > > > \tpg_attribute.attisdropped = 'f' ;\n> > > >\n> > > > So, everyone is in agreement that any attribute which is indexed as a\n> > > > primary key will never be able to have attisdtopped = 't'?\n> > > >\n> > > > According to the code:\n> > > > SELECT pg_attribute.attname, pg_type.typname\n> > > > FROM pg_class, pg_attribute, pg_type\n> > > > WHERE pg_class.relname = '%s' AND\n> > > > \tpg_attribute.attnum > 0 AND\n> > > > \tpg_attribute.attrelid = pg_class.oid AND\n> > > > \tpg_attribute.atttypid = pg_type.oid AND\n> > > > \tpg_attribute.attisdropped = 'f' ;\n> > > >\n> > 
> > is used to obtain all attributes (column names) and their types for a\n> > > > given table ('%s'). It then attempts to build a column/type\n> > cache. I'm\n> > > > assuming that this really does need to be there. Please correct\n> > > > accordingly.\n> > > >\n> > > >\n> > > > >From syscat.py:\n> > > > SELECT bc.relname AS class_name,\n> > > > \tic.relname AS index_name, a.attname\n> > > > FROM pg_class bc, pg_class ic, pg_index i, pg_attribute a\n> > > > WHERE i.indrelid = bc.oid AND i.indexrelid = bc.oid\n> > > > \tAND i.indkey[0] = a.attnum AND a.attrelid = bc.oid\n> > > > \tAND i.indproc = '0'::oid AND a.attisdropped = 'f'\n> > > > \tORDER BY class_name, index_name, attname ;\n> > > >\n> > > > According to the nearby documentation, it's supposed to be fetching a\n> > > > list of \"all simple indicies\". If that's the case, is it\n> > safe to assume\n> > > > that any indexed column will never have attisdropped = 't'? If so, we\n> > > > can remove that check from the file as well. Worth pointing out, this\n> > > > is from syscat.py, which is sample source and not used as actual\n> > > > interface. So, worse case, it would appear to be redundant in nature\n> > > > with no harm done.\n> > > >\n> > > > This should conclude the patched items offered in the second patch.\n> > > >\n> > > > What ya think?\n> > > >\n> > > > Thanks,\n> > > > \tGreg\n> > > >\n> > > >\n> > >\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n> >\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 23:34:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: python patch"
},
{
"msg_contents": "Thanks.\n\n-Greg\n\n\nOn Wed, 2002-08-14 at 22:34, Bruce Momjian wrote:\n> \n> OK, I have applied all three of Greg's python patches.\n> \n> ---------------------------------------------------------------------------\n> \n> Christopher Kings-Lynne wrote:\n> > Yep - alright, just commit it I guess.\n> > \n> > Chris\n> > \n> > > -----Original Message-----\n> > > From: Greg Copeland [mailto:greg@copelandconsulting.net]\n> > > Sent: Thursday, 15 August 2002 11:09 AM\n> > > To: Rod Taylor\n> > > Cc: Christopher Kings-Lynne; Bruce Momjian; PostgresSQL Hackers Mailing\n> > > List\n> > > Subject: Re: [HACKERS] python patch\n> > >\n> > >\n> > > Well, I tend to agree with that. Overall, I can't say that I see bad\n> > > things coming out of accepting the patch as is. It's not exactly\n> > > causing an extra join or other wise a significant waste of resources.\n> > > At worst, it appears to be ambiguous. Since Christopher has not offered\n> > > any additional follow up, can we assume that he agrees? In not, please\n> > > let me know and I'll resubmit patch #2.\n> > >\n> > > In the mean time, patches #1 and #3 should be good to go. Bruce, feel\n> > > free to apply those whenever time allows.\n> > >\n> > > Thanks,\n> > > \tGreg Copeland\n> > >\n> > >\n> > > On Mon, 2002-08-12 at 18:33, Rod Taylor wrote:\n> > > > All of that said, the cost of the check is so small it may save someones\n> > > > ass some day when they have a corrupted catalog and the below\n> > > > assumptions are no longer true.\n> > > >\n> > > > On Mon, 2002-08-12 at 18:40, Greg Copeland wrote:\n> > > > > On Sun, 2002-08-11 at 21:15, Christopher Kings-Lynne wrote:\n> > > > > > > Not a problem. I would rather them be correct.\n> > > > > > >\n> > > > > > > Worth noting that the first patch is what attempts to fix\n> > > the long ->\n> > > > > > > int overflow issue. The second patch attempts to resolve\n> > > \"attisdropped\"\n> > > > > > > column use issues with the python scripts. 
The third\n> > > patch addresses\n> > > > > > > issues generated by the implicate to explicate use of \"cascade\".\n> > > > > > >\n> > > > > > > I assume your reservations are only with the second patch\n> > > and not the\n> > > > > > > first and third patches?\n> > > > > >\n> > > > > > Correct. I'm pretty sure you don't need to exclude\n> > > attisdropped from the\n> > > > > > primary key list because all it's doing is finding the\n> > > column that a primary\n> > > > > > key is over and that should never be over a dropped column. I can't\n> > > > > > remember what you said the second query did?\n> > > > >\n> > > > >\n> > > > > Hmmm. Sounds okay but I'm just not sure that holds true (as I\n> > > > > previously stated, I'm ignorant on the topic). Obviously\n> > > I'll defer to\n> > > > > you on this.\n> > > > >\n> > > > > Here's the queries and what they do:\n> > > > >\n> > > > >\n> > > > > >From pg.py:\n> > > > > Used to locate primary keys -- or so the comment says. It\n> > > does create a\n> > > > > dictionary of keys and attribute values for each returned row so I\n> > > > > assume it really is attempting to do something of the like.\n> > > > >\n> > > > > SELECT pg_class.relname, pg_attribute.attname\n> > > > > FROM pg_class, pg_attribute, pg_index\n> > > > > WHERE pg_class.oid = pg_attribute.attrelid AND\n> > > > > \tpg_class.oid = pg_index.indrelid AND\n> > > > > \tpg_index.indkey[0] = pg_attribute.attnum AND\n> > > > > \tpg_index.indisprimary = 't' AND\n> > > > > \tpg_attribute.attisdropped = 'f' ;\n> > > > >\n> > > > > So, everyone is in agreement that any attribute which is indexed as a\n> > > > > primary key will never be able to have attisdtopped = 't'?\n> > > > >\n> > > > > According to the code:\n> > > > > SELECT pg_attribute.attname, pg_type.typname\n> > > > > FROM pg_class, pg_attribute, pg_type\n> > > > > WHERE pg_class.relname = '%s' AND\n> > > > > \tpg_attribute.attnum > 0 AND\n> > > > > \tpg_attribute.attrelid = pg_class.oid AND\n> > > > > 
\tpg_attribute.atttypid = pg_type.oid AND\n> > > > > \tpg_attribute.attisdropped = 'f' ;\n> > > > >\n> > > > > is used to obtain all attributes (column names) and their types for a\n> > > > > given table ('%s'). It then attempts to build a column/type\n> > > cache. I'm\n> > > > > assuming that this really does need to be there. Please correct\n> > > > > accordingly.\n> > > > >\n> > > > >\n> > > > > >From syscat.py:\n> > > > > SELECT bc.relname AS class_name,\n> > > > > \tic.relname AS index_name, a.attname\n> > > > > FROM pg_class bc, pg_class ic, pg_index i, pg_attribute a\n> > > > > WHERE i.indrelid = bc.oid AND i.indexrelid = bc.oid\n> > > > > \tAND i.indkey[0] = a.attnum AND a.attrelid = bc.oid\n> > > > > \tAND i.indproc = '0'::oid AND a.attisdropped = 'f'\n> > > > > \tORDER BY class_name, index_name, attname ;\n> > > > >\n> > > > > According to the nearby documentation, it's supposed to be fetching a\n> > > > > list of \"all simple indicies\". If that's the case, is it\n> > > safe to assume\n> > > > > that any indexed column will never have attisdropped = 't'? If so, we\n> > > > > can remove that check from the file as well. Worth pointing out, this\n> > > > > is from syscat.py, which is sample source and not used as actual\n> > > > > interface. So, worse case, it would appear to be redundant in nature\n> > > > > with no harm done.\n> > > > >\n> > > > > This should conclude the patched items offered in the second patch.\n> > > > >\n> > > > > What ya think?\n> > > > >\n> > > > > Thanks,\n> > > > > \tGreg\n> > > > >\n> > > > >\n> > > >\n> > > >\n> > > >\n> > > > ---------------------------(end of broadcast)---------------------------\n> > > > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> > >\n> > >\n> > \n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html",
"msg_date": "14 Aug 2002 22:39:44 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Re: python patch"
}
] |
[
{
"msg_contents": "This fixes some text as well as enforces the use of \"drop table cascade\"\nsince we moved from an implicate to explicate implementation.\n\nPlease find attached the func.py patch.\n\nSorry these are not all one single patch. I really hadn't planned on\ndoing all this...especially not tonight. ;)\n\nGreg Copeland",
"msg_date": "07 Aug 2002 22:50:03 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Another python patch -- minor"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nGreg Copeland wrote:\n\nChecking application/pgp-signature: FAILURE\n-- Start of PGP signed section.\n> This fixes some text as well as enforces the use of \"drop table cascade\"\n> since we moved from an implicate to explicate implementation.\n> \n> Please find attached the func.py patch.\n> \n> Sorry these are not all one single patch. I really hadn't planned on\n> doing all this...especially not tonight. ;)\n> \n> Greg Copeland\n> \n> \n> \n\n[ text/x-patch is unsupported, treating like TEXT/PLAIN ]\n\n> Index: func.py\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql-server/src/interfaces/python/tutorial/func.py,v\n> retrieving revision 1.5\n> diff -u -r1.5 func.py\n> --- func.py\t2000/10/02 03:46:24\t1.5\n> +++ func.py\t2002/08/08 03:47:04\n> @@ -9,7 +9,7 @@\n> This module is designed for being imported from python prompt\n> \n> In order to run the samples included here, first create a connection\n> -using : cnx = advanced.DB(...)\n> +using : cnx = func.DB(...)\n> \n> The \"...\" should be replaced with whatever arguments you need to open an\n> existing database. 
Usually all you need is the name of the database and,\n> @@ -189,13 +189,13 @@\n> \tprint \"DROP FUNCTION add_em(int4, int4)\"\n> \tprint \"DROP FUNCTION one()\"\n> \tprint\n> -\tprint \"DROP TABLE EMP\"\n> +\tprint \"DROP TABLE EMP CASCADE\"\n> \tpgcnx.query(\"DROP FUNCTION clean_EMP()\")\n> \tpgcnx.query(\"DROP FUNCTION high_pay()\")\n> \tpgcnx.query(\"DROP FUNCTION new_emp()\")\n> \tpgcnx.query(\"DROP FUNCTION add_em(int4, int4)\")\n> \tpgcnx.query(\"DROP FUNCTION one()\")\n> -\tpgcnx.query(\"DROP TABLE EMP\")\n> +\tpgcnx.query(\"DROP TABLE EMP CASCADE\")\n> \n> # main demo function\n> def demo(pgcnx):\n-- End of PGP section, PGP failed!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sun, 11 Aug 2002 01:14:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Another python patch -- minor"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n\nGreg Copeland wrote:\n\nChecking application/pgp-signature: FAILURE\n-- Start of PGP signed section.\n> This fixes some text as well as enforces the use of \"drop table cascade\"\n> since we moved from an implicate to explicate implementation.\n> \n> Please find attached the func.py patch.\n> \n> Sorry these are not all one single patch. I really hadn't planned on\n> doing all this...especially not tonight. ;)\n> \n> Greg Copeland\n> \n> \n> \n\n[ text/x-patch is unsupported, treating like TEXT/PLAIN ]\n\n> Index: func.py\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql-server/src/interfaces/python/tutorial/func.py,v\n> retrieving revision 1.5\n> diff -u -r1.5 func.py\n> --- func.py\t2000/10/02 03:46:24\t1.5\n> +++ func.py\t2002/08/08 03:47:04\n> @@ -9,7 +9,7 @@\n> This module is designed for being imported from python prompt\n> \n> In order to run the samples included here, first create a connection\n> -using : cnx = advanced.DB(...)\n> +using : cnx = func.DB(...)\n> \n> The \"...\" should be replaced with whatever arguments you need to open an\n> existing database. 
Usually all you need is the name of the database and,\n> @@ -189,13 +189,13 @@\n> \tprint \"DROP FUNCTION add_em(int4, int4)\"\n> \tprint \"DROP FUNCTION one()\"\n> \tprint\n> -\tprint \"DROP TABLE EMP\"\n> +\tprint \"DROP TABLE EMP CASCADE\"\n> \tpgcnx.query(\"DROP FUNCTION clean_EMP()\")\n> \tpgcnx.query(\"DROP FUNCTION high_pay()\")\n> \tpgcnx.query(\"DROP FUNCTION new_emp()\")\n> \tpgcnx.query(\"DROP FUNCTION add_em(int4, int4)\")\n> \tpgcnx.query(\"DROP FUNCTION one()\")\n> -\tpgcnx.query(\"DROP TABLE EMP\")\n> +\tpgcnx.query(\"DROP TABLE EMP CASCADE\")\n> \n> # main demo function\n> def demo(pgcnx):\n-- End of PGP section, PGP failed!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 23:33:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Another python patch -- minor"
}
] |
[
{
"msg_contents": "Hi,\n\nI asked on -interfaces and there was no answer, so I'm trying here:\n\nCan someone give me an example how to use LISTEN in ECPG, please? NOTIFY\nis simple but I need LISTEN too...\n\nthanks, kuba\n\n\n\n",
"msg_date": "Thu, 8 Aug 2002 09:46:04 +0200 (CEST)",
"msg_from": "Jakub Ouhrabka <jouh8664@ss1000.ms.mff.cuni.cz>",
"msg_from_op": true,
"msg_subject": "ECPG and LISTEN"
}
] |
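The LISTEN question above went unanswered in the thread. One possible approach, sketched here without testing: issue `LISTEN` through ECPG, then poll the underlying libpq connection for notifications, since ECPG itself has no `EXEC SQL` statement for receiving them. `ECPGget_PGconn()` is provided by modern ecpglib and may not exist in the 7.2-era libraries under discussion; this sketch also needs the `ecpg` preprocessor to build.

```c
/* Untested sketch: LISTEN via ECPG, notification polling via libpq. */
#include <stdio.h>
#include <unistd.h>
#include <libpq-fe.h>
#include <ecpglib.h>

int
main(void)
{
EXEC SQL CONNECT TO testdb AS myconn;
EXEC SQL LISTEN my_event;
EXEC SQL COMMIT;

	/* Drop down to the libpq connection behind "myconn" to read notifies. */
	PGconn	   *conn = ECPGget_PGconn("myconn");

	for (;;)
	{
		PGnotify   *n;

		PQconsumeInput(conn);
		while ((n = PQnotifies(conn)) != NULL)
		{
			printf("notify: %s from backend pid %d\n",
				   n->relname, n->be_pid);
			PQfreemem(n);		/* plain free() in older releases */
		}
		sleep(1);				/* polling; select() on PQsocket(conn) is nicer */
	}
	return 0;
}
```

`PQconsumeInput`/`PQnotifies` are the standard libpq notification interface; only the `ECPGget_PGconn()` bridge is version-dependent.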
[
{
"msg_contents": "Hi all,\n\nI just spent some of the morning helping a customer build Pg 7.2.1 from \nsource in order to get Linux largefile support in pg_dump etc. They \npossibly would have kept using the binary RPMs if they had this feature.\n\nThis got me to wondering why the Redhat/Mandrake...etc binary RPMS are \nbuilt without it.\n\nWould including default largefile support in Linux RPMs be a good idea ?\n\n(I am presuming that such RPMs are built by the Pg community and \n\"supplied\" to the various distros... apologies if I have this all wrong...)\n\nCheers\n\nMark\n\n",
"msg_date": "Thu, 08 Aug 2002 20:27:44 +1200",
"msg_from": "mark Kirkwood <markir@slingshot.co.nz>",
"msg_from_op": true,
"msg_subject": "Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "\n\nNote, I'm not sure this belongs in -hackers so I've added -general but left\n-hackers in so that list can at least see that it's going to -general.\n\n\nOn Thu, 8 Aug 2002, mark Kirkwood wrote:\n\n> Hi all,\n> \n> I just spent some of the morning helping a customer build Pg 7.2.1 from \n> source in order to get Linux largefile support in pg_dump etc. They \n> possibly would have kept using the binary RPMs if they had this feature.\n> \n> This got me to wondering why the Redhat/Mandrake...etc binary RPMS are \n> built without it.\n> \n> Would including default largefile support in Linux RPMs be a good idea ?\n> \n> (I am presuming that such RPMs are built by the Pg community and \n> \"supplied\" to the various distros... apologies if I have this all wrong...)\n\n\nI must admit that I am fairly new to PostgreSQL but I have used it and read\nstuff about it and I'm not sure what you mean. Could you explain what you\ndid?\n\nA quick scan of the source shows that there may be an issue in\nstorage/file/buffile.c:BufFileSeek() is that the sort of thing you are talking\nabout? Or maybe I've got it completely wrong and you're talking about adding\ncode to pg_dump although I thought that could already handle large\nobjects. Actually, I'm going to shut up now before I really do show my\nignorance and let you answer.\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n",
"msg_date": "Thu, 8 Aug 2002 22:36:06 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "Hi,\n\njust my two cents worth: I like having the files sized in a way I can\nhandle them easily with any UNIX tool on nearly any system. No matter\nwether I want to cp, tar, dump, dd, cat or gzip the file: Just keep it at\na maximum size below any limits, handy for handling.\n\nFor example, Oracle suggests it somewhere in their documentation, to keep\ndatafiles at a reasonable size, e.g. 1 GB. Seems right to me, never had\nany problems with it.\n\n\nKind regards\n... Ralph ...\n\n\n",
"msg_date": "Fri, 9 Aug 2002 01:18:51 +0200 (MEST)",
"msg_from": "Ralph Graulich <maillist@shauny.de>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Thursday 08 August 2002 05:36 pm, Nigel J. Andrews wrote:\n> Matt Kirkwood wrote:\n\n> > I just spent some of the morning helping a customer build Pg 7.2.1 from\n> > source in order to get Linux largefile support in pg_dump etc. They\n> > possibly would have kept using the binary RPMs if they had this feature.\n\nAnd you added this by doing what, exactly? I'm not familiar with pg_dump \nlargefile support as a standalone feature.\n\n> > (I am presuming that such RPMs are built by the Pg community and\n> > \"supplied\" to the various distros... apologies if I have this all\n> > wrong...)\n\nYou have this wrong. The distributions do periodically sync up with my \nrevision, and I with theirs, but they do their own packaging.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 9 Aug 2002 01:07:17 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Fri, 2002-08-09 at 06:07, Lamar Owen wrote:\n> On Thursday 08 August 2002 05:36 pm, Nigel J. Andrews wrote:\n> > Matt Kirkwood wrote:\n> \n> > > I just spent some of the morning helping a customer build Pg 7.2.1 from\n> > > source in order to get Linux largefile support in pg_dump etc. They\n> > > possibly would have kept using the binary RPMs if they had this feature.\n> \n> And you added this by doing what, exactly? I'm not familiar with pg_dump \n> largefile support as a standalone feature.\n\nAs far as I can make out from the libc docs, largefile support is\nautomatic if the macro _GNU_SOURCE is defined and the kernel supports\nlarge files. \n\nIs that a correct understanding? or do I actually need to do something\nspecial to ensure that pg_dump supports large files?\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"...ask, and ye shall receive, that your joy may be \n full.\" John 16:24 \n\n",
"msg_date": "09 Aug 2002 07:34:48 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "> As far as I can make out from the libc docs, largefile support is\n> automatic if the macro _GNU_SOURCE is defined and the kernel supports\n> large files.\n>\n> Is that a correct understanding? or do I actually need to do something\n> special to ensure that pg_dump supports large files?\n\nin this case you still have to use large file functions in the code\nexplicitly\n\nthe easiest way to get large file support is to pass\n-D_FILE_OFFSET_BITS=64 to the preprocessor, and I think I remember doing\nthis once for pg_dump\n\nsee /usr/include/features.h\n\nBest regards\nHelge\n\n",
"msg_date": "Fri, 9 Aug 2002 09:51:54 +0200 (CEST)",
"msg_from": "Helge Bahmann <bahmann@math.tu-freiberg.de>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "\nOn Fri, 9 Aug 2002, Helge Bahmann wrote:\n\n> > As far as I can make out from the libc docs, largefile support is\n> > automatic if the macro _GNU_SOURCE is defined and the kernel supports\n> > large files.\n> >\n> > Is that a correct understanding? or do I actually need to do something\n> > special to ensure that pg_dump supports large files?\n> \n> in this case you still have to use large file functions in the code\n> explicitly\n> \n> the easiest way to get large file support is to pass\n> -D_FILE_OFFSET_BITS=64 to the preprocessor, and I think I remember doing\n> this once for pg_dump\n> \n> see /usr/include/features.h\n\nThere is some commentary on this in my /usr/doc/libc6/NOTES.gz, which I presume\nOliver has already found since I found it after reading his posting. It gives a\nbit more detail that the header file for those who want to check this out. I\nfor one was completely unaware of those 64 bit functions.\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n",
"msg_date": "Fri, 9 Aug 2002 11:12:55 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "Lamar Owen wrote:\n\n>\n>And you added this by doing what, exactly? I'm not familiar with pg_dump \n>largefile support as a standalone feature.\n>\n\nEnabling largefile support for the utilities was accomplished by :\n\nCFLAGS=\"-O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64\" ./configure ...\n\nIt seemed to me that the ability to dump databases >2G without gzip, \nsplit etc was a \"good thing\". What do you think ?\n\n>\n>\n>You have this wrong. The distributions do periodically sync up with my \n>revision, and I with theirs, but they do their own packaging.\n>\nI see.... so if you enabled such support, they they would probably sync \nthat too ?\n\n\n",
"msg_date": "Sat, 10 Aug 2002 17:13:23 +1200",
"msg_from": "Mark Kirkwood <markir@slingshot.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "Ralph Graulich wrote:\n\n>Hi,\n>\n>just my two cents worth: I like having the files sized in a way I can\n>handle them easily with any UNIX tool on nearly any system. No matter\n>wether I want to cp, tar, dump, dd, cat or gzip the file: Just keep it at\n>a maximum size below any limits, handy for handling.\n>\nGood point... however I was thinking that being able to dump the entire \ndatabase without resporting to \"gzips and splits\" was handy...\n\n>\n>For example, Oracle suggests it somewhere in their documentation, to keep\n>datafiles at a reasonable size, e.g. 1 GB. Seems right to me, never had\n>any problems with it.\n>\nYep, fixed or controlled sizes for data files is great... I was thinking \nabout databases rather than data files (altho I may not have made that \nclear in my mail)\n\nbest wishes\n\nMark\n\n\n",
"msg_date": "Sat, 10 Aug 2002 17:25:25 +1200",
"msg_from": "Mark Kirkwood <markir@slingshot.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "Oliver Elphick wrote:\n\n>As far as I can make out from the libc docs, largefile support is\n>automatic if the macro _GNU_SOURCE is defined and the kernel supports\n>large files. \n>\n>Is that a correct understanding? or do I actually need to do something\n>special to ensure that pg_dump supports large files?\n>\nI defined\n\n_LARGEFILE_SOURCE and\n_FILE_OFFSET_BITS=64\n\nhowever _GNU_SOURCE may well be a cleaner way of getting the same effect\n(guess I should browse the .h files...)\n\nregards\n\nMark\n\n\n\n",
"msg_date": "Sat, 10 Aug 2002 17:32:23 +1200",
"msg_from": "Mark Kirkwood <markir@slingshot.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Sat, 2002-08-10 at 00:25, Mark Kirkwood wrote:\n> Ralph Graulich wrote:\n> \n> >Hi,\n> >\n> >just my two cents worth: I like having the files sized in a way I can\n> >handle them easily with any UNIX tool on nearly any system. No matter\n> >wether I want to cp, tar, dump, dd, cat or gzip the file: Just keep it at\n> >a maximum size below any limits, handy for handling.\n> >\n> Good point... however I was thinking that being able to dump the entire \n> database without resporting to \"gzips and splits\" was handy...\n> \n> >\n> >For example, Oracle suggests it somewhere in their documentation, to keep\n> >datafiles at a reasonable size, e.g. 1 GB. Seems right to me, never had\n> >any problems with it.\n> >\n> Yep, fixed or controlled sizes for data files is great... I was thinking \n> about databases rather than data files (altho I may not have made that \n> clear in my mail)\n> \n\nI'm actually amazed that postgres isn't already using large file\nsupport. Especially for tools like dump. I do recognize the need to\nkeep files manageable in size but my file sizes for my needs may differ\nfrom your sizing needs.\n\nSeems like it would be a good thing to enable and simply make it a\nfunction for the DBA to handle. After all, even if I'm trying to keep\nmy dumps at around 1GB, I probably would be okay with a dump of 1.1GB\ntoo. To me, that just seems more flexible.\n\nGreg",
"msg_date": "10 Aug 2002 09:21:07 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Sat, 2002-08-10 at 06:32, Mark Kirkwood wrote:\n> Oliver Elphick wrote:\n> \n> >As far as I can make out from the libc docs, largefile support is\n> >automatic if the macro _GNU_SOURCE is defined and the kernel supports\n> >large files. \n> >\n> >Is that a correct understanding? or do I actually need to do something\n> >special to ensure that pg_dump supports large files?\n> >\n> I defined\n> \n> _LARGEFILE_SOURCE and\n> _FILE_OFFSET_BITS=64\n> \n> however _GNU_SOURCE may well be a cleaner way of getting the same effect\n> (guess I should browse the .h files...)\n\nIt seems that it has to be defined explicitly.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"If ye abide in me, and my words abide in you, ye shall\n ask what ye will, and it shall be done unto you.\" \n John 15:7 \n\n",
"msg_date": "10 Aug 2002 22:20:37 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Sat, Aug 10, 2002 at 09:21:07AM -0500, Greg Copeland wrote:\n\n> I'm actually amazed that postgres isn't already using large file\n> support. Especially for tools like dump. \n\nExcept it would only cause confusion if you ran such a program on a\nsystem that didn't itself have largefile support. Better to make the\nadmin turn all these things on on purpose, until everyone is running\n64 bit systems everywhere.\n\nA\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Mon, 12 Aug 2002 10:39:12 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Mon, 2002-08-12 at 09:39, Andrew Sullivan wrote:\n> On Sat, Aug 10, 2002 at 09:21:07AM -0500, Greg Copeland wrote:\n> \n> > I'm actually amazed that postgres isn't already using large file\n> > support. Especially for tools like dump. \n> \n> Except it would only cause confusion if you ran such a program on a\n> system that didn't itself have largefile support. Better to make the\n> admin turn all these things on on purpose, until everyone is running\n> 64 bit systems everywhere.\n\nIf by \"turn...on\", you mean recompile, that's a horrible idea IMO. \nBesides, you're expecting that an admin is going to know that he even\nneeds to recompile to obtain this feature let alone that he'd interested\nin compiling his own installation. Whereas, more then likely he'll know\noff hand (or can easily find out) if his FS/system supports large files\n(>32 bit sizes).\n\nSeems like, systems which can natively support this feature should have\nit enabled by default. It's a different issue if an admin attempts to\ncreate files larger than what his system and/or FS can support.\n\nI guess what I'm trying to say here is, it's moving the problem from\nbeing a postgres specific issue (not compiled in -- having to recompile\nand install and not knowing if it's (dis)enabled) to a general body of\nknowledge (does my system support such-n-such capabilities).\n\nIf a recompile time is still much preferred by the core developers,\nperhaps a log entry can be created which at least denotes the current\nstatus of such a feature when a compile time option is required. Simply\nhaving an entry of, \"LOG: LARGE FILE SUPPORT (DIS)ENABLED (64-bit file\nsizes)\", etc...things along those lines. Of course, having a\n\"--enable-large-files\" would be nice too.\n\nThis would seemingly make sense in other contexts too. Imagine a\nback-end compiled with large file support and someone else using fe\ntools which does not support it. 
How are they going to know if their\nfe/be supports this feature unless we let them know?\n\nGreg",
"msg_date": "12 Aug 2002 10:15:46 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Mon, Aug 12, 2002 at 10:15:46AM -0500, Greg Copeland wrote:\n\n> If by \"turn...on\", you mean recompile, that's a horrible idea IMO. \n\nAh. Well, that is what I meant. Why is it horrible? PostgreSQL\ndoesn't take very long to compile. \n\n> I guess what I'm trying to say here is, it's moving the problem from\n> being a postgres specific issue (not compiled in -- having to recompile\n> and install and not knowing if it's (dis)enabled) to a general body of\n> knowledge (does my system support such-n-such capabilities).\n\nThe problem is not just a system-level one, but a filesystem-level\none. Enabling 64 bits by default might be dangerous, because a DBA\nmight think \"oh, it supports largefiles by default\" and therefore not\nnotice that the filesystem itself is not mounted with largefile\nsupport. But I suspect that the developers would welcome autoconfig\npatches if someone offered them.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Mon, 12 Aug 2002 11:30:36 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Monday 12 August 2002 11:30 am, Andrew Sullivan wrote:\n> The problem is not just a system-level one, but a filesystem-level\n> one. Enabling 64 bits by default might be dangerous, because a DBA\n> might think \"oh, it supports largefiles by default\" and therefore not\n> notice that the filesystem itself is not mounted with largefile\n> support. But I suspect that the developers would welcome autoconfig\n> patches if someone offered them.\n\nInteresting point. Before I could deploy RPMs with largefile support by \ndefault, I would have to make sure it wouldn't silently break anything. So \nkeep discussing the issues involved, and I'll see what comes of it. I don't \nhave an direct experience with the largefile support, and am learning as I go \nwith this.\n\nGiven that I have to make the source RPM's buildable on distributions that \nmight not have the largefile support available, so on those distributions the \nsupport will have to be unavailable -- and the decision to build it or not to \nbuild it must be automatable.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 12 Aug 2002 11:44:24 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Mon, Aug 12, 2002 at 11:44:24AM -0400, Lamar Owen wrote:\n> > The problem is not just a system-level one, but a filesystem-level\n> > one. Enabling 64 bits by default might be dangerous, because a DBA\n> > might think \"oh, it supports largefiles by default\" and therefore not\n> > notice that the filesystem itself is not mounted with largefile\n> > support. \n\n> keep discussing the issues involved, and I'll see what comes of it. I don't \n> have an direct experience with the largefile support, and am learning as I go \n> with this.\n\nI do have experience with both of these cases. We're hosted in a\nmanaged-hosting environment, and one day one of the sysadmins there\nmust've remounted a filesystem without largefile support. Poof! I\nstarted getting all sorts of strange pg_dump problems. It wasn't\nhard to track down, except that I was initially surprised by the\nerrors, since I'd just _enabled_ large file support.\n\nThis is an area that is not encountered terribly often, actually,\nbecause postgres itself breaks its files at 1G. Most people's dump\nfiles either don't reach the 2G limit, or they use split (a\nreasonable plan).\n\nThere are, in any case, _lots_ of problems with these large files. \nYou not only need to make sure that pg_dump and friends can support\nfiles bigger than 2G. You need to make sure that you can move the\nfiles around (your file transfer commands), that you can compress the\nfiles (how is gzip compiled? bzip2?), and even that you r backup\nsoftware takes the large file. In a few years, when all\ninstallations are ready for this, it seems like it'd be a good idea\nto turn this on by default. Right now, I think the risks are at\nleast as great as those incurred by telling people they need to\nrecompile. \n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Mon, 12 Aug 2002 12:04:29 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Mon, 2002-08-12 at 10:30, Andrew Sullivan wrote:\n> On Mon, Aug 12, 2002 at 10:15:46AM -0500, Greg Copeland wrote:\n> \n> > If by \"turn...on\", you mean recompile, that's a horrible idea IMO. \n> \n> Ah. Well, that is what I meant. Why is it horrible? PostgreSQL\n> doesn't take very long to compile. \n\n\nMany reasons. A DBA is not always the same thing as a developer (which\nmeans it's doubtful he's even going to know about needed options to pass\n-- if any). Using a self compiled installation may not install the same\nsense of reliability (I know that sounds odd) as using a distribution's\npackage. DBA may not be a SA, which means he should probably not be\ncompiling and installing software on a system. Furthermore, he may not\neven have access to do so.\n\nMeans upgrading in the future may be problematic. Someone compiled with\nlarge file support. He leaves. New release comes out. Someone else\nupgrades and now finds things are broken. Why? If it supported it out\nof the box, issue is avoided.\n\nLastly, and perhaps the most obvious, SA and DBA bodies of knowledge are\nfairly distinct. You should not expect a DBA to function as a SA. \nFurthermore, SA and developer bodies of knowledge are also fairly\ndistinct. You shouldn't expect a SA to know what compiler options he\nneeds to use to compile software on his system. Especially for\nsomething as obscure as large file support.\n\n\n> \n> > I guess what I'm trying to say here is, it's moving the problem from\n> > being a postgres specific issue (not compiled in -- having to recompile\n> > and install and not knowing if it's (dis)enabled) to a general body of\n> > knowledge (does my system support such-n-such capabilities).\n> \n> The problem is not just a system-level one, but a filesystem-level\n> one. 
Enabling 64 bits by default might be dangerous, because a DBA\n> might think \"oh, it supports largefiles by default\" and therefore not\n> notice that the filesystem itself is not mounted with largefile\n> support. But I suspect that the developers would welcome autoconfig\n> patches if someone offered them.\n\n\nThe distinction you make there is minor. A SA should know and\nunderstand the capabilities of the systems he maintains (this is true\neven if the SA and DBA are one). This includes filesystem\ncapabilities. A DBA should only care about the system requirements and\ntrust that the SA can deliver those capabilities. If a SA says his\nfilesystems can support very large files and installs postgres, the DBA\nshould expect that matching support in the database is already available. \nWoe is his surprise when he finds out that his postgres installation\ncan't handle it?!\n\nAs for the concern for danger. Hmm...my understanding is that the\nresult is pretty much the same thing as exceeding max file size. That\nis, if you attempt to read/write beyond what the filesystem can provide,\nyou're still going to get an error. Is this really more dangerous than\nsimply reading/writing to a file which exceeds max system capabilities? \nEither way, this issue exists and having large file support, seemingly,\ndoes not affect it one way or another.\n\nI guess I'm trying to say, the risk of seeing filesystem corruption or\neven database corruption should not be affected by the use of large file\nsupport. Please correct me if I'm wrong.\n\n\nGreg",
"msg_date": "12 Aug 2002 11:07:51 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Mon, 2002-08-12 at 11:04, Andrew Sullivan wrote:\n> On Mon, Aug 12, 2002 at 11:44:24AM -0400, Lamar Owen wrote:\n> > keep discussing the issues involved, and I'll see what comes of it. I don't \n> > have an direct experience with the largefile support, and am learning as I go \n> > with this.\n> \n> I do have experience with both of these cases. We're hosted in a\n> managed-hosting environment, and one day one of the sysadmins there\n> must've remounted a filesystem without largefile support. Poof! I\n> started getting all sorts of strange pg_dump problems. It wasn't\n> hard to track down, except that I was initially surprised by the\n> errors, since I'd just _enabled_ large file support.\n\nAnd, what if he just remounted it read only. Mistakes will happen. \nThat doesn't come across as being a strong argument to me. Besides,\nit's doubtful that a filesystem is going to be remounted while it's in\nuse. Which means, these issues are going to be secondary to actual\nproduct use of the database. That is, either the system is working\ncorrectly or it's not. If it's not, guess it's not ready for production\nuse.\n\nFurthermore, since fs mounting, if being done properly, is almost always\na matter of automation, this particular class of error should be few and\nvery far between.\n\nWouldn't you rather answer people with, \"remount your file system\",\nrather than, recompile with such-n-such option enabled, reinstall. Oh\nya, since you're re-installing a modified version of your database,\nprobably a good paranoid option would be to back up and dump, just to be\nsafe. Personally, I'd rather say, \"remount\".\n\n\n> There are, in any case, _lots_ of problems with these large files. \n> You not only need to make sure that pg_dump and friends can support\n> files bigger than 2G. You need to make sure that you can move the\n> files around (your file transfer commands), that you can compress the\n> files (how is gzip compiled? 
bzip2?), and even that your backup\n> software takes the large file. In a few years, when all\n> installations are ready for this, it seems like it'd be a good idea\n> to turn this on by default. Right now, I think the risks are at\n> least as great as those incurred by telling people they need to\n> recompile. \n> \n\nAll of those are SA issues. Shouldn't we leave that domain of issues\nfor a SA to deal with rather than try to force a single view down\nsomeone's throat? Which, btw, results in creating more work for those\nthat desire this feature.\n\nGreg",
"msg_date": "12 Aug 2002 11:17:31 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Mon, 2002-08-12 at 16:44, Lamar Owen wrote:\n> Interesting point. Before I could deploy RPMs with largefile support by \n> default, I would have to make sure it wouldn't silently break anything. So \n> keep discussing the issues involved, and I'll see what comes of it. I don't \n> have an direct experience with the largefile support, and am learning as I go \n> with this.\n> \n> Given that I have to make the source RPM's buildable on distributions that \n> might not have the largefile support available, so on those distributions the \n> support will have to be unavailable -- and the decision to build it or not to \n> build it must be automatable.\n\nI raised the question on the Debian developers' list. As far as I can\nsee, the general feeling is that it won't break anything but will only\nwork with kernel 2.4. It may break with 2.0, but 2.0 is no longer\nprovided with Debian stable, so I don't mind that.\n\nThe thread starts at\nhttp://lists.debian.org/debian-devel/2002/debian-devel-200208/msg00597.html\n\nI intend to enable it in the next version of the Debian packages (which\nwill go into the unstable archive if this works for me) by adding\n-D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 to CFLAGS for the entire build.\n\nOne person said:\n However compiling with largefile support will change the size\n of off_t from 32 bits to 64 bits - if postgres uses off_t or\n anything else related to file offsets in a binary struct in one\n of the database files you will break stuff pretty heavily. 
I\n would not compile postgres with largefile support until it\n is officially supported by the postgres developers.\n\nbut I cannot see that off_t is used in such a way.\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"And he spake a parable unto them to this end, that men\n ought always to pray, and not to faint.\" \n Luke 18:1 \n\n",
"msg_date": "12 Aug 2002 17:25:03 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Mon, Aug 12, 2002 at 11:07:51AM -0500, Greg Copeland wrote:\n\n> Many reasons. A DBA is not always the same thing as a developer (which\n> means it's doubtful he's even going to know about needed options to pass\n> -- if any). \n\nThis (and the \"upgrade\" argument) are simply documentation issues. \nIf you check the FAQ_Solaris, there's already a line in there which\ntells you how to do it.\n\n> Lastly, and perhaps the most obvious, SA and DBA bodies of knowledge are\n> fairly distinct. You should not expect a DBA to function as a SA. \n> Furthermore, SA and developer bodies of knowledge are also fairly\n> distinct. You shouldn't expect a SA to know what compiler options he\n> needs to use to compile software on his system. Especially for\n> something as obscure as large file support.\n\nIt seems to me that a DBA who is running a system which produces 2\nGig dump files, and who can't compile Postgres, is in for a rocky\nride. Such a person needs at least a support contract, and in such a\ncase the supporting organisation would be able to provide the needed\nbinary.\n\nAnyway, as I said, this really seems like the sort of thing that\nmostly gets done when someone sends in a patch. So if it scratches\nyour itch . . .\n\n> The distinction you make there is minor. A SA, should know and\n> understand the capabilities of the systems he maintains (this is true\n> even if the SA and DBA are one). This includes filesystem\n> capabilities. A DBA, should only care about the system requirements and\n> trust that the SA can deliver those capabilities. If a SA says, my\n> filesystems can support very large files, installs postgres, the DBA\n> should expect that match support in the database is already available. \n> Woe is his surprise when he finds out that his postgres installation\n> can't handle it?!\n\nAnd it seems to me the distinction you're making is an invidious one. 
\nI am sick to death of so-called experts who want to blather on about\nthis or that tuning parameter of [insert big piece of software here]\nwithout knowing the slightest thing about the basic operating\nenvironment. A DBA has responsibility to know a fair amount about\nthe platform in production. A DBA who doesn't is one day going to\nfind out what deep water is.\n\n> result is pretty much the same thing as exceeding max file size. That\n> is, if you attempt to read/write beyond what the filesystem can provide,\n> you're still going to get an error. Is this really more dangerous than\n> simply reading/writing to a file which exceeds max system capabilities? \n\nOnly if you were relying on it for backups, and suddenly your backups\ndon't work.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Mon, 12 Aug 2002 12:40:14 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Mon, Aug 12, 2002 at 11:17:31AM -0500, Greg Copeland wrote:\n> \n> And, what if he just remounted it read only. Mistakes will happen. \n> That doesn't come across as being a strong argument to me. Besides,\n> it's doubtful that a filesystem is going to be remounted while it's in\n> use. Which means, these issues are going to be secondary to actual\n> product use of the database. That is, either the system is working\n> correctly or it's not. If it's not, guess it's not ready for production\n> use.\n\nIf it's already in production use, but was taken out briefly for\nmaintenance, and the supposed expert SAs do something dimwitted, then\nit's broken, sure. The point I was trying to make is that the\nsymptoms one sees from breakage can be from many different places,\nand so a glib \"enable largefile support\" remark hides an actual,\nreal-world complexity. Several steps can be broken, any one fof\nwhich causes problems. Better to force the relevant admins to do the\nwork to set things up for an exotic feature, if it is desired. \nThere's nothing about Postgres itself that requires large file\nsupport, so this is really a discussion about pg_dump. Using split\nis more portable, in my view, and therefore preferable. You can also\nuse the native-compressed binary dump format, if you like one big\nfile. Both of those already work out of the box.\n\n> > There are, in any case, _lots_ of problems with these large files. \n\n> All of those are SA issues. \n\nSo is compiling the software correctly, if the distinction has any\nmeaning at all. When some mis-installed bit of software breaks, the\nDBAs won't go running to the SAs. They'll ask here.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Mon, 12 Aug 2002 12:48:10 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Mon, 2002-08-12 at 11:40, Andrew Sullivan wrote:\n> On Mon, Aug 12, 2002 at 11:07:51AM -0500, Greg Copeland wrote:\n> \n> > Many reasons. A DBA is not always the same thing as a developer (which\n> > means it's doubtful he's even going to know about needed options to pass\n> > -- if any). \n> \n> This (and the \"upgrade\" argument) are simply documentation issues. \n> If you check the FAQ_Solaris, there's already a line in there which\n> tells you how to do it.\n\nAnd? What's your point? That somehow makes it disappear? Even if it\nhad been documented, it doesn't mean the documentation made it to the\nright hands or was obviously located. Just look at postgres'\ndocumentation in general. How often are people told to \"read the\ncode\". Give me a break. Your argument is a very weak straw.\n\n> \n> > Lastly, and perhaps the most obvious, SA and DBA bodies of knowledge are\n> > fairly distinct. You should not expect a DBA to function as a SA. \n> > Furthermore, SA and developer bodies of knowledge are also fairly\n> > distinct. You shouldn't expect a SA to know what compiler options he\n> > needs to use to compile software on his system. Especially for\n> > something as obscure as large file support.\n> \n> It seems to me that a DBA who is running a system which produces 2\n> Gig dump files, and who can't compile Postgres, is in for a rocky\n> ride. Such a person needs at least a support contract, and in such a\n> case the supporting organisation would be able to provide the needed\n> binary.\n\nLOL. Managing data and compiling applications have nothing to do with\neach other. Try, try again.\n\nYou also don't seem to understand that this isn't as simple as\na recompile. It's not!!!!!!!!!!! We clear on this?! It's as simple as\nneeding to KNOW that you have to recompile and then KNOWING you have to\nuse a series of obtuse options when compiling.\n\nIn other words, you seemingly know everything you don't know, which is\nmore than the rest of us.\n\n> Anyway, as I said, this really seems like the sort of thing that\n> mostly gets done when someone sends in a patch. So if it scratches\n> your itch . . .\n> \n> > The distinction you make there is minor. A SA, should know and\n> > understand the capabilities of the systems he maintains (this is true\n> > even if the SA and DBA are one). This includes filesystem\n> > capabilities. A DBA, should only care about the system requirements and\n> > trust that the SA can deliver those capabilities. If a SA says, my\n> > filesystems can support very large files, installs postgres, the DBA\n> > should expect that match support in the database is already available. \n> > Woe is his surprise when he finds out that his postgres installation\n> > can't handle it?!\n> \n> And it seems to me the distinction you're making is an invidious one. \n> I am sick to death of so-called experts who want to blather on about\n> this or that tuning parameter of [insert big piece of software here]\n> without knowing the slightest thing about the basic operating\n> environment. A DBA has responsibility to know a fair amount about\n\nIn other words, you can't have a subject matter expert unless he is an\nexpert on every subject? Ya, right!\n\n> the platform in production. A DBA who doesn't is one day going to\n> find out what deep water is.\n\nAgreed...as it relates to the database. DBAs shouldn't have to know\ndetails about the filesystem...that's the job of an SA. You seem to be\nunder the impression that SA = DBA or somehow a DBA is an SA with extra\nknowledge. While this is sometimes true, I can assure you this is not\nalways the case.\n\nThis is exactly why large companies often have DBAs in one department\nand SAs in another. Their knowledge domains tend to uniquely differ.\n\n> \n> > result is pretty much the same thing as exceeding max file size. That\n> > is, if you attempt to read/write beyond what the filesystem can provide,\n> > you're still going to get an error. Is this really more dangerous than\n> > simply reading/writing to a file which exceeds max system capabilities? \n> \n> Only if you were relying on it for backups, and suddenly your backups\n> don't work.\n> \n\nCorrection. \"Suddenly\" your backups never worked. Seems like it would\nhave been caught prior to going into production. Surely you're not\nsuggesting that people place a system into production without having\ntested the full life cycle? Backup testing is part of your life cycle,\nright?\n\nGreg",
"msg_date": "12 Aug 2002 11:53:06 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Mon, 2002-08-12 at 11:48, Andrew Sullivan wrote:\n> On Mon, Aug 12, 2002 at 11:17:31AM -0500, Greg Copeland wrote:\n[snip]\n\n> > > There are, in any case, _lots_ of problems with these large files. \n> \n> > All of those are SA issues. \n> \n> So is compiling the software correctly, if the distinction has any\n> meaning at all. When some mis-installed bit of software breaks, the\n> DBAs won't go running to the SAs. They'll ask here.\n\nEither way, they're going to ask. You can give them a simple solution\nor you can make them run around and pull their hair out.\n\nYou're also assuming that SA = developer. I can assure you it does\nnot. I've met many an SA whose development experience was \"make\" and\nkorn scripts. Expecting that he should know to use GNU_SOURCE and\nBITS=64 is a pretty far reach. Furthermore, you're even expecting that\nhe knows that such a \"recompile\" fix even exists. Where do you think\nhe's going to turn? The lists. That's right. Since he's going to\ncontact the list or review a faq item anyway, doesn't it make sense to\ngive them the easy way out (for both the initiator and the mailing list)?\n\nIMO, powerful tools always seem to be capable enough to shoot yourself\nin the foot. Why make people pay special attention to this sole feature,\nwhich doesn't really address it to begin with?\n\nWould you at least agree that \"--enable-large-files\", rather than\nCFLAGS=xxx, is a good idea, as might well be banners and log entries\nstating that large file support has or has not been compiled in?\n\nGreg",
"msg_date": "12 Aug 2002 12:13:27 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "Oliver Elphick writes:\n\n> One person said:\n> However compiling with largefile support will change the size\n> of off_t from 32 bits to 64 bits - if postgres uses off_t or\n> anything else related to file offsets in a binary struct in one\n> of the database files you will break stuff pretty heavily. I\n> would not compile postgres with largefile support until it\n> is officially supported by the postgres developers.\n>\n> but I cannot see that off_t is used in such a way.\n\nThis is not the only issue. You really need to check all uses of off_t\n(for example printf(\"%ld\", off_t) will crash) and all places where off_t\nshould have been used in the first place. Furthermore you might need to\nreplace ftell() and fseek() by ftello() and fseeko(), especially if you\nwant pg_dump to support large archives.\n\nStill, most of the configuration work is already done in Autoconf (see\nAC_FUNC_FSEEKO and AC_SYS_LARGEFILE), so the work might be significantly\nless than the time spent debating the merits of large files on these\nlists. ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 12 Aug 2002 22:07:18 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Mon, Aug 12, 2002 at 11:30:36AM -0400, Andrew Sullivan wrote:\n> The problem is not just a system-level one, but a filesystem-level\n> one. Enabling 64 bits by default might be dangerous, because a DBA\n> might think \"oh, it supports largefiles by default\" and therefore not\n> notice that the filesystem itself is not mounted with largefile\n> support. But I suspect that the developers would welcome autoconfig\n> patches if someone offered them.\n\nAre there any filesystems in common use (not including windows ones) that\ndon't support >32-bit filesizes?\n\nLinux (ext2) I know supports by default at least to 2TB (2^32 x 512bytes),\nprobably much more. What about the BSDs? XFS? etc\n\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n",
"msg_date": "Tue, 13 Aug 2002 09:41:16 +1000",
"msg_from": "Martijn van Oosterhout <kleptog@svana.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Mon, 2002-08-12 at 18:41, Martijn van Oosterhout wrote:\n> On Mon, Aug 12, 2002 at 11:30:36AM -0400, Andrew Sullivan wrote:\n> > The problem is not just a system-level one, but a filesystem-level\n> > one. Enabling 64 bits by default might be dangerous, because a DBA\n> > might think \"oh, it supports largefiles by default\" and therefore not\n> > notice that the filesystem itself is not mounted with largefile\n> > support. But I suspect that the developers would welcome autoconfig\n> > patches if someone offered them.\n> \n> Are there any filesystems in common use (not including windows ones) that\n> don't support >32-bit filesizes?\n> \n> Linux (ext2) I know supports by default at least to 2TB (2^32 x 512bytes),\n> probably much more. What about the BSDs? XFS? etc\n> \n\nExt2 & 3 should be okay. XFS (very sure) and JFS (reasonably sure)\nshould also be okay...IIRC. NFS and SMB are probably problematic, but I\ncan't see anyone really wanting to do this. Maybe some of the\nclustering file systems (GFS, etc) might have problems??? I'm not sure\nwhere reiserfs falls. I *think* it's not a problem but something\ntingles in the back of my brain that there may be problems lurking...\n\nJust for the heck of it, I did some searching. Found these for\nstarters:\nhttp://www.suse.de/~aj/linux_lfs.html\nhttp://www.gelato.unsw.edu.au/~peterc/lfs.html\n\nhttp://ftp.sas.com/standards/large.file/\n\n\nSo, in a nutshell, most modern (2.4.x+) x86 Linux systems should be\nable to handle large files.\n\nEnjoy,\n\tGreg",
"msg_date": "12 Aug 2002 21:57:51 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "Andrew Sullivan wrote:\n\n>On Sat, Aug 10, 2002 at 09:21:07AM -0500, Greg Copeland wrote:\n>\n>>I'm actually amazed that postgres isn't already using large file\n>>support. Especially for tools like dump. \n>>\n>\n>Except it would only cause confusion if you ran such a program on a\n>system that didn't itself have largefile support. Better to make the\n>admin turn all these things on on purpose, until everyone is running\n>64 bit systems everywhere.\n>\n>A\n>\nAh yes ... extremely good point - I had not considered that.\n\nI am pretty sure all reasonably current (kernel >= 2.4) Linux distros \nsupport largefile out of the box - so it should be safe for them.\n\nOther operating systems where 64 bit file access can be disabled or \nunconfigured require more care - possibly (sigh) 2 binary RPMS with a \ndistinctive 32 and 64 bit label ...(I think the \"big O\" does this for \nSolaris).\n\nCheers\n\nMark \n\n\n",
"msg_date": "Tue, 13 Aug 2002 20:42:29 +1200",
"msg_from": "Mark Kirkwood <markir@slingshot.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Tue, 2002-08-13 at 03:57, Greg Copeland wrote:\n> > Are there any filesystems in common use (not including windows ones) that\n> > don't support >32-bit filesizes?\n> > \n> > Linux (ext2) I know supports by default at least to 2TB (2^32 x 512bytes),\n> > probably much more. What about the BSDs? XFS? etc\n> > \n> \n> Ext2 & 3 should be okay. XFS (very sure) and JFS (reasonably sure)\n> should also be okay...IIRC. NFS and SMB are probably problematic, but I\n> can't see anyone really wanting to do this. \n\nHmm. Whereas I can't see many people putting their database files on an\nNFS mount, I can readily see them using pg_dump to one, and pg_dump is\nthe program where large files are really likely to be needed.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Watch ye therefore, and pray always, that ye may be \n accounted worthy to escape all these things that shall\n come to pass, and to stand before the Son of man.\" \n Luke 21:36 \n\n",
"msg_date": "13 Aug 2002 11:41:10 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Mon, 2002-08-12 at 21:07, Peter Eisentraut wrote:\n\n> This is not the only issue. You really need to check all uses of off_t\n> (for example printf(\"%ld\", off_t) will crash) and all places where off_t\n> should have been used in the first place. Furthermore you might need to\n> replace ftell() and fseek() by ftello() and fseeko(), especially if you\n> want pg_dump to support large archives.\n\nSearching for fseek, ftell and off_t yields only 12 files in the whole\nsource tree, so fortunately the impact is not enormous. As expected,\npg_dump is the main program involved.\n\nThere seem to be several places in the pg_dump code where int is used\ninstead of long int to receive the output of ftell(). I presume these\nought to be cleaned up as well.\n\nLooking at how to deal with this, is the following going to be\nportable?:\n \n in pg_dump/Makefile:\n CFLAGS += -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64\n \n in pg_dump.h:\n #ifdef _LARGEFILE_SOURCE\n #define FSEEK fseeko\n #define FTELL ftello\n #define OFF_T_FORMAT %Ld\n typedef off_t OFF_T;\n #else\n #define FSEEK fseek\n #define FTELL ftell\n #define OFF_T_FORMAT %ld\n typedef long int OFF_T;\n #endif\n \n In pg_dump/*.c:\n change relevant occurrences of fseek and ftell to FSEEK and\n FTELL\n \n change all file offset parameters used or returned by fseek and\n ftell to OFF_T (usually from int)\n \n construct printf formats with OFF_T_FORMAT in appropriate places\n\n> Still, most of the configuration work is already done in Autoconf (see\n> AC_FUNC_FSEEKO and AC_SYS_LARGEFILE), so the work might be significantly\n> less than the time spent debating the merits of large files on these\n> lists. ;-)\n\nSince running autoconf isn't part of a normal build, I'm not familiar\nwith that. Can autoconf make any of the above unnecessary?\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Watch ye therefore, and pray always, that ye may be \n accounted worthy to escape all these things that shall\n come to pass, and to stand before the Son of man.\" \n Luke 21:36 \n\n",
"msg_date": "13 Aug 2002 13:18:21 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Tue, 2002-08-13 at 03:42, Mark Kirkwood wrote:\n> Andrew Sullivan wrote:\n> \n> >On Sat, Aug 10, 2002 at 09:21:07AM -0500, Greg Copeland wrote:\n> >\n> >>I'm actually amazed that postgres isn't already using large file\n> >>support. Especially for tools like dump. \n> >>\n> >\n> >Except it would only cause confusion if you ran such a program on a\n> >system that didn't itself have largefile support. Better to make the\n> >admin turn all these things on on purpose, until everyone is running\n> >64 bit systems everywhere.\n> >\n> >A\n> >\n> Ah yes ... extremely good point - I had not considered that.\n> \n> I am pretty sure all reasonably current (kernel >= 2.4) Linux distros \n> support largefile out of the box - so it should be safe for them.\n> \n> Other operating systems where 64 bit file access can be disabled or \n> unconfigured require more care - possibly (sigh) 2 binary RPMS with a \n> distinctive 32 and 64 bit label ...(I think the \"big O\" does this for \n> Solaris).\nThen, of course, there are systems where largefile support is a\nfilesystem by filesystem (read mountpoint by mountpoint) option (E.G.\nOpenUNIX). \n\nI think this is going to be a Pandora's box. \n\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "13 Aug 2002 08:02:05 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Tue, Aug 13, 2002 at 08:02:05AM -0500, Larry Rosenman wrote:\n> On Tue, 2002-08-13 at 03:42, Mark Kirkwood wrote:\n> > Other operating systems where 64 bit file access can be disabled or \n> > unconfigured require more care - possibly (sigh) 2 binary RPMS with a \n> > distinctive 32 and 64 bit label ...(I think the \"big O\" does this for \n> > Solaris).\n> Then, of course, there are systems where largefile support is a\n> filesystem by filesystem (read mountpoint by mountpoint) option (E.G.\n> OpenUNIX). \n> \n> I think this is going to be a Pandora's box. \n\nI don't understand. Why would you want large-file support enabled on a\nper-filesystem basis? All your system programs would have to support the\nlowest common denominator (ie, with large file support). Is it to make the\nkernel enforce a limit for the purposes of compatibility?\n\nI'd suggest making it as simple as --enable-large-files and make it default\nin a year or two.\n\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n",
"msg_date": "Tue, 13 Aug 2002 23:15:19 +1000",
"msg_from": "Martijn van Oosterhout <kleptog@svana.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n> Looking at how to deal with this, is the following going to be\n> portable?:\n\n> #define OFF_T_FORMAT %Ld\n\nThat certainly will not be. Use INT64_FORMAT from pg_config.h.\n\n> typedef long int OFF_T;\n\nWhy not just use off_t? In both cases?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Aug 2002 10:23:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Linux Largefile Support In Postgresql RPMS "
},
{
"msg_contents": "Hi,\n\nWhat do you think about creating two binaries for pg_dump (I think this is the\nprogram which needs large file support most)? One without large file support\n(maybe named pg_dump, so this is the default) and one with large file support\n(named pg_dumpl). That way beginners don't get into any trouble (you shouldn't\nbe a beginner if you work with >2GB of data), and those who know what they are\ndoing can use large files if needed.\n\nTommi\n\n",
"msg_date": "Tue, 13 Aug 2002 16:32:11 +0200",
"msg_from": "Tommi Maekitalo <t.maekitalo@epgmbh.de>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Tue, 2002-08-13 at 15:23, Tom Lane wrote:\n\n> > typedef long int OFF_T;\n> \n> Why not just use off_t? In both cases?\n\nThe prototype for fseek() is long int; I had assumed that off_t was not\ndefined if _LARGEFILE_SOURCE was not defined.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Watch ye therefore, and pray always, that ye may be \n accounted worthy to escape all these things that shall\n come to pass, and to stand before the Son of man.\" \n Luke 21:36 \n\n",
"msg_date": "13 Aug 2002 15:34:21 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n> On Tue, 2002-08-13 at 15:23, Tom Lane wrote:\n>> Why not just use off_t? In both cases?\n\n> The prototype for fseek() is long int; I had assumed that off_t was not\n> defined if _LARGEFILE_SOURCE was not defined.\n\nOh, you're right. A quick look at HPUX shows it's the same way: ftell\nreturns long int, ftello returns off_t (which presumably is an alias\nfor long long int). Okay, OFF_T seems a reasonable answer.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Aug 2002 11:04:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Linux Largefile Support In Postgresql RPMS "
},
{
"msg_contents": "If the whole 2GB problem is only about pg_dump may I suggest a work-around?\n\n pg_dump | cat >dumpfile.sql\n\nworks without problems if \"cat\" is largefile-enabled; this puts the burden\nof supplying largefile-enabled binaries on the operating system\ndistributor; similar constructions work for all other postgres tools\n\nApart from this I think it is perfectly safe to enable largefile\ncompilation on linux unconditionally; the only major linux filesystem (I'm\ndiscounting VFAT and the like here) that cannot handle files >2GB is NFSv2\n(but NFSv3 works), and the error code and signal you get from writing a too\nlarge file (EFBIG \"File too large\" and SIGXFSZ \"File size limit exceeded\")\nshould give the administrator prominent hints about what might be wrong\n\nNote that in Debian Woody all system binaries (cp, cat etc.) are compiled\nwith largefile support enabled, I think this applies to all other\ndistributions as well\n\nRegards\n-- \nHelge Bahmann <bahmann@math.tu-freiberg.de> /| \\__\nThe past: Smart users in front of dumb terminals /_|____\\\n _/\\ | __)\n$ ./configure \\\\ \\|__/__|\nchecking whether build environment is sane... yes \\\\/___/ |\nchecking for AIX... no (we already did this) |\n\n",
"msg_date": "Tue, 13 Aug 2002 17:19:31 +0200 (CEST)",
"msg_from": "Helge Bahmann <bahmann@math.tu-freiberg.de>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Tue, Aug 13, 2002 at 05:19:31PM +0200, Helge Bahmann wrote:\n> If all the 2GB problem is only about pg_dump may I suggest a work-around?\n> \n> pg_dump | cat >dumpfile.sql\n> \n> works without problems if \"cat\" is largefile-enabled; \n\nI had that break on Solaris. Makes no sense to me, either, but it\nmost certainly did.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Tue, 13 Aug 2002 11:31:15 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Tue, Aug 13, 2002 at 05:19:31PM +0200, Helge Bahmann wrote:\n> If all the 2GB problem is only about pg_dump may I suggest a work-around?\n> \n> pg_dump | cat >dumpfile.sql\n\nThe only reason to care for large file support is if pg_dump will be\nseeking and telling on files it creates, not postgresql's data files, as\nthose will be split up by postgresql at the 1 Gb boundary.\n\nI very much doubt pg_dump would be seeking or telling on stdout, as it may\nbe a pipe, a tty, a socket, etc., so you can skip the cat and just do\npg_dump > dumpfile.sql.\n\nOh, and cat doesn't need to be largefile-enabled as it never seeks in\nfiles, as neither does pg_dump, as it doesn't, or shouldn't (I see no need\nfor), seek in the output file.\n\nI really see no point in this discussion. Will the backend ever seek or\ntell any file it uses? Its data files will be smaller than 1 Gb, so no\nproblem there. The only worry would be the COPY, but that doesn't need\nthose two functions, does it?\n\nSame for any frontend tool. Does it need the seek and tell? I'll rather\nhave then eliminated when not really needed, than having to worry about\nfilesystem and OS support.\n\nThe only thing to worry would be when opening the large files, but a\nsimple rule in autoconf will set the needed #define in the headers...\n\nRegards,\nLuciano Roha\n> -- \n> Helge Bahmann <bahmann@math.tu-freiberg.de> /| \\__\n> The past: Smart users in front of dumb terminals /_|____\\\n> _/\\ | __)\n> $ ./configure \\\\ \\|__/__|\n> checking whether build environment is sane... yes \\\\/___/ |\n> checking for AIX... no (we already did this) |\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \nConsciousness: that annoying time between naps.\n",
"msg_date": "Tue, 13 Aug 2002 16:43:18 +0100",
"msg_from": "strange@nsk.yi.org",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Tue, Aug 13, 2002 at 11:31:15AM -0400, Andrew Sullivan wrote:\n> On Tue, Aug 13, 2002 at 05:19:31PM +0200, Helge Bahmann wrote:\n> > If all the 2GB problem is only about pg_dump may I suggest a work-around?\n> > \n> > pg_dump | cat >dumpfile.sql\n> > \n> > works without problems if \"cat\" is largefile-enabled; \n> \n> I had that break on Solaris. Makes no sense to me, either, but it\n> most certainly did.\n\nDoes the shell have large file support? The file descriptor for dumpfile.sql\nis opened by the shell, not by cat. Cat just reads a few bytes from stdin\nand writes a few bytes to stdout, it wouldn't break large file support for\nitself.\n\nRegards,\nLuciano Rocha\n\n-- \nConsciousness: that annoying time between naps.\n",
"msg_date": "Tue, 13 Aug 2002 16:53:47 +0100",
"msg_from": "strange@nsk.yi.org",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "> I very much doubt pg_dump would be seeking or telling on stdout, as it may\n> be a pipe, a tty, a socket, etc., so you can skip the cat and just do\n> pg_dump > dumpfile.sql.\n>\n> Oh, and cat doesn't need to be largefile-enabled as it never seeks in\n> files, as neither does pg_dump, as it doesn't, or shouldn't (I see no need\n> for), seek in the output file.\n\nno and no, this will *not* work; the file has to be opened with the flag\nO_LARGEFILE, otherwise the kernel will refuse to write files larger than\n2GB. Really.\n\nRegards\n-- \nHelge Bahmann <bahmann@math.tu-freiberg.de> /| \\__\nThe past: Smart users in front of dumb terminals /_|____\\\n _/\\ | __)\n$ ./configure \\\\ \\|__/__|\nchecking whether build environment is sane... yes \\\\/___/ |\nchecking for AIX... no (we already did this) |\n\n",
"msg_date": "Tue, 13 Aug 2002 17:59:33 +0200 (CEST)",
"msg_from": "Helge Bahmann <bahmann@math.tu-freiberg.de>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "Oliver Elphick wrote:\n> On Tue, 2002-08-13 at 03:57, Greg Copeland wrote:\n>>Ext2 & 3 should be okay. XFS (very sure) and JFS (reasonably sure)\n>>should also be okay...IIRC. NFS and SMB are probably problematic, but I\n>>can't see anyone really wanting to do this. \n> \n> Hmm. Whereas I can't see many people putting their database files on an\n> NFS mount, I can readily see them using pg_dump to one, and pg_dump is\n> the program where large files are really likely to be needed.\n\nI wouldn't totally discount using NFS for large databases. Believe it or \nnot, with an Oracle database and a Network Appliance for storage, NFS is \nexactly what is used. We've found that we get better performance with a \n(properly tuned) NFS mounted NetApp volume than with attached storage on \nour HPUX box with several 100+GB databases.\n\nJoe\n\n",
"msg_date": "Tue, 13 Aug 2002 09:01:29 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Tue, Aug 13, 2002 at 05:59:33PM +0200, Helge Bahmann wrote:\n> > I very much doubt pg_dump would be seeking or telling on stdout, as it may\n> > be a pipe, a tty, a socket, etc., so you can skip the cat and just do\n> > pg_dump > dumpfile.sql.\n> >\n> > Oh, and cat doesn't need to be largefile-enabled as it never seeks in\n> > files, as neither does pg_dump, as it doesn't, or shouldn't (I see no need\n> > for), seek in the output file.\n> \n> no and no, this will *not* work; the file has to be opened with the flag\n> O_LARGEFILE, otherwise the kernel will refuse to write files larger than\n> 2GB. Really.\n\nYeah, and cat will *never* open any file to write to, anyway.\n\nI do say at the end:\n\n\"The only thing to worry would be when opening the large files, but a\nsimple rule in autoconf will set the needed #define in the headers...\"\n\n-- \nConsciousness: that annoying time between naps.\n",
"msg_date": "Tue, 13 Aug 2002 17:03:33 +0100",
"msg_from": "strange@nsk.yi.org",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Tue, Aug 13, 2002 at 04:53:47PM +0100, strange@nsk.yi.org wrote:\n> On Tue, Aug 13, 2002 at 11:31:15AM -0400, Andrew Sullivan wrote:\n> > On Tue, Aug 13, 2002 at 05:19:31PM +0200, Helge Bahmann wrote:\n> > > If all the 2GB problem is only about pg_dump may I suggest a work-around?\n> > > \n> > > pg_dump | cat >dumpfile.sql\n> > > \n> > > works without problems if \"cat\" is largefile-enabled; \n> > \n> > I had that break on Solaris. Makes no sense to me, either, but it\n> > most certainly did.\n> \n> Does the shell have large file support? \n\nYep. It was an error from pg_dump that claimed it couldn't keep\nwriting. Never seen anything like it. I'm sure I did something\nwrong somewhere, I just didn't see what it was. (In the end, I just\nrecompiled pg_dump.) But _something_ along the chain didn't have\nlarge file support. It's these sorts of little gotchas that I was\nthinking of when I said that just turning on large files is not that\nsimple: you really need to know that _everything_ is ready, or the\nerrors you get will surprise you.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Tue, 13 Aug 2002 12:40:46 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Tue, 2002-08-13 at 17:11, Rod Taylor wrote:\n> > I wouldn't totally discount using NFS for large databases. Believe it or \n> > not, with an Oracle database and a Network Appliance for storage, NFS is \n> > exactly what is used. We've found that we get better performance with a \n> > (properly tuned) NFS mounted NetApp volume than with attached storage on \n> > our HPUX box with several 100+GB databases.\n> \n> We've also tended to keep logs local on raid 1 and the data on a pair of\n> custered netapps for PostgreSQL.\n\nBut large file support is not really an issue for the database itself,\nsince table files are split at 1Gb. Unless that changes, the database\nis not a problem.\n \n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Watch ye therefore, and pray always, that ye may be \n accounted worthy to escape all these things that shall\n come to pass, and to stand before the Son of man.\" \n Luke 21:36 \n\n",
"msg_date": "13 Aug 2002 17:50:04 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n> But large file support is not really an issue for the database itself,\n> since table files are split at 1Gb. Unless that changes, the database\n> is not a problem.\n\nI see no really good reason to change the file-split logic. The places\nwhere the backend might possibly need large-file support are\n\t* backend-side COPY to or from a large file\n\t* postmaster log to stderr --- does this fail if log output\n\t exceeds 2G?\nThere might be some other similar issues, but that's all that comes to\nmind offhand.\n\nOn a system where building with large-file support is reasonably\nstandard, I agree that PG should be built that way too. Where it's\nnot so standard, I agree with Andrew Sullivan's concerns ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Aug 2002 13:04:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS "
},
{
"msg_contents": "On Tue, Aug 13, 2002 at 01:04:02PM -0400, Tom Lane wrote:\n> \n> I see no really good reason to change the file-split logic. The places\n> where the backend might possibly need large-file support are\n> \t* backend-side COPY to or from a large file\n\nI _think_ this causes a crash. At least, I _think_ that's what\ncaused it one day (I was doing one of those jackhammer-the-server\nsorts of tests, and it was one of about 50 things I was doing at the\ntime, to see if I could make it fall over. I did, but not where I\nexpected, and way beyond any real load we could anticipate).\n\n> \t* postmaster log to stderr --- does this fail if log output\n> \t exceeds 2G?\n\nYes, definitely, at least on Solaris.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Tue, 13 Aug 2002 13:10:19 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Tue, Aug 13, 2002 at 01:04:02PM -0400, Tom Lane wrote:\n> On a system where building with large-file support is reasonably\n> standard, I agree that PG should be built that way too. Where it's\n> not so standard, I agree with Andrew Sullivan's concerns ...\n\nWhat do you mean by \"standard\"? That only some filesystems are supported?\nIn Linux the vfat filesystem doesn't support largefiles, so the behaviour\nis the same as if the application didn't specify O_LARGEFILE to open(2):\nAs Helge Bahmann pointed out, \"kernel will refuse to write files larger than\n2GB\". In current Linux, a signal (SIGXFSZ) is sent to the application\nthat then dumps core.\n\n\nSo, the use of O_LARGEFILE is nullified by the lack of support by the\nfilesystem, but no problem is introduced by the application supporting\nlargefiles, it already existed before.\n\nAll the crashes and problems presented on these lists occur when largefile\nsupport isn't compiled, I didn't see one occurring from any application\nhaving the support, but not the filesystem. (Your \"not so standard\nsupport\"?)\n\nThe changes to postgresql don't seem complicated, I can try to make them\nmyself (fcntl on stdout, stdin; add check to autoconf; etc.) if no one\nelse volunteers.\n\nRegards,\nLuciano Rocha\n\n-- \nConsciousness: that annoying time between naps.\n",
"msg_date": "Tue, 13 Aug 2002 18:45:59 +0100",
"msg_from": "strange@nsk.yi.org",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Tue, Aug 13, 2002 at 06:45:59PM +0100, strange@nsk.yi.org wrote:\n\n> support isn't compiled, I didn't see one occuring from any application\n> having the support, but not the filesystem. (Your \"not so standard\n\nWrong. The symptom is _exactly the same_ if the program doesn't have\nthe support, the filesystem doesn't have the support, or both, at\nleast on Solaris. I've checked.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Tue, 13 Aug 2002 14:09:07 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Tue, 2002-08-13 at 12:45, strange@nsk.yi.org wrote:\n> On Tue, Aug 13, 2002 at 01:04:02PM -0400, Tom Lane wrote:\n> > On a system where building with large-file support is reasonably\n> > standard, I agree that PG should be built that way too. Where it's\n> > not so standard, I agree with Andrew Sullivan's concerns ...\n> \n> What do you mean by \"standard\"? That only some filesystems are supported?\n> In Linux the vfat filesystem doesn't support largefiles, so the behaviour\n> is the same as if the application didn't specify O_LARGEFILE to open(2):\n> As Helge Bahmann pointed out, \"kernel will refuse to write files larger than\n> 2GB\". In current Linux, a signal (SIGXFSZ) is sent to the application\n> that then dumps core.\n> \n> \n> So, the use of O_LARGEFILE is nullified by the lack of support by the\n> filesystem, but no problem is introduced by the application supporting\n> largefiles, it already existed before.\n> \n\nThank you. That's a point that I previously pointed out...you just did\na much better job of it. Specifically, want to stress that enabling\nlarge file support is not dangerous.\n\nGreg",
"msg_date": "13 Aug 2002 13:10:10 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Tue, 2002-08-13 at 12:04, Tom Lane wrote:\n \n> On a system where building with large-file support is reasonably\n> standard, I agree that PG should be built that way too. Where it's\n> not so standard, I agree with Andrew Sullivan's concerns ...\n\n\nAgreed. This is what I originally asked for.\n\nGreg",
"msg_date": "13 Aug 2002 13:11:27 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "On Tue, Aug 13, 2002 at 02:09:07PM -0400, Andrew Sullivan wrote:\n> On Tue, Aug 13, 2002 at 06:45:59PM +0100, strange@nsk.yi.org wrote:\n> \n> > support isn't compiled, I didn't see one occuring from any application\n> > having the support, but not the filesystem. (Your \"not so standard\n> \n> Wrong. The symptom is _exactly the same_ if the program doesn't have\n> the support, the filesystem doesn't have the support, or both, at\n> least on Solaris. I've checked.\n\n??\n\nMy point is that postgresql having the support doesn't bring NEW errors.\n\nI never said postgresql would automagically gain support on filesystems\nthat don't support largefiles, I said no one mentioned an error caused by\npostgresql *having* the support, but *not the filesystem*. Maybe I wasn't\nclear, but I meant *new* errors.\n\nAs it seems, adding support for largefiles doesn't break anything.\n\nRegards,\nLuciano Rocha\n\n-- \nConsciousness: that annoying time between naps.\n",
"msg_date": "Tue, 13 Aug 2002 19:39:01 +0100",
"msg_from": "strange@nsk.yi.org",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "Tom Lane writes:\n\n> > The prototype for fseek() is long int; I had assumed that off_t was not\n> > defined if _LARGEFILE_SOURCE was not defined.\n\nAll that _LARGEFILE_SOURCE does is make fseeko() and ftello() visible on\nsome systems, but on some systems they should be available by default.\n\n> Oh, you're right. A quick look at HPUX shows it's the same way: ftell\n> returns long int, ftello returns off_t (which presumably is an alias\n> for long long int). Okay, OFF_T seems a reasonable answer.\n\nfseek() and ftell() using long int for the offset was a mistake, therefore\nfseeko() and ftello() were invented. (This is independent of whether the\nlarge file interface is used.)\n\nTo activate the large file interface you define _FILE_OFFSET_BITS=64,\nwhich transparently replaces off_t and everything that uses it with a 64\nbit version. There is no need to use any of the proposed macro tricks\n(because that exact macro trick is already provided by the OS).\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 13 Aug 2002 21:53:35 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Linux Largefile Support In Postgresql RPMS"
},
{
"msg_contents": "Tom Lane writes:\n\n> \t* postmaster log to stderr --- does this fail if log output\n> \t exceeds 2G?\n\nThat would be an issue of the shell, not the postmaster.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 13 Aug 2002 21:53:43 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Linux Largefile Support In Postgresql RPMS"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Tatsuo Ishii [mailto:t-ishii@sra.co.jp] \n> Sent: 08 August 2002 09:26\n> To: peter_e@gmx.net\n> Cc: Dave Page; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Build errors with current CVS\n> \n> \n> > Tatsuo Ishii writes:\n> > \n> > > I think this happens because you are using cygwin \n> envrionment. Under \n> > > cygwin environment a shared object(dll) cannot be built until the \n> > > backend bild completes.\n> > \n> > Yes. AIX also has this problem.\n> > \n> > > I heard this theory from a cygwin expert in Japan. If this is \n> > > correct, we have to move utils/mb/conversion_procs to right under \n> > > src so that it builds *after* the backend build finishes.\n> > \n> > You don't have to move them, but you need to adjust the build order.\n> \n> I have committed changes according to your suggestion.\n> \n> Can people on cygwin or AIX test the fix?\n\nI'll give it a go right away.\n\nThanks, Dave.\n",
"msg_date": "Thu, 8 Aug 2002 09:28:10 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Build errors with current CVS"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Tatsuo Ishii [mailto:t-ishii@sra.co.jp] \n> Sent: 08 August 2002 09:26\n> To: peter_e@gmx.net\n> Cc: Dave Page; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Build errors with current CVS\n> \n> \n> > Tatsuo Ishii writes:\n> > \n> > > I think this happens because you are using cygwin \n> envrionment. Under \n> > > cygwin environment a shared object(dll) cannot be built until the \n> > > backend bild completes.\n> > \n> > Yes. AIX also has this problem.\n> > \n> > > I heard this theory from a cygwin expert in Japan. If this is \n> > > correct, we have to move utils/mb/conversion_procs to right under \n> > > src so that it builds *after* the backend build finishes.\n> > \n> > You don't have to move them, but you need to adjust the build order.\n> \n> I have committed changes according to your suggestion.\n> \n> Can people on cygwin or AIX test the fix?\n\nLooks OK here, thanks.\n\nRegards, Dave.\n",
"msg_date": "Thu, 8 Aug 2002 12:04:27 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Build errors with current CVS"
}
] |
[
{
"msg_contents": "Hmm. That maybe. I've yet to sit down and try to fully understand the\nrelations in the system tables. I simply grep'd for pg_attribute and\nadded a check for \"and attisdropped = 'f'\".\n\nIn each case, I did attempt to run the modified queries to ensure they\nstill executed and compared the resulting output against the unmodified\nqueries, however, it doesn't mean I groked what it was really trying to\ndo.\n\nIf you do find corrections to be made, please let me know as I'd love to\nknow where, what, and why I messed up so I can learn from my mistakes.\n\nSeems the only system tables ERD I have, I find rather hard to\nread...guess that will be on my list of items to do today...find a good\nERD of the system tables.\n\nGreg\n\nOn Wed, 2002-08-07 at 23:00, Christopher Kings-Lynne wrote:\n> Hi Greg,\n> \n> You should be submitting all these patches to the pgsql-patches mailing\n> list...\n> \n> I will look at your attisdropped patch a bit more carefully as I think some\n> of the exclusions you've put in aren't necessary...\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Greg Copeland\n> > Sent: Thursday, 8 August 2002 11:50 AM\n> > To: PostgresSQL Hackers Mailing List\n> > Subject: [HACKERS] Another python patch -- minor\n> >\n> >\n> > This fixes some text as well as enforces the use of \"drop table cascade\"\n> > since we moved from an implicate to explicate implementation.\n> >\n> > Please find attached the func.py patch.\n> >\n> > Sorry these are not all one single patch. I really hadn't planned on\n> > doing all this...especially not tonight. ;)\n> >\n> > Greg Copeland\n> >\n> >\n> >\n> >\n>",
"msg_date": "08 Aug 2002 07:23:55 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Re: Another python patch -- minor"
},
{
"msg_contents": "On Wed, 2002-08-07 at 23:00, Christopher Kings-Lynne wrote:\n> Hi Greg,\n> \n> You should be submitting all these patches to the pgsql-patches mailing\n> list...\n\n\nShould I resubmit all my patches to that list or will they all be picked\nup from here?\n\nGreg",
"msg_date": "08 Aug 2002 10:59:55 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Re: Another python patch -- minor"
},
{
"msg_contents": "\nPicked up. Thanks.\n\n---------------------------------------------------------------------------\n\nGreg Copeland wrote:\n\nChecking application/pgp-signature: FAILURE\n-- Start of PGP signed section.\n> On Wed, 2002-08-07 at 23:00, Christopher Kings-Lynne wrote:\n> > Hi Greg,\n> > \n> > You should be submitting all these patches to the pgsql-patches mailing\n> > list...\n> \n> \n> Should I resubmit all my patches to that list or will they all be picked\n> up from here?\n> \n> Greg\n> \n-- End of PGP section, PGP failed!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 8 Aug 2002 19:53:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Another python patch -- minor"
}
] |
[
{
"msg_contents": "I'm going to be overseeing a move from a Mac-based postgres database (100k\ntransactions/day, roughly 5M rows) to an SGI Octane in the near-ish term. The\nmachine will only be two-way SMP. I'd like to see it working 64-bit and\ncompiled with MIPSpro. I have a friend who has mostly succeeded in getting\nit compiled with MIPSpro, but Neil told me today there might be concerns\nwith SMP systems > 4cpu's. I offered access on a system with 6 cpus (SGI\nChallenge L, R4400's). I may have access to other machines, including a\n36-cpu Octane with R10k's. If this is useful to somebody on the core group,\nplease let me know. I'd really like to see Postgres understand MIPSpro and\nirix out of the box. I understand there is some difficulty at present.\n\nI'd appreciate a Cc on the thread, if possible.\n\nThanks,\nalex\n",
"msg_date": "Thu, 8 Aug 2002 16:49:24 -0400 ",
"msg_from": "Alex Avriette <a_avriette@acs.org>",
"msg_from_op": true,
"msg_subject": "IRIX and large SMP: donations of shells &c"
},
{
"msg_contents": "Alex Avriette <a_avriette@acs.org> writes:\n> I have a friend who has mostly succeeded in getting\n> it compiiled with MIPSpro, but Neil told me today there might be concerns\n> with SMP systems > 4cpu's.\n\nThat's my impression, anyway -- I can't say I've confirmed that with\nany benchmarks.\n\n> I offered access on a system with 6 cpus (SGI Challenge L, R4400's).\n\nAs I indicated in IRC, I'd be interested in the use of that\nmachine. If that's okay, can you send me the auth info via email? You\ncan find my GPG key on keyserver.net.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "08 Aug 2002 16:53:08 -0400",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: IRIX and large SMP: donations of shells &c"
},
{
"msg_contents": "Alex Avriette <a_avriette@acs.org> writes:\n> I'd really like to see Postgres understand MIPSpro and\n> irix out of the box. I understand there is some difficulty at present.\n\nLike what?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Aug 2002 16:58:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: IRIX and large SMP: donations of shells &c "
}
] |