[
{
"msg_contents": "Hi all,\n\nJust tested latest CVS on my freebsd/alpha. Only one test failed, and\nthat's privileges related...\n\n*** ./expected/privileges.out\tThu Mar 7 09:53:51 2002\n--- ./results/privileges.out\tFri Mar 8 11:03:36 2002\n***************\n*** 201,218 ****\n CREATE FUNCTION testfunc1(int) RETURNS int AS 'select 2 * $1;' LANGUAGE\nsql;\n CREATE FUNCTION testfunc2(int) RETURNS int AS 'select 3 * $1;' LANGUAGE\nsql;\n GRANT EXECUTE ON FUNCTION testfunc1(int), testfunc2(int) TO regressuser2;\n GRANT USAGE ON FUNCTION testfunc1(int) TO regressuser3; -- semantic error\n! ERROR: invalid privilege type USAGE for function object\n GRANT ALL PRIVILEGES ON FUNCTION testfunc1(int) TO regressuser4;\n GRANT ALL PRIVILEGES ON FUNCTION testfunc_nosuch(int) TO regressuser4;\n! ERROR: Function 'testfunc_nosuch(int4)' does not exist\n SET SESSION AUTHORIZATION regressuser2;\n SELECT testfunc1(5), testfunc2(5); -- ok\n! testfunc1 | testfunc2\n! -----------+-----------\n! 10 | 15\n! (1 row)\n!\n CREATE FUNCTION testfunc3(int) RETURNS int AS 'select 2 * $1;' LANGUAGE\nsql; -- fail\n ERROR: permission denied\n SET SESSION AUTHORIZATION regressuser3;\n--- 201,216 ----\n CREATE FUNCTION testfunc1(int) RETURNS int AS 'select 2 * $1;' LANGUAGE\nsql;\n CREATE FUNCTION testfunc2(int) RETURNS int AS 'select 3 * $1;' LANGUAGE\nsql;\n GRANT EXECUTE ON FUNCTION testfunc1(int), testfunc2(int) TO regressuser2;\n+ ERROR: bogus GrantStmt.objtype 458\n GRANT USAGE ON FUNCTION testfunc1(int) TO regressuser3; -- semantic error\n! ERROR: bogus GrantStmt.objtype 458\n GRANT ALL PRIVILEGES ON FUNCTION testfunc1(int) TO regressuser4;\n+ ERROR: bogus GrantStmt.objtype 458\n GRANT ALL PRIVILEGES ON FUNCTION testfunc_nosuch(int) TO regressuser4;\n! ERROR: bogus GrantStmt.objtype 458\n SET SESSION AUTHORIZATION regressuser2;\n SELECT testfunc1(5), testfunc2(5); -- ok\n! 
ERROR: permission denied\n CREATE FUNCTION testfunc3(int) RETURNS int AS 'select 2 * $1;' LANGUAGE\nsql; -- fail\n ERROR: permission denied\n SET SESSION AUTHORIZATION regressuser3;\n***************\n*** 220,230 ****\n ERROR: permission denied\n SET SESSION AUTHORIZATION regressuser4;\n SELECT testfunc1(5); -- ok\n! testfunc1\n! -----------\n! 10\n! (1 row)\n!\n DROP FUNCTION testfunc1(int); -- fail\n ERROR: RemoveFunction: function 'testfunc1': permission denied\n \\c -\n--- 218,224 ----\n ERROR: permission denied\n SET SESSION AUTHORIZATION regressuser4;\n SELECT testfunc1(5); -- ok\n! ERROR: permission denied\n DROP FUNCTION testfunc1(int); -- fail\n ERROR: RemoveFunction: function 'testfunc1': permission denied\n \\c -\n\n======================================================================",
"msg_date": "Fri, 8 Mar 2002 11:54:44 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "privileges regression problem on freebsd/alpha"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> GRANT EXECUTE ON FUNCTION testfunc1(int), testfunc2(int) TO regressuser2;\n> + ERROR: bogus GrantStmt.objtype 458\n\nDoes the error persist if you \"make clean\" and rebuild?\n\nI'm betting this is not a platform issue, but just aclchk.c being out\nof sync with the parser. GrantStmt is using parser token codes to\ndistinguish the various kinds of GRANT, which is probably a bad idea.\nThe token codes will change anytime someone looks crosseyed at gram.y\n(well, I exaggerate, but they're not exactly stable). IMHO node\nstructure definitions shouldn't depend on them.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Mar 2002 00:33:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: privileges regression problem on freebsd/alpha "
},
{
"msg_contents": "Yep, tried it again and everything passes.\n\nChris\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Friday, 8 March 2002 1:33 PM\n> To: Christopher Kings-Lynne\n> Cc: Hackers\n> Subject: Re: [HACKERS] privileges regression problem on freebsd/alpha \n> \n> \n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > GRANT EXECUTE ON FUNCTION testfunc1(int), testfunc2(int) TO \n> regressuser2;\n> > + ERROR: bogus GrantStmt.objtype 458\n> \n> Does the error persist if you \"make clean\" and rebuild?\n> \n> I'm betting this is not a platform issue, but just aclchk.c being out\n> of sync with the parser. GrantStmt is using parser token codes to\n> distinguish the various kinds of GRANT, which is probably a bad idea.\n> The token codes will change anytime someone looks crosseyed at gram.y\n> (well, I exaggerate, but they're not exactly stable). IMHO node\n> structure definitions shouldn't depend on them.\n> \n> \t\t\tregards, tom lane\n> \n\n",
"msg_date": "Fri, 8 Mar 2002 14:33:50 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: privileges regression problem on freebsd/alpha "
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Yep, tried it again and everything passes.\n\nBingo.\n\n>> I'm betting this is not a platform issue, but just aclchk.c being out\n>> of sync with the parser. GrantStmt is using parser token codes to\n>> distinguish the various kinds of GRANT, which is probably a bad idea.\n>> The token codes will change anytime someone looks crosseyed at gram.y\n>> (well, I exaggerate, but they're not exactly stable). IMHO node\n>> structure definitions shouldn't depend on them.\n\nLooking around finds these places where parser token codes are used\nbeyond the parser itself:\n\naclchk.c: GrantStmt\ncommand.c: AlterTableDropConstraint\ncomment.c: CommentObject, CommentRelation\npostgres.c: TransactionStmt\nutility.c: TransactionStmt, FetchStmt, CopyStmt, DefineStmt, ReindexStmt\n\n(I exclude _outAExpr in outfuncs.c, which is okay since it's effectively\nonly used for debugging dumps.)\n\nI believe these are all trouble waiting to happen --- for example,\nif utility.o is out of sync with the parser, a COPY command could be\ninterpreted as going in the wrong direction :-(. The risk would be\ncompletely intolerable if any of these commands were allowed in stored\nrules, since the rule parsetree would outlive any one compilation of the\nbackend. Currently that's not true, but they might be allowed sometime.\n\nBarring strenuous objections from someplace, I plan to change these node\ntypes to use booleans or special-purpose enum fields as appropriate.\nThat will make their representation independent of what the parser token\nset happens to be on any given day. We should avoid re-introducing such\ndependencies in future.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Mar 2002 02:21:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: privileges regression problem on freebsd/alpha "
},
{
"msg_contents": "\nChristopher, is this problem fixed now?\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> Hi all,\n> \n> Just tested latest CVS on my freebsd/alpha. Only one test failed, and\n> that's privileges related...\n> \n> *** ./expected/privileges.out\tThu Mar 7 09:53:51 2002\n> --- ./results/privileges.out\tFri Mar 8 11:03:36 2002\n> ***************\n> *** 201,218 ****\n> CREATE FUNCTION testfunc1(int) RETURNS int AS 'select 2 * $1;' LANGUAGE\n> sql;\n> CREATE FUNCTION testfunc2(int) RETURNS int AS 'select 3 * $1;' LANGUAGE\n> sql;\n> GRANT EXECUTE ON FUNCTION testfunc1(int), testfunc2(int) TO regressuser2;\n> GRANT USAGE ON FUNCTION testfunc1(int) TO regressuser3; -- semantic error\n> ! ERROR: invalid privilege type USAGE for function object\n> GRANT ALL PRIVILEGES ON FUNCTION testfunc1(int) TO regressuser4;\n> GRANT ALL PRIVILEGES ON FUNCTION testfunc_nosuch(int) TO regressuser4;\n> ! ERROR: Function 'testfunc_nosuch(int4)' does not exist\n> SET SESSION AUTHORIZATION regressuser2;\n> SELECT testfunc1(5), testfunc2(5); -- ok\n> ! testfunc1 | testfunc2\n> ! -----------+-----------\n> ! 10 | 15\n> ! (1 row)\n> !\n> CREATE FUNCTION testfunc3(int) RETURNS int AS 'select 2 * $1;' LANGUAGE\n> sql; -- fail\n> ERROR: permission denied\n> SET SESSION AUTHORIZATION regressuser3;\n> --- 201,216 ----\n> CREATE FUNCTION testfunc1(int) RETURNS int AS 'select 2 * $1;' LANGUAGE\n> sql;\n> CREATE FUNCTION testfunc2(int) RETURNS int AS 'select 3 * $1;' LANGUAGE\n> sql;\n> GRANT EXECUTE ON FUNCTION testfunc1(int), testfunc2(int) TO regressuser2;\n> + ERROR: bogus GrantStmt.objtype 458\n> GRANT USAGE ON FUNCTION testfunc1(int) TO regressuser3; -- semantic error\n> ! ERROR: bogus GrantStmt.objtype 458\n> GRANT ALL PRIVILEGES ON FUNCTION testfunc1(int) TO regressuser4;\n> + ERROR: bogus GrantStmt.objtype 458\n> GRANT ALL PRIVILEGES ON FUNCTION testfunc_nosuch(int) TO regressuser4;\n> ! 
ERROR: bogus GrantStmt.objtype 458\n> SET SESSION AUTHORIZATION regressuser2;\n> SELECT testfunc1(5), testfunc2(5); -- ok\n> ! ERROR: permission denied\n> CREATE FUNCTION testfunc3(int) RETURNS int AS 'select 2 * $1;' LANGUAGE\n> sql; -- fail\n> ERROR: permission denied\n> SET SESSION AUTHORIZATION regressuser3;\n> ***************\n> *** 220,230 ****\n> ERROR: permission denied\n> SET SESSION AUTHORIZATION regressuser4;\n> SELECT testfunc1(5); -- ok\n> ! testfunc1\n> ! -----------\n> ! 10\n> ! (1 row)\n> !\n> DROP FUNCTION testfunc1(int); -- fail\n> ERROR: RemoveFunction: function 'testfunc1': permission denied\n> \\c -\n> --- 218,224 ----\n> ERROR: permission denied\n> SET SESSION AUTHORIZATION regressuser4;\n> SELECT testfunc1(5); -- ok\n> ! ERROR: permission denied\n> DROP FUNCTION testfunc1(int); -- fail\n> ERROR: RemoveFunction: function 'testfunc1': permission denied\n> \\c -\n> \n> ======================================================================\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 14 Mar 2002 16:20:18 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: privileges regression problem on freebsd/alpha"
},
{
"msg_contents": "Yep\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Friday, 15 March 2002 5:20 AM\n> To: Christopher Kings-Lynne\n> Cc: Hackers\n> Subject: Re: [HACKERS] privileges regression problem on freebsd/alpha\n>\n>\n>\n> Christopher, is this problem fixed now?\n>\n> ------------------------------------------------------------------\n> ---------\n>\n> Christopher Kings-Lynne wrote:\n> > Hi all,\n> >\n> > Just tested latest CVS on my freebsd/alpha. Only one test failed, and\n> > that's privileges related...\n> >\n> > *** ./expected/privileges.out\tThu Mar 7 09:53:51 2002\n> > --- ./results/privileges.out\tFri Mar 8 11:03:36 2002\n> > ***************\n> > *** 201,218 ****\n> > CREATE FUNCTION testfunc1(int) RETURNS int AS 'select 2 *\n> $1;' LANGUAGE\n> > sql;\n> > CREATE FUNCTION testfunc2(int) RETURNS int AS 'select 3 *\n> $1;' LANGUAGE\n> > sql;\n> > GRANT EXECUTE ON FUNCTION testfunc1(int), testfunc2(int) TO\n> regressuser2;\n> > GRANT USAGE ON FUNCTION testfunc1(int) TO regressuser3; --\n> semantic error\n> > ! ERROR: invalid privilege type USAGE for function object\n> > GRANT ALL PRIVILEGES ON FUNCTION testfunc1(int) TO regressuser4;\n> > GRANT ALL PRIVILEGES ON FUNCTION testfunc_nosuch(int) TO regressuser4;\n> > ! ERROR: Function 'testfunc_nosuch(int4)' does not exist\n> > SET SESSION AUTHORIZATION regressuser2;\n> > SELECT testfunc1(5), testfunc2(5); -- ok\n> > ! testfunc1 | testfunc2\n> > ! -----------+-----------\n> > ! 10 | 15\n> > ! 
(1 row)\n> > !\n> > CREATE FUNCTION testfunc3(int) RETURNS int AS 'select 2 *\n> $1;' LANGUAGE\n> > sql; -- fail\n> > ERROR: permission denied\n> > SET SESSION AUTHORIZATION regressuser3;\n> > --- 201,216 ----\n> > CREATE FUNCTION testfunc1(int) RETURNS int AS 'select 2 *\n> $1;' LANGUAGE\n> > sql;\n> > CREATE FUNCTION testfunc2(int) RETURNS int AS 'select 3 *\n> $1;' LANGUAGE\n> > sql;\n> > GRANT EXECUTE ON FUNCTION testfunc1(int), testfunc2(int) TO\n> regressuser2;\n> > + ERROR: bogus GrantStmt.objtype 458\n> > GRANT USAGE ON FUNCTION testfunc1(int) TO regressuser3; --\n> semantic error\n> > ! ERROR: bogus GrantStmt.objtype 458\n> > GRANT ALL PRIVILEGES ON FUNCTION testfunc1(int) TO regressuser4;\n> > + ERROR: bogus GrantStmt.objtype 458\n> > GRANT ALL PRIVILEGES ON FUNCTION testfunc_nosuch(int) TO regressuser4;\n> > ! ERROR: bogus GrantStmt.objtype 458\n> > SET SESSION AUTHORIZATION regressuser2;\n> > SELECT testfunc1(5), testfunc2(5); -- ok\n> > ! ERROR: permission denied\n> > CREATE FUNCTION testfunc3(int) RETURNS int AS 'select 2 *\n> $1;' LANGUAGE\n> > sql; -- fail\n> > ERROR: permission denied\n> > SET SESSION AUTHORIZATION regressuser3;\n> > ***************\n> > *** 220,230 ****\n> > ERROR: permission denied\n> > SET SESSION AUTHORIZATION regressuser4;\n> > SELECT testfunc1(5); -- ok\n> > ! testfunc1\n> > ! -----------\n> > ! 10\n> > ! (1 row)\n> > !\n> > DROP FUNCTION testfunc1(int); -- fail\n> > ERROR: RemoveFunction: function 'testfunc1': permission denied\n> > \\c -\n> > --- 218,224 ----\n> > ERROR: permission denied\n> > SET SESSION AUTHORIZATION regressuser4;\n> > SELECT testfunc1(5); -- ok\n> > ! ERROR: permission denied\n> > DROP FUNCTION testfunc1(int); -- fail\n> > ERROR: RemoveFunction: function 'testfunc1': permission denied\n> > \\c -\n> >\n> > ======================================================================\n>\n> [ Attachment, skipping... ]\n>\n> [ Attachment, skipping... 
]\n>\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n",
"msg_date": "Fri, 15 Mar 2002 09:45:29 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: privileges regression problem on freebsd/alpha"
}
]
[
{
"msg_contents": "This patch completes the following TODO item:\n\n\t* Remove brackets as multi-statement rule grouping, must use parens\n\nOne question I have is whether this change is needed:\n\t\n\t %left '.'\n\t- %left '[' ']'\n\t %left '(' ')'\n\nI believe the logic for removal of brackets for multi-statement rules is\nthat brackets are just weird in this usage. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: doc/src/sgml/ref/create_rule.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/ref/create_rule.sgml,v\nretrieving revision 1.31\ndiff -c -r1.31 create_rule.sgml\n*** doc/src/sgml/ref/create_rule.sgml\t24 Jan 2002 18:28:15 -0000\t1.31\n--- doc/src/sgml/ref/create_rule.sgml\t8 Mar 2002 04:00:46 -0000\n***************\n*** 32,39 ****\n <replaceable class=\"parameter\">query</replaceable>\n |\n ( <replaceable class=\"parameter\">query</replaceable> ; <replaceable class=\"parameter\">query</replaceable> ... )\n- |\n- [ <replaceable class=\"parameter\">query</replaceable> ; <replaceable class=\"parameter\">query</replaceable> ... ]\n </synopsis>\n \n <refsect2 id=\"R2-SQL-CREATERULE-1\">\n--- 32,37 ----\n***************\n*** 177,191 ****\n </para>\n \n <para>\n! The <replaceable class=\"parameter\">action</replaceable> part of the rule\n! can consist of one or more queries. To write multiple queries, surround\n! them with either parentheses or square brackets. Such queries will be\n! performed in the specified order (whereas there are no guarantees about\n! the execution order of multiple rules for an object). The\n! <replaceable class=\"parameter\">action</replaceable> can also be NOTHING\n! indicating no action. Thus, a DO INSTEAD NOTHING rule suppresses the\n! 
original query from executing (when its condition is true); a DO NOTHING\n! rule is useless.\n </para>\n \n <para>\n--- 175,189 ----\n </para>\n \n <para>\n! The <replaceable class=\"parameter\">action</replaceable> part of the\n! rule can consist of one or more queries. To write multiple queries,\n! surround them with parentheses. Such queries will be performed in the\n! specified order (whereas there are no guarantees about the execution\n! order of multiple rules for an object). The <replaceable\n! class=\"parameter\">action</replaceable> can also be NOTHING indicating\n! no action. Thus, a DO INSTEAD NOTHING rule suppresses the original\n! query from executing (when its condition is true); a DO NOTHING rule\n! is useless.\n </para>\n \n <para>\nIndex: src/backend/parser/gram.y\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.287\ndiff -c -r2.287 gram.y\n*** src/backend/parser/gram.y\t7 Mar 2002 16:35:35 -0000\t2.287\n--- src/backend/parser/gram.y\t8 Mar 2002 04:01:02 -0000\n***************\n*** 407,413 ****\n %left\t\tAT ZONE\t\t\t/* sets precedence for AT TIME ZONE */\n %right\t\tUMINUS\n %left\t\t'.'\n- %left\t\t'[' ']'\n %left\t\t'(' ')'\n %left\t\tTYPECAST\n %%\n--- 407,412 ----\n***************\n*** 2864,2870 ****\n \n RuleActionList: NOTHING\t\t\t\t{ $$ = NIL; }\n \t\t| RuleActionStmt\t\t\t\t{ $$ = makeList1($1); }\n- \t\t| '[' RuleActionMulti ']'\t\t{ $$ = $2; }\n \t\t| '(' RuleActionMulti ')'\t\t{ $$ = $2; } \n \t\t;\n \n--- 2863,2868 ----\nIndex: src/interfaces/ecpg/preproc/preproc.y\n===================================================================\nRCS file: /cvsroot/pgsql/src/interfaces/ecpg/preproc/preproc.y,v\nretrieving revision 1.180\ndiff -c -r1.180 preproc.y\n*** src/interfaces/ecpg/preproc/preproc.y\t6 Mar 2002 10:10:52 -0000\t1.180\n--- src/interfaces/ecpg/preproc/preproc.y\t8 Mar 2002 04:01:29 -0000\n***************\n*** 273,279 ****\n %left 
AT ZONE\n %right\t\tUMINUS\n %left\t\t'.'\n- %left\t\t'[' ']'\n %left\t\t'(' ')'\n %left\t\tTYPECAST\n \n--- 273,278 ----\n***************\n*** 2153,2159 ****\n \n RuleActionList: NOTHING { $$ = make_str(\"nothing\"); }\n | RuleActionStmt { $$ = $1; }\n- | '[' RuleActionMulti ']' { $$ = cat_str(3, make_str(\"[\"), $2, make_str(\"]\")); }\n | '(' RuleActionMulti ')' { $$ = cat_str(3, make_str(\"(\"), $2, make_str(\")\")); }\n ;\n \n--- 2152,2157 ----",
"msg_date": "Thu, 7 Mar 2002 23:09:25 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Patch for removal of RULE bracket use"
},
{
"msg_contents": "Bruce,\n\nOn Thu, 7 Mar 2002, Bruce Momjian wrote:\n\n> This patch completes the following TODO item:\n> \n> \t* Remove brackets as multi-statement rule grouping, must use parens\n> \n> One question I have is whether this change is needed:\n> \t\n> \t %left '.'\n> \t- %left '[' ']'\n> \t %left '(' ')'\n\nIt is unnecessary to remove this. Square brackets are used elsewhere in the\ngrammar (arrays, opt_indirection). It is possible that the grammar\nrequires left to right order of precedence for these.\n\nGavin\n\n",
"msg_date": "Fri, 8 Mar 2002 15:22:42 +1100 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Patch for removal of RULE bracket use"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> One question I have is whether this change is needed:\n\t\n> \t %left '.'\n> \t- %left '[' ']'\n> \t %left '(' ')'\n\nOnly if you want to break array-subscript parsing ;-). Leave it in.\n\n> I believe the logic for removal of brackets for multi-statement rules is\n> that brackets are just weird in this usage. :-)\n\nI think the real reason is that psql and other clients aren't smart\nabout brackets overriding semicolons, and we don't feel like making\nthem so.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Mar 2002 00:09:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch for removal of RULE bracket use "
}
]
[
{
"msg_contents": "Ok....\n\ngram.y is fixed (no more %expect usage)\n\nUsing the copyCreateDomainStmt in the proper place.\n\nEvolution is the mail client of choice for different (improved?) mime\nheaders.\n\nAnd attached is a regular diff -c, rather than a cvs diff -c.\n\n\nI updated the poor descriptions of MergeDomainAttributes(). Hopefully\nits current and future use is more obvious.\n\n\nAm I getting close?",
"msg_date": "07 Mar 2002 23:21:18 -0500",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Domain Support -- another round"
},
{
"msg_contents": "Grr. Figured out why the patch was pooched. Basically SAVING it out of\nOutlook adds CR's everywhere! Believe it or not...\n\nBTW, it failed when patching parsenodes.h - you might need to update the\npatch against CVS...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-patches-owner@postgresql.org\n> [mailto:pgsql-patches-owner@postgresql.org]On Behalf Of Rod Taylor\n> Sent: Friday, 8 March 2002 12:21 PM\n> To: pgsql-patches@postgresql.org\n> Subject: [PATCHES] Domain Support -- another round\n>\n>\n> Ok....\n>\n> gram.y is fixed (no more %expect usage)\n>\n> Using the copyCreateDomainStmt in the proper place.\n>\n> Evolution is the mail client of choice for different (improved?) mime\n> headers.\n>\n> And attached is a regular diff -c, rather than a cvs diff -c.\n>\n>\n> I updated the poor descriptions of MergeDomainAttributes(). Hopefully\n> its current and future use is more obvious.\n>\n>\n> Am I getting close?\n>\n>\n\n",
"msg_date": "Fri, 8 Mar 2002 15:37:10 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Domain Support -- another round"
},
{
"msg_contents": "Attached is a diff to the patch of the below message to use b_expr\nrather than c_expr.\n\nAlso includes an improved regress set. Fewer redundant failures, and\ntests numeric types as they're different from the others enough to\nwarrant it.\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Rod Taylor\" <rbt@zort.ca>\nTo: <pgsql-patches@postgresql.org>\nSent: Thursday, March 07, 2002 11:21 PM\nSubject: [PATCHES] Domain Support -- another round\n\n\n> Ok....\n>\n> gram.y is fixed (no more %expect usage)\n>\n> Using the copyCreateDomainStmt in the proper place.\n>\n> Evolution is the mail client of choice for different (improved?)\nmime\n> headers.\n>\n> And attached is a regular diff -c, rather than a cvs diff -c.\n>\n>\n> I updated the poor descriptions of MergeDomainAttributes().\nHopefully\n> its current and future use is more obvious.\n>\n>\n> Am I getting close?\n>\n>\n\n\n----------------------------------------------------------------------\n----------\n\n\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to\nmajordomo@postgresql.org)\n>",
"msg_date": "Fri, 8 Mar 2002 22:18:29 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Domain Support -- another round"
},
{
"msg_contents": "\nRod indicates this is his final version.\n\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nRod Taylor wrote:\n> Attached is a diff to the patch of the below message to use b_expr\n> rather than c_expr.\n> \n> Also includes an improved regress set. Less redundant failures, and\n> tests numeric types as they're different from the others enough to\n> warrent it.\n> --\n> Rod Taylor\n> \n> This message represents the official view of the voices in my head\n> \n> ----- Original Message -----\n> From: \"Rod Taylor\" <rbt@zort.ca>\n> To: <pgsql-patches@postgresql.org>\n> Sent: Thursday, March 07, 2002 11:21 PM\n> Subject: [PATCHES] Domain Support -- another round\n> \n> \n> > Ok....\n> >\n> > gram.y is fixed (no more %expect usage)\n> >\n> > Using the copyCreateDomainStmt in the proper place.\n> >\n> > Evolution is the mail client of choice for different (improved?)\n> mime\n> > headers.\n> >\n> > And attached is a regular diff -c, rather than a cvs diff -c.\n> >\n> >\n> > I updated the poor descriptions of MergeDomainAttributes().\n> Hopefully\n> > its current and future use is more obvious.\n> >\n> >\n> > Am I getting close?\n> >\n> >\n> \n> \n> ----------------------------------------------------------------------\n> ----------\n> \n> \n> >\n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to\n> majordomo@postgresql.org)\n> >\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Mar 2002 15:52:51 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Domain Support -- another round"
},
{
"msg_contents": "Random nitpicking below. Also, have you created a regression test?\n\n\n> diff -rc pgsql.orig/doc/src/sgml/catalogs.sgml pgsqldomain/doc/src/sgml/catalogs.sgml\n> *** pgsql.orig/doc/src/sgml/catalogs.sgml\tThu Mar 7 11:35:32 2002\n> --- pgsqldomain/doc/src/sgml/catalogs.sgml\tThu Mar 7 22:24:23 2002\n> ***************\n> *** 2511,2516 ****\n> --- 2511,2563 ----\n> </row>\n>\n> <row>\n> + <entry>typbasetype</entry>\n> + <entry><type>oid</type></entry>\n> + <entry></entry>\n> + <entry><para>\n> + <structfield>typbasetype</structfield> is the type that this one is based\n> + off of. Normally references the domains parent type, and is 0 otherwise.\n\n\"based on\"\n\n> + </para></entry>\n> + </row>\n> +\n> + \t <row>\n> + \t <entry>typnotnull</entry>\n> + \t <entry><type>boolean</type></entry>\n> + \t <entry></entry>\n> + \t <entry><para>\n> + \t <structfield>typnotnull</structfield> represents a NOT NULL\n> + \t constraint on a type. Normally used only for domains.\n\nAnd unnormally...?\n\n> + \t </para></entry>\n> + \t </row>\n> +\n> + <row>\n> + <entry>typmod</entry>\n> + <entry><type>integer</type></entry>\n> + <entry></entry>\n> + <entry><para>\n> + <structfield>typmod</structfield> records type-specific data\n> + supplied at table creation time (for example, the maximum\n> + length of a <type>varchar</type> column). It is passed to\n> + type-specific input and output functions as the third\n> + argument. The value will generally be -1 for types that do not\n> + need typmod. This data is copied to\n> + <structfield>pg_attribute.atttypmod</structfield> on creation\n> + of a table using a domain as it's field type.\n> + </para></entry>\n> + </row>\n> +\n> + <row>\n> + <entry>typdefaultbin</entry>\n> + <entry><type>text</type></entry>\n> + <entry></entry>\n> + <entry><para>\n> + <structfield>typdefaultbin</structfield> is NULL for types without a\n> + default value. 
If it's not NULL, it contains the internal string\n> + representation of the default expression node.\n> + </para></entry>\n> + </row>\n> +\n> + <row>\n> <entry>typdefault</entry>\n> <entry><type>text</type></entry>\n> <entry></entry>\n> diff -rc pgsql.orig/doc/src/sgml/ref/allfiles.sgml pgsqldomain/doc/src/sgml/ref/allfiles.sgml\n> *** pgsql.orig/doc/src/sgml/ref/allfiles.sgml\tThu Mar 7 11:35:32 2002\n> --- pgsqldomain/doc/src/sgml/ref/allfiles.sgml\tThu Mar 7 22:24:23 2002\n> ***************\n> *** 52,57 ****\n> --- 52,58 ----\n> <!entity createAggregate system \"create_aggregate.sgml\">\n> <!entity createConstraint system \"create_constraint.sgml\">\n> <!entity createDatabase system \"create_database.sgml\">\n> + <!entity createDomain system \"create_domain.sgml\">\n\nI don't see this file included.\n\n> <!entity createFunction system \"create_function.sgml\">\n> <!entity createGroup system \"create_group.sgml\">\n> <!entity createIndex system \"create_index.sgml\">\n> ***************\n> *** 69,74 ****\n> --- 70,76 ----\n> <!entity delete system \"delete.sgml\">\n> <!entity dropAggregate system \"drop_aggregate.sgml\">\n> <!entity dropDatabase system \"drop_database.sgml\">\n> + <!entity dropDomain system \"drop_domain.sgml\">\n> <!entity dropFunction system \"drop_function.sgml\">\n> <!entity dropGroup system \"drop_group.sgml\">\n> <!entity dropIndex system \"drop_index.sgml\">\n> diff -rc pgsql.orig/doc/src/sgml/ref/comment.sgml pgsqldomain/doc/src/sgml/ref/comment.sgml\n> *** pgsql.orig/doc/src/sgml/ref/comment.sgml\tThu Mar 7 11:35:33 2002\n> --- pgsqldomain/doc/src/sgml/ref/comment.sgml\tThu Mar 7 22:24:23 2002\n> ***************\n> *** 25,31 ****\n> <synopsis>\n> COMMENT ON\n> [\n> ! 
[ DATABASE | INDEX | RULE | SEQUENCE | TABLE | TYPE | VIEW ] <replaceable class=\"PARAMETER\">object_name</replaceable> |\n> COLUMN <replaceable class=\"PARAMETER\">table_name</replaceable>.<replaceable class=\"PARAMETER\">column_name</replaceable> |\n> AGGREGATE <replaceable class=\"PARAMETER\">agg_name</replaceable> (<replaceable class=\"PARAMETER\">agg_type</replaceable>) |\n> FUNCTION <replaceable class=\"PARAMETER\">func_name</replaceable> (<replaceable class=\"PARAMETER\">arg1</replaceable>, <replaceable class=\"PARAMETER\">arg2</replaceable>, ...) |\n> --- 25,31 ----\n> <synopsis>\n> COMMENT ON\n> [\n> ! [ DATABASE | DOMAIN | INDEX | RULE | SEQUENCE | TABLE | TYPE | VIEW ] <replaceable class=\"PARAMETER\">object_name</replaceable> |\n> COLUMN <replaceable class=\"PARAMETER\">table_name</replaceable>.<replaceable class=\"PARAMETER\">column_name</replaceable> |\n> AGGREGATE <replaceable class=\"PARAMETER\">agg_name</replaceable> (<replaceable class=\"PARAMETER\">agg_type</replaceable>) |\n> FUNCTION <replaceable class=\"PARAMETER\">func_name</replaceable> (<replaceable class=\"PARAMETER\">arg1</replaceable>, <replaceable class=\"PARAMETER\">arg2</replaceable>, ...) 
|\n> ***************\n> *** 33,39 ****\n> TRIGGER <replaceable class=\"PARAMETER\">trigger_name</replaceable> ON <replaceable class=\"PARAMETER\">table_name</replaceable>\n> ] IS <replaceable class=\"PARAMETER\">'text'</replaceable>\n> </synopsis>\n> !\n> <refsect2 id=\"R2-SQL-COMMENT-1\">\n> <refsect2info>\n> <date>1999-10-25</date>\n> --- 33,39 ----\n> TRIGGER <replaceable class=\"PARAMETER\">trigger_name</replaceable> ON <replaceable class=\"PARAMETER\">table_name</replaceable>\n> ] IS <replaceable class=\"PARAMETER\">'text'</replaceable>\n> </synopsis>\n> !\n> <refsect2 id=\"R2-SQL-COMMENT-1\">\n> <refsect2info>\n> <date>1999-10-25</date>\n> ***************\n> *** 64,70 ****\n> </variablelist>\n> </para>\n> </refsect2>\n> !\n> <refsect2 id=\"R2-SQL-COMMENT-2\">\n> <refsect2info>\n> <date>1998-09-08</date>\n> --- 64,70 ----\n> </variablelist>\n> </para>\n> </refsect2>\n> !\n> <refsect2 id=\"R2-SQL-COMMENT-2\">\n> <refsect2info>\n> <date>1998-09-08</date>\n> ***************\n> *** 99,105 ****\n> </title>\n> <para>\n> <command>COMMENT</command> stores a comment about a database object.\n> ! Comments can be\n> easily retrieved with <command>psql</command>'s\n> <command>\\dd</command>, <command>\\d+</command>, or <command>\\l+</command>\n> commands. Other user interfaces to retrieve comments can be built atop\n> --- 99,105 ----\n> </title>\n> <para>\n> <command>COMMENT</command> stores a comment about a database object.\n> ! Comments can be\n> easily retrieved with <command>psql</command>'s\n> <command>\\dd</command>, <command>\\d+</command>, or <command>\\l+</command>\n> commands. 
Other user interfaces to retrieve comments can be built atop\n> ***************\n> *** 141,146 ****\n> --- 141,147 ----\n>\n> <programlisting>\n> COMMENT ON DATABASE my_database IS 'Development Database';\n> + COMMENT ON DOMAIN my_domain IS 'Domains are like abstracted fields';\n\nThis comment describes domains in general, not a specific domain.\n\n> COMMENT ON INDEX my_index IS 'Enforces uniqueness on employee id';\n> COMMENT ON RULE my_rule IS 'Logs UPDATES of employee records';\n> COMMENT ON SEQUENCE my_sequence IS 'Used to generate primary keys';\n> ***************\n> *** 155,166 ****\n> </programlisting>\n> </para>\n> </refsect1>\n> !\n> <refsect1 id=\"R1-SQL-COMMENT-3\">\n> <title>\n> Compatibility\n> </title>\n> !\n> <refsect2 id=\"R2-SQL-COMMENT-4\">\n> <refsect2info>\n> <date>1998-09-08</date>\n> --- 156,167 ----\n> </programlisting>\n> </para>\n> </refsect1>\n> !\n> <refsect1 id=\"R1-SQL-COMMENT-3\">\n> <title>\n> Compatibility\n> </title>\n> !\n> <refsect2 id=\"R2-SQL-COMMENT-4\">\n> <refsect2info>\n> <date>1998-09-08</date>\n> diff -rc pgsql.orig/doc/src/sgml/reference.sgml pgsqldomain/doc/src/sgml/reference.sgml\n> *** pgsql.orig/doc/src/sgml/reference.sgml\tThu Mar 7 11:35:32 2002\n> --- pgsqldomain/doc/src/sgml/reference.sgml\tThu Mar 7 22:24:23 2002\n> ***************\n> *** 61,66 ****\n> --- 61,67 ----\n> &createAggregate;\n> &createConstraint;\n> &createDatabase;\n> + &createDomain;\n> &createFunction;\n> &createGroup;\n> &createIndex;\n> ***************\n> *** 78,83 ****\n> --- 79,85 ----\n> &delete;\n> &dropAggregate;\n> &dropDatabase;\n> + &dropDomain;\n> &dropFunction;\n> &dropGroup;\n> &dropIndex;\n> ***************\n> *** 115,121 ****\n> &unlisten;\n> &update;\n> &vacuum;\n> !\n> </reference>\n>\n> <!--\n> --- 117,123 ----\n> &unlisten;\n> &update;\n> &vacuum;\n> !\n> </reference>\n>\n> <!--\n> diff -rc pgsql.orig/src/backend/catalog/heap.c pgsqldomain/src/backend/catalog/heap.c\n> *** pgsql.orig/src/backend/catalog/heap.c\tThu Mar 7 
11:35:33 2002\n> --- pgsqldomain/src/backend/catalog/heap.c\tThu Mar 7 22:24:23 2002\n> ***************\n> *** 49,54 ****\n> --- 49,55 ----\n> #include \"optimizer/planmain.h\"\n> #include \"optimizer/prep.h\"\n> #include \"optimizer/var.h\"\n> + #include \"parser/parse_coerce.h\"\n> #include \"parser/parse_expr.h\"\n> #include \"parser/parse_relation.h\"\n> #include \"parser/parse_target.h\"\n> ***************\n> *** 698,707 ****\n> \t\t\t \"oidin\",\t\t\t/* receive procedure */\n> \t\t\t \"oidout\",\t\t/* send procedure */\n> \t\t\t NULL,\t\t\t/* array element type - irrelevant */\n> \t\t\t NULL,\t\t\t/* default type value - none */\n> \t\t\t true,\t\t\t/* passed by value */\n> \t\t\t 'i',\t\t\t\t/* default alignment - same as for OID */\n> ! \t\t\t 'p');\t\t\t/* Not TOASTable */\n> }\n>\n> /* --------------------------------\n> --- 699,713 ----\n> \t\t\t \"oidin\",\t\t\t/* receive procedure */\n> \t\t\t \"oidout\",\t\t/* send procedure */\n> \t\t\t NULL,\t\t\t/* array element type - irrelevant */\n> + \t\t\t NULL,\t\t\t/* baseType Name -- typically for domaains */\n\nspello\n\n> \t\t\t NULL,\t\t\t/* default type value - none */\n> + \t\t\t NULL,\t\t\t/* default type binary representation */\n> \t\t\t true,\t\t\t/* passed by value */\n> \t\t\t 'i',\t\t\t\t/* default alignment - same as for OID */\n> ! \t\t\t 'p',\t\t\t\t/* Not TOASTable */\n> ! \t\t\t -1,\t\t\t\t/* Type mod length */\n> ! \t\t\t 0,\t\t\t\t/* array dimensions for typBaseType */\n> ! 
\t\t\t false);\t\t\t/* Type NOT NULL */\n> }\n>\n> /* --------------------------------\n> ***************\n> *** 1584,1589 ****\n> --- 1590,1599 ----\n> \tint\t\t\tnumchecks;\n> \tList\t *listptr;\n>\n> + \t/* Probably shouldn't be null by default */\n> + \tNode\t *expr = NULL;\n> +\n> +\n> \t/*\n> \t * Get info about existing constraints.\n> \t */\n> ***************\n> *** 1614,1681 ****\n> \tforeach(listptr, rawColDefaults)\n> \t{\n> \t\tRawColumnDefault *colDef = (RawColumnDefault *) lfirst(listptr);\n> - \t\tNode\t *expr;\n> - \t\tOid\t\t\ttype_id;\n>\n> - \t\tAssert(colDef->raw_default != NULL);\n>\n> ! \t\t/*\n> ! \t\t * Transform raw parsetree to executable expression.\n> ! \t\t */\n> ! \t\texpr = transformExpr(pstate, colDef->raw_default, EXPR_COLUMN_FIRST);\n>\n> ! \t\t/*\n> ! \t\t * Make sure default expr does not refer to any vars.\n> ! \t\t */\n> ! \t\tif (contain_var_clause(expr))\n> ! \t\t\telog(ERROR, \"cannot use column references in DEFAULT clause\");\n> !\n> ! \t\t/*\n> ! \t\t * No subplans or aggregates, either...\n> ! \t\t */\n> ! \t\tif (contain_subplans(expr))\n> ! \t\t\telog(ERROR, \"cannot use subselects in DEFAULT clause\");\n> ! \t\tif (contain_agg_clause(expr))\n> ! \t\t\telog(ERROR, \"cannot use aggregate functions in DEFAULT clause\");\n> !\n> ! \t\t/*\n> ! \t\t * Check that it will be possible to coerce the expression to the\n> ! \t\t * column's type. We store the expression without coercion,\n> ! \t\t * however, to avoid premature coercion in cases like\n> ! \t\t *\n> ! \t\t * CREATE TABLE tbl (fld datetime DEFAULT 'now'::text);\n> ! \t\t *\n> ! \t\t * NB: this should match the code in optimizer/prep/preptlist.c that\n> ! \t\t * will actually do the coercion, to ensure we don't accept an\n> ! \t\t * unusable default expression.\n> ! \t\t */\n> ! \t\ttype_id = exprType(expr);\n> ! \t\tif (type_id != InvalidOid)\n> ! \t\t{\n> ! \t\t\tForm_pg_attribute atp = rel->rd_att->attrs[colDef->attnum - 1];\n> !\n> ! 
\t\t\tif (type_id != atp->atttypid)\n> ! \t\t\t{\n> ! \t\t\t\tif (CoerceTargetExpr(NULL, expr, type_id,\n> ! \t\t\t\t\t\t\t\t atp->atttypid, atp->atttypmod) == NULL)\n> ! \t\t\t\t\telog(ERROR, \"Column \\\"%s\\\" is of type %s\"\n> ! \t\t\t\t\t\t \" but default expression is of type %s\"\n> ! \t\t\t\t\t\"\\n\\tYou will need to rewrite or cast the expression\",\n> ! \t\t\t\t\t\t NameStr(atp->attname),\n> ! \t\t\t\t\t\t format_type_be(atp->atttypid),\n> ! \t\t\t\t\t\t format_type_be(type_id));\n> ! \t\t\t}\n> ! \t\t}\n> !\n> ! \t\t/*\n> ! \t\t * Might as well try to reduce any constant expressions.\n> ! \t\t */\n> ! \t\texpr = eval_const_expressions(expr);\n> !\n> ! \t\t/*\n> ! \t\t * Must fix opids, in case any operators remain...\n> ! \t\t */\n> ! \t\tfix_opids(expr);\n>\n> \t\t/*\n> \t\t * OK, store it.\n> --- 1624,1636 ----\n> \tforeach(listptr, rawColDefaults)\n> \t{\n> \t\tRawColumnDefault *colDef = (RawColumnDefault *) lfirst(listptr);\n>\n>\n> ! \t\tForm_pg_attribute atp = rel->rd_att->attrs[colDef->attnum - 1];\n>\n> ! \t\texpr = cookDefault(pstate, colDef->raw_default\n> ! \t\t\t\t\t\t, atp->atttypid, atp->atttypmod\n> ! 
\t\t\t\t\t\t, NameStr(atp->attname));\n>\n> \t\t/*\n> \t\t * OK, store it.\n> ***************\n> *** 1891,1896 ****\n> --- 1846,1933 ----\n> \theap_freetuple(reltup);\n> \theap_close(relrel, RowExclusiveLock);\n> }\n> +\n> + /*\n> + * Take a raw default and convert it to a cooked format ready for\n> + * storage.\n> + *\n> + * Parse state, attypid, attypmod and attname are required for\n> + * CoerceTargetExpr() and more importantly transformExpr().\n> + */\n> + Node *\n> + cookDefault(ParseState *pstate\n> + \t\t\t, Node *raw_default\n> + \t\t\t, Oid atttypid\n> + \t\t\t, int32 atttypmod\n> + \t\t\t, char *attname) {\n\nStick to the formatting please.\n\n> +\n> + \tOid\t\t\ttype_id;\n> + \tNode\t\t*expr;\n> +\n> + \tAssert(raw_default != NULL);\n> +\n> + \t/*\n> + \t * Transform raw parsetree to executable expression.\n> + \t */\n> + \texpr = transformExpr(pstate, raw_default, EXPR_COLUMN_FIRST);\n> +\n> + \t/*\n> + \t * Make sure default expr does not refer to any vars.\n> + \t */\n> + \tif (contain_var_clause(expr))\n> + \t\telog(ERROR, \"cannot use column references in DEFAULT clause\");\n> +\n> + \t/*\n> + \t * No subplans or aggregates, either...\n> + \t */\n> + \tif (contain_subplans(expr))\n> + \t\telog(ERROR, \"cannot use subselects in DEFAULT clause\");\n> + \tif (contain_agg_clause(expr))\n> + \t\telog(ERROR, \"cannot use aggregate functions in DEFAULT clause\");\n> +\n> + \t/*\n> + \t * Check that it will be possible to coerce the expression to the\n> + \t * column's type. 
We store the expression without coercion,\n> + \t * however, to avoid premature coercion in cases like\n> + \t *\n> + \t * CREATE TABLE tbl (fld datetime DEFAULT 'now'::text);\n> + \t *\n> + \t * NB: this should match the code in optimizer/prep/preptlist.c that\n> + \t * will actually do the coercion, to ensure we don't accept an\n> + \t * unusable default expression.\n> + \t */\n> + \ttype_id = exprType(expr);\n> + \tif (type_id != InvalidOid && atttypid != InvalidOid) {\n> + \t\tif (type_id != atttypid) {\n> +\n> + \t\t\t/* Try coercing to the base type of the domain if available */\n> + \t\t\tif (CoerceTargetExpr(pstate, expr, type_id,\n> + \t\t\t\t\t\t\t\t getBaseType(atttypid),\n> + \t\t\t\t\t\t\t\t atttypmod) == NULL) {\n> +\n> + \t\t\t\telog(ERROR, \"Column \\\"%s\\\" is of type %s\"\n> + \t\t\t\t\t\" but default expression is of type %s\"\n> + \t\t\t\t\t\"\\n\\tYou will need to rewrite or cast the expression\",\n> + \t\t\t\t\t attname,\n> + \t\t\t\t\t format_type_be(atttypid),\n> + \t\t\t\t\t format_type_be(type_id));\n> + \t\t\t}\n> + \t\t}\n> + \t}\n> +\n> + \t/*\n> + \t * Might as well try to reduce any constant expressions.\n> + \t */\n> + \texpr = eval_const_expressions(expr);\n> +\n> + \t/*\n> + \t * Must fix opids, in case any operators remain...\n> + \t */\n> + \tfix_opids(expr);\n> +\n> + \treturn(expr);\n> + }\n> +\n>\n> static void\n> RemoveAttrDefaults(Relation rel)\n\n> diff -rc pgsql.orig/src/backend/commands/creatinh.c pgsqldomain/src/backend/commands/creatinh.c\n> *** pgsql.orig/src/backend/commands/creatinh.c\tThu Mar 7 11:35:34 2002\n> --- pgsqldomain/src/backend/commands/creatinh.c\tThu Mar 7 23:16:06 2002\n> ***************\n> *** 39,45 ****\n> static void StoreCatalogInheritance(Oid relationId, List *supers);\n> static int\tfindAttrByName(const char *attributeName, List *schema);\n> static void setRelhassubclassInRelation(Oid relationId, bool relhassubclass);\n> !\n>\n> /* 
----------------------------------------------------------------\n> *\t\tDefineRelation\n> --- 39,45 ----\n> static void StoreCatalogInheritance(Oid relationId, List *supers);\n> static int\tfindAttrByName(const char *attributeName, List *schema);\n> static void setRelhassubclassInRelation(Oid relationId, bool relhassubclass);\n> ! static List *MergeDomainAttributes(List *schema);\n>\n> /* ----------------------------------------------------------------\n> *\t\tDefineRelation\n> ***************\n> *** 70,75 ****\n> --- 70,82 ----\n> \tStrNCpy(relname, stmt->relname, NAMEDATALEN);\n>\n> \t/*\n> + \t * Inherit domain attributes into the known columns before table inheritance\n> + \t * applies it's changes otherwise we risk adding double constraints\n> + \t * to a domain thats inherited.\n> + \t */\n> + \tschema = MergeDomainAttributes(schema);\n> +\n> + \t/*\n> \t * Look up inheritance ancestors and generate relation schema,\n> \t * including inherited attributes.\n> \t */\n> ***************\n> *** 235,240 ****\n> --- 242,307 ----\n> {\n> \tAssertArg(name);\n> \theap_truncate(name);\n> + }\n> +\n> +\n> + /*\n> + * MergeDomainAttributes\n> + * Returns a new table schema with the constraints, types, and other\n> + * attributes of the domain resolved for fields using the domain as\n> + *\t\ttheir type.\n\nI didn't know we had schemas yet. You should probably not overload that\nterm to mean \"a list of database objects\".\n\n> + *\n> + * Defaults are pulled out by the table attribute as required, similar to\n> + * how all types defaults are processed.\n> + */\n> + static List *\n> + MergeDomainAttributes(List *schema)\n> + {\n> + \tList\t *entry;\n> +\n> + \t/*\n> + \t * Loop through the table elements supplied. 
These should\n> + \t * never include inherited domains else they'll be\n> + \t * double (or more) processed.\n> + \t */\n> + \tforeach(entry, schema)\n> + \t{\n> + \t\tColumnDef *coldef = lfirst(entry);\n> + \t\tHeapTuple tuple;\n> + \t\tForm_pg_type typeTup;\n> +\n> +\n> + \t\ttuple = SearchSysCache(TYPENAME,\n> + \t\t\t\t\t\t\t CStringGetDatum(coldef->typename->name),\n> + \t\t\t\t\t\t\t 0,0,0);\n> +\n> + \t\tif (!HeapTupleIsValid(tuple))\n> + \t\t\telog(ERROR, \"MergeDomainAttributes: Type %s does not exist\",\n> + \t\t\t\t coldef->typename->name);\n> +\n> + \t\ttypeTup = (Form_pg_type) GETSTRUCT(tuple);\n> + \t\tif (typeTup->typtype == 'd') {\n> + \t\t\t/*\n> + \t\t\t * This is a domain, lets force the properties of the domain on to\n> + \t\t\t * the new column.\n> + \t\t\t */\n> +\n> + \t\t\t/* Enforce the typmod value */\n> + \t\t\tcoldef->typename->typmod = typeTup->typmod;\n> +\n> + \t\t\t/* Enforce type NOT NULL || column definition NOT NULL -> NOT NULL */\n> + \t\t\tcoldef->is_not_null |= typeTup->typnotnull;\n> +\n> + \t\t\t/* Enforce the element type in the event the domain is an array\n> + \t\t\t *\n> + \t\t\t * BUG: How do we fill out arrayBounds and attrname from typelem and typNDimms?\n> + \t\t\t */\n> +\n> + \t\t}\n> + \t\tReleaseSysCache(tuple);\n> + \t}\n> +\n> + \treturn schema;\n> }\n>\n> /*----------\n> diff -rc pgsql.orig/src/backend/commands/define.c pgsqldomain/src/backend/commands/define.c\n> *** pgsql.orig/src/backend/commands/define.c\tThu Mar 7 11:35:34 2002\n> --- pgsqldomain/src/backend/commands/define.c\tThu Mar 7 22:24:23 2002\n> ***************\n> *** 40,45 ****\n> --- 40,46 ----\n>\n> #include \"access/heapam.h\"\n> #include \"catalog/catname.h\"\n> + #include \"catalog/heap.h\"\n> #include \"catalog/pg_aggregate.h\"\n> #include \"catalog/pg_language.h\"\n> #include \"catalog/pg_operator.h\"\n> ***************\n> *** 476,481 ****\n> --- 477,798 ----\n> }\n>\n> /*\n> + * DefineDomain\n> + *\t\tRegisters a new domain.\n> + */\n> + 
void\n> + DefineDomain(CreateDomainStmt *stmt)\n> + {\n> + \tint16\t\tinternalLength = -1;\t/* int2 */\n> + \tint16\t\texternalLength = -1;\t/* int2 */\n> + \tchar\t *inputName = NULL;\n> + \tchar\t *outputName = NULL;\n> + \tchar\t *sendName = NULL;\n> + \tchar\t *receiveName = NULL;\n> +\n> + \t/*\n> + \t * Domains store the external representation in defaultValue\n> + \t * and the interal Node representation in defaultValueBin\n> + \t */\n> + \tchar\t *defaultValue = NULL;\n> + \tchar\t *defaultValueBin = NULL;\n> +\n> + \tbool\t\tbyValue = false;\n> + \tchar\t\tdelimiter = DEFAULT_TYPDELIM;\n> + \tchar\t\talignment = 'i';\t/* default alignment */\n> + \tchar\t\tstorage = 'p';\t/* default TOAST storage method */\n> + \tchar\t\ttyptype;\n> + \tDatum\t\tdatum;\n> + \tbool\t\ttypNotNull = false;\n> + \tchar\t\t*elemName = NULL;\n> + \tint32\t\ttypNDims = 0;\t/* No array dimensions by default */\n> +\n> + \tbool\t\tisnull;\n> + \tRelation\tpg_type_rel;\n> + \tTupleDesc\tpg_type_dsc;\n> + \tHeapTuple\ttypeTup;\n> + \tchar\t *typeName = stmt->typename->name;\n> +\n> + \tList\t *listptr;\n> + \tList\t *schema = stmt->constraints;\n> +\n> + \t/*\n> + \t * Domainnames, unlike typenames don't need to account for the '_'\n> + \t * prefix. 
So they can be one character longer.\n> + \t */\n> + \tif (strlen(stmt->domainname) > (NAMEDATALEN - 1))\n> + \t\telog(ERROR, \"CREATE DOMAIN: domain names must be %d characters or less\",\n> + \t\t\t NAMEDATALEN - 1);\n> +\n> +\n> + \t/* Test for existing Domain (or type) of that name */\n> + \ttypeTup = SearchSysCache( TYPENAME\n> + \t\t\t\t\t\t\t, PointerGetDatum(stmt->domainname)\n> + \t\t\t\t\t\t\t, 0, 0, 0\n> + \t\t\t\t\t\t\t);\n> +\n> + \tif (HeapTupleIsValid(typeTup))\n> + \t{\n> + \t\telog(ERROR, \"CREATE DOMAIN: domain or type %s already exists\",\n> + \t\t\t stmt->domainname);\n> + \t}\n> +\n> + \t/*\n> + \t * Get the information about old types\n> + \t */\n> + \tpg_type_rel = heap_openr(TypeRelationName, RowExclusiveLock);\n> + \tpg_type_dsc = RelationGetDescr(pg_type_rel);\n> +\n> +\n> + \t/*\n> + \t * When the type is an array for some reason we don't actually receive\n> + \t * the name here. We receive the base types name. Lets set Dims while\n> + \t * were at it.\n> + \t */\n> + \tif (stmt->typename->arrayBounds > 0) {\n> + \t\ttypeName = makeArrayTypeName(stmt->typename->name);\n> +\n> + \t\ttypNDims = length(stmt->typename->arrayBounds);\n> + \t}\n> +\n> +\n> + \ttypeTup = SearchSysCache( TYPENAME\n> + \t\t\t\t\t\t\t, PointerGetDatum(typeName)\n> + \t\t\t\t\t\t\t, 0, 0, 0\n> + \t\t\t\t\t\t\t);\n> +\n> + \tif (!HeapTupleIsValid(typeTup))\n> + \t{\n> + \t\telog(ERROR, \"CREATE DOMAIN: type %s does not exist\",\n> + \t\t\t stmt->typename->name);\n> + \t}\n> +\n> +\n> + \t/* Check that this is a basetype */\n> + \ttyptype = DatumGetChar(heap_getattr(typeTup, Anum_pg_type_typtype, pg_type_dsc, &isnull));\n> + \tAssert(!isnull);\n> +\n> + \t/*\n> + \t * What we really don't want is domains of domains. 
This could cause all sorts\n> + \t * of neat issues if we allow that.\n> + \t *\n> + \t * With testing, we may determine complex types should be allowed\n> + \t */\n> + \tif (typtype != 'b') {\n> + \t\telog(ERROR, \"DefineDomain: %s is not a basetype\", stmt->typename->name);\n> + \t}\n> +\n> + \t/* passed by value */\n> + \tbyValue = \t\t\tDatumGetBool(heap_getattr(typeTup, Anum_pg_type_typbyval, pg_type_dsc, &isnull));\n> + \tAssert(!isnull);\n\nYou don't have to use heap_getattr here. You can use\n\n byValue = ((Form_pg_type) GETSTRUCT(typeTup))->typbyval\n\nSame for all the other ones that are fixed-length.\n\n> +\n> + \t/* Required Alignment */\n> + \talignment = \t\tDatumGetChar(heap_getattr(typeTup, Anum_pg_type_typalign, pg_type_dsc, &isnull));\n> + \tAssert(!isnull);\n> +\n> + \t/* Storage Length */\n> + \tinternalLength = \tDatumGetInt16(heap_getattr(typeTup, Anum_pg_type_typlen, pg_type_dsc, &isnull));\n> + \tAssert(!isnull);\n> +\n> + \t/* External Length (unused) */\n> + \texternalLength = \tDatumGetInt16(heap_getattr(typeTup, Anum_pg_type_typprtlen, pg_type_dsc, &isnull));\n> + \tAssert(!isnull);\n> +\n> + \t/* Array element Delimiter */\n> + \tdelimiter = \t\tDatumGetChar(heap_getattr(typeTup, Anum_pg_type_typdelim, pg_type_dsc, &isnull));\n> + \tAssert(!isnull);\n> +\n> + \t/* Input Function Name */\n> + \tdatum = \t\t\theap_getattr(typeTup, Anum_pg_type_typinput, pg_type_dsc, &isnull);\n> + \tAssert(!isnull);\n> +\n> + \tinputName = \t\tDatumGetCString(DirectFunctionCall1(regprocout, datum));\n> +\n> + \t/* Output Function Name */\n> + \tdatum = \t\t\theap_getattr(typeTup, Anum_pg_type_typoutput, pg_type_dsc, &isnull);\n> + \tAssert(!isnull);\n> +\n> + \toutputName = \t\tDatumGetCString(DirectFunctionCall1(regprocout, datum));\n> +\n> + \t/* ReceiveName */\n> + \tdatum = \t\t\theap_getattr(typeTup, Anum_pg_type_typreceive, pg_type_dsc, &isnull);\n> + \tAssert(!isnull);\n> +\n> + \treceiveName = \t\tDatumGetCString(DirectFunctionCall1(regprocout, 
datum));\n> +\n> + \t/* SendName */\n> + \tdatum = \t\t\theap_getattr(typeTup, Anum_pg_type_typsend, pg_type_dsc, &isnull);\n> + \tAssert(!isnull);\n> +\n> + \tsendName = \t\t\tDatumGetCString(DirectFunctionCall1(regprocout, datum));\n> +\n> + \t/* TOAST Strategy */\n> + \tstorage = \t\t\tDatumGetChar(heap_getattr(typeTup, Anum_pg_type_typstorage, pg_type_dsc, &isnull));\n> + \tAssert(!isnull);\n> +\n> + \t/* Inherited default value */\n> + \tdatum = \t\t\theap_getattr(typeTup, Anum_pg_type_typdefault, pg_type_dsc, &isnull);\n> + \tif (!isnull) {\n> + \t\tdefaultValue = \tDatumGetCString(DirectFunctionCall1(textout, datum));\n> + \t}\n> +\n> + \t/*\n> + \t * Pull out the typelem name of the parent OID.\n> + \t *\n> + \t * This is what enables us to make a domain of an array\n> + \t */\n> + \tdatum = \t\t\theap_getattr(typeTup, Anum_pg_type_typelem, pg_type_dsc, &isnull);\n> + \tAssert(!isnull);\n> +\n> + \tif (DatumGetObjectId(datum) != InvalidOid) {\n> + \t\tHeapTuple tup;\n> +\n> + \t\ttup = SearchSysCache( TYPEOID\n> + \t\t\t\t\t\t\t, datum\n> + \t\t\t\t\t\t\t, 0, 0, 0\n> + \t\t\t\t\t\t\t);\n> +\n> + \t\telemName = NameStr(((Form_pg_type) GETSTRUCT(tup))->typname);\n> +\n> + \t\tReleaseSysCache(tup);\n> + \t}\n> +\n> +\n> + \t/*\n> + \t * Run through constraints manually avoids the additional\n> + \t * processing conducted by DefineRelation() and friends.\n> + \t *\n> + \t * Besides, we don't want any constraints to be cooked. We'll\n> + \t * do that when the table is created via MergeDomainAttributes().\n> + \t */\n> + \tforeach(listptr, schema)\n> + \t{\n> + \t\tbool nullDefined = false;\n> + \t\tNode\t *expr;\n> + \t\tConstraint *colDef = lfirst(listptr);\n> +\n> + \t\t/* Used for the statement transformation */\n> + \t\tParseState *pstate;\n> +\n> + \t\t/*\n> + \t\t * Create a dummy ParseState and insert the target relation as its\n> + \t\t * sole rangetable entry. 
We need a ParseState for transformExpr.\n> + \t\t */\n> + \t\tpstate = make_parsestate(NULL);\n> +\n> + \t\tswitch(colDef->contype) {\n> + \t\t\t/*\n> + \t \t\t * The inherited default value may be overridden by the user\n> + \t\t\t * with the DEFAULT <expr> statement.\n> + \t\t\t *\n> + \t \t\t * We have to search the entire constraint tree returned as we\n> + \t\t\t * don't want to cook or fiddle too much.\n> + \t\t\t */\n> + \t\t\tcase CONSTR_DEFAULT:\n> +\n> + \t\t\t\t/*\n> + \t\t\t\t * Cook the colDef->raw_expr into an expression to ensure\n> + \t\t\t\t * that it can be done. We store the text version of the\n> + \t\t\t\t * raw value.\n> + \t\t\t\t *\n> + \t\t\t\t * Note: Name is strictly for error message\n> + \t\t\t\t */\n> + \t\t\t\texpr = cookDefault(pstate, colDef->raw_expr\n> + \t\t\t\t\t\t\t\t, typeTup->t_data->t_oid\n> + \t\t\t\t\t\t\t\t, stmt->typename->typmod\n> + \t\t\t\t\t\t\t\t, stmt->typename->name);\n> +\n> + \t\t\t\t/* Binary default required */\n> + \t\t\t\tdefaultValue = deparse_expression(expr,\n> + \t\t\t\t\t\t\t\tdeparse_context_for(stmt->domainname,\n> + \t\t\t\t\t\t\t\t\t\t\t\t\tInvalidOid),\n> + \t\t\t\t\t\t\t\t\t\t\t\t false);\n> +\n> + \t\t\t\tdefaultValueBin = nodeToString(expr);\n> +\n> + \t\t\t\tbreak;\n> +\n> + \t\t\t/*\n> + \t\t\t * Find the NULL constraint.\n> + \t\t\t */\n> + \t\t\tcase CONSTR_NOTNULL:\n> + \t\t\t\tif (nullDefined) {\n> + \t\t\t\t\telog(ERROR, \"CREATE DOMAIN has conflicting NULL / NOT NULL constraint\");\n> + \t\t\t\t} else {\n> + \t\t\t\t\ttypNotNull = true;\n> + \t\t\t\t\tnullDefined = true;\n> + \t\t\t\t}\n> +\n> + \t\t \t\tbreak;\n> +\n> + \t\t\tcase CONSTR_NULL:\n> + \t\t\t\tif (nullDefined) {\n> + \t\t\t\t\telog(ERROR, \"CREATE DOMAIN has conflicting NULL / NOT NULL constraint\");\n> + \t\t\t\t} else {\n> + \t\t\t\t\ttypNotNull = false;\n> + \t\t\t\t\tnullDefined = true;\n> + \t\t\t\t}\n> +\n> + \t\t \t\tbreak;\n> +\n> + \t\t \tcase CONSTR_UNIQUE:\n> + \t\t \t\telog(ERROR, \"CREATE DOMAIN / UNIQUE 
indecies not supported\");\n> + \t\t \t\tbreak;\n> +\n> + \t\t \tcase CONSTR_PRIMARY:\n> + \t\t \t\telog(ERROR, \"CREATE DOMAIN / PRIMARY KEY indecies not supported\");\n> + \t\t \t\tbreak;\n> +\n> +\n> + \t\t \tcase CONSTR_CHECK:\n> +\n> + \t\t \t\telog(ERROR, \"defineDomain: CHECK Constraints not supported\");\n> + \t\t \t\tbreak;\n> +\n> + \t\t \tcase CONSTR_ATTR_DEFERRABLE:\n> + \t\t \tcase CONSTR_ATTR_NOT_DEFERRABLE:\n> + \t\t \tcase CONSTR_ATTR_DEFERRED:\n> + \t\t \tcase CONSTR_ATTR_IMMEDIATE:\n> + \t\t \t\telog(ERROR, \"defineDomain: DEFERRABLE, NON DEFERRABLE, DEFERRED and IMMEDIATE not supported\");\n> + \t\t \t\tbreak;\n> + \t\t}\n> +\n> + \t}\n> +\n> + \t/*\n> + \t * Have TypeCreate do all the real work.\n> + \t */\n> + \tTypeCreate(stmt->domainname,\t/* type name */\n> + \t\t\t InvalidOid,\t\t\t/* preassigned type oid (not done here) */\n> + \t\t\t InvalidOid,\t\t\t/* relation oid (n/a here) */\n> + \t\t\t internalLength,\t\t/* internal size */\n> + \t\t\t externalLength,\t\t/* external size */\n> + \t\t\t 'd',\t\t\t\t\t/* type-type (domain type) */\n> + \t\t\t delimiter,\t\t\t/* array element delimiter */\n> + \t\t\t inputName,\t\t\t/* input procedure */\n> + \t\t\t outputName,\t\t\t/* output procedure */\n> + \t\t\t receiveName,\t\t\t/* receive procedure */\n> + \t\t\t sendName,\t\t\t/* send procedure */\n> + \t\t\t elemName,\t\t\t/* element type name */\n> + \t\t\t typeName,\t\t\t/* base type name */\n> + \t\t\t defaultValue,\t\t/* default type value */\n> + \t\t\t defaultValueBin,\t\t/* default type value */\n> + \t\t\t byValue,\t\t\t\t/* passed by value */\n> + \t\t\t alignment,\t\t\t/* required alignment */\n> + \t\t\t storage,\t\t\t\t/* TOAST strategy */\n> + \t\t\t stmt->typename->typmod, /* typeMod value */\n> + \t\t\t typNDims,\t\t\t/* Array dimensions for base type */\n> + \t\t\t typNotNull);\t/* Type NOT NULL */\n> +\n> + \t/*\n> + \t * Now we can clean up.\n> + \t */\n> + \tReleaseSysCache(typeTup);\n> + \theap_close(pg_type_rel, 
NoLock);\n> + }\n> +\n> +\n> + /*\n> * DefineType\n> *\t\tRegisters a new type.\n> */\n> ***************\n> *** 490,495 ****\n> --- 807,814 ----\n> \tchar\t *sendName = NULL;\n> \tchar\t *receiveName = NULL;\n> \tchar\t *defaultValue = NULL;\n> + \tchar\t *defaultValueBin = NULL;\n> + \tNode\t *defaultRaw = (Node *) NULL;\n> \tbool\t\tbyValue = false;\n> \tchar\t\tdelimiter = DEFAULT_TYPDELIM;\n> \tchar\t *shadow_type;\n> ***************\n> *** 531,537 ****\n> \t\telse if (strcasecmp(defel->defname, \"element\") == 0)\n> \t\t\telemName = defGetString(defel);\n> \t\telse if (strcasecmp(defel->defname, \"default\") == 0)\n> ! \t\t\tdefaultValue = defGetString(defel);\n> \t\telse if (strcasecmp(defel->defname, \"passedbyvalue\") == 0)\n> \t\t\tbyValue = true;\n> \t\telse if (strcasecmp(defel->defname, \"alignment\") == 0)\n> --- 850,856 ----\n> \t\telse if (strcasecmp(defel->defname, \"element\") == 0)\n> \t\t\telemName = defGetString(defel);\n> \t\telse if (strcasecmp(defel->defname, \"default\") == 0)\n> ! \t\t\tdefaultRaw = defel->arg;\n> \t\telse if (strcasecmp(defel->defname, \"passedbyvalue\") == 0)\n> \t\t\tbyValue = true;\n> \t\telse if (strcasecmp(defel->defname, \"alignment\") == 0)\n> ***************\n> *** 591,596 ****\n> --- 910,941 ----\n> \tif (outputName == NULL)\n> \t\telog(ERROR, \"Define: \\\"output\\\" unspecified\");\n>\n> +\n> + \tif (defaultRaw) {\n> + \t\tNode *expr;\n> + \t\tParseState *pstate;\n> +\n> + \t\t/*\n> + \t\t * Create a dummy ParseState and insert the target relation as its\n> + \t\t * sole rangetable entry. 
We need a ParseState for transformExpr.\n> + \t\t */\n> + \t\tpstate = make_parsestate(NULL);\n> +\n> + \t\texpr = cookDefault(pstate, defaultRaw,\n> + \t\t\t\t\t\t InvalidOid,\n> + \t\t\t\t\t\t -1,\n> + \t\t\t\t\t\t typeName);\n> +\n> + \t\t/* Binary default required */\n> + \t\tdefaultValue = deparse_expression(expr,\n> + \t\t\t\t\t\tdeparse_context_for(typeName,\n> + \t\t\t\t\t\t\t\t\t\t\tInvalidOid),\n> + \t\t\t\t\t\t\t\t\t\t false);\n> +\n> + \t\tdefaultValueBin = nodeToString(expr);\n> + \t}\n> +\n> +\n> \t/*\n> \t * now have TypeCreate do all the real work.\n> \t */\n> ***************\n> *** 606,615 ****\n> \t\t\t receiveName,\t\t/* receive procedure */\n> \t\t\t sendName,\t\t/* send procedure */\n> \t\t\t elemName,\t\t/* element type name */\n> \t\t\t defaultValue,\t/* default type value */\n> \t\t\t byValue,\t\t\t/* passed by value */\n> \t\t\t alignment,\t\t/* required alignment */\n> ! \t\t\t storage);\t\t/* TOAST strategy */\n>\n> \t/*\n> \t * When we create a base type (as opposed to a complex type) we need\n> --- 951,965 ----\n> \t\t\t receiveName,\t\t/* receive procedure */\n> \t\t\t sendName,\t\t/* send procedure */\n> \t\t\t elemName,\t\t/* element type name */\n> + \t\t\t NULL,\t\t\t/* base type name (Non-zero for domains) */\n> \t\t\t defaultValue,\t/* default type value */\n> + \t\t\t defaultValueBin,\t/* default type value (Binary form) */\n> \t\t\t byValue,\t\t\t/* passed by value */\n> \t\t\t alignment,\t\t/* required alignment */\n> ! \t\t\t storage,\t\t\t/* TOAST strategy */\n> ! \t\t\t -1,\t\t\t\t/* typMod (Domains only) */\n> ! \t\t\t 0,\t\t\t\t/* Array Dimensions of typbasetype */\n> ! 
\t\t\t 'f');\t\t\t/* Type NOT NULL */\n>\n> \t/*\n> \t * When we create a base type (as opposed to a complex type) we need\n> ***************\n> *** 632,641 ****\n> \t\t\t \"array_in\",\t\t/* receive procedure */\n> \t\t\t \"array_out\",\t\t/* send procedure */\n> \t\t\t typeName,\t\t/* element type name */\n> \t\t\t NULL,\t\t\t/* never a default type value */\n> \t\t\t false,\t\t\t/* never passed by value */\n> \t\t\t alignment,\t\t/* see above */\n> ! \t\t\t 'x');\t\t\t/* ARRAY is always toastable */\n>\n> \tpfree(shadow_type);\n> }\n> --- 982,996 ----\n> \t\t\t \"array_in\",\t\t/* receive procedure */\n> \t\t\t \"array_out\",\t\t/* send procedure */\n> \t\t\t typeName,\t\t/* element type name */\n> + \t\t\t NULL,\t\t\t/* base type name */\n> \t\t\t NULL,\t\t\t/* never a default type value */\n> + \t\t\t NULL,\t\t\t/* binary default isn't sent either */\n> \t\t\t false,\t\t\t/* never passed by value */\n> \t\t\t alignment,\t\t/* see above */\n> ! \t\t\t 'x',\t\t\t\t/* ARRAY is always toastable */\n> ! \t\t\t -1,\t\t\t\t/* typMod (Domains only) */\n> ! \t\t\t 0,\t\t\t\t/* Array dimensions of typbasetype */\n> ! 
\t\t\t 'f');\t\t\t/* Type NOT NULL */\n>\n> \tpfree(shadow_type);\n> }\n\n> diff -rc pgsql.orig/src/backend/nodes/copyfuncs.c pgsqldomain/src/backend/nodes/copyfuncs.c\n> *** pgsql.orig/src/backend/nodes/copyfuncs.c\tThu Mar 7 11:35:34 2002\n> --- pgsqldomain/src/backend/nodes/copyfuncs.c\tThu Mar 7 22:53:19 2002\n> ***************\n> *** 2227,2232 ****\n> --- 2227,2247 ----\n> \treturn newnode;\n> }\n>\n> + static CreateDomainStmt *\n> + _copyCreateDomainStmt(CreateDomainStmt *from)\n> + {\n> + \tCreateDomainStmt *newnode = makeNode(CreateDomainStmt);\n> +\n> + \tif (from->domainname)\n> + \t\tnewnode->domainname = pstrdup(from->domainname);\n> + \tif (from->typename)\n> + \t\tnewnode->typename = from->typename;\n\nThat's not a copy.\n\n> + \tif (from->constraints)\n> + \t\tnewnode->constraints = from->constraints;\n> +\n> + \treturn newnode;\n> + }\n> +\n> static CreatedbStmt *\n> _copyCreatedbStmt(CreatedbStmt *from)\n> {\n> ***************\n> *** 3026,3031 ****\n> --- 3041,3049 ----\n> \t\t\tbreak;\n> \t\tcase T_FuncWithArgs:\n> \t\t\tretval = _copyFuncWithArgs(from);\n> + \t\t\tbreak;\n> + \t\tcase T_CreateDomainStmt:\n> + \t\t\tretval = _copyCreateDomainStmt(from);\n> \t\t\tbreak;\n>\n> \t\tdefault:\n\n> diff -rc pgsql.orig/src/backend/parser/gram.y pgsqldomain/src/backend/parser/gram.y\n> *** pgsql.orig/src/backend/parser/gram.y\tThu Mar 7 11:35:35 2002\n> --- pgsqldomain/src/backend/parser/gram.y\tThu Mar 7 22:34:00 2002\n> ***************\n> *** 97,103 ****\n>\n> %}\n>\n> -\n> %union\n> {\n> \tint\t\t\t\t\tival;\n> --- 97,102 ----\n> ***************\n> *** 135,141 ****\n> \t\tClosePortalStmt, ClusterStmt, CommentStmt, ConstraintsSetStmt,\n> \t\tCopyStmt, CreateAsStmt, CreateGroupStmt, CreatePLangStmt,\n> \t\tCreateSchemaStmt, CreateSeqStmt, CreateStmt, CreateTrigStmt,\n> ! 
\t\tCreateUserStmt, CreatedbStmt, CursorStmt, DefineStmt, DeleteStmt,\n> \t\tDropGroupStmt, DropPLangStmt, DropSchemaStmt, DropStmt, DropTrigStmt,\n> \t\tDropUserStmt, DropdbStmt, ExplainStmt, FetchStmt,\n> \t\tGrantStmt, IndexStmt, InsertStmt, ListenStmt, LoadStmt, LockStmt,\n> --- 134,141 ----\n> \t\tClosePortalStmt, ClusterStmt, CommentStmt, ConstraintsSetStmt,\n> \t\tCopyStmt, CreateAsStmt, CreateGroupStmt, CreatePLangStmt,\n> \t\tCreateSchemaStmt, CreateSeqStmt, CreateStmt, CreateTrigStmt,\n> ! \t\tCreateUserStmt, CreateDomainStmt, CreatedbStmt, CursorStmt,\n\nAlphabetical order?\n\n> ! \t\tDefineStmt, DeleteStmt,\n> \t\tDropGroupStmt, DropPLangStmt, DropSchemaStmt, DropStmt, DropTrigStmt,\n> \t\tDropUserStmt, DropdbStmt, ExplainStmt, FetchStmt,\n> \t\tGrantStmt, IndexStmt, InsertStmt, ListenStmt, LoadStmt, LockStmt,\n> ***************\n> *** 289,294 ****\n> --- 289,296 ----\n> %type <list>\tconstraints_set_namelist\n> %type <boolean>\tconstraints_set_mode\n>\n> + %type <boolean> opt_as\n> +\n> /*\n> * If you make any token changes, remember to:\n> *\t\t- use \"yacc -d\" and update parse.h\n> ***************\n> *** 343,349 ****\n> \t\tWITHOUT\n>\n> /* Keywords (in SQL92 non-reserved words) */\n> ! %token\tCOMMITTED, SERIALIZABLE, TYPE_P\n>\n> /* Keywords for Postgres support (not in SQL92 reserved words)\n> *\n> --- 345,351 ----\n> \t\tWITHOUT\n>\n> /* Keywords (in SQL92 non-reserved words) */\n> ! 
%token\tCOMMITTED, SERIALIZABLE, TYPE_P, DOMAIN_P\n>\n> /* Keywords for Postgres support (not in SQL92 reserved words)\n> *\n> ***************\n> *** 446,451 ****\n> --- 448,454 ----\n> \t\t| CopyStmt\n> \t\t| CreateStmt\n> \t\t| CreateAsStmt\n> + \t\t| CreateDomainStmt\n> \t\t| CreateSchemaStmt\n> \t\t| CreateGroupStmt\n> \t\t| CreateSeqStmt\n> ***************\n> *** 776,783 ****\n> --- 779,789 ----\n> \t\t\t\t\tn->dbname = $3;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> + \t\t;\n>\n>\n> +\n> +\n> /*****************************************************************************\n> *\n> * Set PG internal variable\n> ***************\n> *** 1461,1467 ****\n> \t\t\t\t\tn->name = NULL;\n> \t\t\t\t\tif (exprIsNullConstant($2))\n> \t\t\t\t\t{\n> ! \t\t\t\t\t\t/* DEFAULT NULL should be reported as empty expr */\n> \t\t\t\t\t\tn->raw_expr = NULL;\n> \t\t\t\t\t}\n> \t\t\t\t\telse\n> --- 1467,1476 ----\n> \t\t\t\t\tn->name = NULL;\n> \t\t\t\t\tif (exprIsNullConstant($2))\n> \t\t\t\t\t{\n> ! \t\t\t\t\t\t/*\n> ! \t\t\t\t\t\t * DEFAULT NULL should be reported as empty expr\n> ! \t\t\t\t\t\t * Required for NOT NULL Domain overrides\n> ! \t\t\t\t\t\t */\n> \t\t\t\t\t\tn->raw_expr = NULL;\n> \t\t\t\t\t}\n> \t\t\t\t\telse\n> ***************\n> *** 2043,2055 ****\n> \t\t| def_list ',' def_elem\t\t\t\t{ $$ = lappend($1, $3); }\n> \t\t;\n>\n> ! def_elem: ColLabel '=' def_arg\n> \t\t\t\t{\n> \t\t\t\t\t$$ = makeNode(DefElem);\n> \t\t\t\t\t$$->defname = $1;\n> \t\t\t\t\t$$->arg = (Node *)$3;\n> \t\t\t\t}\n> ! \t\t| ColLabel\n> \t\t\t\t{\n> \t\t\t\t\t$$ = makeNode(DefElem);\n> \t\t\t\t\t$$->defname = $1;\n> --- 2052,2073 ----\n> \t\t| def_list ',' def_elem\t\t\t\t{ $$ = lappend($1, $3); }\n> \t\t;\n>\n> ! def_elem: DEFAULT '=' c_expr\n> ! \t\t\t\t{\n> ! \t\t\t\t\t$$ = makeNode(DefElem);\n> ! \t\t\t\t\t$$->defname = \"default\";\n> ! \t\t\t\t\tif (exprIsNullConstant($3))\n> ! \t\t\t\t\t\t$$->arg = (Node *)NULL;\n> ! \t\t\t\t\telse\n> ! \t\t\t\t\t\t$$->arg = $3;\n> ! \t\t\t\t}\n> ! 
\t\t| ColId '=' def_arg\n> \t\t\t\t{\n> \t\t\t\t\t$$ = makeNode(DefElem);\n> \t\t\t\t\t$$->defname = $1;\n> \t\t\t\t\t$$->arg = (Node *)$3;\n> \t\t\t\t}\n> ! \t\t| ColId\n> \t\t\t\t{\n> \t\t\t\t\t$$ = makeNode(DefElem);\n> \t\t\t\t\t$$->defname = $1;\n> ***************\n> *** 2078,2083 ****\n> --- 2096,2110 ----\n> \t\t\t\t\tDropStmt *n = makeNode(DropStmt);\n> \t\t\t\t\tn->removeType = $2;\n> \t\t\t\t\tn->names = $3;\n> + \t\t\t\t\tn->behavior = RESTRICT;\t\t/* Restricted by default */\n> + \t\t\t\t\t$$ = (Node *)n;\n> + \t\t\t\t}\n> + \t\t| DROP DOMAIN_P name_list drop_behavior\n> + \t\t\t\t{\n> + \t\t\t\t\tDropStmt *n = makeNode(DropStmt);\n> + \t\t\t\t\tn->removeType = DROP_DOMAIN_P;\n> + \t\t\t\t\tn->names = $3;\n> + \t\t\t\t\tn->behavior = $4;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> \t\t;\n> ***************\n> *** 2110,2116 ****\n> * The COMMENT ON statement can take different forms based upon the type of\n> * the object associated with the comment. The form of the statement is:\n> *\n> ! * COMMENT ON [ [ DATABASE | INDEX | RULE | SEQUENCE | TABLE | TYPE | VIEW ]\n> * <objname> | AGGREGATE <aggname> (<aggtype>) | FUNCTION\n> *\t\t <funcname> (arg1, arg2, ...) | OPERATOR <op>\n> *\t\t (leftoperand_typ rightoperand_typ) | TRIGGER <triggername> ON\n> --- 2137,2143 ----\n> * The COMMENT ON statement can take different forms based upon the type of\n> * the object associated with the comment. The form of the statement is:\n> *\n> ! * COMMENT ON [ [ DATABASE | DOMAIN | INDEX | RULE | SEQUENCE | TABLE | TYPE | VIEW ]\n> * <objname> | AGGREGATE <aggname> (<aggtype>) | FUNCTION\n> *\t\t <funcname> (arg1, arg2, ...) 
| OPERATOR <op>\n> *\t\t (leftoperand_typ rightoperand_typ) | TRIGGER <triggername> ON\n> ***************\n> *** 2196,2201 ****\n> --- 2223,2229 ----\n> \t\t| RULE { $$ = RULE; }\n> \t\t| SEQUENCE { $$ = SEQUENCE; }\n> \t\t| TABLE { $$ = TABLE; }\n> + \t\t| DOMAIN_P { $$ = TYPE_P; }\n> \t\t| TYPE_P { $$ = TYPE_P; }\n> \t\t| VIEW { $$ = VIEW; }\n> \t\t;\n> ***************\n> *** 3178,3183 ****\n> --- 3206,3227 ----\n> \t\t\t\t{\n> \t\t\t\t\t$$ = lconsi(3, makeListi1(-1));\n> \t\t\t\t}\n> + \t\t;\n> +\n> +\n> + /*****************************************************************************\n> + *\n> + *\t\tDROP DATABASE\n> + *\n> + *\n> + *****************************************************************************/\n> +\n> + DropdbStmt:\tDROP DATABASE database_name\n> + \t\t\t\t{\n> + \t\t\t\t\tDropdbStmt *n = makeNode(DropdbStmt);\n> + \t\t\t\t\tn->dbname = $3;\n> + \t\t\t\t\t$$ = (Node *)n;\n> + \t\t\t\t}\n> \t\t| OWNER opt_equal name\n> \t\t\t\t{\n> \t\t\t\t\t$$ = lconsi(4, makeList1($3));\n\nThis doesn't look right.\n\n> ***************\n> *** 3222,3243 ****\n> \t\t\t\t}\n> \t\t;\n>\n> -\n> /*****************************************************************************\n> *\n> ! *\t\tDROP DATABASE\n> *\n> *\n> *****************************************************************************/\n>\n> ! DropdbStmt:\tDROP DATABASE database_name\n> \t\t\t\t{\n> ! \t\t\t\t\tDropdbStmt *n = makeNode(DropdbStmt);\n> ! \t\t\t\t\tn->dbname = $3;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> \t\t;\n>\n>\n> /*****************************************************************************\n> *\n> --- 3266,3295 ----\n> \t\t\t\t}\n> \t\t;\n>\n> /*****************************************************************************\n> *\n> ! * Manipulate a domain\n> *\n> *\n> *****************************************************************************/\n>\n> ! CreateDomainStmt: CREATE DOMAIN_P name opt_as Typename ColQualList opt_collate\n> \t\t\t\t{\n> ! 
\t\t\t\t\tCreateDomainStmt *n = makeNode(CreateDomainStmt);\n> ! \t\t\t\t\tn->domainname = $3;\n> ! \t\t\t\t\tn->typename = $5;\n> ! \t\t\t\t\tn->constraints = $6;\n> !\n> ! \t\t\t\t\tif ($7 != NULL)\n> ! \t\t\t\t\t\telog(NOTICE,\"CREATE DOMAIN / COLLATE %s not yet \"\n> ! \t\t\t\t\t\t\t\"implemented; clause ignored\", $7);\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> \t\t;\n>\n> + opt_as:\tAS\t{$$ = TRUE; }\n> + \t| /* EMPTY */\t{$$ = FALSE; }\n> + \t;\n>\n> /*****************************************************************************\n> *\n> ***************\n> *** 5879,5884 ****\n> --- 5931,5937 ----\n> \t\t| DEFERRED\t\t\t\t\t\t{ $$ = \"deferred\"; }\n> \t\t| DELETE\t\t\t\t\t\t{ $$ = \"delete\"; }\n> \t\t| DELIMITERS\t\t\t\t\t{ $$ = \"delimiters\"; }\n> + \t\t| DOMAIN_P\t\t\t\t\t\t{ $$ = \"domain\"; }\n> \t\t| DOUBLE\t\t\t\t\t\t{ $$ = \"double\"; }\n> \t\t| DROP\t\t\t\t\t\t\t{ $$ = \"drop\"; }\n> \t\t| EACH\t\t\t\t\t\t\t{ $$ = \"each\"; }\n\n> diff -rc pgsql.orig/src/backend/parser/parse_coerce.c pgsqldomain/src/backend/parser/parse_coerce.c\n> *** pgsql.orig/src/backend/parser/parse_coerce.c\tThu Mar 7 11:35:35 2002\n> --- pgsqldomain/src/backend/parser/parse_coerce.c\tThu Mar 7 22:24:24 2002\n> ***************\n> *** 38,43 ****\n> --- 38,44 ----\n> {\n> \tNode\t *result;\n>\n> +\n\nNo.\n\n> \tif (targetTypeId == inputTypeId ||\n> \t\ttargetTypeId == InvalidOid ||\n> \t\tnode == NULL)\n> ***************\n> *** 605,607 ****\n> --- 606,637 ----\n> \t}\n> \treturn result;\n> }\t/* PreferredType() */\n> +\n> +\n> + /*\n> + * If the targetTypeId is a domain, we really want to coerce\n> + * the tuple to the domain type -- not the domain itself\n> + */\n> + Oid\n> + getBaseType(Oid inType)\n> + {\n> + \tHeapTuple\ttup;\n> + \tForm_pg_type typTup;\n> +\n> + \ttup = SearchSysCache(TYPEOID,\n> + \t\t\t\t\t\t ObjectIdGetDatum(inType),\n> + \t\t\t\t\t\t 0, 0, 0);\n> +\n> + \ttypTup = ((Form_pg_type) GETSTRUCT(tup));\n> +\n> + \t/*\n> + \t * Assume that typbasetype exists 
and is a base type, where inType\n> + \t * was a domain\n> + \t */\n> + \tif (typTup->typtype == 'd')\n> + \t\tinType = typTup->typbasetype;\n> +\n> + \tReleaseSysCache(tup);\n> +\n> + \treturn inType;\n> + }\n\n> diff -rc pgsql.orig/src/backend/tcop/postgres.c pgsqldomain/src/backend/tcop/postgres.c\n> *** pgsql.orig/src/backend/tcop/postgres.c\tWed Mar 6 01:10:09 2002\n> --- pgsqldomain/src/backend/tcop/postgres.c\tThu Mar 7 22:24:24 2002\n> ***************\n> *** 2212,2217 ****\n> --- 2212,2218 ----\n> \t\t\t}\n> \t\t\tbreak;\n>\n> + \t\tcase T_CreateDomainStmt:\n> \t\tcase T_CreateStmt:\n> \t\t\ttag = \"CREATE\";\n> \t\t\tbreak;\n\nThe result tag for CREATE DOMAIN is CREATE DOMAIN. (Yes, there's actually\na standard about this.)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 11 Mar 2002 19:14:46 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Domain Support -- another round"
},
{
"msg_contents": "New set with most of Peter's comments corrected. Left the deal about\nschema though :) Took nearly an hour to do a cvs diff for some reason\nthis time (normally a couple of minutes is enough).\n\n> Random nitpicking below. Also, have you created a regression test?\n\nThey had been posted a few times and haven't changed. (Attached\nanyway)\n\n\n> > + <structfield>typnotnull</structfield> represents a NOT NULL\n> > + constraint on a type. Normally used only for domains.\n>\n> And unnormally...?\n\nUnnormally is when someone sets it by hand on a type which isn't a\ndomain -- I guess. Corrected.\n\n> > + <!entity createDomain system \"create_domain.sgml\">\n>\n> I don't see this file included.\n\nOther messages. Full package included on this one however.\n\n\n\n> > + * MergeDomainAttributes\n> > + * Returns a new table schema with the constraints, types,\nand other\n> > + * attributes of the domain resolved for fields using the\ndomain as\n> > + * their type.\n>\n> I didn't know we had schemas yet. You should probably not overload\nthat\n> term to mean \"a list of database objects\".\n\nMerge attributes says something very similar about inheritance and\ntable schemas. Kinda correct considering\nthe variable used in both cases is *schema.\n\n\nThe diff weirdness in regards to DROP DATABASE is probably because I\nstarted by copying the DROP DATABASE element, then altered it. I\ndon't know why it chose that method to do the diff though, but it is\naccurate. Using -cd flags didn't make it any prettier.",
"msg_date": "Mon, 11 Mar 2002 22:25:27 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Domain Support -- another round"
},
{
"msg_contents": "\nRemoved, superseded by new versions.\n\n---------------------------------------------------------------------------\n\nRod Taylor wrote:\n> Attached is a diff to the patch of the below message to use b_expr\n> rather than c_expr.\n> \n> Also includes an improved regress set. Less redundant failures, and\n> tests numeric types as they're different from the others enough to\n> warrant it.\n> --\n> Rod Taylor\n> \n> This message represents the official view of the voices in my head\n> \n> ----- Original Message -----\n> From: \"Rod Taylor\" <rbt@zort.ca>\n> To: <pgsql-patches@postgresql.org>\n> Sent: Thursday, March 07, 2002 11:21 PM\n> Subject: [PATCHES] Domain Support -- another round\n> \n> \n> > Ok....\n> >\n> > gram.y is fixed (no more %expect usage)\n> >\n> > Using the copyCreateDomainStmt in the proper place.\n> >\n> > Evolution is the mail client of choice for different (improved?)\n> mime\n> > headers.\n> >\n> > And attached is a regular diff -c, rather than a cvs diff -c.\n> >\n> >\n> > I updated the poor descriptions of MergeDomainAttributes().\n> Hopefully\n> > its current and future use is more obvious.\n> >\n> >\n> > Am I getting close?\n> >\n> >\n> \n> \n> ----------------------------------------------------------------------\n> ----------\n> \n> \n> >\n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to\n> majordomo@postgresql.org)\n> >\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Mar 2002 23:42:28 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Domain Support -- another round"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nRod Taylor wrote:\n> New set with most of Peters comments corrected. Left the deal about\n> schema though :) Took nearly an hour to do a cvs diff for some reason\n> this time (normally a couple of minutes is enough).\n> \n> > Random nitpicking below. Also, have you created a regression test?\n> \n> They had been posted a few times and haven't changed. (Attached\n> anyway)\n> \n> \n> > > + <structfield>typnotnull</structfield> represents a NOT NULL\n> > > + constraint on a type. Normally used only for domains.\n> >\n> > And unnormally...?\n> \n> Unnormally is when someone sets it by hand on a type which isn't a\n> domain -- I guess. Corrected.\n> \n> > > + <!entity createDomain system \"create_domain.sgml\">\n> >\n> > I don't see this file included.\n> \n> Other messages. Full package included on this one however.\n> \n> \n> \n> > > + * MergeDomainAttributes\n> > > + * Returns a new table schema with the constraints, types,\n> and other\n> > > + * attributes of the domain resolved for fields using the\n> domain as\n> > > + * their type.\n> >\n> > I didn't know we had schemas yet. You should probably not overload\n> that\n> > term to mean \"a list of database objects\".\n> \n> Merge attributes says something very similar about inheritance and\n> table schemas. Kinda correct considering\n> the variable used in both cases is *schema.\n> \n> \n> The diff weirdness in regards to DROP DATABASE is probably because I\n> started by copying the DROP DATABASE element, then altered it. I\n> don't know why it chose that method to do the diff though, but it is\n> accurate. Using -cd flags didn't make it any prettier.\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... 
]\n\n[ Attachment, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 14 Mar 2002 16:21:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Domain Support -- another round"
},
{
"msg_contents": "Patch applied. I am attaching the expected/domain.out file that I\ngenerated when I added your domain test file to the regression tests. \nPlease verify that the output is correct. Thanks.\n\n---------------------------------------------------------------------------\n\nRod Taylor wrote:\n> New set with most of Peters comments corrected. Left the deal about\n> schema though :) Took nearly an hour to do a cvs diff for some reason\n> this time (normally a couple of minutes is enough).\n> \n> > Random nitpicking below. Also, have you created a regression test?\n> \n> They had been posted a few times and haven't changed. (Attached\n> anyway)\n> \n> \n> > > + <structfield>typnotnull</structfield> represents a NOT NULL\n> > > + constraint on a type. Normally used only for domains.\n> >\n> > And unnormally...?\n> \n> Unnormally is when someone sets it by hand on a type which isn't a\n> domain -- I guess. Corrected.\n> \n> > > + <!entity createDomain system \"create_domain.sgml\">\n> >\n> > I don't see this file included.\n> \n> Other messages. Full package included on this one however.\n> \n> \n> \n> > > + * MergeDomainAttributes\n> > > + * Returns a new table schema with the constraints, types,\n> and other\n> > > + * attributes of the domain resolved for fields using the\n> domain as\n> > > + * their type.\n> >\n> > I didn't know we had schemas yet. You should probably not overload\n> that\n> > term to mean \"a list of database objects\".\n> \n> Merge attributes says something very similar about inheritance and\n> table schemas. Kinda correct considering\n> the variable used in both cases is *schema.\n> \n> \n> The diff weirdness in regards to DROP DATABASE is probably because I\n> started by copying the DROP DATABASE element, then altered it. I\n> don't know why it chose that method to do the diff though, but it is\n> accurate. Using -cd flags didn't make it any prettier.\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... 
]\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- Test Comment / Drop\ncreate domain domaindroptest int4;\ncomment on domain domaindroptest is 'About to drop this..';\ncreate domain basetypetest domaindroptest;\nERROR: DefineDomain: domaindroptest is not a basetype\ndrop domain domaindroptest;\nERROR: parser: parse error at or near \";\"\ndrop domain domaindroptest restrict;\n-- TEST Domains.\ncreate domain domainvarchar varchar(5);\ncreate domain domainnumeric numeric(8,2);\ncreate domain domainint4 int4;\ncreate domain domaintext text;\n-- Test tables using domains\ncreate table basictest\n ( testint4 domainint4\n , testtext domaintext\n , testvarchar domainvarchar\n , testnumeric domainnumeric\n );\nINSERT INTO basictest values ('88', 'haha', 'short', '123.12'); -- Good\nINSERT INTO basictest values ('88', 'haha', 'short text', '123.12'); -- Bad varchar\nERROR: value too long for type character varying(5)\nINSERT INTO basictest values ('88', 'haha', 'short', '123.1212'); -- Truncate numeric\nselect * from basictest;\n testint4 | testtext | testvarchar | testnumeric \n----------+----------+-------------+-------------\n 88 | haha | short | 123.12\n 88 | haha | short | 123.12\n(2 rows)\n\ndrop table basictest;\ndrop domain domainvarchar restrict;\ndrop domain domainnumeric restrict;\ndrop domain domainint4 restrict;\ndrop domain domaintext restrict;\n-- Array Test\ncreate domain domainint4arr int4[1];\ncreate domain domaintextarr text[2][3];\ncreate table domarrtest\n ( testint4arr domainint4arr\n , testtextarr domaintextarr\n );\nINSERT INTO domarrtest values ('{2,2}', '{{\"a\",\"b\"}{\"c\",\"d\"}}');\nINSERT INTO domarrtest values ('{{2,2}{2,2}}', '{{\"a\",\"b\"}}');\nINSERT INTO domarrtest values ('{2,2}', 
'{{\"a\",\"b\"}{\"c\",\"d\"}{\"e\"}}');\nINSERT INTO domarrtest values ('{2,2}', '{{\"a\"}{\"c\"}}');\nINSERT INTO domarrtest values (NULL, '{{\"a\",\"b\"}{\"c\",\"d\",\"e\"}}');\ndrop table domarrtest;\ndrop domain domainint4arr restrict;\ndrop domain domaintextarr restrict;\ncreate domain dnotnull varchar(15) NOT NULL;\ncreate domain dnull varchar(15) NULL;\ncreate table nulltest\n ( col1 dnotnull\n , col2 dnotnull NULL -- NOT NULL in the domain cannot be overridden\n , col3 dnull NOT NULL\n , col4 dnull\n );\nINSERT INTO nulltest DEFAULT VALUES;\nERROR: ExecAppend: Fail to add null value in not null attribute col1\nINSERT INTO nulltest values ('a', 'b', 'c', 'd'); -- Good\nINSERT INTO nulltest values (NULL, 'b', 'c', 'd');\nERROR: ExecAppend: Fail to add null value in not null attribute col1\nINSERT INTO nulltest values ('a', NULL, 'c', 'd');\nERROR: ExecAppend: Fail to add null value in not null attribute col2\nINSERT INTO nulltest values ('a', 'b', NULL, 'd');\nERROR: ExecAppend: Fail to add null value in not null attribute col3\nINSERT INTO nulltest values ('a', 'b', 'c', NULL); -- Good\nselect * from nulltest;\n col1 | col2 | col3 | col4 \n------+------+------+------\n a | b | c | d\n a | b | c | \n(2 rows)\n\ndrop table nulltest;\ndrop domain dnotnull restrict;\ndrop domain dnull restrict;\ncreate domain ddef1 int4 DEFAULT 3;\ncreate domain ddef2 oid DEFAULT '12';\n-- Type mixing, function returns int8\ncreate domain ddef3 text DEFAULT 5;\ncreate sequence ddef4_seq;\ncreate domain ddef4 int4 DEFAULT nextval(cast('ddef4_seq' as text));\ncreate domain ddef5 numeric(8,2) NOT NULL DEFAULT '12.12';\ncreate table defaulttest\n ( col1 ddef1\n , col2 ddef2\n , col3 ddef3\n , col4 ddef4\n , col5 ddef1 NOT NULL DEFAULT NULL\n , col6 ddef2 DEFAULT '88'\n , col7 ddef4 DEFAULT 8000\n\t\t, col8 ddef5\n );\ninsert into defaulttest default values;\ninsert into defaulttest default values;\ninsert into defaulttest default values;\nselect * from defaulttest;\n col1 | col2 | 
col3 | col4 | col5 | col6 | col7 | col8 \n------+------+------+------+------+------+------+-------\n 3 | 12 | 5 | 1 | 3 | 88 | 8000 | 12.12\n 3 | 12 | 5 | 2 | 3 | 88 | 8000 | 12.12\n 3 | 12 | 5 | 3 | 3 | 88 | 8000 | 12.12\n(3 rows)\n\ndrop sequence ddef4_seq;\ndrop table defaulttest;\ndrop domain ddef1 restrict;\ndrop domain ddef2 restrict;\ndrop domain ddef3 restrict;\ndrop domain ddef4 restrict;\ndrop domain ddef5 restrict;",
"msg_date": "Mon, 18 Mar 2002 21:16:51 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Domain Support -- another round"
},
{
"msg_contents": "Output looks good, but I always got a bunch of NOTICE statements.\n\nI'll assume the lack of those is related to the logging changes that\nhave been going on?\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Peter Eisentraut\" <peter_e@gmx.net>;\n<pgsql-patches@postgresql.org>\nSent: Monday, March 18, 2002 9:16 PM\nSubject: Re: [PATCHES] Domain Support -- another round\n\n\n>\n> Patch applied. I am attaching the expected/domain.out file that I\n> generated when I added your domain test file to the regression\ntests.\n> Please verify that the output is correct. Thanks.\n>\n> --------------------------------------------------------------------\n-------\n>\n> Rod Taylor wrote:\n> > New set with most of Peters comments corrected. Left the deal\nabout\n> > schema though :) Took nearly an hour to do a cvs diff for some\nreason\n> > this time (normally a couple of minutes is enough).\n> >\n> > > Random nitpicking below. Also, have you created a regression\ntest?\n> >\n> > They had been posted a few times and haven't changed. (Attached\n> > anyway)\n> >\n> >\n> > > > + <structfield>typnotnull</structfield> represents a NOT\nNULL\n> > > > + constraint on a type. Normally used only for domains.\n> > >\n> > > And unnormally...?\n> >\n> > Unnormally is when someone sets it by hand on a type which isn't a\n> > domain -- I guess. Corrected.\n> >\n> > > > + <!entity createDomain system \"create_domain.sgml\">\n> > >\n> > > I don't see this file included.\n> >\n> > Other messages. 
Full package included on this one however.\n> >\n> >\n> >\n> > > > + * MergeDomainAttributes\n> > > > + * Returns a new table schema with the constraints,\ntypes,\n> > and other\n> > > > + * attributes of the domain resolved for fields using\nthe\n> > domain as\n> > > > + * their type.\n> > >\n> > > I didn't know we had schemas yet. You should probably not\noverload\n> > that\n> > > term to mean \"a list of database objects\".\n> >\n> > Merge attributes says something very similar about inheritance and\n> > table schemas. Kinda correct considering\n> > the variable used in both cases is *schema.\n> >\n> >\n> > The diff weirdness in regards to DROP DATABASE is probably because\nI\n> > started by copying the DROP DATABASE element, then altered it. I\n> > don't know why it chose that method to do the diff though, but it\nis\n> > accurate. Using -cd flags didn't make it any prettier.\n>\n> [ Attachment, skipping... ]\n>\n> [ Attachment, skipping... ]\n>\n> [ Attachment, skipping... ]\n>\n> [ Attachment, skipping... ]\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania\n19026\n>\n\n\n----------------------------------------------------------------------\n----------\n\n\n> -- Test Comment / Drop\n> create domain domaindroptest int4;\n> comment on domain domaindroptest is 'About to drop this..';\n> create domain basetypetest domaindroptest;\n> ERROR: DefineDomain: domaindroptest is not a basetype\n> drop domain domaindroptest;\n> ERROR: parser: parse error at or near \";\"\n> drop domain domaindroptest restrict;\n> -- TEST Domains.\n> create domain domainvarchar varchar(5);\n> create domain domainnumeric numeric(8,2);\n> create domain domainint4 int4;\n> create domain domaintext text;\n> -- Test tables using domains\n> create table basictest\n> ( testint4 domainint4\n> , testtext domaintext\n> , testvarchar domainvarchar\n> , testnumeric domainnumeric\n> );\n> INSERT INTO basictest values ('88', 'haha', 'short',\n12'); -- Good\n> INSERT INTO basictest values ('88', 'haha', 'short text',\n'123.12'); -- Bad varchar\n> ERROR: value too long for type character varying(5)\n> INSERT INTO basictest values ('88', 'haha', 'short',\n3.1212'); -- Truncate numeric\n> select * from basictest;\n> testint4 | testtext | testvarchar | testnumeric\n> ----------+----------+-------------+-------------\n> 88 | haha | short | 123.12\n> 88 | haha | short | 123.12\n> (2 rows)\n>\n> drop table basictest;\n> drop domain domainvarchar restrict;\n> drop domain domainnumeric restrict;\n> drop domain domainint4 restrict;\n> drop domain domaintext restrict;\n> -- Array Test\n> create domain domainint4arr int4[1];\n> create domain domaintextarr text[2][3];\n> create table domarrtest\n> ( testint4arr domainint4arr\n> , testtextarr domaintextarr\n> );\n> INSERT INTO domarrtest values ('{2,2}', '{{\"a\",\"b\"}{\"c\",\"d\"}}');\n> INSERT INTO domarrtest values ('{{2,2}{2,2}}', '{{\"a\",\"b\"}}');\n> INSERT INTO domarrtest values ('{2,2}',\n'{{\"a\",\"b\"}{\"c\",\"d\"}{\"e\"}}');\n> INSERT INTO domarrtest values ('{2,2}', 
'{{\"a\"}{\"c\"}}');\n> INSERT INTO domarrtest values (NULL, '{{\"a\",\"b\"}{\"c\",\"d\",\"e\"}}');\n> drop table domarrtest;\n> drop domain domainint4arr restrict;\n> drop domain domaintextarr restrict;\n> create domain dnotnull varchar(15) NOT NULL;\n> create domain dnull varchar(15) NULL;\n> create table nulltest\n> ( col1 dnotnull\n> , col2 dnotnull NULL -- NOT NULL in the domain cannot be\noverridden\n> , col3 dnull NOT NULL\n> , col4 dnull\n> );\n> INSERT INTO nulltest DEFAULT VALUES;\n> ERROR: ExecAppend: Fail to add null value in not null attribute\ncol1\n> INSERT INTO nulltest values ('a', 'b', 'c', 'd'); -- Good\n> INSERT INTO nulltest values (NULL, 'b', 'c', 'd');\n> ERROR: ExecAppend: Fail to add null value in not null attribute\ncol1\n> INSERT INTO nulltest values ('a', NULL, 'c', 'd');\n> ERROR: ExecAppend: Fail to add null value in not null attribute\ncol2\n> INSERT INTO nulltest values ('a', 'b', NULL, 'd');\n> ERROR: ExecAppend: Fail to add null value in not null attribute\ncol3\n> INSERT INTO nulltest values ('a', 'b', 'c', NULL); -- Good\n> select * from nulltest;\n> col1 | col2 | col3 | col4\n> ------+------+------+------\n> a | b | c | d\n> a | b | c |\n> (2 rows)\n>\n> drop table nulltest;\n> drop domain dnotnull restrict;\n> drop domain dnull restrict;\n> create domain ddef1 int4 DEFAULT 3;\n> create domain ddef2 oid DEFAULT '12';\n> -- Type mixing, function returns int8\n> create domain ddef3 text DEFAULT 5;\n> create sequence ddef4_seq;\n> create domain ddef4 int4 DEFAULT nextval(cast('ddef4_seq' as text));\n> create domain ddef5 numeric(8,2) NOT NULL DEFAULT '12.12';\n> create table defaulttest\n> ( col1 ddef1\n> , col2 ddef2\n> , col3 ddef3\n> , col4 ddef4\n> , col5 ddef1 NOT NULL DEFAULT NULL\n> , col6 ddef2 DEFAULT '88'\n> , col7 ddef4 DEFAULT 8000\n> , col8 ddef5\n> );\n> insert into defaulttest default values;\n> insert into defaulttest default values;\n> insert into defaulttest default values;\n> select * from defaulttest;\n> col1 | 
col2 | col3 | col4 | col5 | col6 | col7 | col8\n> ------+------+------+------+------+------+------+-------\n> 3 | 12 | 5 | 1 | 3 | 88 | 8000 | 12.12\n> 3 | 12 | 5 | 2 | 3 | 88 | 8000 | 12.12\n> 3 | 12 | 5 | 3 | 3 | 88 | 8000 | 12.12\n> (3 rows)\n>\n> drop sequence ddef4_seq;\n> drop table defaulttest;\n> drop domain ddef1 restrict;\n> drop domain ddef2 restrict;\n> drop domain ddef3 restrict;\n> drop domain ddef4 restrict;\n> drop domain ddef5 restrict;\n>\n\n",
"msg_date": "Mon, 18 Mar 2002 21:36:34 -0500",
"msg_from": "\"Rod Taylor\" <rbt@barchord.com>",
"msg_from_op": false,
"msg_subject": "Re: Domain Support -- another round"
},
{
"msg_contents": "Rod Taylor wrote:\n> Output looks good, but I always got a bunch of NOTICE statements.\n> \n> I'll assume the lack of those is related to the logging changes that\n> have been going on?\n\nUh, that is very possible, though the messages would now be INFO\nperhaps. I don't think we actually removed messages in the default\ninstall.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 18 Mar 2002 21:44:23 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Domain Support -- another round"
},
{
"msg_contents": "I've committed a bunch of changes after code review of your DOMAIN\npatch. There were a number of minor bugs as well as some stylistic\nthings I didn't like.\n\nProbably the largest change was that I concluded we had to revert the\nhandling of default values for base types to the old way: simple literal\nstored as a string. You can't meaningfully deal with an expression that\nrepresents a value of a type you haven't defined yet --- since you\nsurely haven't defined any functions or operators that yield it, either.\nTherefore the apparent flexibility is illusory. Also, the code just\nplain didn't work: after I fixed preptlist.c to do what it should be\ndoing, I was getting \"can't coerce\" failures in the create_type\nregression test. (For example, it didn't believe that an int4 literal\n\"42\" was a valid default for the test's type int42, which is correct\ngiven that the test doesn't define any conversion function...) So all\nin all I just don't see any way that can work. I've set it up so that\nyou can have *either* an expression default (if typdefaultbin is not\nnull) *or* a simple literal default (if typdefaultbin is null but\ntypdefault isn't). The former case will work for domains, the latter\nfor base types.\n\nThere are still some things that need to be worked on:\n\n1. pg_dump. We *cannot* release this feature in 7.3 if there's not\npg_dump support for it.\n\n2. Arrays. I don't much care for the fact that arrays of domain-type\nvalues aren't supported. The handling of domains that are themselves\narrays seems a tad odd as well: the array-ish nature of the domain is\nexposed, which doesn't make a lot of sense to me. Perhaps we'd be\nbetter off to forbid array domains.\n\n3. Domains on domains. Why shouldn't I be able to make a domain that's\na further restriction of another domain?\n\n4. CHECK constraints for domains (which after all is the real point,\nno?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Mar 2002 15:08:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Domain Support -- another round "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> 2. Arrays. I don't much care for the fact that arrays of domain-type\n> values aren't supported. The handling of domains that are themselves\n> arrays seems a tad odd as well: the array-ish nature of the domain is\n> exposed, which doesn't make a lot of sense to me. Perhaps we'd be\n> better off to forbid array domains.\n> \n\nFrom SQL99 11.23 Syntax Rule 6)\n\n\"<data type> should not specify a reference type, user-defined type,\nor an array type.\"\n ==========\n\n-- \nFernando Nasser\nRed Hat - Toronto E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Wed, 20 Mar 2002 15:58:30 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Domain Support -- another round"
},
{
"msg_contents": "> There are still some things that need to be worked on:\n>\n> 1. pg_dump. We *cannot* release this feature in 7.3 if there's not\n> pg_dump support for it.\n\nI intend to try to do this next week.\n\n> 2. Arrays. I don't much care for the fact that arrays of\ndomain-type\n> values aren't supported. The handling of domains that are\nthemselves\n> arrays seems a tad odd as well: the array-ish nature of the domain\nis\n> exposed, which doesn't make a lot of sense to me. Perhaps we'd be\n> better off to forbid array domains.\n\nThe reason I didn't make array types for domains is that I have\nabsolutly no idea how to manage the below case once point 4 is\nimplemented.\n\ncreate domain dom as int4 check (VALUE > 5);\ncreate table tab (col1 dom[2][3]);\n\n\n> 3. Domains on domains. Why shouldn't I be able to make a domain\nthat's\n> a further restriction of another domain?\n\nNot entirely sure, except the book I had (SQL99 Complete, Really)\nspecifically forbids it.\n\n> 4. CHECK constraints for domains (which after all is the real point,\n> no?)\n\nYes, I'm slow and only capable of one step at a time. Foreign key\nconstraints are the other real point.\n\n",
"msg_date": "Thu, 21 Mar 2002 10:03:20 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Domain Support -- another round "
},
{
"msg_contents": "Rod Taylor wrote:\n> \n> > 2. Arrays. I don't much care for the fact that arrays of\n> domain-type\n> > values aren't supported. The handling of domains that are\n> themselves\n> > arrays seems a tad odd as well: the array-ish nature of the domain\n> is\n> > exposed, which doesn't make a lot of sense to me. Perhaps we'd be\n> > better off to forbid array domains.\n> \n> The reason I didn't make array types for domains is that I have\n> absolutly no idea how to manage the below case once point 4 is\n> implemented.\n> \n> create domain dom as int4 check (VALUE > 5);\n> create table tab (col1 dom[2][3]);\n> \n\nSQL'99 explicitly forbids it. Please refer to my posting to HACKERS\nfor the SQL document reference.\n\n\n\n> > 3. Domains on domains. Why shouldn't I be able to make a domain\n> that's\n> > a further restriction of another domain?\n> \n> Not entirely sure, except the book I had (SQL99 Complete, Really)\n> specifically forbids it.\n> \n\nYes, but this is their interpretation of the standard. There is an\nerror in that page anyway, as the standard explicitly forbids \narrays and UDTs and they list REF and ARRAY as valid data types.\n(they also get confused with SESSION_USER and CURENT_USER on page\n281, so it does not surprise me). \n\nI couldn't find anything in the standard explicitly forbidden it.\nBut I don't think this is a very useful feature anyway. As one is\ncreating another domain, he /she can as well specify constraints\nthat represent a further reduction of the valid values range.\n\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Thu, 21 Mar 2002 10:32:19 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Domain Support -- another round"
},
{
"msg_contents": "> > Not entirely sure, except the book I had (SQL99 Complete, Really)\n> > specifically forbids it.\n> >\n>\n> Yes, but this is their interpretation of the standard. There is an\nUnderstood. It's the best that I had on me.\n\nI've not found a cheap resource for the real one. Ie. priced suitably\nto fit a hobby project :)\n\n> error in that page anyway, as the standard explicitly forbids\n> arrays and UDTs and they list REF and ARRAY as valid data types.\n> (they also get confused with SESSION_USER and CURENT_USER on page\n> 281, so it does not surprise me).\n\n",
"msg_date": "Thu, 21 Mar 2002 10:42:18 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Domain Support -- another round"
},
{
"msg_contents": "> SQL'99 explicitly forbids it. Please refer to my posting to HACKERS\n> for the SQL document reference.\n\nThe fact that a standard \"forbids\" something does not necessarily mean\nit is a bad idea, as I'm sure you know. Is there any reason that the\nstandard forbids using domains inside arrays, other than someone on the\nstandards committee realized that it would be hard for their company to\nimplement it? That is, does allowing domains in arrays lead to\ninconsistancies or fundamental issues with relational algebra or other\nset logic that should keep it out of the next set of standards?\n\nIf Postgres was developed to only the current standard, it would never\nhave been written. And since the start of the open source days, if we\nhad worked solely to get it to conform to the current standard we'd be\nstarting at ground zero for implementing SQL99, since many of our\nfeatures now appear in that standard. Someone cheated and looked at what\nwe could already do... ;)\n\n - Thomas\n",
"msg_date": "Thu, 21 Mar 2002 08:10:08 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Domain Support -- another round"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > SQL'99 explicitly forbids it. Please refer to my posting to HACKERS\n> > for the SQL document reference.\n> \n> The fact that a standard \"forbids\" something does not necessarily mean\n> it is a bad idea, as I'm sure you know. Is there any reason that the\n> standard forbids using domains inside arrays, other than someone on the\n> standards committee realized that it would be hard for their company to\n> implement it? That is, does allowing domains in arrays lead to\n> inconsistancies or fundamental issues with relational algebra or other\n> set logic that should keep it out of the next set of standards?\n> \n\nI partially agree, but I guess Tom has already given some of the reasons\nnot to do it.\n\n\n> If Postgres was developed to only the current standard, it would never\n> have been written. And since the start of the open source days, if we\n> had worked solely to get it to conform to the current standard we'd be\n> starting at ground zero for implementing SQL99, since many of our\n> features now appear in that standard. Someone cheated and looked at what\n> we could already do... ;)\n> \n\nAgain, I only partially agree with that, Adding significant features\nthat \nwill allow people to solve significantly different problems that can not\nbe solved with the vanilla standard is a good think. And I believe it\nis\nacknowledged in many places that many SQL3 features were inspired on\nPostgres.\n\nHowever, adding extensions to the SQL standard otherwise is a bad thing.\nIf affects portability. Actually, \"extending\" standards has been a\nweapon\nused by some proprietary companies to hurt the competition. 
Standards\nare\nfriends of Open Source software and we should try to stick to them \nwhenever possible.\n\nIn the case of DOMAINS, which are already considered by some as not very\nuseful and passive of removal from next editions of the standard (by one\nSQL editor, at least -- I can give you the book reference this\nafternoon),\nadding extension to the SQL to it would just aggravate the issue.\n\nSo, although I agree with you in principle, I believe that in these\ncases\nwe should stick to the standard and avoid gratuitous extensions.\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Thu, 21 Mar 2002 11:22:57 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Domain Support -- another round"
},
{
"msg_contents": "...\n> So, although I agree with you in principle, I believe that in these\n> cases we should stick to the standard and avoid gratuitous extensions.\n\nHmm. In any case, supporting arrays (esp. if it is not allowed in the\nstandard) should not be a requirement for implementing the DOMAIN\nfunctionality. No point in arguing principles on just, uh, principles,\nwhen we could actually be getting something good done ;)\n\n - Thomas\n",
"msg_date": "Thu, 21 Mar 2002 09:02:28 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Domain Support -- another round"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> ...\n> > So, although I agree with you in principle, I believe that in these\n> > cases we should stick to the standard and avoid gratuitous extensions.\n> \n> Hmm. In any case, supporting arrays (esp. if it is not allowed in the\n> standard) should not be a requirement for implementing the DOMAIN\n> functionality. No point in arguing principles on just, uh, principles,\n> when we could actually be getting something good done ;)\n> \n\nI couldn't agree more.\n\nCheers,\nFernando\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Thu, 21 Mar 2002 12:04:54 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Domain Support -- another round"
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> I've not found a cheap resource for the real one. Ie. priced suitably\n> to fit a hobby project :)\n\nTry ANSI's electronic standards store: they'll sell you PDFs of ANSI's\nprinting of the spec at a reasonable price.\n\nhttp://webstore.ansi.org/ansidocstore/default.asp\n\nGo to the \"search\" page and enter \"9075\" (the IS number for SQL).\nAlong with the overpriced ISO offerings, there are:\n\nANSI X3.135-1992\t\tSQL92\n\nANSI/ISO/IEC 9075-n-1999\tSQL99, parts 1-5\n\nEach of these is $18 US. You don't really need all five parts of\nSQL99; I've seldom found any use for anything but part 2. It is\nworth having SQL92, mainly because it's so much more readable\nthan the 99 spec :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Mar 2002 12:13:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Where to get official SQL spec (was Re: Domain Support)"
},
{
"msg_contents": "> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > I've not found a cheap resource for the real one. Ie. priced suitably\n> > to fit a hobby project :)\n\nI seem to have parts 1-5 .txt of sql99 on my computer here. I ftp'd them\noff some ftp site yonks ago. Anyone want them? Is it legal for me to have\nthem or distribute them?\n\nChris\n\n",
"msg_date": "Fri, 22 Mar 2002 09:57:48 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Where to get official SQL spec (was Re: Domain Support)"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> > \"Rod Taylor\" <rbt@zort.ca> writes:\n> > > I've not found a cheap resource for the real one. Ie. priced suitably\n> > > to fit a hobby project :)\n> \n> I seem to have parts 1-5 .txt of sql99 on my computer here. I ftp'd them\n> off some ftp site yonks ago. Anyone want them? Is it legal for me to have\n> them or distribute them?\n\nI have these URL's:\n\n> http://www.ansi.org\n> http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt\n> ftp://gatekeeper.dec.com/pub/standards/sql\n> ftp://jerry.ece.umassd.edu/isowg3/x3h2/Standards/\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 21 Mar 2002 21:01:19 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Where to get official SQL spec (was Re: Domain Support)"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> I seem to have parts 1-5 .txt of sql99 on my computer here. I ftp'd them\n> off some ftp site yonks ago. Anyone want them? Is it legal for me to have\n> them or distribute them?\n\nMy understanding of the legal situation is that what's circulating\naround the net in plain text form is *draft* versions of the spec.\nIt is okay to redistribute these freely. The *official* version\nyou are supposed to pay for.\n\nNo, I don't know how close the drafts really are to the final.\n\nPersonally I tend to consult the draft versions more than the PDF\nversions anyway, because it's vastly easier to search flat ASCII\nfiles than PDFs ... so I sure hope they're pretty close ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Mar 2002 21:29:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Where to get official SQL spec (was Re: Domain Support) "
},
{
"msg_contents": "It would be nice to add it to the docs of the project.\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>\nCc: \"Rod Taylor\" <rbt@zort.ca>; \"Hackers List\"\n<pgsql-hackers@postgresql.org>\nSent: Friday, March 22, 2002 1:29 PM\nSubject: Re: [HACKERS] Where to get official SQL spec (was Re: Domain\nSupport)\n\n\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > I seem to have parts 1-5 .txt of sql99 on my computer here. I ftp'd\nthem\n> > off some ftp site yonks ago. Anyone want them? Is it legal for me to\nhave\n> > them or distribute them?\n>\n> My understanding of the legal situation is that what's circulating\n> around the net in plain text form is *draft* versions of the spec.\n> It is okay to redistribute these freely. The *official* version\n> you are supposed to pay for.\n>\n> No, I don't know how close the drafts really are to the final.\n>\n> Personally I tend to consult the draft versions more than the PDF\n> versions anyway, because it's vastly easier to search flat ASCII\n> files than PDFs ... so I sure hope they're pretty close ...\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\n",
"msg_date": "Fri, 22 Mar 2002 15:20:51 +1100",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Where to get official SQL spec (was Re: Domain Support)"
},
{
"msg_contents": "> It would be nice to add it to the docs of the project.\n\nIf anyone wants a copy, just holler. A bunch of us have exchanged those\ndrafts over the years so speak up and someone will forward you a copy...\n\nI'm sure we stumbled on them via google or somesuch so a quick search\nwould get you an independent copy too...\n\n - Thomas\n",
"msg_date": "Thu, 21 Mar 2002 20:36:05 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Where to get official SQL spec (was Re: Domain Support)"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> > It would be nice to add it to the docs of the project.\n> \n> If anyone wants a copy, just holler. A bunch of us have exchanged those\n> drafts over the years so speak up and someone will forward you a copy...\n> \n> I'm sure we stumbled on them via google or somesuch so a quick search\n> would get you an independent copy too...\n\nShould I add the URL's to the developer's FAQ?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Mar 2002 00:05:24 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Where to get official SQL spec (was Re: Domain Support)"
},
{
"msg_contents": "Does it mean that we are not 100% sure they are open documents?\n----- Original Message -----\nFrom: \"Thomas Lockhart\" <thomas@fourpalms.org>\nTo: \"Nicolas Bazin\" <nbazin@ingenico.com.au>\nCc: \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>; \"Tom Lane\"\n<tgl@sss.pgh.pa.us>; \"Rod Taylor\" <rbt@zort.ca>; \"Hackers List\"\n<pgsql-hackers@postgresql.org>\nSent: Friday, March 22, 2002 3:36 PM\nSubject: Re: Where to get official SQL spec (was Re: Domain Support)\n\n\n> > It would be nice to add it to the docs of the project.\n>\n> If anyone wants a copy, just holler. A bunch of us have exchanged those\n> drafts over the years so speak up and someone will forward you a copy...\n>\n> I'm sure we stumbled on them via google or somesuch so a quick search\n> would get you an independent copy too...\n>\n> - Thomas\n>\n\n\n",
"msg_date": "Fri, 22 Mar 2002 16:37:45 +1100",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Where to get official SQL spec (was Re: Domain Support)"
},
{
"msg_contents": "> Does it mean that we are not 100% sure they are open documents?\n\nHmm. Yeah, though afaics there is no copyright inside the docs.\n\n - Thomas\n",
"msg_date": "Thu, 21 Mar 2002 22:01:21 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Where to get official SQL spec (was Re: Domain Support)"
},
{
"msg_contents": "\nI have updated the developer's FAQ with this information:\n\n---------------------------------------------------------------------------\n\n\n1.12) Where can I get a copy of the SQL standards?\n\nThere are two pertinent standards, SQL92 and SQL99. These standards are\nendorsed by ANSI and ISO. A draft of the SQL92 standard is available at\nhttp://www.contrib.andrew.cmu.edu/~shadow/. The SQL99 standard must be\npurchased from ANSI at\nhttp://webstore.ansi.org/ansidocstore/default.asp. The main standards\ndocuments are ANSI X3.135-1992 for SQL92 and ANSI/ISO/IEC 9075-2-1999\nfor SQL99.\n\nA summary of these standards is at\nhttp://dbs.uni-leipzig.de/en/lokal/standards.pdf and\nhttp://db.konkuk.ac.kr/present/SQL3.pdf.\n\n\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > I've not found a cheap resource for the real one. Ie. priced suitably\n> > to fit a hobby project :)\n> \n> Try ANSI's electronic standards store: they'll sell you PDFs of ANSI's\n> printing of the spec at a reasonable price.\n> \n> http://webstore.ansi.org/ansidocstore/default.asp\n> \n> Go to the \"search\" page and enter \"9075\" (the IS number for SQL).\n> Along with the overpriced ISO offerings, there are:\n> \n> ANSI X3.135-1992\t\tSQL92\n> \n> ANSI/ISO/IEC 9075-n-1999\tSQL99, parts 1-5\n> \n> Each of these is $18 US. You don't really need all five parts of\n> SQL99; I've seldom found any use for anything but part 2. 
It is\n> worth having SQL92, mainly because it's so much more readable\n> than the 99 spec :-(\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 17 Apr 2002 01:16:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Where to get official SQL spec (was Re: Domain Support)"
}
] |
[
{
"msg_contents": "I just committed this fix:\n\n * Change FIXED_CHAR_SEL to 0.20 from 0.04 to give better selectivity (Bruce)\n\nI believe we decided on 0.20 but didn't put it into 7.2 because we were\nin beta.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Mar 2002 23:29:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Char selectivity"
}
] |
[
{
"msg_contents": "The following patch supresses \"USING btree\" for btree indexes in\npg_dump:\n\t\n\tCREATE INDEX ii ON test (x);\n\n\tCREATE INDEX kkas ON test USING hash (x);\n\nThis is possible because btree is the default. TODO item is:\n\n\t* Remove USING clause from pg_get_indexdef() if index is btree (Bruce)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/utils/adt/ruleutils.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/utils/adt/ruleutils.c,v\nretrieving revision 1.92\ndiff -c -r1.92 ruleutils.c\n*** src/backend/utils/adt/ruleutils.c\t6 Mar 2002 19:58:26 -0000\t1.92\n--- src/backend/utils/adt/ruleutils.c\t8 Mar 2002 04:45:51 -0000\n***************\n*** 395,405 ****\n \t * Start the index definition\n \t */\n \tinitStringInfo(&buf);\n! \tappendStringInfo(&buf, \"CREATE %sINDEX %s ON %s USING %s (\",\n \t\t\t\t\t idxrec->indisunique ? \"UNIQUE \" : \"\",\n \t\t\t\t\t quote_identifier(NameStr(idxrelrec->relname)),\n! \t\t\t\t\t quote_identifier(NameStr(indrelrec->relname)),\n \t\t\t\t\t quote_identifier(NameStr(amrec->amname)));\n \n \t/*\n \t * Collect the indexed attributes in keybuf\n--- 395,410 ----\n \t * Start the index definition\n \t */\n \tinitStringInfo(&buf);\n! \tappendStringInfo(&buf, \"CREATE %sINDEX %s ON %s \",\n \t\t\t\t\t idxrec->indisunique ? \"UNIQUE \" : \"\",\n \t\t\t\t\t quote_identifier(NameStr(idxrelrec->relname)),\n! \t\t\t\t\t quote_identifier(NameStr(indrelrec->relname)));\n! \n! \tif (strcmp(NameStr(amrec->amname), \"btree\") != 0)\n! \t\tappendStringInfo(&buf, \"USING %s \",\n \t\t\t\t\t quote_identifier(NameStr(amrec->amname)));\n+ \n+ \tappendStringInfo(&buf, \"(\");\n \n \t/*\n \t * Collect the indexed attributes in keybuf",
"msg_date": "Thu, 7 Mar 2002 23:51:52 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Index USING in pg_dump"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> This is possible because btree is the default. TODO item is:\n> \t* Remove USING clause from pg_get_indexdef() if index is btree (Bruce)\n\nI do not think this is necessary or helpful. The only possible\nreason to change it would be if we thought btree might someday\nnot be the default index type --- but no such change is on the\nhorizon. And if one was, you've just embedded special knowledge\nabout btree in yet one more place...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Mar 2002 00:15:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Index USING in pg_dump "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > This is possible because btree is the default. TODO item is:\n> > \t* Remove USING clause from pg_get_indexdef() if index is btree (Bruce)\n> \n> I do not think this is necessary or helpful. The only possible\n> reason to change it would be if we thought btree might someday\n> not be the default index type --- but no such change is on the\n> horizon. And if one was, you've just embedded special knowledge\n> about btree in yet one more place...\n\nYes, but it doesn't look like the way they created it. Very few use\nUSING in there queries. Why show it in pg_dump output?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 8 Mar 2002 11:07:57 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Index USING in pg_dump"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Yes, but it doesn't look like the way they created it.\n\n(a) And you know that how? (b) Are we also supposed to preserve\nspacing, keyword case, etc? Not much of an argument...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Mar 2002 11:35:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Index USING in pg_dump "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Yes, but it doesn't look like the way they created it.\n> \n> (a) And you know that how? (b) Are we also supposed to preserve\n> spacing, keyword case, etc? Not much of an argument...\n\nWell, the USING part was confusing people because they didn't even know\nwe had other index types. It is just an attempt to clean up pg_dump\noutput to be clearer. One change I did make is to add a\nDEFAULT_INDEX_TYPE macro and replace \"btree\" with the use of that macro\nin a few places.\n\nHere is a new patch. I am moving the discussion to patches because of\nthe patch attachment.\n\nHow is this? Comments from others?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/parser/analyze.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/parser/analyze.c,v\nretrieving revision 1.217\ndiff -c -r1.217 analyze.c\n*** src/backend/parser/analyze.c\t6 Mar 2002 06:09:51 -0000\t1.217\n--- src/backend/parser/analyze.c\t8 Mar 2002 16:25:59 -0000\n***************\n*** 16,21 ****\n--- 16,22 ----\n #include \"access/heapam.h\"\n #include \"catalog/catname.h\"\n #include \"catalog/heap.h\"\n+ #include \"catalog/index.h\"\n #include \"catalog/pg_index.h\"\n #include \"catalog/pg_type.h\"\n #include \"nodes/makefuncs.h\"\n***************\n*** 1049,1055 ****\n \t\t\tindex->idxname = NULL;\t\t/* will set it later */\n \n \t\tindex->relname = cxt->relname;\n! \t\tindex->accessMethod = \"btree\";\n \t\tindex->indexParams = NIL;\n \t\tindex->whereClause = NULL;\n \n--- 1050,1056 ----\n \t\t\tindex->idxname = NULL;\t\t/* will set it later */\n \n \t\tindex->relname = cxt->relname;\n! 
\t\tindex->accessMethod = DEFAULT_INDEX_TYPE;\n \t\tindex->indexParams = NIL;\n \t\tindex->whereClause = NULL;\n \nIndex: src/backend/parser/gram.y\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.287\ndiff -c -r2.287 gram.y\n*** src/backend/parser/gram.y\t7 Mar 2002 16:35:35 -0000\t2.287\n--- src/backend/parser/gram.y\t8 Mar 2002 16:26:03 -0000\n***************\n*** 51,56 ****\n--- 51,57 ----\n #include <ctype.h>\n \n #include \"access/htup.h\"\n+ #include \"catalog/index.h\"\n #include \"catalog/pg_type.h\"\n #include \"nodes/params.h\"\n #include \"nodes/parsenodes.h\"\n***************\n*** 2539,2545 ****\n \t\t;\n \n access_method_clause: USING access_method\t\t{ $$ = $2; }\n! \t\t| /*EMPTY*/\t\t\t\t\t\t\t\t{ $$ = \"btree\"; }\n \t\t;\n \n index_params: index_list\t\t\t\t\t\t{ $$ = $1; }\n--- 2540,2547 ----\n \t\t;\n \n access_method_clause: USING access_method\t\t{ $$ = $2; }\n! \t\t/* If btree changes as our default, update pg_get_indexdef() */\n! \t\t| /*EMPTY*/\t\t\t\t\t\t\t\t{ $$ = DEFAULT_INDEX_TYPE; }\n \t\t;\n \n index_params: index_list\t\t\t\t\t\t{ $$ = $1; }\nIndex: src/backend/utils/adt/ruleutils.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/utils/adt/ruleutils.c,v\nretrieving revision 1.92\ndiff -c -r1.92 ruleutils.c\n*** src/backend/utils/adt/ruleutils.c\t6 Mar 2002 19:58:26 -0000\t1.92\n--- src/backend/utils/adt/ruleutils.c\t8 Mar 2002 16:26:08 -0000\n***************\n*** 395,405 ****\n \t * Start the index definition\n \t */\n \tinitStringInfo(&buf);\n! \tappendStringInfo(&buf, \"CREATE %sINDEX %s ON %s USING %s (\",\n \t\t\t\t\t idxrec->indisunique ? \"UNIQUE \" : \"\",\n \t\t\t\t\t quote_identifier(NameStr(idxrelrec->relname)),\n! 
\t\t\t\t\t quote_identifier(NameStr(indrelrec->relname)),\n \t\t\t\t\t quote_identifier(NameStr(amrec->amname)));\n \n \t/*\n \t * Collect the indexed attributes in keybuf\n--- 395,410 ----\n \t * Start the index definition\n \t */\n \tinitStringInfo(&buf);\n! \tappendStringInfo(&buf, \"CREATE %sINDEX %s ON %s \",\n \t\t\t\t\t idxrec->indisunique ? \"UNIQUE \" : \"\",\n \t\t\t\t\t quote_identifier(NameStr(idxrelrec->relname)),\n! \t\t\t\t\t quote_identifier(NameStr(indrelrec->relname)));\n! \n! \tif (strcmp(NameStr(amrec->amname), DEFAULT_INDEX_TYPE) != 0)\n! \t\tappendStringInfo(&buf, \"USING %s \",\n \t\t\t\t\t quote_identifier(NameStr(amrec->amname)));\n+ \n+ \tappendStringInfo(&buf, \"(\");\n \n \t/*\n \t * Collect the indexed attributes in keybuf\nIndex: src/include/catalog/index.h\n===================================================================\nRCS file: /cvsroot/pgsql/src/include/catalog/index.h,v\nretrieving revision 1.44\ndiff -c -r1.44 index.h\n*** src/include/catalog/index.h\t19 Feb 2002 20:11:19 -0000\t1.44\n--- src/include/catalog/index.h\t8 Mar 2002 16:26:12 -0000\n***************\n*** 18,23 ****\n--- 18,24 ----\n #include \"catalog/pg_index.h\"\n #include \"nodes/execnodes.h\"\n \n+ #define DEFAULT_INDEX_TYPE\t\"btree\"\n \n /* Typedef for callback function for IndexBuildHeapScan */\n typedef void (*IndexBuildCallback) (Relation index,",
"msg_date": "Fri, 8 Mar 2002 11:43:47 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Index USING in pg_dump"
},
{
"msg_contents": "On Fri, Mar 08, 2002 at 11:07:57AM -0500, Bruce Momjian wrote:\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > This is possible because btree is the default. TODO item is:\n> > > \t* Remove USING clause from pg_get_indexdef() if index is btree (Bruce)\n> > \n> > I do not think this is necessary or helpful. The only possible\n> > reason to change it would be if we thought btree might someday\n> > not be the default index type --- but no such change is on the\n> > horizon. And if one was, you've just embedded special knowledge\n> > about btree in yet one more place...\n> \n> Yes, but it doesn't look like the way they created it.\n\nWhy is this relevant?\n\n> Very few use\n> USING in there queries. Why show it in pg_dump output?\n\nI agree with Tom: this seems like a waste of time, and may even be worse\nthan the current pg_dump output. The type of the index is \"btree\"; by\nassuming that Pg happens to default to \"btree\", you're just making the\nprocess of index restoration more fragile.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Fri, 8 Mar 2002 12:01:02 -0500",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: Index USING in pg_dump"
},
{
"msg_contents": "Neil Conway wrote:\n> On Fri, Mar 08, 2002 at 11:07:57AM -0500, Bruce Momjian wrote:\n> > Tom Lane wrote:\n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > This is possible because btree is the default. TODO item is:\n> > > > \t* Remove USING clause from pg_get_indexdef() if index is btree (Bruce)\n> > > \n> > > I do not think this is necessary or helpful. The only possible\n> > > reason to change it would be if we thought btree might someday\n> > > not be the default index type --- but no such change is on the\n> > > horizon. And if one was, you've just embedded special knowledge\n> > > about btree in yet one more place...\n> > \n> > Yes, but it doesn't look like the way they created it.\n> \n> Why is this relevant?\n> \n> > Very few use\n> > USING in there queries. Why show it in pg_dump output?\n> \n> I agree with Tom: this seems like a waste of time, and may even be worse\n> than the current pg_dump output. The type of the index is \"btree\"; by\n> assuming that Pg happens to default to \"btree\", you're just making the\n> process of index restoration more fragile.\n\nOK, how about this patch? It just creates a macro so btree is a clear\ndefault. It no longer affects pg_dump.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/parser/analyze.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/parser/analyze.c,v\nretrieving revision 1.217\ndiff -c -r1.217 analyze.c\n*** src/backend/parser/analyze.c\t6 Mar 2002 06:09:51 -0000\t1.217\n--- src/backend/parser/analyze.c\t8 Mar 2002 16:25:59 -0000\n***************\n*** 16,21 ****\n--- 16,22 ----\n #include \"access/heapam.h\"\n #include \"catalog/catname.h\"\n #include \"catalog/heap.h\"\n+ #include \"catalog/index.h\"\n #include \"catalog/pg_index.h\"\n #include \"catalog/pg_type.h\"\n #include \"nodes/makefuncs.h\"\n***************\n*** 1049,1055 ****\n \t\t\tindex->idxname = NULL;\t\t/* will set it later */\n \n \t\tindex->relname = cxt->relname;\n! \t\tindex->accessMethod = \"btree\";\n \t\tindex->indexParams = NIL;\n \t\tindex->whereClause = NULL;\n \n--- 1050,1056 ----\n \t\t\tindex->idxname = NULL;\t\t/* will set it later */\n \n \t\tindex->relname = cxt->relname;\n! \t\tindex->accessMethod = DEFAULT_INDEX_TYPE;\n \t\tindex->indexParams = NIL;\n \t\tindex->whereClause = NULL;\n \nIndex: src/backend/parser/gram.y\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.287\ndiff -c -r2.287 gram.y\n*** src/backend/parser/gram.y\t7 Mar 2002 16:35:35 -0000\t2.287\n--- src/backend/parser/gram.y\t8 Mar 2002 16:26:03 -0000\n***************\n*** 51,56 ****\n--- 51,57 ----\n #include <ctype.h>\n \n #include \"access/htup.h\"\n+ #include \"catalog/index.h\"\n #include \"catalog/pg_type.h\"\n #include \"nodes/params.h\"\n #include \"nodes/parsenodes.h\"\n***************\n*** 2539,2545 ****\n \t\t;\n \n access_method_clause: USING access_method\t\t{ $$ = $2; }\n! 
\t\t| /*EMPTY*/\t\t\t\t\t\t\t\t{ $$ = \"btree\"; }\n \t\t;\n \n index_params: index_list\t\t\t\t\t\t{ $$ = $1; }\n--- 2540,2547 ----\n \t\t;\n \n access_method_clause: USING access_method\t\t{ $$ = $2; }\n! \t\t/* If btree changes as our default, update pg_get_indexdef() */\n! \t\t| /*EMPTY*/\t\t\t\t\t\t\t\t{ $$ = DEFAULT_INDEX_TYPE; }\n \t\t;\n \n index_params: index_list\t\t\t\t\t\t{ $$ = $1; }\nIndex: src/include/catalog/index.h\n===================================================================\nRCS file: /cvsroot/pgsql/src/include/catalog/index.h,v\nretrieving revision 1.44\ndiff -c -r1.44 index.h\n*** src/include/catalog/index.h\t19 Feb 2002 20:11:19 -0000\t1.44\n--- src/include/catalog/index.h\t8 Mar 2002 16:26:12 -0000\n***************\n*** 18,23 ****\n--- 18,24 ----\n #include \"catalog/pg_index.h\"\n #include \"nodes/execnodes.h\"\n \n+ #define DEFAULT_INDEX_TYPE\t\"btree\"\n \n /* Typedef for callback function for IndexBuildHeapScan */\n typedef void (*IndexBuildCallback) (Relation index,",
"msg_date": "Fri, 8 Mar 2002 12:26:08 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Index USING in pg_dump"
},
{
"msg_contents": "\nOK, there was a tie in votes of whether we should remove \"USING btree\"\nfrom pg_dump, so it isn't worth changing it. I will apply the following\npatch that adds DEFAULT_INDEX_TYPE so things are clearer.\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> Neil Conway wrote:\n> > On Fri, Mar 08, 2002 at 11:07:57AM -0500, Bruce Momjian wrote:\n> > > Tom Lane wrote:\n> > > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > > This is possible because btree is the default. TODO item is:\n> > > > > \t* Remove USING clause from pg_get_indexdef() if index is btree (Bruce)\n> > > > \n> > > > I do not think this is necessary or helpful. The only possible\n> > > > reason to change it would be if we thought btree might someday\n> > > > not be the default index type --- but no such change is on the\n> > > > horizon. And if one was, you've just embedded special knowledge\n> > > > about btree in yet one more place...\n> > > \n> > > Yes, but it doesn't look like the way they created it.\n> > \n> > Why is this relevant?\n> > \n> > > Very few use\n> > > USING in there queries. Why show it in pg_dump output?\n> > \n> > I agree with Tom: this seems like a waste of time, and may even be worse\n> > than the current pg_dump output. The type of the index is \"btree\"; by\n> > assuming that Pg happens to default to \"btree\", you're just making the\n> > process of index restoration more fragile.\n> \n> OK, how about this patch? It just creates a macro so btree is a clear\n> default. It no longer affects pg_dump.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n> Index: src/backend/parser/analyze.c\n> ===================================================================\n> RCS file: /cvsroot/pgsql/src/backend/parser/analyze.c,v\n> retrieving revision 1.217\n> diff -c -r1.217 analyze.c\n> *** src/backend/parser/analyze.c\t6 Mar 2002 06:09:51 -0000\t1.217\n> --- src/backend/parser/analyze.c\t8 Mar 2002 16:25:59 -0000\n> ***************\n> *** 16,21 ****\n> --- 16,22 ----\n> #include \"access/heapam.h\"\n> #include \"catalog/catname.h\"\n> #include \"catalog/heap.h\"\n> + #include \"catalog/index.h\"\n> #include \"catalog/pg_index.h\"\n> #include \"catalog/pg_type.h\"\n> #include \"nodes/makefuncs.h\"\n> ***************\n> *** 1049,1055 ****\n> \t\t\tindex->idxname = NULL;\t\t/* will set it later */\n> \n> \t\tindex->relname = cxt->relname;\n> ! \t\tindex->accessMethod = \"btree\";\n> \t\tindex->indexParams = NIL;\n> \t\tindex->whereClause = NULL;\n> \n> --- 1050,1056 ----\n> \t\t\tindex->idxname = NULL;\t\t/* will set it later */\n> \n> \t\tindex->relname = cxt->relname;\n> ! \t\tindex->accessMethod = DEFAULT_INDEX_TYPE;\n> \t\tindex->indexParams = NIL;\n> \t\tindex->whereClause = NULL;\n> \n> Index: src/backend/parser/gram.y\n> ===================================================================\n> RCS file: /cvsroot/pgsql/src/backend/parser/gram.y,v\n> retrieving revision 2.287\n> diff -c -r2.287 gram.y\n> *** src/backend/parser/gram.y\t7 Mar 2002 16:35:35 -0000\t2.287\n> --- src/backend/parser/gram.y\t8 Mar 2002 16:26:03 -0000\n> ***************\n> *** 51,56 ****\n> --- 51,57 ----\n> #include <ctype.h>\n> \n> #include \"access/htup.h\"\n> + #include \"catalog/index.h\"\n> #include \"catalog/pg_type.h\"\n> #include \"nodes/params.h\"\n> #include \"nodes/parsenodes.h\"\n> ***************\n> *** 2539,2545 ****\n> \t\t;\n> \n> access_method_clause: USING access_method\t\t{ $$ = $2; }\n> ! 
\t\t| /*EMPTY*/\t\t\t\t\t\t\t\t{ $$ = \"btree\"; }\n> \t\t;\n> \n> index_params: index_list\t\t\t\t\t\t{ $$ = $1; }\n> --- 2540,2547 ----\n> \t\t;\n> \n> access_method_clause: USING access_method\t\t{ $$ = $2; }\n> ! \t\t/* If btree changes as our default, update pg_get_indexdef() */\n> ! \t\t| /*EMPTY*/\t\t\t\t\t\t\t\t{ $$ = DEFAULT_INDEX_TYPE; }\n> \t\t;\n> \n> index_params: index_list\t\t\t\t\t\t{ $$ = $1; }\n> Index: src/include/catalog/index.h\n> ===================================================================\n> RCS file: /cvsroot/pgsql/src/include/catalog/index.h,v\n> retrieving revision 1.44\n> diff -c -r1.44 index.h\n> *** src/include/catalog/index.h\t19 Feb 2002 20:11:19 -0000\t1.44\n> --- src/include/catalog/index.h\t8 Mar 2002 16:26:12 -0000\n> ***************\n> *** 18,23 ****\n> --- 18,24 ----\n> #include \"catalog/pg_index.h\"\n> #include \"nodes/execnodes.h\"\n> \n> + #define DEFAULT_INDEX_TYPE\t\"btree\"\n> \n> /* Typedef for callback function for IndexBuildHeapScan */\n> typedef void (*IndexBuildCallback) (Relation index,\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 10 Mar 2002 01:01:37 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Index USING in pg_dump"
}
] |
[
{
"msg_contents": "I have an old server which crashed yesterday, which had a few Postgresql\ndatabase running on v6.5. The operating system would not load, but I managed\nto extract the files from the drive by placing it as a secondary drive on\nanother machine.\n\nThe rough and ready solution is to move the databases to our newer server,\nwhich is running v7.0, but I now have the problem that there is apparently\nno way to upgrade the database files to the new version (without having a\nserver running 6.5 to dump them).\n\nIs there a way to convert the databases from the files?\n\n\n\n\n",
"msg_date": "Fri, 08 Mar 2002 06:59:25 GMT",
"msg_from": "\"Adam Wyard\" <ism@candela.com.au>",
"msg_from_op": true,
"msg_subject": "Update 6.5 database files to 7.0"
},
{
"msg_contents": "\"Adam Wyard\" <ism@candela.com.au> writes:\n\n> The rough and ready solution is to move the databases to our newer server,\n> which is running v7.0, but I now have the problem that there is apparently\n> no way to upgrade the database files to the new version (without having a\n> server running 6.5 to dump them).\n> \n> Is there a way to convert the databases from the files?\n\nNo, but:\n\nftp://ftp.us.postgresql.org/source/v6.5/postgresql-6.5.3.tar.gz\n\nYou should be able to build it and dump out your existing data files\nto be imported into a later version.\n\n-Doug\n-- \nDoug McNaught Wireboard Industries http://www.wireboard.com/\n\n Custom software development, systems and network consulting.\n Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...\n",
"msg_date": "13 Mar 2002 12:12:36 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Update 6.5 database files to 7.0"
}
] |
[
{
"msg_contents": "This new patch corrects 2 new bugs:\nbug 1:\nEXEC SQL define JOKER '?';\nEXEC SQL define LINE \"LINE\";\ncould not be parsed\n\nbug 2:\nEXEC SQL define LEN 2;\nmemset(dst, '?', LEN);\n\nwas translated into\n\nmemset(dst, '?', 2\n#line XX \"thefile.ec\"\n);\n\nwhich could not be compiled with gcc for instance\n\nNicolas BAZIN",
"msg_date": "Fri, 8 Mar 2002 18:35:18 +1100",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": true,
"msg_subject": "Additional fixes to ecpg - please apply patch"
},
{
"msg_contents": "It seems you patch is reversed.\n\nAnyway, I will look into it. I will have to change it some though as it\ncontains C++ comments. :-)\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Sun, 10 Mar 2002 12:40:36 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Additional fixes to ecpg - please apply patch"
},
{
"msg_contents": "On Fri, Mar 08, 2002 at 06:35:18PM +1100, Nicolas Bazin wrote:\n> This new patch corrects 2 new bugs:\n\nSlighlty differently fixed this in CVS.\n\nThanks for reporting the bugs. Please tell me of my fix is not\nsufficient.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Sun, 10 Mar 2002 13:10:36 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Additional fixes to ecpg - please apply patch"
},
{
"msg_contents": "On Sun, Mar 10, 2002 at 12:40:36PM +0100, Michael Meskes wrote:\n> \n> Anyway, I will look into it. I will have to change it some though as it\n> contains C++ comments. :-)\n\nWhich are also valid C comments nowadays, IIRC. OTOH not all compilers\nseem to honour that change...\n\n\nJeroen\n\n",
"msg_date": "Sun, 10 Mar 2002 14:28:12 +0100",
"msg_from": "jtv <jtv@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: Additional fixes to ecpg - please apply patch"
},
{
"msg_contents": "On Sun, Mar 10, 2002 at 02:28:12PM +0100, jtv wrote:\n> On Sun, Mar 10, 2002 at 12:40:36PM +0100, Michael Meskes wrote:\n> > \n> > Anyway, I will look into it. I will have to change it some though as it\n> > contains C++ comments. :-)\n> \n> Which are also valid C comments nowadays, IIRC. OTOH not all compilers\n> seem to honour that change...\n\nYes, that's what I meant. :-(\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Sun, 10 Mar 2002 15:39:06 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Additional fixes to ecpg - please apply patch"
},
{
"msg_contents": "jtv wrote:\n> On Sun, Mar 10, 2002 at 12:40:36PM +0100, Michael Meskes wrote:\n> > \n> > Anyway, I will look into it. I will have to change it some though as it\n> > contains C++ comments. :-)\n> \n> Which are also valid C comments nowadays, IIRC. OTOH not all compilers\n> seem to honour that change...\n\nWe don't support // comments in PostgreSQL in C. Too many platforms\ndon't accept them. We are not targeting only modern systems.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 13 Mar 2002 16:39:06 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Additional fixes to ecpg - please apply patch"
}
] |
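The second bug Nicolas describes comes down to where the preprocessor emits its `#line` re-sync after expanding a defined symbol: emitted immediately after the replacement text, it lands in the middle of the statement. A minimal sketch of the fix idea, with illustrative names (this is not the actual ecpg code), is to defer the directive until the whole statement has been written out:

```c
#include <stdio.h>
#include <string.h>

/* Sketch (illustrative names, not ecpg's own code): expand a defined
 * symbol inside a statement, appending the "#line" re-sync only after
 * the full statement, so the generated C stays compilable. */
static void expand_define(char *out, size_t outsz, const char *stmt,
                          const char *sym, const char *repl,
                          int next_line, const char *fname)
{
    const char *p = strstr(stmt, sym);

    if (p == NULL)
    {
        snprintf(out, outsz, "%s", stmt);   /* nothing to expand */
        return;
    }
    snprintf(out, outsz, "%.*s%s%s\n#line %d \"%s\"",
             (int) (p - stmt), stmt,        /* text before the symbol */
             repl,                          /* replacement text       */
             p + strlen(sym),               /* rest, incl. the ';'    */
             next_line, fname);             /* deferred line re-sync  */
}
```

With the directive emitted on a line of its own after the `;`, the output compiles and the line numbers stay in sync for whatever follows.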
[
{
"msg_contents": "Hi,\n\nThe inventor of the B-Tree algorythm, R. Bayer, published \nin the past years ('96-'01) several scientific papers where \nhe describes the new UB-Tree algorythm. It is based on \nordinary B-Tree implementation but allows >multidimensional< \nindexing.\n\nApart from the need to update only one index instead of\nseveral for each table insert/delete the UB-Tree indexing\nallows to perform extremly performant lookups for queries \nthat contain constraints for several columns (dimensions)\nand sort criterias, like:\n\nSELECT * FROM EMPLOYEES WHERE ID>10 AND ID<100 AND INCOME>5000 SORTED BY NAME\n\nSuch a query could profit from an UB-Tree index on the columns\n(dimensions) ID, INCOME and NAME - all at once, using only \none UB-Tree!\n\nSo the UB-Tree allows you:\n - to make range queries on all dimensions at once, and\n - read the result in sorted order (with only little overhead), where\n - each dimension can be read in sorted or reverse order\n\nI guess such an indexing method would please any SQL database. \nHas anybody of you ever stumbled across the UB-Tree related\nalgorythms?\n\nIf not, you can download several PDF documents describing\nthe UB-Tree related algorythms from this URL (in english \nlanguage!):\n\nhttp://mistral.in.tum.de\n\nI found no free implementation of the UB-Tree. The team\nof R. Bayer only released closed source and sell it.\n\nkind regards,\nRobert Schrem\n",
"msg_date": "Fri, 8 Mar 2002 16:48:15 +0100",
"msg_from": "Robert Schrem <robert.schrem@WiredMinds.de>",
"msg_from_op": true,
"msg_subject": "UB-Tree"
},
{
"msg_contents": "On Fri, 2002-03-08 at 20:48, Robert Schrem wrote:\n> \n> If not, you can download several PDF documents describing\n> the UB-Tree related algorythms from this URL (in english \n> language!):\n> \n> http://mistral.in.tum.de\n\nThe last one tere seems to be the original explanation of the idea.\n \n> I found no free implementation of the UB-Tree. The team\n> of R. Bayer only released closed source and sell it.\n\nThis technique seems to be a good candidate for implementing using GiST\nor perhaps just defined using the <. operator mentioned there. \n\nMappign from ordinary query with several = , < and between may be a\nlittle tricky though.\n\nThey may also have patents on it, so we should move carefully here.\n\n-------------\nHannu\n\n\n",
"msg_date": "09 Mar 2002 00:37:32 +0500",
"msg_from": "Hannu Krosing <hannu@krosing.net>",
"msg_from_op": false,
"msg_subject": "Re: UB-Tree"
}
] |
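For readers who don't want to wade through the papers in the thread above: the core trick is that a UB-Tree is an ordinary B-Tree whose key is the Z-address (Morton code) of the indexed attributes, formed by interleaving their bits; multidimensional range queries then become unions of Z-address intervals. A minimal two-dimensional sketch (my own illustration, not code from the papers):

```c
#include <stdint.h>

/* Interleave the bits of two 32-bit keys into one 64-bit Z-address
 * (Morton code): bit i of x goes to bit 2i, bit i of y to bit 2i+1.
 * A UB-Tree stores tuples in a plain B-Tree ordered by this value. */
static uint64_t z_address(uint32_t x, uint32_t y)
{
    uint64_t z = 0;

    for (int i = 0; i < 32; i++)
    {
        z |= (uint64_t) ((x >> i) & 1) << (2 * i);
        z |= (uint64_t) ((y >> i) & 1) << (2 * i + 1);
    }
    return z;
}
```

Points that are close in the two-dimensional key space get nearby Z-addresses, which is what lets a single B-Tree serve range restrictions on both columns at once.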
[
{
"msg_contents": "\n> > Yes, but it doesn't look like the way they created it.\n> \n> (a) And you know that how? (b) Are we also supposed to preserve\n> spacing, keyword case, etc? Not much of an argument...\n\nI think the initial idea was rather to try to use most common\nsyntax where possible, and USING is not very common :-)\n\nAndreas\n",
"msg_date": "Fri, 8 Mar 2002 19:06:55 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Index USING in pg_dump "
}
] |
[
{
"msg_contents": "hi all.\nWe have done a like tpch benchmark and we would like to have some performance tips.\nthe benchmark is based on this database:\n\nwe are creating indexes on indeices.sql\n\n\nThere are 22 ad-hoc queries running on diferent streams while there are concurrent insertions and deletes, numeric are part of the specifications.\nAre there any tips?\n\nthanks in advance",
"msg_date": "Fri, 8 Mar 2002 20:05:35 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "help on tpch benchmark"
}
] |
[
{
"msg_contents": "Hi I'm new to the list, and I'm new to the PostgreSQL also. But I have been using Object Relation Mapping for a period of time. I would like to put native binding with PostgreSQL . It is fairly easy to read and write Object into the relayed table e.g.\n\ncreate table base (\nmyname text,\nunique( myname )\n);\n\ncreate table child (\nmyfather base,\nmyname text\n);\n\nINSERT INTO base ( myname ) Values ( 'alex' ) ;\nINSERT 56578 1 <<---- oid\nINSERT INTO child ( myfather, myname ) values ( 56578::base, 'alexbaby' );\nINSERT 56579 1 <<---- oid\n\nHowever, there is no way to get the value back in the WHERE clause. because the return type is 'base' but the value output ( correct me if I'm wrong from digging the source by hand ) is actually oid returns int4 from internal seteval() function.\nselect * from child;\nmyfather myname\n-------------------\n56578 alexbaby\n\nIt could be a easy fix in the jdbc, or c to match the seteval(base.oid) with int4.[string, string] compare, but then I need to loop through the full Record Set by hand to get the data. is there a possible way to do some function to convert the TYPE 'base' to oid or int4 or string?\nso I can do something like this\n\nSELECT * from child where myfather=56578::base;\n\nor how am I getting internal seteval to work right with the return set from a custom function.\nI really want to see this coming out right... thanks a lot.\nAlex\n\n\n\n",
"msg_date": "Fri, 8 Mar 2002 14:42:22 -0600",
"msg_from": "alex@AvengerGear.com (Debian User)",
"msg_from_op": true,
"msg_subject": "Object ID reference"
},
{
"msg_contents": "Alex,\n\nMost of the Object Relation Mapping I have seen get the id from a\nspecial mechanism, so they know it before hand?\n\nFYI oid's are not guaranteed to be unique in Postgres.\n\nDave\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Debian User\nSent: Friday, March 08, 2002 3:42 PM\nTo: pgsql-hackers@postgresql.org\nSubject: [HACKERS] Object ID reference\n\n\nHi I'm new to the list, and I'm new to the PostgreSQL also. But I have\nbeen using Object Relation Mapping for a period of time. I would like to\nput native binding with PostgreSQL . It is fairly easy to read and write\nObject into the relayed table e.g.\n\ncreate table base (\nmyname text,\nunique( myname )\n);\n\ncreate table child (\nmyfather base,\nmyname text\n);\n\nINSERT INTO base ( myname ) Values ( 'alex' ) ;\nINSERT 56578 1 <<---- oid\nINSERT INTO child ( myfather, myname ) values ( 56578::base, 'alexbaby'\n); INSERT 56579 1 <<---- oid\n\nHowever, there is no way to get the value back in the WHERE clause.\nbecause the return type is 'base' but the value output ( correct me if\nI'm wrong from digging the source by hand ) is actually oid returns\nint4 from internal seteval() function. select * from child; myfather\nmyname\n-------------------\n56578 alexbaby\n\nIt could be a easy fix in the jdbc, or c to match the seteval(base.oid)\nwith int4.[string, string] compare, but then I need to loop through the\nfull Record Set by hand to get the data. is there a possible way to do\nsome function to convert the TYPE 'base' to oid or int4 or string? so I\ncan do something like this\n\nSELECT * from child where myfather=56578::base;\n\nor how am I getting internal seteval to work right with the return set\nfrom a custom function. I really want to see this coming out right...\nthanks a lot. 
Alex\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\nhttp://archives.postgresql.org\n\n\n",
"msg_date": "Fri, 8 Mar 2002 16:51:20 -0500",
"msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: Object ID reference"
}
] |
[
{
"msg_contents": "Hi I'm new to the list, and I'm new to the PostgreSQL also. But I have \nbeen using Object Relation Mapping for a period of time. I would like to \nput native binding with PostgreSQL . It is fairly easy to read and write \nObject into the relayed table e.g.\n\ncreate table base (\nmyname text,\nunique( myname )\n);\n\ncreate table child (\nmyfather base,\nmyname text\n);\n\nINSERT INTO base ( myname ) Values ( 'alex' ) ;\nINSERT 56578 1 <<---- oid\nINSERT INTO child ( myfather, myname ) values ( 56578::base, 'alexbaby' );\nINSERT 56579 1 <<---- oid\n\nHowever, there is no way to get the value back in the WHERE clause. \nbecause the return type is 'base' but the value output ( correct me if \nI'm wrong from digging the source by hand ) is actually oid returns \nint4 from internal seteval() function.\nselect * from child;\nmyfather myname\n-------------------\n56578 alexbaby\n\nIt could be a easy fix in the jdbc, or c to match the seteval(base.oid) \nwith int4.[string, string] compare, but then I need to loop through the \nfull Record Set by hand to get the data. is there a possible way to do \nsome function to convert the TYPE 'base' to oid or int4 or string?\nso I can do something like this\n\nSELECT * from child where myfather=56578::base;\n\nor how am I getting internal seteval to work right with the return set \nfrom a custom function.\nI really want to see this coming out right... thanks a lot.\nAlex\n\n\n",
"msg_date": "Fri, 08 Mar 2002 14:57:49 -0600",
"msg_from": "alex <alex@dpcgroup.com>",
"msg_from_op": true,
"msg_subject": "Object reference"
}
] |
[
{
"msg_contents": "I have a couple way to map the object base of different database. \nFor PostgreSQL I'm new to it so I try to use oid + object name\nas a handle. and object name converted to table name which in my \nscheme is the same key as oid+tableoid that according to calcualtion\nI should have 4 billion row in each table....( correct me if I'm wrong ) \nand I don't think I'm going to push that limit. But the problem is \nI'm not able to get an object by another object id. e.g \nI get base object id 1234 that is in table child, how is that possible to \nbe compare in the WHERE clause.\n\nSELECT * from child WHERE myfather=1234::base; \n\nwill case error because of miss = operation\nI try to build internal function for it but the return type is a int4 run\nafter by a function seteval with the oid SET datatype in cache? \n( not sure about how this actually work, if someone write the in \nout function for the default table type, please let me know I\ncan give it a hack ) \n\nI know I can always roll back to the generic serial as pkey link as\nreference key in other table but I really really like to see the \nthis object casting work for postgresql. \nThanks alot guys. \nAlex :) \n\nDave Cramer wrote:\n\nAlex,\n\nMost of the Object Relation Mapping I have seen get the id from a\nspecial mechanism, so they know it before hand?\n\nFYI oid's are not guaranteed to be unique in Postgres.\n\nDave\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Debian User\nSent: Friday, March 08, 2002 3:42 PM\nTo: pgsql-hackers@postgresql.org\nSubject: [HACKERS] Object ID reference\n\n\nHi I'm new to the list, and I'm new to the PostgreSQL also. But I have\nbeen using Object Relation Mapping for a period of time. I would like to\nput native binding with PostgreSQL . 
It is fairly easy to read and write\nObject into the relayed table e.g.\n\ncreate table base (\nmyname text,\nunique( myname )\n);\n\ncreate table child (\nmyfather base,\nmyname text\n);\n\nINSERT INTO base ( myname ) Values ( 'alex' ) ;\nINSERT 56578 1 <<---- oid\nINSERT INTO child ( myfather, myname ) values ( 56578::base, 'alexbaby'\n); INSERT 56579 1 <<---- oid\n\nHowever, there is no way to get the value back in the WHERE clause.\nbecause the return type is 'base' but the value output ( correct me if\nI'm wrong from digging the source by hand ) is actually oid returns\nint4 from internal seteval() function. select * from child; myfather\nmyname\n-------------------\n56578 alexbaby\n\nIt could be a easy fix in the jdbc, or c to match the seteval(base.oid)\nwith int4.[string, string] compare, but then I need to loop through the\nfull Record Set by hand to get the data. is there a possible way to do\nsome function to convert the TYPE 'base' to oid or int4 or string? so I\ncan do something like this\n\nSELECT * from child where myfather=56578::base;\n\nor how am I getting internal seteval to work right with the return set\nfrom a custom function. I really want to see this coming out right...\nthanks a lot. Alex\n\n\n",
"msg_date": "Fri, 8 Mar 2002 15:53:13 -0600",
"msg_from": "alex@AvengerGear.com (Debian User)",
"msg_from_op": true,
"msg_subject": "Re: Object ID reference"
}
] |
[
{
"msg_contents": "Okay folks, time to put on your language-lawyer hats ...\n\nI have been trying to puzzle out the SQL rules concerning whether two\nFROM items conflict in the presence of schemas. It is entirely clear\nthat one is not allowed to write\n\n\tSELECT * FROM tab1, tab1;\n\nsince this introduces two FROM items of the same name in the same scope.\nOne *can* write\n\n\tSELECT * FROM tab1, tab1 AS x;\n\nsince the alias x effectively becomes the name of the second FROM item.\nBut what about\n\n\tSELECT * FROM schema1.tab1, schema2.tab1;\n\nIs this allowed? SQL92 appears to allow it: section 6.3 <table\nreference> says:\n\n 3) A <table name> that is exposed by a <table reference> TR shall\n not be the same as any other <table name> that is exposed by a\n <table reference> with the same scope clause as TR.\n\nand <table name> quite clearly means the fully qualified table name.\nHowever, the very next paragraph says\n\n 4) A <correlation name> that is exposed by a <table reference> TR\n shall not be the same as any other <correlation name> that is\n exposed by a <table reference> with the same scope clause as TR\n and shall not be the same as the <qualified identifier> of any\n <table name> that is exposed by a <table reference> with the\n same scope clause as TR.\n\nHere <correlation name> means alias; <qualified identifier> actually means\nthe unqualified name (sic) of the table, stripped of any schema. 
Now as\nfar as I can see, that last restriction makes no sense unless it is\nintended to allow FROM-items to be referenced by unqualified name alone.\nWhich isn't going to work if qualified FROM-items can have duplicate\nunqualified names.\n\nThis restriction also suggests strongly that the spec authors intended\nto allow unqualified references to qualified FROM-items, viz:\n\n\tSELECT tab1.col1 FROM schema1.tab1;\n\nBut as far as I can tell, this is only valid if schema1 is the schema\nthat tab1 would have been found in anyway, cf 5.4 syntax rule 10:\n\n 10)Two <qualified name>s are equal if and only if they have the\n same <qualified identifier> and the same <schema name>, regard-\n less of whether the <schema name>s are implicit or explicit.\n\nI don't much care for this since it implies that the system must try to\nassociate a schema name with the column reference \"tab1.col1\" even\nbefore it looks for matching FROM-items. What if tab1 is actually a\nreference to an alias? We might not find any schema containing tab1.\nCertainly this would completely destroy any hope of having a schema\nsearch path; which path entry should we associate with tab1 if we don't\nfind any tab1?\n\nWhat I would like to do is say the following:\n\n1. Two FROM-items in the same scope cannot have equal <correlation\nname>s or <qualified identifier>s.\n\n2. A column reference that includes a table name but no schema name is\nmatched to FROM-items on the basis of <correlation name> or <qualified\nidentifier> only; that is, \"SELECT tab1.col1 FROM schema1.tab1\" will\nwork whether schema1 is in the search path or not.\n\n3. A column reference that includes a schema name must refer to an\nextant table, and will match only FROM-items that refer to the same\ntable and have the same correlation name. 
(Fine point here: this means\na reference like schema1.tab1.col1 will match \"FROM schema1.tab1\",\nand it will match \"FROM schema1.tab1 AS tab1\", but it will not match\n\"FROM schema1.tab1 AS x\".) Note also that \"same table\" avoids the\nquestion of whether the FROM clause had an implicit or explicit schema\nqualifier.\n\nThese rules essentially say that a FROM entry \"FROM foo.bar\" is exactly\nequivalent to \"FROM foo.bar AS bar\", and also that \"FROM bar\" is exactly\nequivalent to \"FROM foo.bar\" where foo is the schema in which bar is\nfound. I like these symmetries ... and I am not at all sure that they\nhold if we interpret the SQL92 rules literally.\n\nComments? Is anyone familiar with the details of how other DBMSes\nhandle these issues?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Mar 2002 17:00:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Do FROM items of different schemas conflict?"
},
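Tom's proposed rules can be restated as a small matching routine. This is only an illustration of the semantics (hypothetical struct and function names, not the PostgreSQL parser code):

```c
#include <string.h>
#include <stdbool.h>

/* Sketch of the proposed FROM-item matching rules (illustrative names).
 * A FROM item exposes either its alias or, when no alias was given,
 * its unqualified table name. */
typedef struct
{
    const char *schema;   /* schema the table was found in */
    const char *relname;  /* unqualified table name */
    const char *alias;    /* correlation name, NULL if no AS clause */
} FromItem;

/* Rule 2: "tab.col" matches on alias or unqualified name only. */
static bool matches_unqualified(const FromItem *fi, const char *refname)
{
    const char *exposed = fi->alias ? fi->alias : fi->relname;

    return strcmp(exposed, refname) == 0;
}

/* Rule 3: "sch.tab.col" must name the same table, and a differing
 * correlation name hides the table name ("AS x" blocks the match). */
static bool matches_qualified(const FromItem *fi,
                              const char *refschema, const char *reftable)
{
    if (strcmp(fi->schema, refschema) != 0 ||
        strcmp(fi->relname, reftable) != 0)
        return false;
    return fi->alias == NULL || strcmp(fi->alias, reftable) == 0;
}
```

Under these rules "FROM foo.bar" behaves exactly like "FROM foo.bar AS bar", which is the symmetry the message argues for.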
{
"msg_contents": "Tom Lane wrote:\n> Okay folks, time to put on your language-lawyer hats ...\n> \n> I have been trying to puzzle out the SQL rules concerning whether two\n> FROM items conflict in the presence of schemas. It is entirely clear\n> that one is not allowed to write\n> \n> \tSELECT * FROM tab1, tab1;\n> \n> since this introduces two FROM items of the same name in the same scope.\n> One *can* write\n> \n> \tSELECT * FROM tab1, tab1 AS x;\n> \n> since the alias x effectively becomes the name of the second FROM item.\n> But what about\n> \n> \tSELECT * FROM schema1.tab1, schema2.tab1;\n\n From my simplistic understanding, I would say if we allowed this, we\nwould have to require the schema designtation be on every reference to\ntab1 in the query. Is that something we can do?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 8 Mar 2002 17:27:39 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Do FROM items of different schemas conflict?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> SELECT * FROM schema1.tab1, schema2.tab1;\n\n> From my simplistic understanding, I would say if we allowed this, we\n> would have to require the schema designtation be on every reference to\n> tab1 in the query. Is that something we can do?\n\nWell, that's what's not entirely clear to me.\n\nIf you write\n\n\tSELECT ... FROM schema1.tab1 AS tab1;\n\nthen clearly this item *can* be referenced by just tab1.col1, and\nprobably a strict reading would say that it *must* be referenced\nthat way (ie, schema1.tab1.col1 should not work). But in the case\nwithout the AS clause, I'm not at all sure what the spec means to\nallow.\n\n(BTW, the equivalent passages in SQL99 are no help; they are several\ntimes longer but utterly fail to clarify the point.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Mar 2002 17:34:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Do FROM items of different schemas conflict? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> One *can* write\n> \n> SELECT * FROM tab1, tab1 AS x;\n> \n> since the alias x effectively becomes the name of the second FROM item.\n> But what about\n> \n> SELECT * FROM schema1.tab1, schema2.tab1;\n> \n> Is this allowed? \n\nTom, I do not have the standard here. But as far as I can tell\nthis is allowed. However, you'll have to refer to these tables\nby the qualified name, like:\n\nSELECT schema1.tab1.col1, schema2.tab1.col5 FROM schema1.tab1,\nschema2.tab1 WHERE...\n\nIf you had\n SELECT * FROM schema1.tab1, schema2.tab2;\nyou could abbreviate:\nSELECT tab1.col1, tab2.col5 FROM schema1.tab1, schema2.tab2 WHERE...\n\ni.e., as long as it is not ambiguous you can omit the schema\nqualification. Otherwise you have to use AS, like in the non-schema\ncase when you are using the same table twice.\n\nThe idea seems to be: if there is ambiguity, you must use AS.\nAnd you cannot cause an ambiguity with the name you give in the AS\nclause.\n\n\n> I don't much care for this since it implies that the system must try to\n> associate a schema name with the column reference \"tab1.col1\" even\n> before it looks for matching FROM-items. What if tab1 is actually a\n> reference to an alias? We might not find any schema containing tab1.\n> Certainly this would completely destroy any hope of having a schema\n> search path; which path entry should we associate with tab1 if we don't\n> find any tab1?\n> \n\nEach SQL-session has a schema associated with it, which should be the\nschema\nwith the same name as the current userid. That is the schema from where\nyou\nmust take the path.\n\n\n> What I would like to do is say the following:\n> \n> 1. Two FROM-items in the same scope cannot have equal <correlation\n> name>s or <qualified identifier>s.\n> \n\nOnly if the qualified identifiers are exposed. As soon as you give\nthem an alias with AS the original name is hidden.\n\n> 2. 
A column reference that includes a table name but no schema name is\n> matched to FROM-items on the basis of <correlation name> or <qualified\n> identifier> only; that is, \"SELECT tab1.col1 FROM schema1.tab1\" will\n> work whether schema1 is in the search path or not.\n> \n\nYes.\n\n\n> 3. A column reference that includes a schema name must refer to an\n> extant table, and will match only FROM-items that refer to the same\n> table and have the same correlation name. (Fine point here: this means\n> a reference like schema1.tab1.col1 will match \"FROM schema1.tab1\",\n> and it will match \"FROM schema1.tab1 AS tab1\", but it will not match\n> \"FROM schema1.tab1 AS x\".) Note also that \"same table\" avoids the\n> question of whether the FROM clause had an implicit or explicit schema\n> qualifier.\n> \n\nYes, \"schema1.tab1 AS x\" makes \"schema1.tab1\" disappear.\n\n\n> These rules essentially say that a FROM entry \"FROM foo.bar\" is exactly\n> equivalent to \"FROM foo.bar AS bar\",\n\nA small difference. With the first you can refer to columns as\n\nfoo.bar.col1\n\nwith the second you cannot. You must say: bar.col1\n\n> and also that \"FROM bar\" is exactly\n> equivalent to \"FROM foo.bar\" where foo is the schema in which bar is\n> found. \n\nYes, as long as the path for the session schema finds bar in foo \nbefore any other schema.\n\n>I like these symmetries ... and I am not at all sure that they\n> hold if we interpret the SQL92 rules literally.\n> \n> Comments? Is anyone familiar with the details of how other DBMSes\n> handle these issues?\n> \n\nI remember some professor saying that not using the AS clause is a \nbad SQL programming practice. With all these resolution rules one\ntends to agree with that...\n\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Fri, 08 Mar 2002 18:13:37 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Do FROM items of different schemas conflict?"
},
{
"msg_contents": "Tom Lane wrote:\n> Okay folks, time to put on your language-lawyer hats ...\n> \n> \tSELECT * FROM schema1.tab1, schema2.tab1;\n> \n> Is this allowed? SQL92 appears to allow it: section 6.3 <table\n> reference> says:\n\nFWIW:\nThis works in Oracle 8.1.6\n\nConnected to Oracle8i Enterprise Edition Release 8.1.6.3.0\nConnected as cyapps\n\nSQL> select * from apps.plan_table, cyapps.plan_table;\n\n <snip>\n\n24 rows selected\n\n\n\n> This restriction also suggests strongly that the spec authors intended\n> to allow unqualified references to qualified FROM-items, viz:\n> \n> \tSELECT tab1.col1 FROM schema1.tab1;\n> \n\n...so does this...\nSQL> select plan_table.operation from apps.plan_table;\n\n <snip>\n\n12 rows selected\n\n\n> Comments? Is anyone familiar with the details of how other DBMSes\n> handle these issues?\n\nMSSQL 7 seems to handle the first syntax also, but not the second.\n\nJoe\n\n",
"msg_date": "Fri, 08 Mar 2002 15:26:01 -0800",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Do FROM items of different schemas conflict?"
},
{
"msg_contents": "Fernando Nasser <fnasser@redhat.com> writes:\n> If you had\n> SELECT * FROM schema1.tab1, schema2.tab2;\n> you could abbreviate:\n> SELECT tab1.col1, tab2.col5 FROM schema1.tab1, schema2.tab2 WHERE...\n\n> i.e., as long as it is not ambiguous you can omit the schema\n> qualification.\n\nWhat I am wondering about is how you tell whether it is ambiguous.\n\nIn particular, if schema1 is not in the search path then I do not\nsee how the spec can be read to say that \"tab1.col1\" matches \"FROM\nschema1.tab1\" (note no AS here). It does seem that everyone agrees that\nthat is the meaning --- there is a footnote in Date that shows he thinks\nso too --- but as far as I can tell this directly contradicts the text\nof the spec, because there is noplace that says how to match an\nunqualified \"tab1\" against the qualified \"schema1.tab1\", except for\n5.4-10 which would clearly disallow such a match. Where am I missing\nsomething?\n\n>> What I would like to do is say the following:\n>> \n>> 1. Two FROM-items in the same scope cannot have equal <correlation\n>> name>s or <qualified identifier>s.\n\n> Only if the qualified identifiers are exposed. As soon as you give\n> then an alias with AS the original name is hidden.\n\nRight, of course. Sorry for the imprecision.\n\n>> These rules essentially say that a FROM entry \"FROM foo.bar\" is exactly\n>> equivalent to \"FROM foo.bar AS bar\",\n\n> A small difference. With the first you can refer to columns as\n\n> foo.bar.col1\n\n> with the second you cannot. You must say: bar.col1\n\nWell, the point is that I would like to allow that, specifically because\nI would like to say that the equivalence is exact. I don't see any\nvalue in enforcing this particular nitpick.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Mar 2002 18:28:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Do FROM items of different schemas conflict? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Fernando Nasser <fnasser@redhat.com> writes:\n> > If you had\n> > SELECT * FROM schema1.tab1, schema2.tab2;\n> > you could abbreviate:\n> > SELECT tab1.col1, tab2.col5 FROM schema1.tab1, schema2.tab2 WHERE...\n> \n> > i.e., as long as it is not ambiguous you can omit the schema\n> > qualification.\n> \n> What I am wondering about is how you tell whether it is ambiguous.\n> \n\nAmbiguous == \"found more than one match in the list of tables in FROM\"\n\n\n> In particular, if schema1 is not in the search path then I do not\n> see how the spec can be read to say that \"tab1.col1\" matches \"FROM\n> schema1.tab1\" (note no AS here). It does seem that everyone agrees that\n> that is the meaning --- there is a footnote in Date that shows he thinks\n> so too \n\nI will really need some time to see which clauses in the spec can be \ninterpreted that way. But what people seem to believe is that the\nmatch should occur whenever possible (i.e., unless it is ambiguous).\n\n> --- but as far as I can tell this directly contradicts the text\n> of the spec, because there is noplace that says how to match an\n> unqualified \"tab1\" against the qualified \"schema1.tab1\", except for\n> 5.4-10 which would clearly disallow such a match. Where am I missing\n> something?\n> \n\nYes, read in isolation it looks like it forbids it (darn, I wish I had\nthe standard here). But remember that the only place you should look\nfor a match for \"tab1\" is in the things listed in the FROM clause.\nThe POSTQUEL extension of adding the tables for you (if I understood\nright) is an aberration (if it is still supported it will have to be\nremoved).\n\nAs your namespace is now restricted to the FROM clause it is easy to\nsee what would be the \"implicit\" schema you'll give to \"tab1\" -- the\nonly one that you find prefixing a tab1 in the FROM list. 
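To make that concrete, here is a rough sketch in Python (purely illustrative -- this is not spec text or PostgreSQL source, and the names are invented):

```python
# Rough sketch (illustration only, names invented): resolving a
# one-part table reference against the FROM list. Each FROM item
# is (schema, table, alias); alias None means the bare table name
# (the "qualified identifier") is the exposed name.

def resolve(ref, from_items):
    """Return the unique FROM item whose exposed name matches ref."""
    matches = [item for item in from_items
               if (item[2] or item[1]) == ref]
    if len(matches) > 1:
        raise ValueError("ambiguous table reference: %s" % ref)
    if not matches:
        raise ValueError("no such table reference: %s" % ref)
    return matches[0]

# "SELECT tab1.col1 FROM schema1.tab1, schema2.tab2" resolves cleanly:
print(resolve("tab1", [("schema1", "tab1", None), ("schema2", "tab2", None)]))
# -> ('schema1', 'tab1', None)
```

The point is just that the lookup is driven entirely by the exposed names in the FROM list; nothing outside it is consulted.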
If you find\nmore than one, there is ambiguity, so it is not allowed -- AS clause\nis required then.\n\n\n\n> >> These rules essentially say that a FROM entry \"FROM foo.bar\" is exactly\n> >> equivalent to \"FROM foo.bar AS bar\",\n> \n> > A small difference. With the first you can refer to columns as\n> \n> > foo.bar.col1\n> \n> > with the second you cannot. You must say: bar.col1\n> \n> Well, the point is that I would like to allow that, specifically because\n> I would like to say that the equivalence is exact. I don't see any\n> value in enforcing this particular nitpick.\n> \n\nBut you must. As soon as you use \"AS bar\" it is not a table\nname anymore, i.e., it cannot be qualified by a schema. It is\na correlation name, which is a single unqualified name, and whatever\nrefers to it _must_ use only that single name. So \"foo.bar.col1\"\nmakes absolutely no sense after you say \"AS bar\".\n\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Fri, 08 Mar 2002 18:48:42 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Do FROM items of different schemas conflict?"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> This works in Oracle 8.1.6\n\nSo what does Oracle do with\n\nselect plan_table.operation from apps.plan_table, cyapps.plan_table;\n\n??\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Mar 2002 19:19:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Do FROM items of different schemas conflict? "
},
{
"msg_contents": "Fernando Nasser <fnasser@redhat.com> writes:\n> The POSTQUEL extension of adding the tables for you (if I understood\n> right) is an aberration (if it is still supported it will have to be\n> removed).\n\nNo it won't. The implicit-RTE extension doesn't come into play until\nafter you've failed to find a matching RTE. It cannot break queries\nthat are valid according to spec --- it only affects queries that should\nflag an error according to spec.\n\nMy question is about what it means to find a matching RTE and when two\nsimilarly-named RTEs should be rejected as posing a name conflict.\nImplicit RTEs are not relevant to the problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Mar 2002 19:29:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Do FROM items of different schemas conflict? "
},
{
"msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> \n>>This works in Oracle 8.1.6\n>>\n> \n> So what does Oracle do with\n> \n> select plan_table.operation from apps.plan_table, cyapps.plan_table;\n> \n> ??\n> \n\nSQL> select plan_table.operation from apps.plan_table, cyapps.plan_table;\n\nselect plan_table.operation from apps.plan_table, cyapps.plan_table\n\nORA-00918: column ambiguously defined\n\n\nJoe\n\n",
"msg_date": "Fri, 08 Mar 2002 16:30:15 -0800",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Do FROM items of different schemas conflict?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Fernando Nasser <fnasser@redhat.com> writes:\n> > The POSTQUEL extension of adding the tables for you (if I understood\n> > right) is an aberration (if it is still supported it will have to be\n> > removed).\n> \n> No it won't. The implicit-RTE extension doesn't come into play until\n> after you've failed to find a matching RTE. It cannot break queries\n> that are valid according to spec --- it only affects queries that should\n> flag an error according to spec.\n> \n> My question is about what it means to find a matching RTE and when two\n> similarly-named RTEs should be rejected as posing a name conflict.\n> Implicit RTEs are not relevant to the problem.\n> \n\nThat was a side question, as I thought this could get in the way.\nI am glad it doesn't.\n\nThe rest I said is still valid and is unrelated to this.\n\nBTW, I believe Oracle got the standard right this time.\nWhat Joe Conway has been posting is exactly what I understood.\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Fri, 08 Mar 2002 19:57:24 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Do FROM items of different schemas conflict?"
},
{
"msg_contents": "Hi,\n\nI'm certainly not a language lawyer, but I tried the following on our Oracle\n8.0.5 install:\n\n* Logged in as two separate users (ypsedba, ypkbdba) and ran\n\n\tcreate table test_from_clause (field1 int)\n\nin both of them.\n\n* Logged in as system (the Oracle super user with access to both users\nschema).\n\n* Executed\n\n\tselect * from ypsedba.test_from_clause\n\nand\n\n\tselect * from ypkbdba.test_from_clause\n\nto verify permissions / sanity.\n\n* Executed\n\n\tselect * from ypsedba.test_from_clause, ypkbdba.test_from_clause\n\nto check your test case.\n\nResults:\n\n* No errors\n\n* Result set had two columns - \"FIELD1\" and \"FIELD1_1\"\n\nAs mentioned above, I'm not a language lawyer so I don't know whether the\nabove is a correct implementation of the standard.\n\nRegards,\n\nMark Pritchard\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\n> Sent: Saturday, 9 March 2002 9:01 AM\n> To: pgsql-hackers@postgreSQL.org\n> Subject: [HACKERS] Do FROM items of different schemas conflict?\n>\n>\n> Okay folks, time to put on your language-lawyer hats ...\n>\n> I have been trying to puzzle out the SQL rules concerning whether two\n> FROM items conflict in the presence of schemas. It is entirely clear\n> that one is not allowed to write\n>\n> \tSELECT * FROM tab1, tab1;\n>\n> since this introduces two FROM items of the same name in the same scope.\n> One *can* write\n>\n> \tSELECT * FROM tab1, tab1 AS x;\n>\n> since the alias x effectively becomes the name of the second FROM item.\n> But what about\n>\n> \tSELECT * FROM schema1.tab1, schema2.tab1;\n>\n> Is this allowed? 
SQL92 appears to allow it: section 6.3 <table\n> reference> says:\n>\n> 3) A <table name> that is exposed by a <table reference> TR shall\n> not be the same as any other <table name> that is exposed by a\n> <table reference> with the same scope clause as TR.\n>\n> and <table name> quite clearly means the fully qualified table name.\n> However, the very next paragraph says\n>\n> 4) A <correlation name> that is exposed by a <table reference> TR\n> shall not be the same as any other <correlation name> that is\n> exposed by a <table reference> with the same scope\n> clause as TR\n> and shall not be the same as the <qualified identifier> of any\n> <table name> that is exposed by a <table reference> with the\n> same scope clause as TR.\n>\n> Here <correlation name> means alias; <qualified identifier> actually means\n> the unqualified name (sic) of the table, stripped of any schema. Now as\n> far as I can see, that last restriction makes no sense unless it is\n> intended to allow FROM-items to be referenced by unqualified name alone.\n> Which isn't going to work if qualified FROM-items can have duplicate\n> unqualified names.\n>\n> This restriction also suggests strongly that the spec authors intended\n> to allow unqualified references to qualified FROM-items, viz:\n>\n> \tSELECT tab1.col1 FROM schema1.tab1;\n>\n> But as far as I can tell, this is only valid if schema1 is the schema\n> that tab1 would have been found in anyway, cf 5.4 syntax rule 10:\n>\n> 10)Two <qualified name>s are equal if and only if they have the\n> same <qualified identifier> and the same <schema\n> name>, regard-\n> less of whether the <schema name>s are implicit or explicit.\n>\n> I don't much care for this since it implies that the system must try to\n> associate a schema name with the column reference \"tab1.col1\" even\n> before it looks for matching FROM-items. What if tab1 is actually a\n> reference to an alias? 
We might not find any schema containing tab1.\n> Certainly this would completely destroy any hope of having a schema\n> search path; which path entry should we associate with tab1 if we don't\n> find any tab1?\n>\n> What I would like to do is say the following:\n>\n> 1. Two FROM-items in the same scope cannot have equal <correlation\n> name>s or <qualified identifier>s.\n>\n> 2. A column reference that includes a table name but no schema name is\n> matched to FROM-items on the basis of <correlation name> or <qualified\n> identifier> only; that is, \"SELECT tab1.col1 FROM schema1.tab1\" will\n> work whether schema1 is in the search path or not.\n>\n> 3. A column reference that includes a schema name must refer to an\n> extant table, and will match only FROM-items that refer to the same\n> table and have the same correlation name. (Fine point here: this means\n> a reference like schema1.tab1.col1 will match \"FROM schema1.tab1\",\n> and it will match \"FROM schema1.tab1 AS tab1\", but it will not match\n> \"FROM schema1.tab1 AS x\".) Note also that \"same table\" avoids the\n> question of whether the FROM clause had an implicit or explicit schema\n> qualifier.\n>\n> These rules essentially say that a FROM entry \"FROM foo.bar\" is exactly\n> equivalent to \"FROM foo.bar AS bar\", and also that \"FROM bar\" is exactly\n> equivalent to \"FROM foo.bar\" where foo is the schema in which bar is\n> found. I like these symmetries ... and I am not at all sure that they\n> hold if we interpret the SQL92 rules literally.\n>\n> Comments? Is anyone familiar with the details of how other DBMSes\n> handle these issues?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Tue, 12 Mar 2002 07:56:23 +1100",
"msg_from": "\"Mark Pritchard\" <mark@tangent.net.au>",
"msg_from_op": false,
"msg_subject": "Re: Do FROM items of different schemas conflict?"
},
{
"msg_contents": "Tom Lane writes:\n\n> But what about\n>\n> \tSELECT * FROM schema1.tab1, schema2.tab1;\n>\n> Is this allowed?\n\nYes. You would just have to schema-qualify any column references.\n\n> SQL92 appears to allow it: section 6.3 <table reference> says:\n>\n> 3) A <table name> that is exposed by a <table reference> TR shall\n> not be the same as any other <table name> that is exposed by a\n> <table reference> with the same scope clause as TR.\n>\n> and <table name> quite clearly means the fully qualified table name.\n> However, the very next paragraph says\n>\n> 4) A <correlation name> that is exposed by a <table reference> TR\n> shall not be the same as any other <correlation name> that is\n> exposed by a <table reference> with the same scope clause as TR\n> and shall not be the same as the <qualified identifier> of any\n> <table name> that is exposed by a <table reference> with the\n> same scope clause as TR.\n>\n> Here <correlation name> means alias; <qualified identifier> actually means\n> the unqualified name (sic) of the table, stripped of any schema. Now as\n> far as I can see, that last restriction makes no sense unless it is\n> intended to allow FROM-items to be referenced by unqualified name alone.\n\nI think you should be able to say\n\n SELECT * FROM schema1.tab1 WHERE tab1.col1 > 0;\n\n> Which isn't going to work if qualified FROM-items can have duplicate\n> unqualified names.\n\nI think the bottom line is that mixing aliased tables and non-aliased\ntables in FROM lists is going to be confusing. 
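As a rough model (plain Python, invented names -- not from the spec or from PostgreSQL), the "same exposed name" test in rule 3 of Tom's proposal would behave like this:

```python
# Sketch of Tom's proposed rule 3 (illustration only, names invented):
# a schema-qualified reference schema1.tab1.col1 matches a FROM item
# only if the item refers to that table *and* exposes the same name,
# so it matches "FROM schema1.tab1" and "FROM schema1.tab1 AS tab1"
# but not "FROM schema1.tab1 AS x".

def rule3_matches(ref_schema, ref_table, from_item):
    schema, table, alias = from_item
    exposed = alias if alias is not None else table
    return schema == ref_schema and table == ref_table and exposed == ref_table

print(rule3_matches("schema1", "tab1", ("schema1", "tab1", None)))    # True
print(rule3_matches("schema1", "tab1", ("schema1", "tab1", "tab1")))  # True
print(rule3_matches("schema1", "tab1", ("schema1", "tab1", "x")))     # False
```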
But for those that stick\nto either approach, the restrictions are most flexible, yet for those that\nmix it's a sane subset.\n\nFor instance, if you don't use aliases you can say\n\n SELECT * FROM sc1.tab1, sc2.tab1 WHERE sc1.tab1.col1 = sc2.tab1.col1;\n\nwhich looks reasonable.\n\nIf you use aliases it basically says the aliases have to be different.\n\nIf you mix, it prevents you from doing\n\n SELECT * FROM schema1.tab1, foo AS tab1;\n\nsince the reference \"tab1\" is ambiguous.\n\nAnother view is that in a parallel world, explicit table aliases could be\nput into a pseudo-schema ALIAS, so you could write\n\n SELECT * FROM schema1.tab1, foo AS tab1\n WHERE schema1.tab1.col1 = ALIAS.tab1.col1;\n\nBut this is not the real world, so the ambiguity protection afforded to\ntable aliases needs to be stronger than for non-aliased table references.\n\n> This restriction also suggests strongly that the spec authors intended\n> to allow unqualified references to qualified FROM-items, viz:\n>\n> \tSELECT tab1.col1 FROM schema1.tab1;\n>\n> But as far as I can tell, this is only valid if schema1 is the schema\n> that tab1 would have been found in anyway, cf 5.4 syntax rule 10:\n>\n> 10)Two <qualified name>s are equal if and only if they have the\n> same <qualified identifier> and the same <schema name>, regard-\n> less of whether the <schema name>s are implicit or explicit.\n>\n> I don't much care for this since it implies that the system must try to\n> associate a schema name with the column reference \"tab1.col1\" even\n> before it looks for matching FROM-items. What if tab1 is actually a\n> reference to an alias? We might not find any schema containing tab1.\n> Certainly this would completely destroy any hope of having a schema\n> search path; which path entry should we associate with tab1 if we don't\n> find any tab1?\n\nSyntactically you can resolve tab1.col1 as either\n\n <correlation name> . <column name>\n == <identifier> . <identifier>\n\nor\n\n <table name> . 
<column name>\n == <qualified name> . <identifier>\n\nso you can choose to ignore that rule for <qualified name> if no explicit\nschema name is given.\n\nWow, that's whacky.\n\n> What I would like to do is say the following:\n>\n> 1. Two FROM-items in the same scope cannot have equal <correlation\n> name>s or <qualified identifier>s.\n\nI would like to see the example at the very top working, but if it's too\ncrazy, we can worry about it in a future life.\n\n> 2. A column reference that includes a table name but no schema name is\n> matched to FROM-items on the basis of <correlation name> or <qualified\n> identifier> only; that is, \"SELECT tab1.col1 FROM schema1.tab1\" will\n> work whether schema1 is in the search path or not.\n\nYes.\n\n> 3. A column reference that includes a schema name must refer to an\n> extant table, and will match only FROM-items that refer to the same\n> table and have the same correlation name. (Fine point here: this means\n> a reference like schema1.tab1.col1 will match \"FROM schema1.tab1\",\n> and it will match \"FROM schema1.tab1 AS tab1\",\n\nIs this really necessary? It seems confusing.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 11 Mar 2002 16:21:17 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Do FROM items of different schemas conflict?"
}
]
[
{
"msg_contents": "As has been noted on this list before, query timeouts are not implemented\nin pgsql-jdbc (see\n\n http://archives.postgresql.org/pgsql-bugs/2000-12/msg00093.php\n\n). This is currently causing a problem for me, and I might (no\npromises) be interested in implementing it. So I'm testing the waters. If\nI did submit a patch for this, would the developers here be interested?\n\nj\n\n",
"msg_date": "Fri, 8 Mar 2002 17:05:47 -0500 (EST)",
"msg_from": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com>",
"msg_from_op": true,
"msg_subject": "implementing query timeout"
},
{
"msg_contents": "Jessica,\n\nYes we would be interested\n\nThanks,\n\nDave \n\n-----Original Message-----\nFrom: pgsql-jdbc-owner@postgresql.org\n[mailto:pgsql-jdbc-owner@postgresql.org] On Behalf Of Jessica Perry\nHekman\nSent: Friday, March 08, 2002 5:06 PM\nTo: pgsql-jdbc@postgresql.org\nSubject: [JDBC] implementing query timeout\n\n\nAs has been noted on this list before, query timeouts are not\nimplemented in pgsql-jdbc (see\n\n http://archives.postgresql.org/pgsql-bugs/2000-12/msg00093.php\n\n). This is currently causing a problem for me, and I might (no\npromises) be interested in implementing it. So I'm testing the waters.\nIf I did submit a patch for this, would the developers here be\ninterested?\n\nj\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html\n\n\n",
"msg_date": "Wed, 13 Mar 2002 12:40:10 -0500",
"msg_from": "\"Dave Cramer\" <Dave@micro-automation.net>",
"msg_from_op": false,
"msg_subject": "Re: implementing query timeout"
},
{
"msg_contents": "Jessica Perry Hekman wrote:\n> As has been noted on this list before, query timeouts are not implemented\n> in pgsql-jdbc (see\n> \n> http://archives.postgresql.org/pgsql-bugs/2000-12/msg00093.php\n> \n> ). This is currently causing a problem for me, and I might (no\n> promises) be interested in implementing it. So I'm testing the waters. If\n> I did submit a patch for this, would the developers here be interested?\n\n[ Hackers list added.]\n\nYou bet, but it would be done in the backend, not in jdbc. Is that OK?\n\nI have some ideas that should make it pretty easy. If you set an\nalarm() in the backend on transaction start, then call the query\ncancel() code if the alarm() goes off, that should do it. Of course,\nyou reset the alarm if the query finishes before the timeout.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 13 Mar 2002 15:20:44 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: implementing query timeout"
},
{
"msg_contents": "Jessica Perry Hekman wrote:\n> As has been noted on this list before, query timeouts are not implemented\n> in pgsql-jdbc (see\n> \n> http://archives.postgresql.org/pgsql-bugs/2000-12/msg00093.php\n> \n> ). This is currently causing a problem for me, and I might (no\n> promises) be interested in implementing it. So I'm testing the waters. If\n> I did submit a patch for this, would the developers here be interested?\n\nLet me also add that Cancel now works in the CVS copy of the jdbc\ndriver.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 13 Mar 2002 15:22:37 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: implementing query timeout"
},
{
"msg_contents": "On Wed, 13 Mar 2002, Bruce Momjian wrote:\n\n> You bet, but it would be done in the backend, not in jdbc. Is that OK?\n\nTheoretically this is okay. I am more comfortable in Java than in C and I\nhadn't looked at the backend code at all, but I'll take a peek and see if\nit looks like something I'd feel comfortable doing.\n\n> I have some ideas that should make it pretty easy. If you set an\n> alarm() in the backend on transaction start, then call the query\n> cancel() code if the alarm() goes off, that should do it. Of course,\n> you reset the alarm if the query finishes before the timeout.\n\nSounds straightforward enough. Hopefully I'll get a chance to look at this\nbefore the end of this week.\n\nThanks!\n\nJessica\n\n\n",
"msg_date": "Wed, 13 Mar 2002 20:55:09 -0500 (EST)",
"msg_from": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com>",
"msg_from_op": true,
"msg_subject": "Re: implementing query timeout"
},
{
"msg_contents": "Hi friends,\n\tI have problems with postgres.jar and tomcat. I have the following exception:\n\n- Excepcion de persistencia:\ncom.kristinaIbs.persistence.ExceptionPersistence: ManagerPersistencePool\n(getConnection).Connection refused. Check that the hostname and port is\ncorrect, and that the postmaster is running with the -i flag, which enables\nTCP/IP networking.\n at\ncom.kristinaIbs.persistence.ManagerPersistencePool.getConnection(ManagerPers\nistencePool.java:112)\n at\ncom.kristinaIbs.user.UserManager.getUserByLogin(UserManager.java:314)\n\nI have the following parameters:\n\t driver \t = org.postgresql.Driver\n\turl = jdbc:postgresql://192.168.0.7:5432/easysite\n\tuser \t = postgres\n\tpassword =\n\nCan you help me, please!!!!!\n\n\n-----Original Message-----\nFrom: pgsql-jdbc-owner@postgresql.org\n[mailto:pgsql-jdbc-owner@postgresql.org] On Behalf Of Bruce Momjian\nSent: Wednesday, 13 March 2002 21:23\nTo: Jessica Perry Hekman\nCC: pgsql-jdbc@postgresql.org; PostgreSQL-development\nSubject: Re: [JDBC] implementing query timeout\n\n\nJessica Perry Hekman wrote:\n> As has been noted on this list before, query timeouts are not implemented\n> in pgsql-jdbc (see\n>\n> http://archives.postgresql.org/pgsql-bugs/2000-12/msg00093.php\n>\n> ). This is currently causing a problem for me, and I might (no\n> promises) be interested in implementing it. So I'm testing the waters. If\n> I did submit a patch for this, would the developers here be interested?\n\nLet me also add that Cancel now works in the CVS copy of the jdbc\ndriver.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n",
"msg_date": "Thu, 14 Mar 2002 08:56:29 +0100",
"msg_from": "\"Jose Javier Gutierrez\" <jgutierrez@kristina.es>",
"msg_from_op": false,
"msg_subject": "problems with Tomcat and postgres"
},
{
"msg_contents": "Hi friend ;P\n\nThe first problem that I can see is a clear \"Connection refused\".\nBefore investigating whether the cause is Tomcat or something else, correct this\nerror! Be sure that the postmaster is running with the -i flag.\n\np.s. usually my run.sh is like:\n\"./postmaster -i -D /home/me/postgresql/data > logfile 2>&1 &\"\n\nCiao, Auri \n\nOn Thu, 14 Mar 2002, Jose Javier Gutierrez wrote:\n\n> Hi friends,\n> \tI have problems with postgres.jar and tomcat. I have de follow exception :\n> \n> - Excepcion de persistencia:\n> com.kristinaIbs.persistence.ExceptionPersistence: ManagerPersistencePool\n> (getConnection).Connection refused. Check that the hostname and port is\n> correct, and that the postmaster is running with the -i flag, which enables\n> TCP/IP networking.\n> at\n> com.kristinaIbs.persistence.ManagerPersistencePool.getConnection(ManagerPers\n> istencePool.java:112)\n> at\n> com.kristinaIbs.user.UserManager.getUserByLogin(UserManager.java:314)\n> \n> I have the follows parameters :\n> \t driver \t = org.postgresql.Driver\n> \turl = jdbc:postgresql://192.168.0.7:5432/easysite\n> \tuser \t = postgres\n> \tpassword =\n> \n> Do you can Help please!!!!!\n\n",
"msg_date": "Thu, 14 Mar 2002 09:20:01 +0100 (CET)",
"msg_from": "Auri Mason <amason@syntrex.com>",
"msg_from_op": false,
"msg_subject": "Re: problems with Tomcat and postgres"
},
{
"msg_contents": "\"Jose Javier Gutierrez\" <jgutierrez@kristina.es> writes:\n\n> com.kristinaIbs.persistence.ExceptionPersistence: ManagerPersistencePool\n> (getConnection).Connection refused. Check that the hostname and port is\n> correct, and that the postmaster is running with the -i flag, which enables\n> TCP/IP networking.\n\nIs the postmaster indeed listening on a TCP/IP port, (usually 5432) or\njust on the Unix-domain socket? You have to specifically turn on\nTCP/IP for security reasons--it's not enabled by default.\n\n-Doug\n-- \nDoug McNaught Wireboard Industries http://www.wireboard.com/\n\n Custom software development, systems and network consulting.\n Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...\n",
"msg_date": "14 Mar 2002 11:05:08 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] problems with Tomcat and postgres"
}
] |
[
{
"msg_contents": "I hope I'm not sending this question to the wrong mailing list. \nLet me know if I am. \n\nI would like to ask about the table reference, e.g. ( 1234::table1 ), \nreturned from a SELECT statement. \ne.g. \n> select col1 from table2 \n> col1\n>---------\n> 1234\n\ncol1's type now comes from ( SET?? not sure) table1, converted to an oid by seteval. \nIs there a function that can convert this to text or int to perform \ncalculations? \nThanks \nAlex\n",
"msg_date": "Fri, 8 Mar 2002 17:13:42 -0600",
"msg_from": "alex@AvengerGear.com (Debian User)",
"msg_from_op": true,
"msg_subject": "Make my question clear."
}
] |
[
{
"msg_contents": "Hi all,\n\nI'm looking for some help with the following case.\n\nWhat I want is to write a bool function that returns true or false\ndepending on whether an int is in an int[].\n\nI've been looking at contrib/int_array, but the operators there just test\nwhether one array is included in another.\n\nDoes anyone have an idea? My project has to be finished on Monday, and I'm\nurgently searching for a solution.\n\nTIA\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n",
"msg_date": "Sat, 9 Mar 2002 19:04:18 +0100 (MET)",
"msg_from": "Olivier PRENANT <ohp@pyrenet.fr>",
"msg_from_op": true,
"msg_subject": "need help"
},
{
"msg_contents": "On Sat, 9 Mar 2002, Olivier PRENANT wrote:\n\n> Hi all,\n>\n> I'm looking for some help on that case.\n>\n> What I want is to write a bool function that returns true or false\n> whether an int is in an int[].\n>\n> I've been looking at contrib/int_array but the operators in just test if\n> an array is included in an other.\n\nI think you might find what you want in contrib/array.\n\n",
"msg_date": "Sun, 10 Mar 2002 10:50:29 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: need help"
}
] |
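[Archive note] The membership test asked about in the thread above can, on current PostgreSQL versions, be written without any contrib module at all, using `= ANY` applied to the array. A minimal sketch (the function name `int_in_array` is illustrative; on a 7.2-era server the contrib/array operators suggested in the reply were the usual route):

```sql
-- true when the first argument appears in the int[] second argument
CREATE FUNCTION int_in_array(int, int[]) RETURNS boolean
AS 'SELECT $1 = ANY ($2)'
LANGUAGE sql IMMUTABLE;

SELECT int_in_array(2, ARRAY[1, 2, 3]);  -- t
SELECT int_in_array(9, ARRAY[1, 2, 3]);  -- f
```

In practice the `= ANY ($2)` expression can also be used inline in a WHERE clause, with no wrapper function needed.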
[
{
"msg_contents": "Hi All,\n\n(1) I have written the following test function with the\nintention of experimenting with how extending a function works\nin PostgreSQL.\n\n//func.c\n\n#include \"postgres.h\"\n#include <string.h>\n#include \"fmgr.h\"\n\nPG_FUNCTION_INFO_V1(MyInc);\n\nDatum MyInc(PG_FUNCTION_ARGS)\n{\n\tint32 arg = PG_GETARG_INT32(0);\n\tPG_RETURN_INT32(arg+1);\n}\n\n// I compiled it into a .so file and placed it in the\npgsql/lib directory, which is my default\ndynamic_library_path\n\n(2) Then I made the following entry in pg_proc.h\n \nDATA(insert OID = 4000 ( MyInc\t\t\tPGUID 12 f t t t 1 f\n23 \"23\" 100 0 0 100 MyInc testfunc ));\nDESCR(\"test function \");\n\n(3) I placed the pg_proc.h file in\nsrc/include/catalog\nand then I executed gmake from the\nsrc/backend/catalog\ndirectory\n\nI encountered the following problems\n\nPROBLEM 1:\n\n(4) the generated postgres.bki file has entries\nwith ';' in them, like (I am showing one of the places where\nthe ';' appears)\n\n// First\n\n# PostgreSQL 7.2\n# 1 \"/tmp/genbkitmp.c\"\ncreate bootstrap pg_proc\n (\n proname;^M = name ,\n\nand like that\n\n//Second\n\n'));' remains from the DATA() entry in pg_proc.h\nlike\n\ninsert OID = 4000 ( MyInc 1 12 f t t t 1 f 23 \"23\" 100\n0 0 100 MyInc func ));\n\nHence I had to remove these semicolons manually from the\npostgres.bki file and place it manually in the\npgsql/share directory, from where initdb reads this\nfile.\n\nPROBLEM 2:\n\n(5) And when I run the following query \n \ntest=# select MyInc(6);\n\nthe following error comes up:\n\nERROR: Function 'myinc(int4)' does not exist\n Unable to identify a function that satisfies\nthe given argument types\n You may need to add explicit typecasts\n\nPROBLEM 3:\n\nI couldn't find a way to reflect all the new DATA()\nentries in pg_proc.h in the pg_proc system table without\ndeleting the \"data\" directory and reinitializing it\nagain by running \"initdb\".\n\n\n(6) Kindly guide me on how to rectify these\nproblems.\n\nThanks in advance for your 
help and time\n\nRegards\nAmit Khare\n\n\n\n__________________________________________________\nDo You Yahoo!?\nTry FREE Yahoo! Mail - the world's greatest free email!\nhttp://mail.yahoo.com/\n",
"msg_date": "Sat, 9 Mar 2002 10:10:24 -0800 (PST)",
"msg_from": "Amit Kumar Khare <skamit2000@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Error in executing user defined function"
},
{
"msg_contents": "Amit Kumar Khare <skamit2000@yahoo.com> writes:\n> (2) Then I made the following entry in pg_proc.h\n \n> DATA(insert OID = 4000 ( MyInc\t\t\tPGUID 12 f t t t 1 f\n> 23 \"23\" 100 0 0 100 MyInc testfunc ));\n> DESCR(\"test function \");\n\nI think you ignored the advice that appears at the head of pg_proc.h\n(and all the other include/catalog headers):\n\n *\t XXX do NOT break up DATA() statements into multiple lines!\n *\t\t the scripts are not as smart as you might think...\n *\t XXX (eg. #if 0 #endif won't do what you think)\n\n> PROBLEM 3:\n\n> I didn't get a way to reflect all the new DATA()\n> entries in pg_proc.h to pg_proc system table without\n> deleting the \"data\" directory and reinitializing it\n> again by running \"initdb\".\n\nQuite. CREATE FUNCTION is the usual way of creating new pg_proc entries\non-the-fly. The include/catalog files contain exactly those entries\nthat are inserted by initdb; there is no other path from them to the\nrunning system.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 09 Mar 2002 13:38:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Error in executing user defined function "
}
] |
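[Archive note] As Tom's reply explains, CREATE FUNCTION is the supported way to register a dynamically loaded function; the pg_proc.h DATA() entries are only read by initdb. A hedged sketch of what the poster could run instead (the .so path is an assumed install location; the second string must match the case-sensitive C symbol name, which also hints at why the query for unquoted MyInc folds to myinc):

```sql
-- Register the compiled C function from func.so at run time,
-- with no catalog-header editing and no re-initdb.
-- '/usr/local/pgsql/lib/func.so' is a hypothetical path; adjust it
-- to wherever the shared object was actually installed.
CREATE FUNCTION myinc(int4) RETURNS int4
AS '/usr/local/pgsql/lib/func.so', 'MyInc'
LANGUAGE C STRICT;

SELECT myinc(6);  -- 7
```

The `LANGUAGE C STRICT` spelling is the modern syntax; on a 7.2-era server the strictness flag was written as a `WITH (isstrict)` attribute instead.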
[
{
"msg_contents": "Now that Bruce has done some work on rationalizing elog() output, seems\nlike we ought to take another look at EXPLAIN VERBOSE. Currently, that\ncommand does this:\n\n1. A non-pretty-printed plan dump (nodeToString output) is sent to\nelog(INFO). Formerly that always output to both postmaster log and\nclient, but now it'll typically go only to the client.\n\n2. The short-form output (same as non-VERBOSE EXPLAIN) is sent to\nelog(INFO). See above.\n\n3. The pretty-printed plan dump is sent to postmaster stdout.\n\nNow postmaster stdout is just about the least good destination we\ncould possibly use. It may well end up in the bit bucket (if someone is\nonly saving stderr output, and/or is using syslog logging instead of\nstderr). In any case it's not necessarily an easy place for the client\nto get at.\n\nAlso, I find the non-pretty-printed dump format damn near unreadable,\nalthough I have seen comments suggesting that there are some people who\nactually like it. I don't see the point of giving it pride of place on\nthe client's terminal.\n\nWhat I would suggest is that EXPLAIN VERBOSE ought to emit either\nnon-pretty-print or pretty-print dump format, not both (probably control\nthis with debug_pretty_print or another newly-invented GUC parameter;\nIMHO the factory default should be pretty-printing). Furthermore, the\noutput should go to elog(INFO) in either case. This will take some work\nto make the prettyprinter capable of that, but it's not a big job.\n(A side effect of this is that pprint dumps logged by the\ndebug_print_plan and so forth options could go through elog as well,\nwhich they don't now.)\n\nA disadvantage of elog(INFO) output for pretty-printed plans is that\nAFAIK psql doesn't currently have any way of capturing NOTICE output\ninto a file. 
I find it much better to look at pretty-printed dumps\nin Emacs than on a terminal window, mainly because Emacs's M-C-f and\nM-C-b commands understand the nesting structure so it's easy to move\naround in the dump with them. How hard would it be to get psql to\nsend notice output into a \\g file?\n\nComments? In particular, should EXPLAIN use the existing\ndebug_pretty_print GUC variable, or have its own?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 09 Mar 2002 14:09:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "Tom Lane wrote:\n> Now that Bruce has done some work on rationalizing elog() output, seems\n> like we ought to take another look at EXPLAIN VERBOSE. Currently, that\n> command does this:\n\nYes, the elog() tags finally match some reality. :-)\n\n> 1. A non-pretty-printed plan dump (nodeToString output) is sent to\n> elog(INFO). Formerly that always output to both postmaster log and\n> client, but now it'll typically go only to the client.\n> \n> 2. The short-form output (same as non-VERBOSE EXPLAIN) is sent to\n> elog(INFO). See above.\n> \n> 3. The pretty-printed plan dump is sent to postmaster stdout.\n> \n> Now postmaster stdout is just about the least good destination we\n> could possibly use. It may well end up in the bit bucket (if someone is\n> only saving stderr output, and/or is using syslog logging instead of\n> stderr). In any case it's not necessarily an easy place for the client\n> to get at.\n\n\nSeems EXPLAIN may need a level capability like DEBUG1-5 now. We have\nEXPLAIN and EXPLAIN VERBOSE. Now have pretty print vs. \"jumble\" print,\nwhich some people do actually prefer. They must have better cognitive\nskills than me.\n\nWe now also have the index clause printing that you mentioned. Should\nwe go with some kind of numeric level to EXPLAIN that would control\nthis?\n\nThat is the only simple solution I can think of. GUC seems way beyond\nwhat someone would want. Having SET control EXPLAIN just seems overkill\nbecause EXPLAIN should be able to control itself.\n\nAlso, clearly, we need to fix the output of pretty print to honor ELOG\ncontrol, and in any other places we may have missed it.\n\nHow about?\n\n\tEXPLAIN select * from pg_class;\n\tEXPLAIN VERBOSE select * from pg_class;\n\tEXPLAIN VERBOSE 1 select * from pg_class;\n\tEXPLAIN VERBOSE 5 select * from pg_class;\n\nSeems pretty clear. 
VERBOSE takes an optional argument that controls\nthe level of detail.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 9 Mar 2002 18:49:20 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> How about?\n\n> \tEXPLAIN select * from pg_class;\n> \tEXPLAIN VERBOSE select * from pg_class;\n> \tEXPLAIN VERBOSE 1 select * from pg_class;\n> \tEXPLAIN VERBOSE 5 select * from pg_class;\n\nSeems kinda ugly. But maybe same idea with repeated VERBOSE,\na la some Unix commands (\"more -v's get you more detail\"):\n\n\tEXPLAIN [ANALYZE] [VERBOSE] [ VERBOSE ... ] statement;\n\nI'd sugggest\n\nEXPLAIN select * from pg_class;\n\n\tDefault output: same as now\n\nEXPLAIN VERBOSE select * from pg_class;\n\n\tAdd prettyprinted qual clauses\n\nEXPLAIN VERBOSE VERBOSE select * from pg_class;\n\n\tAdd full plan-tree dump\n\nand there's room for expansion if we need it.\n\nThere's still the question of how to format the plan-tree dump.\nI still rather like a GUC variable for that choice, since it seems\nto be a personal preference that's unlikely to change from one\ncommand to the next.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 10 Mar 2002 11:52:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output "
},
{
"msg_contents": "Tom Lane writes:\n\n> What I would suggest is that EXPLAIN VERBOSE ought to emit either\n> non-pretty-print or pretty-print dump format, not both (probably control\n> this with debug_pretty_print or another newly-invented GUC parameter;\n> IMHO the factory default should be pretty-printing).\n\nSounds good. I think we can reuse the parameter.\n\n> A disadvantage of elog(INFO) output for pretty-printed plans is that\n> AFAIK psql doesn't currently have any way of capturing NOTICE output\n> into a file. I find it much better to look at pretty-printed dumps\n> in Emacs than on a terminal window, mainly because Emac's M-C-f and\n> M-C-b commands understand the nesting structure so it's easy to move\n> around in the dump with them. How hard would it be to get psql to\n> send notice output into a \\g file?\n\n\\g (and \\o) send only the query results to a file. The idea is that you\nwant to save the results, but if there's a warning or error, you want to\nsee it. We could add alternative commands (\\G and \\O?) that save the\nnotices and errors as well. Not sure if this is useful beyond this\napplication. In non-interactive situations you'd usually use shell\nredirections to save all output.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 10 Mar 2002 21:25:44 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> ... How hard would it be to get psql to\n>> send notice output into a \\g file?\n\n> \\g (and \\o) send only the query results to a file. The idea is that you\n> want to save the results, but if there's a warning or error, you want to\n> see it. We could add alternative commands (\\G and \\O?) that save the\n> notices and errors as well. Not sure if this is useful beyond this\n> application. In non-interactive situations you'd usually use shell\n> redirections to save all output.\n\nThe other possibility is to make EXPLAIN output look like a SELECT\nresult. Not sure how hard this would be to do, but in the long run\nI suppose that would be the direction to move in.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 10 Mar 2002 21:28:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output "
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > What I would suggest is that EXPLAIN VERBOSE ought to emit either\n> > non-pretty-print or pretty-print dump format, not both (probably control\n> > this with debug_pretty_print or another newly-invented GUC parameter;\n> > IMHO the factory default should be pretty-printing).\n> \n> Sounds good. I think we can reuse the parameter.\n\nAgreed. I like parameter reuse.\n\n> > A disadvantage of elog(INFO) output for pretty-printed plans is that\n> > AFAIK psql doesn't currently have any way of capturing NOTICE output\n> > into a file. I find it much better to look at pretty-printed dumps\n> > in Emacs than on a terminal window, mainly because Emac's M-C-f and\n> > M-C-b commands understand the nesting structure so it's easy to move\n> > around in the dump with them. How hard would it be to get psql to\n> > send notice output into a \\g file?\n> \n> \\g (and \\o) send only the query results to a file. The idea is that you\n> want to save the results, but if there's a warning or error, you want to\n> see it. We could add alternative commands (\\G and \\O?) that save the\n> notices and errors as well. Not sure if this is useful beyond this\n> application. In non-interactive situations you'd usually use shell\n> redirections to save all output.\n\nCould we send notices to the \\g, \\o file and to the terminal, and send\nnormal output only to the file? Seems that would make sense.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 10 Mar 2002 21:32:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Tom Lane writes:\n> >> ... How hard would it be to get psql to\n> >> send notice output into a \\g file?\n> \n> > \\g (and \\o) send only the query results to a file. The idea is that you\n> > want to save the results, but if there's a warning or error, you want to\n> > see it. We could add alternative commands (\\G and \\O?) that save the\n> > notices and errors as well. Not sure if this is useful beyond this\n> > application. In non-interactive situations you'd usually use shell\n> > redirections to save all output.\n> \n> The other possibility is to make EXPLAIN output look like a SELECT\n> result. Not sure how hard this would be to do, but in the long run\n> I suppose that would be the direction to move in.\n\nSeems EXPLAIN as SELECT would break our elog() control of output to the\nserver logs.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 10 Mar 2002 21:33:37 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Seems EXPLAIN as SELECT would break our elog() control of output to the\n> server logs.\n\nEXPLAIN as SELECT would mean that the server log is out of the picture\nentirely, which is not necessarily a bad thing. Is there a good reason\nfor logging EXPLAIN output? I can't see one other than \"we've always\ndone it that way\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 10 Mar 2002 21:36:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Seems EXPLAIN as SELECT would break our elog() control of output to the\n> > server logs.\n> \n> EXPLAIN as SELECT would mean that the server log is out of the picture\n> entirely, which is not necessarily a bad thing. Is there a good reason\n> for logging EXPLAIN output? I can't see one other than \"we've always\n> done it that way\".\n\nI can't think of a good reason, but making it a select output makes\nEXPLAIN one of the few things you can't get into the server logs, even\nif you want to. At DEBUG5, you get almost everything about a query. \nSeems you may want to capture EXPLAIN in there too, but because we can\ncontrol those with print_* using various SET parameters, I guess it is\nOK.\n\nThere are other INFO types that are sent to the client that can't be\ncaptured in psql output, like VACUUM VERBOSE. I guess I would rather\nsee NOTICES go to the \\g/\\o output file and to the terminal as a fix\nthat would solve the problem easily.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 10 Mar 2002 21:48:42 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I can't think of a good reason, but making it a select output makes\n> EXPLAIN one of the few things you can't get into the server logs, even\n> if you want to. At DEBUG5, you get almost everything about a query. \n\n... including the query plan dump, no? I don't see the point here.\n\nOne reason in favor of SELECT-like output is that a lot of user\ninterfaces are not prepared for large NOTICE outputs. (Even psql\nisn't really, since it can't paginate NOTICE output.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 10 Mar 2002 22:45:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output "
},
{
"msg_contents": "Tom Lane writes:\n\n> The other possibility is to make EXPLAIN output look like a SELECT\n> result. Not sure how hard this would be to do, but in the long run\n> I suppose that would be the direction to move in.\n\nYou could internally rewrite it to something like\n\nselect explain('select * from pg_class;');\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 10 Mar 2002 23:25:40 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> The other possibility is to make EXPLAIN output look like a SELECT\n>> result. Not sure how hard this would be to do, but in the long run\n>> I suppose that would be the direction to move in.\n\n> You could internally rewrite it to something like\n> select explain('select * from pg_class;');\n\nHaving looked, I think it wouldn't be that bad to call the regular\nprinttup.c routines directly. Assuming that the output model we\nwant is \"one text column, with one row per line\", it'd only be\nnecessary to fake up a correct TupleDesc and then form a HeapTuple\nfrom each line of output. Lots less work than trying to rewrite\nthe query, I think.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 10 Mar 2002 23:26:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output "
},
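[Archive note] This is in fact the direction PostgreSQL took shortly afterwards: from 7.3 on, EXPLAIN returns its plan as an ordinary result set, one row per plan line in a single text column, so any client (psql with \o, ODBC, JDBC) can capture it like a SELECT. A sketch of that behavior (the plan text shown is illustrative; costs and row counts vary by installation):

```sql
-- EXPLAIN output arrives as a normal result set with a single
-- text column named "QUERY PLAN", one row per line of the plan.
EXPLAIN SELECT * FROM pg_class;
--                  QUERY PLAN
-- ----------------------------------------------------
--  Seq Scan on pg_class  (cost=... rows=... width=...)
-- (1 row)
```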
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I can't think of a good reason, but making it a select output makes\n> > EXPLAIN one of the few things you can't get into the server logs, even\n> > if you want to. At DEBUG5, you get almost everything about a query. \n> \n> ... including the query plan dump, no? I don't see the point here.\n> \n> One reason in favor of SELECT-like output is that a lot of user\n> interfaces are not prepared for large NOTICE outputs. (Even psql\n> isn't really, since it can't paginate NOTICE output.)\n\nPagination is a good point. EXPLAIN is one of the few cases where the\noutput is clearly multi-line. I am concerned that making EXPLAIN like\nSELECT means it is the one piece of debug info you can't get into the\nserver logs. Query dumps can already get into the query logs, but not\nnon-verbose EXPLAIN.\n\nIn fact, as Peter explains it, NOTICE \\g goes to the terminal because it\nis assumed to be an error. Maybe we need to make psql smarter and only\nsend ERROR/WARNING to the terminal, and INFO/NOTICE to the log file. With\nnew elog() levels, this seems needed anyway.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 10 Mar 2002 23:36:05 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> ... I am concerned that making explain like\n> SELECT means it is on the one piece of debug info you can't get into the\n> server logs. Query dump can already get into the query logs, but not\n> EXPLAIN non-verbose.\n\nA week ago you were willing to set things up so that INFO output could\nnot get into the server logs period. Why so concerned now? EXPLAIN\noutput does not seem like suitable data for logs to me, any more than\nthe output of SELECT queries does. It's only a historical artifact\nthat we are accustomed to thinking of it as being loggable.\n\n> In fact, as Peter explains it, NOTICE \\g goes to the terminal because it\n> is assumed to be an error. Maybe we need to make psql smarter and only\n> send ERROR/WARNING to terminal, and INFO/NOTICE to the log file.\n\nWhile I suggested that to start with, it seems like a bad idea on\nfurther thought. Mixing INFO/NOTICE with query output would be just\nlike piping stdout and stderr to the same place. There's usually\ngood reason to keep them separate.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 10 Mar 2002 23:43:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > How about?\n> \n> > \tEXPLAIN select * from pg_class;\n> > \tEXPLAIN VERBOSE select * from pg_class;\n> > \tEXPLAIN VERBOSE 1 select * from pg_class;\n> > \tEXPLAIN VERBOSE 5 select * from pg_class;\n> \n> Seems kinda ugly. But maybe same idea with repeated VERBOSE,\n> a la some Unix commands (\"more -v's get you more detail\"):\n> \n> \tEXPLAIN [ANALYZE] [VERBOSE] [ VERBOSE ... ] statement;\n> \n> I'd sugggest\n> \n> EXPLAIN select * from pg_class;\n> \n> \tDefault output: same as now\n> \n> EXPLAIN VERBOSE select * from pg_class;\n> \n> \tAdd prettyprinted qual clauses\n> \n> EXPLAIN VERBOSE VERBOSE select * from pg_class;\n> \n> \tAdd full plan-tree dump\n> \n> and there's room for expansion if we need it.\n\nI was never a fan of the -v -v more-verbose options, and I don't see any\ncase where we use such behavior in our code. We do use detail levels\nfor debug, and that is fairly common.\n\nHow about:\n\n> > \tEXPLAIN select * from pg_class;\n> > \tEXPLAIN VERBOSE select * from pg_class;\n> > \tEXPLAIN LEVEL 1 select * from pg_class;\n> > \tEXPLAIN LEVEL 5 select * from pg_class;\n\nHere I use LEVEL to tell how much detail you want.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 10 Mar 2002 23:45:18 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> In fact, as Peter explains it, NOTICE \\g goes to the terminal because it\n> is assumed to be an error. Maybe we need to make psql smarter and only\n> send ERROR/WARNING to terminal, and INFO/NOTICE to the log file. With\n> new elog() levels, seems this is needed anyway.\n\nINFO is just as irrelevant to the query results as WARNING is.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 10 Mar 2002 23:47:15 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > ... I am concerned that making explain like\n> > SELECT means it is on the one piece of debug info you can't get into the\n> > server logs. Query dump can already get into the query logs, but not\n> > EXPLAIN non-verbose.\n> \n> A week ago you were willing to set things up so that INFO output could\n> not get into the server logs period. Why so concerned now? EXPLAIN\n> output does not seem like suitable data for logs to me, any more than\n> the output of SELECT queries does. It's only a historical artifact\n> that we are accustomed to thinking of it as being loggable.\n> \n> > In fact, as Peter explains it, NOTICE \\g goes to the terminal because it\n> > is assumed to be an error. Maybe we need to make psql smarter and only\n> > send ERROR/WARNING to terminal, and INFO/NOTICE to the log file.\n> \n> While I suggested that to start with, it seems like a bad idea on\n> further thought. Mixing INFO/NOTICE with query output would be just\n> like piping stdout and stderr to the same place. There's usually\n> good reason to keep them separate.\n\nOK, sounds interesting.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 10 Mar 2002 23:48:51 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "* Tom Lane (tgl@sss.pgh.pa.us) [020310 22:46]:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I can't think of a good reason, but making it a select output makes\n> > EXPLAIN one of the few things you can't get into the server logs, even\n> > if you want to. At DEBUG5, you get almost everything about a query. \n> \n> ... including the query plan dump, no? I don't see the point here.\n> \n> One reason in favor of SELECT-like output is that a lot of user\n> interfaces are not prepared for large NOTICE outputs. (Even psql\n> isn't really, since it can't paginate NOTICE output.)\n\nAnother reason is that explain output would be easily available in\nnon-postgres specific client utilities written on top of standardized\ndatabase interfaces, like ODBC and JDBC.\n\nWe're just polishing off a sizable MS SQL Server to PG migration, and\nwe have a department of three folks that use an ODBC based tool to do\nlots of one-off SQL queries. They like their existing tool, and it\nworks well. Getting explain output requires that they either use\nPgAdmin II, which they're not used to, or a shell connection to psql,\nwhich they're really not used to, or having the DBA pull the explain\ndata out of the log, which is truly a nuisance.\n\nSo, please, please, please add a select-like output path for explain.\nI'm ambivalent about whether or not it still logs the output.\n\n-Brad\n",
"msg_date": "Sun, 10 Mar 2002 23:48:58 -0500",
"msg_from": "Bradley McLean <brad@bradm.net>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > In fact, as Peter explains it, NOTICE \\g goes to the terminal because it\n> > is assumed to be an error. Maybe we need to make psql smarter and only\n> > send ERROR/WARNING to terminal, and INFO/NOTICE to the log file. With\n> > new elog() levels, seems this is needed anyway.\n> \n> INFO is just as irrelevant to the query results as WARNING is.\n\nOh, \\g is just the query result, not the query itself. I get it now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 10 Mar 2002 23:49:46 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "...\n> > > EXPLAIN VERBOSE select * from pg_class;\n> > > EXPLAIN LEVEL 1 select * from pg_class;\n> > > EXPLAIN LEVEL 5 select * from pg_class;\n\nHow about leaving off \"LEVEL\" and just allow a numeric argument after\nVERBOSE? It does not give shift/reduce troubles. And I'm not sure that\n\"level\" makes it clearer (level of what?). So it would be\n\n EXPLAIN VERBOSE select ...\n EXPLAIN VERBOSE 5 select ...\n\netc\n\n - Thomas\n",
"msg_date": "Sun, 10 Mar 2002 21:18:01 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> ...\n> > > > EXPLAIN VERBOSE select * from pg_class;\n> > > > EXPLAIN LEVEL 1 select * from pg_class;\n> > > > EXPLAIN LEVEL 5 select * from pg_class;\n> \n> How about leaving off \"LEVEL\" and just allow a numeric argument after\n> VERBOSE? It does not give shift/reduce troubles. And I'm not sure that\n> \"level\" makes it clearer (level of what?). So it would be\n> \n> EXPLAIN VERBOSE select ...\n> EXPLAIN VERBOSE 5 select ...\n\nYes, this was my initial proposal but Tom didn't like it. Seemed very\nclear to me. Tom wants EXPLAIN VERBOSE VERBOSE.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Mar 2002 00:19:04 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "Gavin Sherry wrote:\n> On Sun, 10 Mar 2002, Bruce Momjian wrote:\n> \n> > Tom Lane wrote:\n> \n> > > Seems kinda ugly. But maybe same idea with repeated VERBOSE,\n> > > a la some Unix commands (\"more -v's get you more detail\"):\n> > > \n> > > \tEXPLAIN [ANALYZE] [VERBOSE] [ VERBOSE ... ] statement;\n> > > \n> \n> > \n> > I was never a fan of the -v -v more-verbose options, and I don't see any\n> > case where we use such behavior in our code. We do use detail levels\n> > for debug, and that is fairly common.\n> \n> I agree. This is fine under Unix, but command arguments are not really a\n> grammar. Yacc doesn't enjoy terminal repetition and for good reason: it\n> usually suggests a clumsy grammar. \n> \n> Personally, I think that Tom's code should go into standard EXPLAIN.\n\nI am confused. Which grammar do you like?\n\n> As for how to returning explain data as a SELECT. I think I prefer\n> Oracle's idea of output tables with a Postgres twist. EXPLAIN could then\n> be something like:\n> \n> EXPLAIN [VERBOSE] [SET ID='...' ] [INTO [TEMP] <table>] <query>\n> \n> If 'table' exists, EXPLAIN would check if it is a valid explain output\n> table (correct attr names, types) and if so insert the results of explain,\n> one tuple per line of output. ID would be a text identifier of the output.\n> \n> If the table didn't exist, it would be created. TEMP means that the table\n> is removed at the end of the session.\n> \n> Is this overkill?\n\nThat was my initial reaction. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Mar 2002 00:21:29 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "On Sun, 10 Mar 2002, Bruce Momjian wrote:\n\n> Tom Lane wrote:\n\n> > Seems kinda ugly. But maybe same idea with repeated VERBOSE,\n> > a la some Unix commands (\"more -v's get you more detail\"):\n> > \n> > \tEXPLAIN [ANALYZE] [VERBOSE] [ VERBOSE ... ] statement;\n> > \n\n> \n> I was never a fan of the -v -v more-verbose options, and I don't see any\n> case where we use such behavior in our code. We do use detail levels\n> for debug, and that is fairly common.\n\nI agree. This is fine under Unix, but command arguments are not really a\ngrammar. Yacc doesn't enjoy terminal repetition and for good reason: it\nusually suggests a clumsy grammar. \n\nPersonally, I think that Tom's code should go into standard EXPLAIN.\n\nAs for how to returning explain data as a SELECT. I think I prefer\nOracle's idea of output tables with a Postgres twist. EXPLAIN could then\nbe something like:\n\nEXPLAIN [VERBOSE] [SET ID='...' ] [INTO [TEMP] <table>] <query>\n\nIf 'table' exists, EXPLAIN would check if it is a valid explain output\ntable (correct attr names, types) and if so insert the results of explain,\none tuple per line of output. ID would be a text identifier of the output.\n\nIf the table didn't exist, it would be created. TEMP means that the table\nis removed at the end of the session.\n\nIs this overkill?\n\nGavin\n\n\n",
"msg_date": "Mon, 11 Mar 2002 16:22:50 +1100 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "...\n> Yes, this was my initial proposal but Tom didn't like it. Seemed very\n> clear to me. Tom wants EXPLAIN VERBOSE VERBOSE.\n\nEh. Don't like that myself. How about adding V's to verbose? So\n\n EXPLAIN VERBOSE\n EXPLAIN VVERBOSE\n EXPLAIN VVVERBOSE\n\nThen for maximum verbosity, duplicate every letter:\n\n EXPLAIN VVEERRBBOOSSEE\n\n\nUh, just kidding. I'm not partial to the duplicated keyword. Really.\n\n - Thomas\n",
"msg_date": "Sun, 10 Mar 2002 21:23:42 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> ...\n> > Yes, this was my initial proposal but Tom didn't like it. Seemed very\n> > clear to me. Tom wants EXPLAIN VERBOSE VERBOSE.\n> \n> Eh. Don't like that myself. How about adding V's to verbose? So\n> \n> EXPLAIN VERBOSE\n> EXPLAIN VVERBOSE\n> EXPLAIN VVVERBOSE\n> \n> Then for maximum verbosity, duplicate every letter:\n> \n> EXPLAIN VVEERRBBOOSSEE\n> \n> \n> Uh, just kidding. I'm not partial to the duplicated keyword. Really.\n\nYou had me going there for a while. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Mar 2002 00:25:54 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "Gavin Sherry wrote:\n> On Mon, 11 Mar 2002, Bruce Momjian wrote:\n> \n> > > I agree. This is fine under Unix, but command arguments are not really a\n> > > grammar. Yacc doesn't enjoy terminal repetition and for good reason: it\n> > > usually suggests a clumsy grammar. \n> > > \n> > > Personally, I think that Tom's code should go into standard EXPLAIN.\n> > \n> > I am confused. Which grammar do you like?\n> \n> Neither =).\n\nOK, would you suggest one?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Mar 2002 00:34:01 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "On Mon, 11 Mar 2002, Bruce Momjian wrote:\n\n> > I agree. This is fine under Unix, but command arguments are not really a\n> > grammar. Yacc doesn't enjoy terminal repetition and for good reason: it\n> > usually suggests a clumsy grammar. \n> > \n> > Personally, I think that Tom's code should go into standard EXPLAIN.\n> \n> I am confused. Which grammar do you like?\n\nNeither =).\n\n> > Is this overkill?\n> \n> That was my initial reaction. :-)\n\nFair enough.\n\nGavin\n\n",
"msg_date": "Mon, 11 Mar 2002 16:36:36 +1100 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "On Mon, 11 Mar 2002, Bruce Momjian wrote:\n\n> Gavin Sherry wrote:\n> > On Mon, 11 Mar 2002, Bruce Momjian wrote:\n> > \n> > > > I agree. This is fine under Unix, but command arguments are not really a\n> > > > grammar. Yacc doesn't enjoy terminal repetition and for good reason: it\n> > > > usually suggests a clumsy grammar. \n> > > > \n> > > > Personally, I think that Tom's code should go into standard EXPLAIN.\n> > > \n> > > I am confused. Which grammar do you like?\n> > \n> > Neither =).\n> \n> OK, would you suggest one?\n\nI don't think there needs to be a grammar change. I think that Tom's\nqualification changes should go into non-verbose EXPLAIN and that pretty\nvs. non-pretty debug just gets handled via debug_print_pretty.\n\nThe disadvantage of this is, of course, that users would want to be able\nto change debug_print_pretty. I don't think that the solution to this is\nanother GUC variable though. I think it EXPLAIN output tables.\n\nYes, this results in a grammar change but IMHO users get a lot more out of\nthis modification than levels, since they can store/manipulate EXPLAIN\noutput if they choose. Naturally, there would be a psql \\command tie in.\n\nThis is does some of what I want to get into a release some time in the\nfuture: auditing. Perhaps storage of explain output would be more suited\nto that. Just my 2 cents.\n\nGavin\n\n\n\n",
"msg_date": "Mon, 11 Mar 2002 16:55:45 +1100 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": ">> I'm not partial to the duplicated keyword. Really.\n\nOkay, okay, I concede. \"EXPLAIN VERBOSE n stmt\" it is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Mar 2002 01:18:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output "
},
{
"msg_contents": "> >> I'm not partial to the duplicated keyword. Really.\n> Okay, okay, I concede. \"EXPLAIN VERBOSE n stmt\" it is.\n\nAnother possibility is to implement\n\n SET VERBOSITY = n;\n\nWhy not do that and not bother extending/polluting the EXPLAIN syntax?\n\n - Thomas\n",
"msg_date": "Mon, 11 Mar 2002 05:51:06 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> > >> I'm not partial to the duplicated keyword. Really.\n> > Okay, okay, I concede. \"EXPLAIN VERBOSE n stmt\" it is.\n> \n> Another possibility is to implement\n> \n> SET VERBOSITY = n;\n> \n> Why not do that and not bother extending/polluting the EXPLAIN syntax?\n\nUnless you have another use for VERBOSITY, it seems like a waste. I\ndon't see a value in moving control away from the EXPLAIN command\nitself. I realize it would be used as a default for all EXPLAIN\ncommands, but the level is just a single-digit number.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Mar 2002 09:26:30 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Why not do that and not bother extending/polluting the EXPLAIN syntax?\n\n> Unless you have another use for VERBOSITY, it seems like a waste.\n\nFor the moment, I plan to not touch the syntax; I'll follow Gavin's\nsuggestion of just putting the qual info into the default output.\nIf we really hate it after a month or two of looking at it, we can\nfigure out what kind of control knob to add then.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Mar 2002 09:38:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Why not do that and not bother extending/polluting the EXPLAIN syntax?\n> \n> > Unless you have another use for VERBOSITY, it seems like a waste.\n> \n> For the moment, I plan to not touch the syntax; I'll follow Gavin's\n> suggestion of just putting the qual info into the default output.\n> If we really hate it after a month or two of looking at it, we can\n> figure out what kind of control knob to add then.\n\nSounds like a plan. I can't imagine the new index clause being any more\ncomplicated than what is already there. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Mar 2002 10:11:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
},
{
"msg_contents": "On Sun, Mar 10, 2002 at 11:52:49AM -0500, Tom Lane wrote:\n\n> EXPLAIN VERBOSE select * from pg_class;\n> \n> \tAdd prettyprinted qual clauses\n> \n> EXPLAIN VERBOSE VERBOSE select * from pg_class;\n> \n> \tAdd full plan-tree dump\n\nI'd prefer having the non-prety-printed plan-tree dump moved off into\nits own keyword. Eg:\n\nEXPLAIN DUMP select * from pg_class;\n\nThe dump is sufficiently different from VERBOSE <n> output that it\nshould have its own keyword. Then the VERBOSE levels can just be used\nfor addition additional information to the pretty-printed tree and there\nis no nasty shift from nice tree to ugly mess at some level.\n\nLiam\n\n-- \nLiam Stewart :: Red Hat Canada, Ltd. :: liams@redhat.com\n",
"msg_date": "Mon, 11 Mar 2002 10:24:51 -0500",
"msg_from": "Liam Stewart <liams@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
}
] |
[
{
"msg_contents": "I have been fooling around with adding decompiled display of plan\nqualification conditions to EXPLAIN output. With this, you can\nfor example tell the difference between indexscanned and\nnot-indexscanned clauses, without having to dig through EXPLAIN\nVERBOSE dumps. Here is an example motivated by Rob Hoopman's\nrecent query on pgsql-general:\n\nregression=# create table foo (f1 int, f2 int, f3 int, unique(f1,f2));\nNOTICE: CREATE TABLE / UNIQUE will create implicit index 'foo_f1_key' for table 'foo'\nCREATE\nregression=# explain select * from foo where f1 = 11;\nINFO: QUERY PLAN:\n\nIndex Scan using foo_f1_key on foo (cost=0.00..17.07 rows=5 width=12)\n indxqual: (f1 = 11)\n\nEXPLAIN\nregression=# explain select * from foo where f1 = 11 and f2 = 44;\nINFO: QUERY PLAN:\n\nIndex Scan using foo_f1_key on foo (cost=0.00..4.83 rows=1 width=12)\n indxqual: ((f1 = 11) AND (f2 = 44))\n\nEXPLAIN\nregression=# explain select * from foo where f1 = 11 and f3 = 44;\nINFO: QUERY PLAN:\n\nIndex Scan using foo_f1_key on foo (cost=0.00..17.08 rows=1 width=12)\n indxqual: (f1 = 11)\n qual: (f3 = 44)\n\nEXPLAIN\nregression=# explain select * from foo where f2 = 11 and f3 = 44;\nINFO: QUERY PLAN:\n\nSeq Scan on foo (cost=0.00..25.00 rows=1 width=12)\n qual: ((f2 = 11) AND (f3 = 44))\n\nEXPLAIN\n\nThe display of join conditions isn't yet ready for prime time:\n\nregression=# explain select * from tenk1 a left join tenk1 b using (unique1)\nregression-# where a.hundred < b.hundred;\nINFO: QUERY PLAN:\n\nMerge Join (cost=0.00..2343.45 rows=10000 width=296)\n merge: (\"outer\".\"?column1?\" = \"inner\".\"?column16?\")\n qual: (\"outer\".\"?column7?\" < \"inner\".\"?column6?\")\n -> Index Scan using tenk1_unique1 on tenk1 a (cost=0.00..1071.78 rows=10000 width=148)\n -> Index Scan using tenk1_unique1 on tenk1 b (cost=0.00..1071.78 rows=10000 width=148)\n\nEXPLAIN\n\nbut it's getting there.\n\nQuestion for the group: does this seem valuable enough to put into the\nstandard EXPLAIN output, or should it be a special option? I can\nimagine showing it only in EXPLAIN VERBOSE's summary display, or adding\na GUC variable to enable it, or adding another option keyword to\nEXPLAIN, but I don't much want to do any of those things. On the other\nhand, maybe this stuff won't make any sense to non-experts anyway.\nThoughts?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 09 Mar 2002 18:02:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Adding qualification conditions to EXPLAIN output"
},
{
"msg_contents": "> Question for the group: does this seem valuable enough to put into the\n> standard EXPLAIN output, or should it be a special option? I can\n> imagine showing it only in EXPLAIN VERBOSE's summary display, or adding\n> a GUC variable to enable it, or adding another option keyword to\n> EXPLAIN, but I don't much want to do any of those things. On the other\n> hand, maybe this stuff won't make any sense to non-experts anyway.\n> Thoughts?\n\nI like EXPLAIN VERBOSE for that. GUC seems overkill.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 9 Mar 2002 18:43:08 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding qualification conditions to EXPLAIN output"
},
{
"msg_contents": "On Sat, 09 Mar 2002 18:02:17 -0500\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I have been fooling around with adding decompiled display of plan\n> qualification conditions to EXPLAIN output. With this, you can\n> for example tell the difference between indexscanned and\n> not-indexscanned clauses, without having to dig through EXPLAIN\n> VERBOSE dumps. Here is an example motivated by Rob Hoopman's\n> recent query on pgsql-general:\n ...\n> Question for the group: does this seem valuable enough to put into the\n> standard EXPLAIN output, or should it be a special option? I can\n> imagine showing it only in EXPLAIN VERBOSE's summary display, or adding\n> a GUC variable to enable it, or adding another option keyword to\n> EXPLAIN, but I don't much want to do any of those things. On the other\n> hand, maybe this stuff won't make any sense to non-experts anyway.\n> Thoughts?\n\nAFAIC, I'd think adding another keyword is better if the standard\nEXPLAIN is extended. \n\n e.g. \n EXPLAIN keyword SELECT * FROM ...\n EXPLAIN ANALYZE keyword SELECT * FROM ...\n \n\nRegards,\nMasaru Sugawara\n\n\n",
"msg_date": "Sun, 10 Mar 2002 14:43:14 +0900",
"msg_from": "Masaru Sugawara <rk73@sea.plala.or.jp>",
"msg_from_op": false,
"msg_subject": "Re: Adding qualification conditions to EXPLAIN output"
},
{
"msg_contents": "On Sat, Mar 09, 2002 at 06:02:17PM -0500, Tom Lane wrote:\n> I have been fooling around with adding decompiled display of plan\n> qualification conditions to EXPLAIN output. With this, you can\n> for example tell the difference between indexscanned and\n> not-indexscanned clauses, without having to dig through EXPLAIN\n> VERBOSE dumps. Here is an example motivated by Rob Hoopman's\n> recent query on pgsql-general:\n\nVery neat, Tom. Information on projections would also be nice.\n\n> Question for the group: does this seem valuable enough to put into the\n> standard EXPLAIN output, or should it be a special option? I can\n> imagine showing it only in EXPLAIN VERBOSE's summary display, or adding\n> a GUC variable to enable it, or adding another option keyword to\n> EXPLAIN, but I don't much want to do any of those things. On the other\n> hand, maybe this stuff won't make any sense to non-experts anyway.\n> Thoughts?\n\nMy initial thought is to display the information in one of the new\nVERBOSE levels, perhaps the first (default)?\n\nLiam\n\n-- \nLiam Stewart :: Red Hat Canada, Ltd. :: liams@redhat.com\n",
"msg_date": "Mon, 11 Mar 2002 10:00:34 -0500",
"msg_from": "Liam Stewart <liams@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding qualification conditions to EXPLAIN output"
}
] |
[
{
"msg_contents": "We have the following TODO item:\n\n\t* Allow usernames to be specified directly in pg_hba.conf (Bruce)\n\nMy idea is to allow comma-separated usernames in the AUTH_ARGUMENT\ncolumn. Right now we use it for ident user map files and secondary\npassword files. It seems both easily already allow username\nrestrictions. Adding usernames directly in pg_hba.conf is basically a\nshortcut to creating such secondary files.\n\nMy idea is that if AUTH_ARGUMENT starts with \"=\", it represents a list\nof comma-separated usernames.\n\n host template1 192.168.12.10 255.255.255.255 md5 =bmomjian,jeffw\n\nDo I need to allow usernames with spaces or quoted usernames? I don't\nthink so.\n\nFor implementation, I was going to simulate a secondary password file\nwith no passwords. We already support that internally as a username\nrestriction option. Those are loaded into memory as linked lists of text\nlines, if I remember correctly.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 10 Mar 2002 01:20:13 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Allowing usernames in pg_hba.conf"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> We have the following TODO item:\n> \t* Allow usernames to be specified directly in pg_hba.conf (Bruce)\n\n> My idea is to allow comma-separated usernames in the AUTH_ARGUMENT\n> column. Right now we use it for ident user map files and secondary\n> password files. It seems both easily already allow username\n> restrictions. Adding usernames directly in pg_hba.conf is basically a\n> shortcut to creating such secondary files.\n\n> My idea is that if AUTH_ARGUMENT starts with \"=\", it represents a list\n> of comma-separated usernames.\n\nUgh. What of the auth methods that have another interpretation for\nAUTH_ARGUMENT?\n\n> Do I need to allow usernames with spaces or quoted usernames? I don't\n> think so.\n\nI do.\n\nThis is definitely stressing pg_hba past its design limits --- heck, the\nname of the file isn't even appropriate anymore, if usernames are part\nof the match criteria. Rather than contorting things to maintain a\npretense of backwards compatibility, it's time to abandon the current\nfile format, change the name, and start over. (I believe there are\ntraces in the code of this having been done before.) We could probably\narrange to read and convert the existing pg_hba format if we don't see\na new-style authentication config file out there.\n\nMy first thoughts are (a) add a column outright for matching username;\n(b) for both database and username columns, allow a filename reference\nso that a bunch of names can be stored separately from the master\nauthentication file. I don't much care for sticking large lists of\nnames into the auth file itself.\n\nIt would be good to go back over the past complaints about \"I can't\ndo this with pg_hba\" to see if this is sufficient to solve them.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 10 Mar 2002 02:31:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allowing usernames in pg_hba.conf "
},
{
"msg_contents": "Tom Lane writes:\n\n> This is definitely stressing pg_hba past its design limits --- heck, the\n> name of the file isn't even appropriate anymore, if usernames are part\n> of the match criteria. Rather than contorting things to maintain a\n> pretense of backwards compatibility, it's time to abandon the current\n> file format, change the name, and start over.\n\nThe pg_hba.conf thing is slowly growing to become a bad excuse for a\ncompletely general authentication system, such as PAM. Instead of\ncreating our own, maybe we could rip off the \"BSD authentication\" system\nfrom some free *BSD. I haven't seen it, but it's supposed to be like (or\n\"better than\") PAM.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 10 Mar 2002 21:32:02 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Allowing usernames in pg_hba.conf "
},
{
"msg_contents": "> > This is definitely stressing pg_hba past its design limits --- heck, the\n> > name of the file isn't even appropriate anymore, if usernames are part\n> > of the match criteria. Rather than contorting things to maintain a\n> > pretense of backwards compatibility, it's time to abandon the current\n> > file format, change the name, and start over.\n>\n> The pg_hba.conf thing is slowly growing to become a bad excuse for a\n> completely general authentication system, such as PAM. Instead of\n> creating our own, maybe we could rip off the \"BSD authentication\" system\n> from some free *BSD. I haven't seen it, but it's supposed to be like (or\n> \"better than\") PAM.\n\nHmmm...I've never heard of the \"BSD authentication\" system...? As far as I\nwas aware, FreeBSD uses PAM:\n\nman pam\n\nPAM(8) PAM Manual PAM(8)\n\nNAME\n PAM - Pluggable Authentication Modules\n\nSYNOPSIS\n /etc/pam.conf\n\nDESCRIPTION\n This manual is intended to offer a quick introduction to\n PAM. For more information the reader is directed to the\n Linux-PAM system administrators' guide.\n\n PAM Is a system of libraries that handle the authentica-\n tion tasks of applications (services) on the system. The\n library provides a stable general interface (Application\n Programming Interface - API) that privilege granting pro-\n grams (such as login(1) and su(1)) defer to to perform\n standard authentication tasks.\n\n...\n\nChris\n\n",
"msg_date": "Mon, 11 Mar 2002 10:38:32 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Allowing usernames in pg_hba.conf "
},
{
"msg_contents": "Christopher Kings-Lynne writes:\n\n> Hmmm...I've never heard of the \"BSD authentication\" system...? As far as I\n> was aware, FreeBSD uses PAM:\n\nI found a bsd_auth(3) man page on OpenBSD:\n\nhttp://www.openbsd.org/cgi-bin/man.cgi?query=bsd_auth&apropos=0&sektion=0&manpath=OpenBSD+Current&arch=i386&format=html\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 10 Mar 2002 21:57:49 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Allowing usernames in pg_hba.conf "
},
{
"msg_contents": "OK,\n\nFreeBSD doesn't have a bsd_auth man page, nor any of the functions mentioned\non that page. It doesn't have the login_cap and login.conf references at\nthe bottom, however.\n\nChris\n\n> -----Original Message-----\n> From: Peter Eisentraut [mailto:peter_e@gmx.net]\n> Sent: Monday, 11 March 2002 10:58 AM\n> To: Christopher Kings-Lynne\n> Cc: Tom Lane; Bruce Momjian; PostgreSQL-development\n> Subject: RE: [HACKERS] Allowing usernames in pg_hba.conf\n>\n>\n> Christopher Kings-Lynne writes:\n>\n> > Hmmm...I've never heard of the \"BSD authentication\" system...?\n> As far as I\n> > was aware, FreeBSD uses PAM:\n>\n> I found a bsd_auth(3) man page on OpenBSD:\n>\n> http://www.openbsd.org/cgi-bin/man.cgi?query=bsd_auth&apropos=0&se\n> ktion=0&manpath=OpenBSD+Current&arch=i386&format=html\n>\n> --\n> Peter Eisentraut peter_e@gmx.net\n>\n\n",
"msg_date": "Mon, 11 Mar 2002 11:01:32 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Allowing usernames in pg_hba.conf "
},
{
"msg_contents": "> This is definitely stressing pg_hba past its design limits --- heck, the\n> name of the file isn't even appropriate anymore, if usernames are part\n> of the match criteria. Rather than contorting things to maintain a\n> pretense of backwards compatibility, it's time to abandon the current\n> file format, change the name, and start over. (I believe there are\n> traces in the code of this having been done before.) We could probably\n> arrange to read and convert the existing pg_hba format if we don't see\n> a new-style authentication config file out there.\n> \n> My first thoughts are (a) add a column outright for matching username;\n> (b) for both database and username columns, allow a filename reference\n> so that a bunch of names can be stored separately from the master\n> authentication file. I don't much care for sticking large lists of\n> names into the auth file itself.\n\nOK, I have an idea. I was never happy with the AUTH_ARGUMENT column. \nWhat I propose is adding an optional auth_type=val capability to the\nfile, so an AUTH_ARGUMENT column isn't needed. If a username column\nstarts with @, it is a file name containing user names. The same can be\ndone with the database column. Seems very backward compatible.. If you\ndon't use auth_argument, it is totally compatible. If you do, you need\nto use the new format auth_type=val:\n\nTYPE DATABASE IP_ADDRESS MASK AUTH_TYPE USERNAMES\nlocal all trust\t fred\nhost all 127.0.0.1 255.255.255.255 trust\t @staff\nhost all 127.0.0.1 255.255.255.255 ident=sales jimmy\n\nI have thought about a redesign of the file, but I can't come up with\nsomething that is as powerful, and cleaner. Do others have ideas?\n\nAs far as missing features, I can't think of other things people have\nasked for in pg_hba.conf except usernames.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Mar 2002 00:06:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Allowing usernames in pg_hba.conf"
},
{
"msg_contents": "Is there a way to grant another user access (full/limited) to an entire \ndatabase?\n\nRight now pg_hba.conf controls connectivity to a database.\n\nHowever from the docs it seems that one has to do a grant for _every_ \ntable. if a new table is created the user can't access it. This can be \nannoying in some situations.\n\nAm I missing something?\n\nThanks,\nLink.\n\nAt 12:06 AM 11-03-2002 -0500, Bruce Momjian wrote:\n\n>I have thought about a redesign of the file, but I can't come up with\n>something that is as powerful, and cleaner. Do others have ideas?\n>\n>As far as missing features, I can't think of other things people have\n>asked for in pg_hba.conf except usernames.\n\n\n",
"msg_date": "Mon, 11 Mar 2002 16:30:03 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: Allowing usernames in pg_hba.conf"
},
{
"msg_contents": "Lincoln Yeoh wrote:\n> Is there a way to grant another user access (full/limited) to an entire \n> database?\n> \n> Right now pg_hba.conf controls connectivity to a database.\n> \n> However from the docs it seems that one has to do a grant for _every_ \n> table. if a new table is created the user can't access it. This can be \n> annoying in some situations.\n\nTable access and database access are different issues. One is controled\nby pg_hba.conf and other by GRANT. There is no mass-GRANT capability.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Mar 2002 10:03:10 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Allowing usernames in pg_hba.conf"
},
{
"msg_contents": "\nOn Mon, 11 Mar 2002, Bruce Momjian wrote:\n\n> Lincoln Yeoh wrote:\n> > Is there a way to grant another user access (full/limited) to an entire\n> > database?\n> >\n> > Right now pg_hba.conf controls connectivity to a database.\n> >\n> > However from the docs it seems that one has to do a grant for _every_\n> > table. if a new table is created the user can't access it. This can be\n> > annoying in some situations.\n>\n> Table access and database access are different issues. One is controled\n> by pg_hba.conf and other by GRANT. There is no mass-GRANT capability.\n\nI'd started a long-ish post about how pgsql should have a proper\npermission model for user-to-database access - when someone pointed me to\nthe following url, which I'd like to bring to everybody's attention:\n\nhttp://candle.pha.pa.us/cgi-bin/pgtodo?privileges\n\nIs this something PeterE's still looking at doing for 7.(I guess 3, now?)\n\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n",
"msg_date": "Mon, 11 Mar 2002 12:08:36 -0600 (CST)",
"msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>",
"msg_from_op": false,
"msg_subject": "Re: Allowing usernames in pg_hba.conf"
},
{
"msg_contents": "Dominic J. Eidson wrote:\n> \n> On Mon, 11 Mar 2002, Bruce Momjian wrote:\n> \n> > Lincoln Yeoh wrote:\n> > > Is there a way to grant another user access (full/limited) to an entire\n> > > database?\n> > >\n> > > Right now pg_hba.conf controls connectivity to a database.\n> > >\n> > > However from the docs it seems that one has to do a grant for _every_\n> > > table. if a new table is created the user can't access it. This can be\n> > > annoying in some situations.\n> >\n> > Table access and database access are different issues. One is controled\n> > by pg_hba.conf and other by GRANT. There is no mass-GRANT capability.\n> \n> I'd started a long-ish post about how pgsql should have a proper\n> permission model for user-to-database access - when someone pointed me to\n> the following url, which I'd like to bring to everybody's attention:\n> \n> http://candle.pha.pa.us/cgi-bin/pgtodo?privileges\n> \n> Is this something PeterE's still looking at doing for 7.(I guess 3, now?)\n\nI assume it is coming in as part of schemas. Tom?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Mar 2002 13:10:15 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Allowing usernames in pg_hba.conf"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> http://candle.pha.pa.us/cgi-bin/pgtodo?privileges\n>> \n>> Is this something PeterE's still looking at doing for 7.(I guess 3, now?)\n\n> I assume it is coming in as part of schemas. Tom?\n\nPrivileges on schemas should improve matters, but I do not know whether\nthat will fully satisfy people ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Mar 2002 20:11:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allowing usernames in pg_hba.conf "
},
{
"msg_contents": "Dominic J. Eidson writes:\n\n> I'd started a long-ish post about how pgsql should have a proper\n> permission model for user-to-database access - when someone pointed me to\n> the following url, which I'd like to bring to everybody's attention:\n>\n> http://candle.pha.pa.us/cgi-bin/pgtodo?privileges\n>\n> Is this something PeterE's still looking at doing for 7.(I guess 3, now?)\n\nI guess the implementation ideas have changes a little, but the code has\nbeen generalized enough so that you can add privileges on almost anything.\nFunction and language privleges are available in the 7.3 branch. Those\nare the ones most people wanted.\n\nI guess you could add privileges to databases, too. But I'm wary about\nkeeping the connection permissions in the database because you can easily\nlock yourself out that way. However, there are plenty of other ways you\ncan lock yourself out and in most cases you can start a standalone backend\nto fix the situation. So may that would be a possibility.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 11 Mar 2002 22:56:20 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Allowing usernames in pg_hba.conf"
},
{
"msg_contents": "At 10:03 AM 11-03-2002 -0500, Bruce Momjian wrote:\n>Lincoln Yeoh wrote:\n> > Is there a way to grant another user access (full/limited) to an entire\n> > database?\n> >\n> > Right now pg_hba.conf controls connectivity to a database.\n> >\n> > However from the docs it seems that one has to do a grant for _every_\n> > table. if a new table is created the user can't access it. This can be\n> > annoying in some situations.\n>\n>Table access and database access are different issues. One is controled\n>by pg_hba.conf and other by GRANT. There is no mass-GRANT capability.\n\nActually I don't want a mass (table level?) grant. I'm looking for a way to \ngranting users access on a database level. I want a database level grant. I \ndon't need it, it's just a want :).\n\nBecause my assumption is if new tables (etc) are created after a manual \nmass grant, the nonowner won't have access to them.\n\nAm I trying to do things the wrong way tho?\n\nRegards,\nLink.\n\n\n",
"msg_date": "Tue, 12 Mar 2002 14:41:40 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: Allowing usernames in pg_hba.conf"
}
] |
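The thread above ends without a solution: every table needs its own GRANT, and newly created tables are not covered. The manual workaround implied by the discussion can be sketched as a PL/pgSQL loop over pg_class; the function name and its arguments are hypothetical, not from the thread, and the sketch assumes the era's PL/pgSQL `EXECUTE` and `quote_ident()` (both present from 7.1 on):

```sql
-- Hypothetical sketch: emulate a "mass GRANT" by issuing one GRANT per
-- existing user table via dynamic SQL.
CREATE FUNCTION grant_all_tables(text, text) RETURNS integer AS '
DECLARE
    rel RECORD;
    n   integer := 0;
BEGIN
    FOR rel IN SELECT relname FROM pg_class
               WHERE relkind = ''r'' AND relname !~ ''^pg_'' LOOP
        EXECUTE ''GRANT '' || $2 || '' ON '' || quote_ident(rel.relname)
                || '' TO '' || quote_ident($1);
        n := n + 1;
    END LOOP;
    RETURN n;
END;
' LANGUAGE 'plpgsql';

-- Usage: SELECT grant_all_tables('someuser', 'SELECT');
```

Note that this still only covers tables existing at the time it runs, which is exactly Lincoln Yeoh's complaint and why the thread asks for a database- or schema-level privilege instead.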
[
{
"msg_contents": "-----Original Message-----\nFrom: info [mailto:info@incode.com.eg]\nSent: Tuesday, March 05, 2002 12:08 PM\nTo: pgsql-hackers@postgresql.org\nSubject: FW: Re: [JDBC] DB mirroring\n\n\n-----Original Message-----\nFrom: pgsql-jdbc-owner@postgresql.org\n[mailto:pgsql-jdbc-owner@postgresql.org]On Behalf Of Dave Cramer\nSent: Monday, March 04, 2002 4:58 PM\nTo: 'Hany Ziad'; pgsql-jdbc@postgresql.org\nSubject: Re: [JDBC] DB mirroring\n\n\nHany,\n\nActually IMHO the best way to do this is with database mirroring at the\nbackend. There is a project underway to provide mirroring but it is not\nfinished. Try on the hackers list to see the status, or\ngborg.postgresql.org\n\nDave\n\n-----Original Message-----\nFrom: pgsql-jdbc-owner@postgresql.org\n[mailto:pgsql-jdbc-owner@postgresql.org] On Behalf Of Hany Ziad\nSent: Wednesday, February 27, 2002 2:09 PM\nTo: pgsql-jdbc@postgresql.org\nSubject: [JDBC] DB mirroring\n\n\nHi everyone,\n\n I am new to the PostGres and I am writing in Java and JDBC.\n\n My application consists of several sites, each with a DB server with\nthin clients. When the user finishes work in a site, he moves towards\nanother site with the same architecture.\n The problem I am facing is that the user needs to find his DB updated\nin each site he logs into. He needs to find even the newest updates he\ndid in the previous site.\n So, I thought about making the recent changes in the DB available on\nan authenticated web site, that can be accessed when the user starts a\nsession and then the changes are downloaded and then reflected on to\nthe DB. When the user terminates the session, the updates he made are\nuploaded to the web site for future use and so on.\n\n Am I on the right track? If so, how can I monitor these changes?\n\n How can I update the older DB?\n\n Can \"Batch updates\" do the job?\n\n\nHelp pls,\n\nH. ZIAD\nincode co.\n\n",
"msg_date": "Sun, 10 Mar 2002 15:28:58 +0200",
"msg_from": "\"info\" <info@incode.com.eg>",
"msg_from_op": true,
"msg_subject": "FW: Re: [JDBC] DB mirroring"
}
] |
[
{
"msg_contents": "I'm looking through the index code and just happened to notice that\nINDEX_MAX_KEYS is currently set to 16. It there a reason for this value\nto be at 16 or was it arbitrarily specified?\n\nCurious,\n\tGreg",
"msg_date": "10 Mar 2002 11:13:58 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "INDEX_MAX_KEYS"
},
{
"msg_contents": "Greg Copeland wrote:\n\nChecking application/pgp-signature: FAILURE\n-- Start of PGP signed section.\n> I'm looking through the index code and just happened to notice that\n> INDEX_MAX_KEYS is currently set to 16. It there a reason for this value\n> to be at 16 or was it arbitrarily specified?\n\nArbitrary, and there is discussion about increasing it.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 10 Mar 2002 13:09:44 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: INDEX_MAX_KEYS"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Greg Copeland wrote:\n>\n> Checking application/pgp-signature: FAILURE\n> -- Start of PGP signed section.\n> > I'm looking through the index code and just happened to notice that\n> > INDEX_MAX_KEYS is currently set to 16. It there a reason for this value\n> > to be at 16 or was it arbitrarily specified?\n>\n> Arbitrary, and there is discussion about increasing it.\n\n Wasn't it that this number had to be <= the maximum number of\n function args?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 11 Mar 2002 15:55:22 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: INDEX_MAX_KEYS"
},
{
"msg_contents": "Jan Wieck wrote:\n> Bruce Momjian wrote:\n> > Greg Copeland wrote:\n> >\n> > Checking application/pgp-signature: FAILURE\n> > -- Start of PGP signed section.\n> > > I'm looking through the index code and just happened to notice that\n> > > INDEX_MAX_KEYS is currently set to 16. It there a reason for this value\n> > > to be at 16 or was it arbitrarily specified?\n> >\n> > Arbitrary, and there is discussion about increasing it.\n> \n> Wasn't it that this number had to be <= the maximum number of\n> function args?\n\nYes, they are related. At least I think so. Anyway, the parameter that\nneeds increasing is max function args. I got mixed up there.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Mar 2002 16:34:54 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: INDEX_MAX_KEYS"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Jan Wieck wrote:\n> > Bruce Momjian wrote:\n> > > Greg Copeland wrote:\n> > >\n> > > Checking application/pgp-signature: FAILURE\n> > > -- Start of PGP signed section.\n> > > > I'm looking through the index code and just happened to notice that\n> > > > INDEX_MAX_KEYS is currently set to 16. It there a reason for this value\n> > > > to be at 16 or was it arbitrarily specified?\n> > >\n> > > Arbitrary, and there is discussion about increasing it.\n> >\n> > Wasn't it that this number had to be <= the maximum number of\n> > function args?\n>\n> Yes, they are related. At least I think so. Anyway, the parameter that\n> needs increasing is max function args. I got mixed up there.\n\n Then again, if they are related, why not let the index max\n keys be automatically set according to the function max arg\n configuration? Is there any reason someone want's to limit\n it smaller than the system could technically handle?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 11 Mar 2002 16:50:09 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: INDEX_MAX_KEYS"
},
{
"msg_contents": "Jan Wieck wrote:\n> > > > Arbitrary, and there is discussion about increasing it.\n> > >\n> > > Wasn't it that this number had to be <= the maximum number of\n> > > function args?\n> >\n> > Yes, they are related. At least I think so. Anyway, the parameter that\n> > needs increasing is max function args. I got mixed up there.\n> \n> Then again, if they are related, why not let the index max\n> keys be automatically set according to the function max arg\n> configuration? Is there any reason someone want's to limit\n> it smaller than the system could technically handle?\n\nI don't think so. I don't remember if there is a NULL bitmap that is\nfixed length for indexes. I don't think so.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Mar 2002 17:11:24 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: INDEX_MAX_KEYS"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Wasn't it that this number had to be <= the maximum number of\n>> function args?\n\n> Yes, they are related. At least I think so.\n\nThey have to be exactly the same, because both are tied to the size\nof the oidvector type.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Mar 2002 17:59:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: INDEX_MAX_KEYS "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Wasn't it that this number had to be <= the maximum number of\n> >> function args?\n> \n> > Yes, they are related. At least I think so.\n> \n> They have to be exactly the same, because both are tied to the size\n> of the oidvector type.\n\nAnd because oidvector has to be a fixed length, we are have overhead in\nincreasing it?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Mar 2002 18:00:30 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: INDEX_MAX_KEYS"
}
] |
[
{
"msg_contents": "Autoconf 2.53 was just released, and I'd like us to upgrade to this\nrelease in the next few weeks. Compared to 2.13, there are tons of new\nfeatures in this release that will make our lives easier.\n\nThere will be a separate announcement when the change actually happens.\nIn the meantime, download and install the new release some time. Get yours\nhere: ftp://ftp.gnu.org/gnu/autoconf/\n\nNote: It will not work to just run the new autoconf on the existing\nconfigure.in. Some changes to configure.in and *.m4 will need to be\napplied first. Also, if you're using autoconf 2.13 in some other project,\nyou will need to keep it around, as autoconf <2.50 and >=2.50 are not\ncompatible.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 10 Mar 2002 23:24:08 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Autoconf upgrade"
},
{
"msg_contents": "> Autoconf 2.53 was just released, and I'd like us to upgrade to this\n> release in the next few weeks. Compared to 2.13, there are tons of new\n> features in this release that will make our lives easier.\n\nHmm. I'd much rather be using a version which actually ships with some\ndistros. I've got 2.13 on my rather fresh Linux box. Anyone else getting\nsomething later any time soon?\n\n - Thomas\n",
"msg_date": "Sun, 10 Mar 2002 21:25:13 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Autoconf upgrade"
},
{
"msg_contents": "...\n> Hmm. I'd much rather be using a version which actually ships with some\n> distros. I've got 2.13 on my rather fresh Linux box. Anyone else getting\n> something later any time soon?\n\nI tried looking at rpmfind.net to see what Linuxen are doing with it,\nbut got:\n\nWarning: Can't connect to local MySQL server through socket\n'/tmp/mysql.sock' (2) in /serveur/WWW/public/linux/rpm2html/search.php\non line 202\n\nWarning: MySQL Connection Failed: Can't connect to local MySQL server\nthrough socket '/tmp/mysql.sock' (2) in\n/serveur/WWW/public/linux/rpm2html/search.php on line 202\n\nCould not connect to the database: Can't connect to local MySQL server\nthrough socket '/tmp/mysql.sock' (2)\n\n\nOh well...\n\n - Thomas\n",
"msg_date": "Sun, 10 Mar 2002 21:30:29 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Autoconf upgrade"
},
{
"msg_contents": "> Hmm. I'd much rather be using a version which actually ships with some\n> distros. I've got 2.13 on my rather fresh Linux box. Anyone else getting\n> something later any time soon?\n\nIt looks like Mandrake Cooker (the development builds) have autoconf2.5\nas well as autoconf. Presumably many packages will still use autoconf\nfor the next few months or more...\n\n - Thomas\n",
"msg_date": "Sun, 10 Mar 2002 21:36:12 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Autoconf upgrade"
},
{
"msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n>> Autoconf 2.53 was just released,\n\n> Hmm. I'd much rather be using a version which actually ships with some\n> distros. I've got 2.13 on my rather fresh Linux box. Anyone else getting\n> something later any time soon?\n\n(a) FWIW, I really need to do an autoconf upgrade here, for use on other\nprojects. So I'm in favor of updating fairly soon.\n\n(b) If we wait for the distros to all catch up we might be waiting a\n*long* time.\n\nIt is fair to wait a few weeks and see if 2.53 looks like it will stand\nthe test of time, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Mar 2002 01:31:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Autoconf upgrade "
},
{
"msg_contents": "On Mon, 2002-03-11 at 05:30, Thomas Lockhart wrote:\n> ...\n> > Hmm. I'd much rather be using a version which actually ships with some\n> > distros. I've got 2.13 on my rather fresh Linux box. Anyone else getting\n> > something later any time soon?\n> \n> I tried looking at rpmfind.net to see what Linuxen are doing with it,\n\nDebian unstable has 2.52\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"I am the vine, ye are the branches; He that abideth in\n me, and I in him, the same bringeth forth much fruit; \n for without me ye can do nothing.\" \n John 15:5 \n\n",
"msg_date": "11 Mar 2002 08:03:46 +0000",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Autoconf upgrade"
},
{
"msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n \n> Hmm. I'd much rather be using a version which actually ships with some\n> distros. I've got 2.13 on my rather fresh Linux box\n\nWe are switching to it...\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "11 Mar 2002 11:13:33 -0500",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Autoconf upgrade"
},
{
"msg_contents": "> We are switching to it...\n\nGreat. I was worried about the \"not compatible with earlier versions\"\nimplications...\n\n - Thomas\n",
"msg_date": "Mon, 11 Mar 2002 08:16:19 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Autoconf upgrade"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> Hmm. I'd much rather be using a version which actually ships with some\n> distros. I've got 2.13 on my rather fresh Linux box. Anyone else getting\n> something later any time soon?\n\nAutoconf 2.13 is probably going to stay around for quite a while since a\nlot of projects are using it. But it's dead and old (>2 years).\n\nWe could argue about using 2.52 or 2.53. 2.52 is widely adopted, but if\nwe're going to switch, why not use the latest stable release? We are\nquite known for scolding people for using old releases of the software\nthat we maintain.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 11 Mar 2002 11:25:34 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Autoconf upgrade"
},
{
"msg_contents": "\n\n%autoconf --version\nautoconf (GNU Autoconf) 2.52\nWritten by David J. MacKenzie.\n\nCopyright 1992, 1993, 1994, 1996, 1999, 2000, 2001\nFree Software Foundation, Inc.\nThis is free software; see the source for copying conditions. There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n%uname -a\nFreeBSD postgresql.org 4.5-STABLE FreeBSD 4.5-STABLE #1: Tue Mar 12 08:30:14 CST 2002 root@mars.hub.org:/usr/obj/usr/src/sys/kernel i386\n\n\n\nOn Mon, 11 Mar 2002, Peter Eisentraut wrote:\n\n> Thomas Lockhart writes:\n>\n> > Hmm. I'd much rather be using a version which actually ships with some\n> > distros. I've got 2.13 on my rather fresh Linux box. Anyone else getting\n> > something later any time soon?\n>\n> Autoconf 2.13 is probably going to stay around for quite a while since a\n> lot of projects are using it. But it's dead and old (>2 years).\n>\n> We could argue about using 2.52 or 2.53. 2.52 is widely adopted, but if\n> we're going to switch, why not use the latest stable release? We are\n> quite known for scolding people for using old releases of the software\n> that we maintain.\n>\n> --\n> Peter Eisentraut peter_e@gmx.net\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Tue, 26 Mar 2002 13:55:59 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Autoconf upgrade"
},
{
"msg_contents": "Marc G. Fournier writes:\n\n> %autoconf --version\n> autoconf (GNU Autoconf) 2.52\n\nFreeBSD HEAD has 2.53 in ports. Not sure if you can pull that in if\nyou're following stable.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 26 Mar 2002 13:13:58 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Autoconf upgrade"
},
{
"msg_contents": "\nEasily, just pointing out that we are already at 2.52 on the main server\n... just upgraded it on the 12th, will work on upgrading it over the next\nday or so, since I'm in the middle of dealing with preparations for a\nsecurity audit at the University *oh joy* ... :)\n\n\nOn Tue, 26 Mar 2002, Peter Eisentraut wrote:\n\n> Marc G. Fournier writes:\n>\n> > %autoconf --version\n> > autoconf (GNU Autoconf) 2.52\n>\n> FreeBSD HEAD has 2.53 in ports. Not sure if you can pull that in if\n> you're following stable.\n>\n> --\n> Peter Eisentraut peter_e@gmx.net\n>\n>\n\n",
"msg_date": "Tue, 26 Mar 2002 14:37:56 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Autoconf upgrade"
},
{
"msg_contents": "On Tue, 26 Mar 2002 13:13:58 -0500 (EST)\nPeter Eisentraut <peter_e@gmx.net> wrote:\n\n> Marc G. Fournier writes:\n> \n> > %autoconf --version\n> > autoconf (GNU Autoconf) 2.52\n> \n> FreeBSD HEAD has 2.53 in ports. Not sure if you can pull that in if\n> you're following stable.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\nHi,\n\nThe FreeBSD ports tree only has one \"branch\" and that is the head. There is no stable or current branch.\n\nWhen you pull the ports tree you get whatever is there. It is up to the port to figure out which version\nof FreeBSD you have, as is evidenced by the massive breakage right now on current for alot of ports.\n\nGB\n\n-- \nGB Clark II | Roaming FreeBSD Admin\ngclarkii@VSServices.COM | General Geek \n CTHULU for President - Why choose the lesser of two evils?\n",
"msg_date": "Tue, 26 Mar 2002 13:18:57 -0600",
"msg_from": "GB Clark <postgres@vsservices.com>",
"msg_from_op": false,
"msg_subject": "Re: Autoconf upgrade"
},
{
"msg_contents": "There's no such thing as stable ports - there is only HEAD...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Peter Eisentraut\n> Sent: Wednesday, 27 March 2002 2:14 AM\n> To: Marc G. Fournier\n> Cc: PostgreSQL Development\n> Subject: Re: [HACKERS] Autoconf upgrade\n> \n> \n> Marc G. Fournier writes:\n> \n> > %autoconf --version\n> > autoconf (GNU Autoconf) 2.52\n> \n> FreeBSD HEAD has 2.53 in ports. Not sure if you can pull that in if\n> you're following stable.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n",
"msg_date": "Wed, 27 Mar 2002 09:40:24 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Autoconf upgrade"
}
] |
[
{
"msg_contents": "Iam testing postgresql 7.2\n\nIn postgres 6.5 i used the ~ Operator on type aclitem to search:\n\nhas user xyz the permisssion x on table x ?\n\nThis is useful to disable update-buttons in the GUI for users\n\nwho have only read-permissions.\n\nwith the following command as example\n\nselect\n relname\n , relacl\nfrom\n pg_class\nwhere\n relacl ~ 'nobody=r'::aclitem\n\nand\n\n relname = 'tab1'\n\n;\n\nIn 7.2 this command only is successful if r is the only permission\n\nof nobody on the table. To show the effect the following commands:\n\n relname | relacl\n---------+------------------------------------\n tab1 | {=,probost=arwdRxt,nobody=r}\n tab2 | {=,probost=arwdRxt,nobody=arwdRxt}\n tab3 | {=,probost=arwdRxt,nobody=r}\n(3 rows)\n\nselect relname\n , relacl\nfrom\n pg_class\nwhere\n relkind ='r'\nand\n relname !~ '^pg_'\n relname | relacl\n---------+------------------------------\n tab1 | {=,probost=arwdRxt,nobody=r}\n tab3 | {=,probost=arwdRxt,nobody=r}\n(2 rows)\n\nselect\n relname\n , relacl\nfrom\n pg_class\nwhere\n relacl ~ 'nobody=r'::aclitem\n\n;\n\nMay I change the syntax of the sql ?\n\nis this an error ?\n\nis there another possibility to reply to the above question ?\n\n--\nMfG\n\n-------------------------------------------------------------------------\n- Karin Probost\n- Bergische Universitaet Wuppertal\n- RECHENZENTRUM Raum P-.09.05\n- Gaussstr. 20\n- D-42097 Wuppertal\n- Germany\n-\n- Tel. : +49 -202 /439 3151 ,Fax -2910\n--Email: probost@uni-wuppertal.de\n--Home : http://www.hrz.uni-wuppertal.de/hrz/personen/k_probost.html\n-------------------------------------------------------------------------\n\n\n\n",
"msg_date": "Mon, 11 Mar 2002 09:30:18 +0100",
"msg_from": "karin probost <probost@uni-wuppertal.de>",
"msg_from_op": true,
"msg_subject": "Operator ~ on type aclitem in pgsql 7.2"
}
] |
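Karin's question above (does user nobody have read permission on this table, regardless of what else is in the ACL?) can be asked without pattern-matching relacl at all. A hedged alternative using has_table_privilege(), which is available in the 7.2 release she is testing:

```sql
-- Check a single privilege directly instead of matching the aclitem
-- text, whose representation changed between 6.5 and 7.2.  Lists every
-- user table on which user "nobody" may SELECT.
SELECT relname, relacl
FROM pg_class
WHERE relkind = 'r'
  AND relname !~ '^pg_'
  AND has_table_privilege('nobody', relname, 'SELECT');
```

This answers the GUI use case (disable update buttons for read-only users) even when nobody holds additional privileges on the table.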
[
{
"msg_contents": "We are running a tpc-H benchmark and we need help on tuning this benchmark, the running set consists on an arbitrary number of streams of 22 ad-hoc queries while the system under test is receiving continuos inserts and deletes, here are the tables:",
"msg_date": "Mon, 11 Mar 2002 10:45:22 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "need help on tuning tpch"
},
{
"msg_contents": "I mean, is there any mistake on triggers or any possible improve in indices.sql to make it work better, we're using an 8 r10000 powerchallenge, and results become steady from 5 streams and over.\nthanks in advance\n\n\n\n\n\n\n\nI mean, is there any mistake on triggers or any \npossible improve in indices.sql to make it work better, we're using an 8 r10000 \npowerchallenge, and results become steady from 5 streams and over.\nthanks in advance",
"msg_date": "Tue, 12 Mar 2002 09:07:47 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "Re: need help on tuning tpch"
},
{
"msg_contents": "We're running on a 8 r10000 powerchallenge, and results improve from 1 to 5 streams, but from 5 and above results become steady, machine usages reports it's able to manage more workload.\nWe're running postgres 7.2, it has about 640Mb of shared memory.\nAny tip?\nThank in advance\n\n\n\n\n\n\n\nWe're running on a 8 r10000 powerchallenge, and \nresults improve from 1 to 5 streams, but from 5 and above results become steady, \nmachine usages reports it's able to manage more workload.\nWe're running postgres 7.2, it has about 640Mb of \nshared memory.\nAny tip?\nThank in advance",
"msg_date": "Tue, 12 Mar 2002 17:34:03 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "bad performance on SMP"
}
] |
[
{
"msg_contents": "Hi,\n\nOn Friday 08 March 2002 20:37, Hannu Krosing wrote:\n> This technique seems to be a good candidate for implementing using GiST\n> or perhaps just defined using the <. operator mentioned there.\n\nAs I understand GiST is a pretty different algorythm - but I\nmight be wrong because I know very much about GiST.\n\nOn Friday 08 March 2002 20:37, Hannu Krosing wrote:\n> Mappign from ordinary query with several = , < and between may be a\n> little tricky though.\n\nBut the possible performance gain can be huge - at least\nthis is what you find in theier benchmark documentations.\n\nOn Friday 08 March 2002 20:37, Hannu Krosing wrote:\n> They may also have patents on it, so we should move carefully here.\n\nI sent a mail asking R. Bayer about any known patent issues.\nHe said that the UB-Tree is internationally patented.\nSad, because it looked like a briliant idea. Now it looks like\nit will be banned from the open source community for some\ndecades to come... :-(\n\nRobert Schrem\n\n-------------------------------------------------------\n",
"msg_date": "Mon, 11 Mar 2002 10:55:49 +0100",
"msg_from": "Robert Schrem <robert@schrem.de>",
"msg_from_op": true,
"msg_subject": "Fwd: Re: UB-Tree"
},
{
"msg_contents": "\n----- Original Message -----\nFrom: \"Robert Schrem\" <robert@schrem.de>\nTo: \"Hannu Krosing\" <hannu@krosing.net>\nCc: \"PostgreSQL Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Monday, March 11, 2002 11:55 AM\nSubject: Fwd: Re: [HACKERS] UB-Tree\n> On Friday 08 March 2002 20:37, Hannu Krosing wrote:\n> > They may also have patents on it, so we should move carefully here.\n>\n> I sent a mail asking R. Bayer about any known patent issues.\n> He said that the UB-Tree is internationally patented.\n> Sad, because it looked like a briliant idea. Now it looks like\n> it will be banned from the open source community for some\n> decades to come... :-(\n\nIANAL, but there seem to be some issues :\n\n1. There is no such thing as 'internationally patented' as most countries\nstill don't allow patenting software algorithms.\n\n2. I doubt it is possible to patent a general idea, like replacing multiple\none-dimensional indexes with one multi-dimensional index (conceptually\nthey _are_ the same thing, just optimised for different access paterns)\n\nSo as we can't use UB-Tree, we may well achieve similar result buy\nusing a multi-dimensional/multi-type R-tree and teach our optimiser\nto use it for resolving multiple where clause restrictions simultaneously.\n\nOf course this too may be patented. If it is not, let's hope this e-mail\nwill be archived and can be used as prior art to prevent future patenting\nattempts :)\n\nAnother common way of fighting such patent's is patenting all possible\nfuture improvements and then haggle with the patent holder .\n\nI think this could be something that is effectively doable by open-source\ncommunity, at least the part of generating the ideas.\n\n-------------\nHannu\n\n\n\n\n\n",
"msg_date": "Mon, 11 Mar 2002 13:52:27 +0200",
"msg_from": "\"Hannu Krosing\" <hannu@itmeedia.ee>",
"msg_from_op": false,
"msg_subject": "Re: UB-Tree"
}
] |
[
{
"msg_contents": "The problem:\n\nI want to create a function that returns the result as many tuples (many\nrows of records). Unlike MSSQL, Oracle, etc PostgreSQL couldnt do it. So, I\ndecided the only way to do it is to return result data into temporary table.\n\nBut:\n\n- If I create table into stored procedure, I got the error from the second\ncall of this procedure inside the same session. It's because Plpgsql makes\nprecompilation of the query at the first call of this procedure inside the\nsession. And when I delete the result temporary table that this procedure\nreturned me and call this procedure second time, the query with \"INSERT\"\n(that is already precompiled) uses the table that was already deleted, but\nnot the table that was just created. :(\n\n- I couldnt check is some temporary table exist inside the session. :(\n\nThe way I could decide this problem is:\n\n- At each start of session some stored procedure must run (as some kind of\ntransaction). And in this stored procedure I want to create all temporary\ntables that I want to use to store resulting rows from other stored\nprocedures. And I shall not need to create any temporary table inside these\nprocedures.\n\n\n",
"msg_date": "Mon, 11 Mar 2002 16:47:58 +0500",
"msg_from": "\"Paul\" <magamos@mail.ru>",
"msg_from_op": true,
"msg_subject": "Transaction on start of session ?"
},
{
"msg_contents": "It is true that postgresql does not have an easy way to return multiple rows\nfrom a function, but it can be done with some typing.\n\n(select MyStartfn(...), Myfn('name1') as name1, Myfn('name2') as name2) as\nttable\n\nThe idea is that you write a function \"MyStartfbn(...)\" which does the\noperation. The function \"Myfn(...)\" accepts a field name, or some kind of\nmarker, to return the rows.\n\nThere are a number of strategies on how to do this, but you kind of need to\nunderstand how to write PostgreSQL functions.\n\n\nPaul wrote:\n> \n> The problem:\n> \n> I want to create a function that returns the result as many tuples (many\n> rows of records). Unlike MSSQL, Oracle, etc PostgreSQL couldnt do it. So, I\n> decided the only way to do it is to return result data into temporary table.\n> \n> But:\n> \n> - If I create table into stored procedure, I got the error from the second\n> call of this procedure inside the same session. It's because Plpgsql makes\n> precompilation of the query at the first call of this procedure inside the\n> session. And when I delete the result temporary table that this procedure\n> returned me and call this procedure second time, the query with \"INSERT\"\n> (that is already precompiled) uses the table that was already deleted, but\n> not the table that was just created. :(\n> \n> - I couldnt check is some temporary table exist inside the session. :(\n> \n> The way I could decide this problem is:\n> \n> - At each start of session some stored procedure must run (as some kind of\n> transaction). And in this stored procedure I want to create all temporary\n> tables that I want to use to store resulting rows from other stored\n> procedures. 
And I shall not need to create any temporary table inside these\n> procedures.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n",
"msg_date": "Wed, 13 Mar 2002 18:01:12 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction on start of session ?"
}
] |
[
{
"msg_contents": "I guess no body really read my last two emails. Try to get some attention from the topic this time see anyone \ncare to answer my simple question. I know from the core design of postgres has object extension vision. \nBut is there anyone here still working on the object possiblity in postgres? \n\nI understand you guys were busy working on the SQL 99 standard. Thanks for your time. Here is my answer\nfor my last question, just in cast someone out there still try to use object database. ( trying ) :) \n\n\n#include \"postgres.h\" \n#include \"fmgr.h\" \n#include \"executor/executor.h\" \n#include \"utils/geo_decls.h\" \n\nDatum spycastoid(PG_FUNCTION_ARGS);\nPG_FUNCTION_INFO_V1(spycastoid);\n\nDatum\nspycastoid(PG_FUNCTION_ARGS){\n TupleTableSlot *t = (TupleTableSlot *) PG_GETARG_POINTER(0);\n int32 slot = PG_GETARG_INT32(1);\n int32 myid;\n bool isnull;\n\n myid = DatumGetInt32(GetAttributeByNum(t, slot, &isnull));\n if (isnull)\n PG_RETURN_INT32(0);\n PG_RETURN_INT32(myid);\n}\n\nwhen the table is created one or two thing need to be done \nCREATE TABLE base (\n myname text,\n);\n\nCREATE UNIQUE INDEX spy_unique_key ON base ( oid, myname ) ;\n\nCREATE TABLE child (\n myfather base, \n myname text\n);\n\nCREATE FUNCTION spycast( child , int4 ) RETURNS int4 // ( myObjectClass_OwnedBy_TableName, myAttributeNum ) \nAS '/spycastoid.so' LANGUAGE 'c'; \nCREATE FUNCTION spycast( child ) RETURNS int4 // ( myObjectClass_OwnedBy_TableName ) \nAS 'select spycast($1,1 );' LANGUAGE 'sql'; // make this programmable\n\nINSERT INTO base ( myname ) Values ( 'alex' ) ;\nINSERT 56578 1 <<---- oid\nINSERT INTO child ( myfather, myname ) values ( 56578::base, 'alexbaby' );\nINSERT 56579 1 <<---- oid\n\nnow you can do \nSELECT * FROM child WHERE spycast( child ) = 56578 ; \n\n\nAlex \n\n\n",
"msg_date": "Mon, 11 Mar 2002 11:28:19 -0600",
"msg_from": "alex@AvengerGear.com (Debian User)",
"msg_from_op": true,
"msg_subject": "Object?? -Relational DBMS Postgresql are you sure? "
},
{
"msg_contents": "Alex,\n\nActually I am interested in this.\n\nThis works:\n\nselect * from child where base.oid=178120\n\nDave\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Debian User\nSent: Monday, March 11, 2002 12:28 PM\nTo: pgsql-hackers@postgresql.org\nSubject: [HACKERS] Object?? -Relational DBMS Postgresql are you sure? \n\n\nI guess no body really read my last two emails. Try to get some\nattention from the topic this time see anyone \ncare to answer my simple question. I know from the core design of\npostgres has object extension vision. \nBut is there anyone here still working on the object possiblity in\npostgres? \n\nI understand you guys were busy working on the SQL 99 standard. Thanks\nfor your time. Here is my answer for my last question, just in cast\nsomeone out there still try to use object database. ( trying ) :) \n\n\n#include \"postgres.h\" \n#include \"fmgr.h\" \n#include \"executor/executor.h\" \n#include \"utils/geo_decls.h\" \n\nDatum spycastoid(PG_FUNCTION_ARGS);\nPG_FUNCTION_INFO_V1(spycastoid);\n\nDatum\nspycastoid(PG_FUNCTION_ARGS){\n TupleTableSlot *t = (TupleTableSlot *) PG_GETARG_POINTER(0);\n int32 slot = PG_GETARG_INT32(1);\n int32 myid;\n bool isnull;\n\n myid = DatumGetInt32(GetAttributeByNum(t, slot, &isnull));\n if (isnull)\n PG_RETURN_INT32(0);\n PG_RETURN_INT32(myid);\n}\n\nwhen the table is created one or two thing need to be done \nCREATE TABLE base (\n myname text,\n);\n\nCREATE UNIQUE INDEX spy_unique_key ON base ( oid, myname ) ;\n\nCREATE TABLE child (\n myfather base, \n myname text\n);\n\nCREATE FUNCTION spycast( child , int4 ) RETURNS int4 // (\nmyObjectClass_OwnedBy_TableName, myAttributeNum ) \nAS '/spycastoid.so' LANGUAGE 'c'; \nCREATE FUNCTION spycast( child ) RETURNS int4 // (\nmyObjectClass_OwnedBy_TableName ) \nAS 'select spycast($1,1 );' LANGUAGE 'sql'; // make this programmable\n\nINSERT INTO base ( myname ) Values ( 'alex' 
) ;\nINSERT 56578 1 <<---- oid\nINSERT INTO child ( myfather, myname ) values ( 56578::base, 'alexbaby'\n); INSERT 56579 1 <<---- oid\n\nnow you can do \nSELECT * FROM child WHERE spycast( child ) = 56578 ; \n\n\nAlex \n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\nhttp://archives.postgresql.org\n\n\n",
"msg_date": "Mon, 11 Mar 2002 20:44:03 -0500",
"msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: Object?? -Relational DBMS Postgresql are you sure? "
}
] |
[
{
"msg_contents": "I've been playing around with Intel's x86 C++ compiler (icc) for\nlinux. The compiler is very good at optimizing x86 code. With some\nstruggle, I managed to get postgresql compiled with it. I've listed\nbelow what I had to do to get postgres compiled, along with some\nresults from pgbench.\n\n\nCompilation\n\nicc can compile C code as well as C++. WRT C, icc is binary\ncompatible with gcc. It aims to be compatible with gcc extensions,\nbut has a ways to go.\n\nNote: I only compiled the backend with icc. Targets such as psql/\npgbench were compiled with gcc.\n\n* The first problem I encountered was that icc couldn't produce the\nSUBSYS.o targets for the backend. The workaround was to ignore the\nSUBSYS.o targets and link with each individual object file.\n\n* Linking doesn't appear to work with icc's \"-ipo\" optimization. The\ngoal of ipo is to perform inlining of functions between source files.\nThis is a bummer, since -ipo can produce very good code.\n\n* Since icc can't handle inline assembly, a number of files need to be\ncompiled with gcc. These include:\n access/transam/xlog.c\n storage/ipc/shmem.c\n storage/lmgr/proc.c\n storage/lmgr/lwlock.c\n storage/lmgr/s_lock.c\n utils/adt/pg_lzcompress.c\nHopefully intel will add inline assembly to icc....\n\n* I used the gcc frontend to ld instead of using icc directly. ld\nseems to hang (3+ hrs cpu time, no output) when invoked from icc on\npostgresql for some reason.\n\n\nResults:\n\nI produced two different backend executables. One with gcc and one\nwith icc. 
I ran each executable on the same database and benchmarked\nwith pgbench.\n\n(1) gcc 2.96 (yeah, RedHat 7.1) options:\n -Wall -O3 -fomit-frame-pointer -fforce-addr -fforce-mem\n -funroll-loops -malign-loops=2 -malign-functions=2 -malign-jumps=2\n(2) icc 5.0 options:\n -O3 -tpp6 -xK -unroll -ip\n\n pgbench{i}\n\n [-t 5000] [-t 500 -c 10] [-t 200 -c 25] [-t 25 -c 50]\n(1)gcc 94.06 94.74 86.77 149.38\n(2)icc 102.08 100.31 91.10 155.15\n\n{i}: results are tps excluding connection establishing, average of 3 runs.\n A full vacuum/analyze was performed between runs.\n\nThe results indicate up to a ~10% increase in transactions per second\nfor pgbench. I've seen improvement more like 20% on some very cpu\nintensive programs (ie- lame mp3 encoder).\n\nIf there are some other benchmarks easily run let me know and I'll\ngive them a go.\n\nSide note: it seems difficult to get consistent results out of\npgbench. I ended up dropping/recreating/repopulating the database\nbetween runs. I also modified pgbench to have a constant seed to the\nrandom number generator (attempting to get more consistent results).\n\n\nConclusion\n\nThe Intel compiler appears to produce code better than gcc 2.96 when\ntesting with pgbench. My experience has been that icc excels at\ncpu-intensive processes, which might not be reflected in the pgbench\nresults. Since postgresql can require lots of disk I/O, performance\nversus gcc will not be significant on processes already I/O bound.\n\nThe build process currently requires lots of hand tweaking and isn't\nentirely possible without gcc. Future versions of icc should improve\nupon this. 
As such, it may currently be awkward to give postgres\nsupport for icc out of the box, but it should be doable if there is\ninterest.\n\nMy understanding is that the Intel evaluation license is OK for\nhobbyists, but without purchasing an actual license ($500) the\ncompiled code cannot be distributed.\n\nlink: http://www.intel.com/software/products/compilers/c50/linux/noncom.htm\n\n\nRegards,\nKyle\nkaf@_nwlink_._com_\n",
"msg_date": "Mon, 11 Mar 2002 16:25:51 -0800",
"msg_from": "Kyle <kaf@nwlink.com>",
"msg_from_op": true,
"msg_subject": "Promising results with Intel Linux x86 compiler"
},
{
"msg_contents": "Hi Kyle,\n\nWould you like to try the icc optimised version of PostgreSQL with the\nOSDB (Open Source Database Benchmark)?\n\nIt's based on the AS3AP database benchmark, which I feel is a lot more\nrecognised than pgbench.\n\nIt's URL is http://osdb.sourceforge.net\n\nThe latest released version (0.12) has a problem with hash indexes in\nPostgreSQL (a PostgreSQL bug which Neil Conway has put up his hand to\nfix), but the latest CVS commit of OSDB has a workaround for that.\n\n*If* you don't mind downloading the latest CVS version (it's not a real\nbig program) and compiling that, it would be interesting to see the\nthroughput differences between the gcc compiled and icc compiled\nversions of PostgreSQL.\n\nIf you need the dataset generation utility for OSDB, I have that too. \nJust ask me for it and I'll email it to you. It's a DOS executable, but\nruns fine with Wine (the windows emulator).\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nKyle wrote:\n> \n> I've been playing around with Intel's x86 C++ compiler (icc) for\n> linux. The compiler is very good for optimizing x86 code. With some\n> struggle, I managed to get postgresql compiled with it. I've listed\n> below what I had to do to get postgres compiled, along with some\n> results from pgbench.\n> \n> Compilation\n> \n> icc can compile C code as well as C++. WRT C, icc is binary\n> compatible with gcc. It aims to be compatible with gcc extensions,\n> but has a ways to go.\n> \n> Note: I only compiled the backend with icc. Targets such as psql/\n> pgbench were compiled with gcc.\n> \n> * The first problem I encountered was icc couldn't produce the\n> SUBSYS.o targets for the backend. The work around was to ignore the\n> SUBSYS.o targets and link with each individual object files.\n> \n> * Linking doesn't appear to work with icc's \"-ipo\" optimization. 
The\n> goal of ipo to perform inlining of functions between source files.\n> This is a bummer, since -ipo can produce very good code.\n> \n> * Since icc can't handle inline assembly, a number of files need to be\n> compiled with gcc. These include:\n> access/transam/xlog.c\n> storage/ipc/shmem.c\n> storage/lmgr/proc.c\n> storage/lmgr/lwlock.c\n> storage/lmgr/s_lock.c\n> utils/adt/pg_lzcompress.c\n> Hopefully intel will add inline assembly to icc....\n> \n> * I used the gcc frontend to ld instead of using icc directly. ld\n> seems to hang (3+ hrs cpu time, no output) when invoked from icc on\n> postgresql for some reason.\n> \n> Results:\n> \n> I produced two different backend executables. One with gcc and one\n> with icc. I ran each executable on the same database and benchmarked\n> with pgbench.\n> \n> (1) gcc 2.96 (yeah, RedHat 7.1) options:\n> -Wall -O3 -fomit-frame-pointer -fforce-addr -fforce-mem\n> -funroll-loops -malign-loops=2 -malign-functions=2 -malign-jumps=2\n> (2) icc 5.0 options:\n> -O3 -tpp6 -xK -unroll -ip\n> \n> pgbench{i}\n> \n> [-t 5000] [-t 500 -c 10] [-t 200 -c 25] [-t 25 -c 50]\n> (1)gcc 94.06 94.74 86.77 149.38\n> (2)icc 102.08 100.31 91.10 155.15\n> \n> {i}: results are tps excluding connection establishing, average of 3 runs.\n> A full vacuum/analyze was performed between runs.\n> \n> The results indicate up to a ~10% increase in transactions per second\n> for pgbench. I've seen improvement more like 20% on some very cpu\n> intensive programs (ie- lame mp3 encoder).\n> \n> If there are some other benchmarks easily run let me know and I'll\n> give them a go.\n> \n> Side note: it seems difficult to get consistent results out of\n> pgbench. I ended up dropping/recreating/repopulating the database\n> between runs. 
I also modified pgbench to have a constant seed to the\n> random number generator (attempting to get more consistent results).\n> \n> Conclusion\n> \n> The Intel compiler appears to produce code better than gcc 2.96 when\n> testing with pgbench. My experience has been that icc excels at\n> cpu-intensive processes, which might not be reflected in the pgbench\n> results. Since postgresql can require lots of disk I/O, performance\n> versus gcc will not be significant on processes already I/O bound.\n> \n> The build process currently requires lots of hand tweaking and isn't\n> entirely possible without gcc. Future versions of icc should improve\n> upon this. As such, it may be currently obtuse to give postgres\n> support for icc out of the box but it should be doable if their is\n> interest.\n> \n> My understanding is that the intel evaluation license is ok for\n> hobbyists, but without purchasing an actual license ($500) the\n> compiled code cannot be distributed.\n> \n> link: http://www.intel.com/software/products/compilers/c50/linux/noncom.htm\n> \n> Regards,\n> Kyle\n> kaf@_nwlink_._com_\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Tue, 12 Mar 2002 13:10:42 +1100",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Promising results with Intel Linux x86 compiler"
},
{
"msg_contents": "Kyle <kaf@nwlink.com> writes:\n> * Linking doesn't appear to work with icc's \"-ipo\" optimization. The\n> goal of ipo to perform inlining of functions between source files.\n> This is a bummer, since -ipo can produce very good code.\n\nYou should be quite wary of that one.\n\nThe reason is that accesses to shared memory are typically protected by\nLWLockAcquire/LWLockRelease call pairs. It's absolutely critical that\nno operations get relocated into or out of the code segments between\nsuch call pairs. With interprocedural optimizations turned on, I think\nit's quite likely for a compiler to blow this --- which would lead to\nextremely nasty, low-probability, hard-to-debug failures during\nconcurrent operation.\n\nHaving recently tracked down some similar nastiness *within*\nLWLockAcquire (AIX's compiler feels no compunction about rearranging\nvolatile-object operations w.r.t. non-volatile ones) the prospect of\nany compiler deciding to interleave LWLockAcquire/LWLockRelease code\nwith calling code scares me to death.\n\nAFAIK the only way we could prevent such problems is for *all* pointers\nto shared memory to be marked volatile --- which would doubtless blow a\ngood proportion of the speedup one might otherwise hope to get. Within\nan LWLockAcquire'd segment, shared memory is *not* volatile and we don't\nwant to completely defeat optimization of routines such as the lock and\nbuffer managers.\n\nPossibly you could avoid the issue by arranging for lwlock.c to be\ncompiled at a lower optimization level that doesn't expose its routines\nfor merging with callers.\n\n> Side note: it seems difficult to get consistent results out of\n> pgbench.\n\nYeah, I've noticed that too. You really have to do a complete vacuum\nbetween runs to get any semblance of stable results.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Mar 2002 21:11:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Promising results with Intel Linux x86 compiler "
}
] |
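The "up to a ~10% increase" that Kyle reports can be checked directly against his pgbench table. Below is a trivial helper for the relative-speedup arithmetic; the function name is ours, and the figures plugged into the test are the first-column (single-client) numbers from the thread.

```c
/* Relative speedup of one compiler's tps figure over another,
 * expressed as a percentage of the baseline. */
static double speedup_pct(double baseline_tps, double improved_tps)
{
    return 100.0 * (improved_tps - baseline_tps) / baseline_tps;
}
```

For the first column of the table, 94.06 tps (gcc) versus 102.08 tps (icc) works out to roughly 8.5%, which is consistent with the quoted figure.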
[
{
"msg_contents": "This is the c code section and sql section that I need for casting Object reference column \nIf anyone know there is an internal one is already build, please let me know. \nIf not I'm just wondering we can put that into the src. This will allow all \nobject reference column to cast back as oid like they assigned to and build \nindex on top like a reference key, so no schema needed anymore, pretty close to \nobject relation :) \nReplace the \"man\" with your table name in the sql file will work :)\n\ne.g \nCREATE TABLE man ( firstname text ); \nALTER TABLE man ADD son man;\n\nINSERT INTO man ( firstname ) VALUES ( 'alexbaby' );\nINSERT 12345 1;\nINSERT INTO man ( firstname, son ) VALUES ( 'alex', 12345::man );\nINSERT 12346 1;\nSELECT * FROM man WHERE son = (SELECT oid FROM man WHERE firstname='alexbaby');\n-- find a man his son's also is a man and his name is alexbaby;\nor\nCREATE UNIQUE INDEX man_oid_unique_key ON man ( oid );\nCREATE UNIQUE INDEX man_son_unique_key ON man ( firstname, son );\n\n\nfirstname | son \n-----------------\nalex | 12345\n\n:) I'm done on this part, I hope somebody in postgres still interested in building\nobject functionality. \nThanks \nAlex \n\n#include \"postgres.h\"\t\t\t/* general Postgres declarations */\n\n#include \"fmgr.h\"\t\t\t\t/* for argument/result macros */\n#include \"executor/execdebug.h\"\n#include \"executor/executor.h\"\t/* for GetAttributeByName() */\n#include \"utils/geo_decls.h\"\t/* for point type */\n\n\n/* These prototypes just prevent possible warnings from gcc. 
*/\n\nDatum\t\tcastoid(PG_FUNCTION_ARGS);\nDatum\t\tcastoidbyname(PG_FUNCTION_ARGS);\n\nDatum\t\tmy_abs_lt(PG_FUNCTION_ARGS);\nDatum\t\tmy_abs_le(PG_FUNCTION_ARGS);\nDatum\t\tmy_abs_eq(PG_FUNCTION_ARGS);\nDatum\t\tmy_abs_ge(PG_FUNCTION_ARGS);\nDatum\t\tmy_abs_gt(PG_FUNCTION_ARGS);\nDatum\t\tmy_abs_cmp(PG_FUNCTION_ARGS);\n\n/* Composite types */\n\nPG_FUNCTION_INFO_V1(castoid);\n\nDatum\ncastoid(PG_FUNCTION_ARGS)\n{\n\tTupleTableSlot *t = (TupleTableSlot *) PG_GETARG_POINTER(0);\n\tint32\t\tslot = PG_GETARG_INT32(1);\n\tint32\t\tmyid;\n\n\tbool\t\tisnull;\n\tmyid = DatumGetInt32(GetAttributeByNum(t, slot, &isnull));\n\tif (isnull)\n\t\tPG_RETURN_INT32(0);\n\n\t/*\n\t * Alternatively, we might prefer to do PG_RETURN_NULL() for null\n\t * salary\n\t */\n\n\tPG_RETURN_INT32(myid);\n}\n\nPG_FUNCTION_INFO_V1(castoidbyname);\n\nDatum\ncastoidbyname(PG_FUNCTION_ARGS)\n{\n\tTupleTableSlot *t = (TupleTableSlot *) PG_GETARG_POINTER(0);\n\t/*char *slot = PG_GETARG_CSTRING(1);*/\n\ttext *slot = PG_GETARG_TEXT_P(1); \n\tint32\t\tmyid;\n\tbool\t\tisnull;\n\n\n\t/*printf( \"cast -->%s<----\", slot+VARHDRSZ );*/\n myid = DatumGetInt32(GetAttributeByName(t, slot->vl_dat, &isnull));\n\tif (isnull)\n\t\tPG_RETURN_INT32(0);\n\n\t/*\n\t * Alternatively, we might prefer to do PG_RETURN_NULL() for null\n\t * salary\n\t */\n\n\tPG_RETURN_INT32(myid);\n}\n\nPG_FUNCTION_INFO_V1(my_abs_lt);\nDatum\nmy_abs_lt(PG_FUNCTION_ARGS)\n{\n\tint32 left = PG_GETARG_OID(0), right=PG_GETARG_OID(1); \n\tPG_RETURN_BOOL(left < right);\n}\n\nPG_FUNCTION_INFO_V1(my_abs_le);\nDatum\nmy_abs_le(PG_FUNCTION_ARGS)\n{\n\tint32 left = PG_GETARG_OID(0), right=PG_GETARG_OID(1); \n\tPG_RETURN_BOOL(left <= right);\n}\n\nPG_FUNCTION_INFO_V1(my_abs_eq);\nDatum\nmy_abs_eq(PG_FUNCTION_ARGS)\n{\n\tint32 left = PG_GETARG_OID(0), right=PG_GETARG_OID(1); \n\tPG_RETURN_BOOL(left==right);\n}\n\nPG_FUNCTION_INFO_V1(my_abs_ge);\nDatum\nmy_abs_ge(PG_FUNCTION_ARGS)\n{\n\tint32 left = PG_GETARG_OID(0), right=PG_GETARG_OID(1); 
\n\tPG_RETURN_BOOL(left>=right);\n}\n\nPG_FUNCTION_INFO_V1(my_abs_gt);\nDatum\nmy_abs_gt(PG_FUNCTION_ARGS)\n{\n\tint32 left = PG_GETARG_OID(0), right=PG_GETARG_OID(1); \n\tPG_RETURN_BOOL(left>right);\n}\n\nPG_FUNCTION_INFO_V1(my_abs_cmp);\nDatum\nmy_abs_cmp(PG_FUNCTION_ARGS)\n{\n\tint32 left = PG_GETARG_OID(0), right=PG_GETARG_OID(1); \n\n\tif( left < right )\n\t PG_RETURN_INT32(-1); \n\telse if ( left > right ) \n\t PG_RETURN_INT32(1); \n\telse\n\t PG_RETURN_INT32(0);\n}\n\n\n",
"msg_date": "Mon, 11 Mar 2002 21:40:33 -0600",
"msg_from": "Alex Lau <alex@dpcgroup.com>",
"msg_from_op": true,
"msg_subject": "Get Object?"
}
] |
[
{
"msg_contents": "Hi all,\n\nI'm working on implementing unique hash indexes. I've got most of the\ncode finished, but I'm stumped on how to implement the remainder. Since\nI'm still a newbie to the Postgres code, any pointers or help would\nbe much appreciated.\n\nI've been able to borrow a fair amount of code from the btree unique\nindex implementation (where possible, I've tried to share code between\nhash and btree, and I'll do this more in the final patch). The problem I'm\nhaving is the implementation of the _hash_check_unique() function. This\nis passed the Buffer which corresponds to the first page in the bucket\nchain for the key, the hash item itself, the ScanKey, as well as the\nindex Relation and the heap Relation. Given this, how does one scan\nthrough the hash bucket to determine if a matching key is present?\n\nI can probably figure out the MVCC-related code (ensuring that the\ntuples we find aren't dead, etc.); what I can't figure out is the basic\nmethodology required to search for matching tuples in the hash bucket.\n\nAny help would be appreciated. I've attached the current development\nversion of the patch, if that is of any help.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC",
"msg_date": "12 Mar 2002 01:41:05 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": true,
"msg_subject": "help with a patch"
},
{
"msg_contents": "Neil Conway wrote:\n> Hi all,\n> \n> I'm working on implementing unique hash indexes. I've got most of the\n> code finished, but I'm stumped on how to implement the remainder. Since\n> I'm still a newbie to the Postgres code, so any pointers or help would\n> be much appreciated.\n> \n> I've been able to borrow a fair amount of code from the btree unique\n> index implementation (where possible, I've tried to share code between\n> hash and btree, I'll do this more in the final patch). The problem I'm\n> having is the implementation of the _hash_check_unique() function. This\n> is passed the Buffer which corresponds to the first page in the bucket\n> chain for the key, the hash item itself, the ScanKey, as well as the\n> index Relation and the heap Relation. Given this, how does one scan\n> through the hash bucket to determine if a matching key is present?\n> \n> I can probably figure out the MVCC related code (ensuring that the\n> tuples we find aren't dead, etc); what I can't figure out is the basic\n> methodology required to search for matching tuples in the hash bucket.\n> \n> Any help would be appreciated. I've attached the current development\n> version of the patch, if that is of any help.\n\nI am not totally sure of the question, but for hash don't you have to\nspin through the entire bucket and test each one.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 14 Mar 2002 16:43:05 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: help with a patch"
}
] |
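Bruce's answer in the thread above, spin through the entire bucket and test each item, is essentially the whole algorithm. Below is a simplified in-memory model of that check; it is not PostgreSQL's actual bucket-page layout (the struct, its capacity, and the function names are invented for illustration), but it shows the shape of what a uniqueness check over a hash bucket has to do: visit every item on every page of the bucket chain and compare keys.

```c
#include <stdbool.h>
#include <stddef.h>

#define ITEMS_PER_PAGE 4

/* Toy model of a hash bucket: a chain of pages, each holding a few
 * items, linked through overflow pages.  Equal-keyed items can sit
 * anywhere in the chain, so the uniqueness check must walk all of it. */
struct bucket_page
{
    int nitems;                 /* number of slots in use */
    int keys[ITEMS_PER_PAGE];   /* item keys on this page */
    struct bucket_page *next;   /* next overflow page, or NULL */
};

/* Return true if key already exists somewhere in the bucket chain.
 * The real code would additionally have to check tuple visibility
 * (MVCC) before treating a match as a conflict. */
static bool bucket_contains(const struct bucket_page *page, int key)
{
    for (; page != NULL; page = page->next)
    {
        int i;

        for (i = 0; i < page->nitems; i++)
            if (page->keys[i] == key)
                return true;
    }
    return false;
}
```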
[
{
"msg_contents": "I read the user manual, but I only found operators and types.\nWhere can I find examples?\n\nThanks\n\n",
"msg_date": "Tue, 12 Mar 2002 09:52:03 -0500",
"msg_from": "Manuel Martin <mmm876@yahoo.es>",
"msg_from_op": true,
"msg_subject": "how to get info about spatial extension of pgsql?"
}
] |
[
{
"msg_contents": "Here is my short hack for the object reference ( tuple ) return \nas int4 function. \n\n\nDatum spycastoid(PG_FUNCTION_ARGS);\nDatum spycastoidbyname(PG_FUNCTION_ARGS);\n\n/* Composite types */\n\nPG_FUNCTION_INFO_V1(spycastoid);\n\nDatum\nspycastoid(PG_FUNCTION_ARGS)\n{\n TupleTableSlot *t = (TupleTableSlot *) PG_GETARG_POINTER(0);\n int32 slot = PG_GETARG_INT32(1);\n int32 myid;\n\n bool isnull;\n myid = DatumGetInt32(GetAttributeByNum(t, slot, &isnull));\n if (isnull)\n PG_RETURN_INT32(0);\n\n PG_RETURN_INT32(myid);\n}\n\nPG_FUNCTION_INFO_V1(spycastoidbyname);\n\nDatum\nspycastoidbyname(PG_FUNCTION_ARGS)\n{\n TupleTableSlot *t = (TupleTableSlot *) PG_GETARG_POINTER(0);\n text *slot = PG_GETARG_TEXT_P(1);\n int32 myid;\n bool isnull;\n\n /*printf( \"cast -->%s<----\", slot+VARHDRSZ );*/\n\n myid = DatumGetInt32(GetAttributeByName(t, slot->vl_dat, &isnull));\n if (isnull)\n PG_RETURN_INT32(0);\n\n PG_RETURN_INT32(myid);\n}\n\n***************FUNCTION A******************************\nCREATE FUNCTION spycastoid( table_has_other_object, int4) RETURNS int4\n AS '/usr/lib/postgresql/lib/castfunc.so'\n LANGUAGE 'c';\n\n***************FUNCTION A2******************************\nCREATE FUNCTION castoid(table_has_other_object) RETURNS int4\n AS 'select spycastoid($1, 1);' <---colnum\n LANGUAGE 'sql';\n\n***************FUNCTION B******************************\nCREATE FUNCTION spycastoidbyname( table_has_other_object , text) RETURNS int4\n AS '/usr/lib/postgresql/lib/castfunc.so'\n LANGUAGE 'c';\n\n***************FUNCTION B2*****************************\nCREATE FUNCTION spycastoidbyname( table_has_other_object ) RETURNS int4\n AS 'select spycastoidbyname( $1, \\'colname\\')'\n LANGUAGE 'sql';\n\nso now at lease you can do \n\nselect * from child where spycastoid(child, 1)=178120 or preset A2\nselect * from child where spycastoid(child)=178120\nselect * from child where spycastoidbyname(child, 'myfather')=178120 or preset B2\nselect * from child where 
spycastoidbyname( child )=178120\n\nThere may be some bug in between. I'm not sure. When the tuple get by name, \nsomething the string compare is not correct. Let me know if that work.\nI really want to see some real object action in PostgreSQL\nI'm currently building a Java API for all database to enable object-relation\nmapping for rapid application development. Let me know if you want to talk\nabout this also. \n\nAlex \n\n\n",
"msg_date": "Tue, 12 Mar 2002 08:56:42 -0600",
"msg_from": "alex@AvengerGear.com (Debian User)",
"msg_from_op": true,
"msg_subject": "My Object able solution.....??"
},
{
"msg_contents": "Alex,\n\nThere are quite a few object-relation api's available for java that work\nwith postgres. The most popular being castor, but another lesser known\nis sourceforg.net/projects/player\n\nI'm still not sure how you intend to solve one to many object mapping\nwith this?\n\nDave\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Debian User\nSent: Tuesday, March 12, 2002 9:57 AM\nTo: pgsql-hackers@postgresql.org\nSubject: [HACKERS] My Object able solution.....??\n\n\nHere is my short hack for the object reference ( tuple ) return \nas int4 function. \n\n\nDatum spycastoid(PG_FUNCTION_ARGS);\nDatum spycastoidbyname(PG_FUNCTION_ARGS);\n\n/* Composite types */\n\nPG_FUNCTION_INFO_V1(spycastoid);\n\nDatum\nspycastoid(PG_FUNCTION_ARGS)\n{\n TupleTableSlot *t = (TupleTableSlot *) PG_GETARG_POINTER(0);\n int32 slot = PG_GETARG_INT32(1);\n int32 myid;\n\n bool isnull;\n myid = DatumGetInt32(GetAttributeByNum(t, slot, &isnull));\n if (isnull)\n PG_RETURN_INT32(0);\n\n PG_RETURN_INT32(myid);\n}\n\nPG_FUNCTION_INFO_V1(spycastoidbyname);\n\nDatum\nspycastoidbyname(PG_FUNCTION_ARGS)\n{\n TupleTableSlot *t = (TupleTableSlot *) PG_GETARG_POINTER(0);\n text *slot = PG_GETARG_TEXT_P(1);\n int32 myid;\n bool isnull;\n\n /*printf( \"cast -->%s<----\", slot+VARHDRSZ );*/\n\n myid = DatumGetInt32(GetAttributeByName(t, slot->vl_dat, &isnull));\n if (isnull)\n PG_RETURN_INT32(0);\n\n PG_RETURN_INT32(myid);\n}\n\n***************FUNCTION A******************************\nCREATE FUNCTION spycastoid( table_has_other_object, int4) RETURNS int4\n AS '/usr/lib/postgresql/lib/castfunc.so'\n LANGUAGE 'c';\n\n***************FUNCTION A2******************************\nCREATE FUNCTION castoid(table_has_other_object) RETURNS int4\n AS 'select spycastoid($1, 1);' <---colnum\n LANGUAGE 'sql';\n\n***************FUNCTION B******************************\nCREATE FUNCTION spycastoidbyname( table_has_other_object 
, text) RETURNS\nint4\n AS '/usr/lib/postgresql/lib/castfunc.so'\n LANGUAGE 'c';\n\n***************FUNCTION B2*****************************\nCREATE FUNCTION spycastoidbyname( table_has_other_object ) RETURNS int4\n AS 'select spycastoidbyname( $1, \\'colname\\')'\n LANGUAGE 'sql';\n\nso now at lease you can do \n\nselect * from child where spycastoid(child, 1)=178120 or preset A2\nselect * from child where spycastoid(child)=178120 select * from child\nwhere spycastoidbyname(child, 'myfather')=178120 or preset B2 select *\nfrom child where spycastoidbyname( child )=178120\n\nThere may be some bug in between. I'm not sure. When the tuple get by\nname, \nsomething the string compare is not correct. Let me know if that work. I\nreally want to see some real object action in PostgreSQL I'm currently\nbuilding a Java API for all database to enable object-relation mapping\nfor rapid application development. Let me know if you want to talk about\nthis also. \n\nAlex \n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n\n",
"msg_date": "Tue, 12 Mar 2002 10:41:30 -0500",
"msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: My Object able solution.....??"
}
] |
[
{
"msg_contents": "I have been working on PostgreSQL tuning. Trying to come up with some reliable\nand repeatable tested settings. The idea is that with some reliable numbers,\nyou can measure actual proformance improvement. \n\npgbench is a pretty poor tool for doing this. It fluctuates a great deal. Just\nfrom run to run. It is virtualy impossible to get reasonable numbers out of it. \n\nI have my benchmarking database set to a scale of 100. Here is my script:\n\n#! /bin/sh\nHOST=postgres\nDB=bench\ntotxacts=10000\nexport PATH=$PATH:/usr/local/pgsql/bin\n\nfor c in 25 25 50 50 100 100\ndo\n ssh root@$HOST \"sync;sync;sync;sleep 1\"\n t=`expr $totxacts / $c`\n psql -h $HOST -c 'vacuum ' $DB\n psql -h $HOST -c 'checkpoint' $DB\n echo \"===== sync ======\" 1>&2\n ssh root@$HOST \"sync;sync;sync;sleep 1\"\n echo $c concurrent users... 1>&2\n ./pgbench -n -t $t -h $HOST -c $c $DB\ndone\n\nHere are my results:\nVACUUM\nCHECKPOINT\n===== sync ======\n25 concurrent users...\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 25\nnumber of transactions per client: 400\nnumber of transactions actually processed: 10000/10000\ntps = 88.409581(including connections establishing)\ntps = 88.617577(excluding connections establishing)\nVACUUM\nCHECKPOINT\n===== sync ======\n25 concurrent users...\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 25\nnumber of transactions per client: 400\nnumber of transactions actually processed: 10000/10000\ntps = 102.325257(including connections establishing)\ntps = 102.606724(excluding connections establishing)\nVACUUM\nCHECKPOINT\n===== sync ======\n50 concurrent users...\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 50\nnumber of transactions per client: 200\nnumber of transactions actually processed: 10000/10000\ntps = 116.379559(including connections establishing)\ntps = 117.103796(excluding connections establishing)\nVACUUM\nCHECKPOINT\n===== sync ======\n50 
concurrent users...\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 50\nnumber of transactions per client: 200\nnumber of transactions actually processed: 10000/10000\ntps = 106.869515(including connections establishing)\ntps = 107.479233(excluding connections establishing)\nVACUUM\nCHECKPOINT\n===== sync ======\n100 concurrent users...\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 100\nnumber of transactions per client: 100\nnumber of transactions actually processed: 10000/10000\ntps = 129.923876(including connections establishing)\ntps = 131.757784(excluding connections establishing)\nVACUUM\nCHECKPOINT\n===== sync ======\n100 concurrent users...\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 100\nnumber of transactions per client: 100\nnumber of transactions actually processed: 10000/10000\ntps = 110.506228(including connections establishing)\ntps = 111.858151(excluding connections establishing)\n",
"msg_date": "Tue, 12 Mar 2002 10:29:17 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "pgbench consistency"
}
] |
[
{
"msg_contents": "As PostgreSQL uses the zlib library (for TOAST?), this is a headsup that a \nbug has been found in the zlib library that could cause data corruption or a \nsecurity breach.\n\nSee http://www.gzip.org/zlib/advisory-2002-03-11.txt for more details.\n\nUpdating zlib is strongly recommended by many sources, and a patch is \navailable.\n\nI have only posted this to HACKERS; if a cross-post to GENERAL or ADMIN is \nuseful, that can be arranged.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 12 Mar 2002 11:05:24 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Zlib vulnerability heads-up."
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n\n> As PostgreSQL uses the zlib library (for TOAST?), this is a headsup that a \n> bug has been found in the zlib library that could cause data corruption or a \n> security breach.\n> \n> See http://www.gzip.org/zlib/advisory-2002-03-11.txt for more details.\n> \n> Updating zlib is strongly recommended by many sources, and a patch is \n> available.\n> \n> I have only posted this to HACKERS; if a cross-post to GENERAL or ADMIN is \n> useful, that can be arranged.\n\nFWIW, I really doubt this is much of a problem for postgresql. It's\nmainly a problem for applications dealing with untrusted, compressed\ndata (webbrowsers, imageviewers, programs with skins downloaded from\nthe Internet) etc. \n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "12 Mar 2002 11:24:10 -0500",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Zlib vulnerability heads-up."
},
{
"msg_contents": "Lamar Owen wrote:\n[Charset iso-8859-15 unsupported, filtering to ASCII...]\n> As PostgreSQL uses the zlib library (for TOAST?), this is a headsup that a\n> bug has been found in the zlib library that could cause data corruption or a\n> security breach.\n\n PostgreSQL does not use the zlib library for toast. The\n algorithm used in toast is based on Adisak Pochanayon's SLZ.\n\n\nJan\n\n>\n> See http://www.gzip.org/zlib/advisory-2002-03-11.txt for more details.\n>\n> Updating zlib is strongly recommended by many sources, and a patch is\n> available.\n>\n> I have only posted this to HACKERS; if a cross-post to GENERAL or ADMIN is\n> useful, that can be arranged.\n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Tue, 12 Mar 2002 11:34:13 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Zlib vulnerability heads-up."
},
{
"msg_contents": "On Tuesday 12 March 2002 11:34 am, Jan Wieck wrote:\n> Lamar Owen wrote:\n> [Charset iso-8859-15 unsupported, filtering to ASCII...]\n> > As PostgreSQL uses the zlib library (for TOAST?), this is a headsup that\n> > a bug has been found in the zlib library that could cause data\n> > corruption or a security breach.\n\n> PostgreSQL does not use the zlib library for toast. The\n> algorithm used in toast is based on Adisak Pochanayon's SLZ.\n\nGood. I think.\n\nBut what _does_ use zlib in PostgreSQL?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 12 Mar 2002 11:45:31 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Re: Zlib vulnerability heads-up."
},
{
"msg_contents": "On Tuesday 12 March 2002 11:24 am, Trond Eivind Glomsrød wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Updating zlib is strongly recommended by many sources, and a patch is\n> > available.\n\n> FWIW, I really doubt this is much of a problem for postgresql. It's\n> mainly a problem for applications dealing with untrusted, compressed\n> data (webbrowsers, imageviewers, programs with skins downloaded from\n> the Internet) etc.\n\nIt's probably NOT a big problem; but it IS a bug in an underlying library.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 12 Mar 2002 11:46:49 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Re: Zlib vulnerability heads-up."
},
{
"msg_contents": "Is there a way in PostgreSQL to have a client lock a row in a table for\nexclusive access?\nI need to be able to lock individual rows in a table for SELECT and UPDATE\nin the one client and deny all other clients from accessing those rows at\nall while the lock is being held. They do need to be able to access other\nrows that are not locked.\n\nThank you,\n\nLance Ellinghaus\n\n",
"msg_date": "Tue, 12 Mar 2002 11:12:29 -0600",
"msg_from": "\"Lance Ellinghaus\" <lellinghaus@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Exclusive Row access???"
},
{
"msg_contents": "On Tue, 2002-03-12 at 16:05, Lamar Owen wrote:\n> As PostgreSQL uses the zlib library (for TOAST?), this is a headsup that a \n> bug has been found in the zlib library that could cause data corruption or a \n> security breach.\n> \n\nTrue enough, ldd on my system says that postgres is linked against zlib,\nbut I knew that TOAST didn't use it (it uses\nsrc/backend/utils/adt/pg_lzcompress.c), so what does?\n\nAfter a quick look, I offer the following summary:\n\n\"zlib\" is listed as a loadable module in PL/Python (but I don't know\nwhether this is related to the same zlib at all)\n \nzlib.h *is* used by the \"custom\" format of pg_dump.\n\nMaybe I'm missing something, though - I just did a grep for \"zlib\" and\nHAVE_LIBZ through the source. \n\nThis also suggests that the postgres backend needn't be linked against\nzlib at all, if pg_dump is the only utility using it. \n\nThe risk from this vulnerability is that someone receiving a dump in\ncustom format and using pg_restore on it might be at risk of a trojan\nattack - but this seems like a very slim risk (how many people would\nattempt to load a data dump from an untrusted source into their DB?).\n\nNonetheless, it's useful to know this (and it also means I've spotted\nthe (possibly) unnecessary library link :)\n\nRegards\n\nJohn\n\n\n",
"msg_date": "12 Mar 2002 17:18:35 +0000",
"msg_from": "John Gray <jgray@azuli.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Zlib vulnerability heads-up."
},
{
"msg_contents": "Lamar Owen wrote:\n> On Tuesday 12 March 2002 11:34 am, Jan Wieck wrote:\n> > Lamar Owen wrote:\n> > [Charset iso-8859-15 unsupported, filtering to ASCII...]\n> > > As PostgreSQL uses the zlib library (for TOAST?), this is a headsup that\n> > > a bug has been found in the zlib library that could cause data\n> > > corruption or a security breach.\n>\n> > PostgreSQL does not use the zlib library for toast. The\n> > algorithm used in toast is based on Adisak Pochanayon's SLZ.\n>\n> Good. I think.\n>\n> But what _does_ use zlib in PostgreSQL?\n\n On a quick search I can only see pg_backup using it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Tue, 12 Mar 2002 12:58:59 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Zlib vulnerability heads-up."
},
{
"msg_contents": "Lance Ellinghaus wrote:\n\n> Is there a way in PostgreSQL to have a client lock a row in a table for\n> exclusive access?\n\n\nMight want to start with section 9.6.2 of the PostgreSQL documentation, \n\"Row-Level Locks\". You may want SELECT FOR UPDATE, too.\n\n-Fran\n\n\n",
"msg_date": "Tue, 12 Mar 2002 13:03:15 -0500",
"msg_from": "Fran Fabrizio <ffabrizio@mmrd.com>",
"msg_from_op": false,
"msg_subject": "Re: Exclusive Row access???"
},
{
"msg_contents": "On Tue, Mar 12, 2002 at 11:45:31AM -0500, Lamar Owen wrote:\n> \n> But what _does_ use zlib in PostgreSQL?\n\nI thought it was only pg_dump (the binary output format).\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Tue, 12 Mar 2002 13:20:17 -0500",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: Zlib vulnerability heads-up."
},
{
"msg_contents": "On Tue, 2002-03-12 at 11:46, Lamar Owen wrote:\n> On Tuesday 12 March 2002 11:24 am, Trond Eivind Glomsrød wrote:\n> > Lamar Owen <lamar.owen@wgcr.org> writes:\n> > > Updating zlib is strongly recommended by many sources, and a patch is\n> > > available.\n> \n> > FWIW, I really doubt this is much of a problem for postgresql. It's\n> > mainly a problem for applications dealing with untrusted, compressed\n> > data (webbrowsers, imageviewers, programs with skins downloaded from\n> > the Internet) etc.\n> \n> It's probably NOT a big problem; but it IS a bug in an underlying library.\n\nCan we just add an item to the 7.2.1 release notes suggesting that zlib\n1.1.4 or greater is installed? AFAICT that should be sufficient.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "12 Mar 2002 13:51:14 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: Zlib vulnerability heads-up."
},
{
"msg_contents": "IIRC, the issue here is that it was a double free and that it was ONLY\nof possible concern in the event that a specific sequence of calls was\nmade AND a very clever hack was available to allow for\nuncontrolled/unvalidated input.\n\nWhile it may be worth noting, I seriously doubt this is a security issue\nfor PostgreSQL.\n\nGreg\n\n\n\nOn Tue, 2002-03-12 at 10:46, Lamar Owen wrote:\n> On Tuesday 12 March 2002 11:24 am, Trond Eivind Glomsrød wrote:\n> > Lamar Owen <lamar.owen@wgcr.org> writes:\n> > > Updating zlib is strongly recommended by many sources, and a patch is\n> > > available.\n> \n> > FWIW, I really doubt this is much of a problem for postgresql. It's\n> > mainly a problem for applications dealing with untrusted, compressed\n> > data (webbrowsers, imageviewers, programs with skins downloaded from\n> > the Internet) etc.\n> \n> It's probably NOT a big problem; but it IS a bug in an underlying library.\n> -- \n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html",
"msg_date": "12 Mar 2002 14:18:02 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Zlib vulnerability heads-up."
},
{
"msg_contents": "On 12 Mar 2002, Greg Copeland wrote:\n\n> IIRC, the issue here is that it was a double free and that it was ONLY\n> of possible concern in the event that a specific sequence of calls was\n> made AND a very clever hack was available to allow for\n> uncontrolled/unvalidated input.\n> \n> While it may be worth noting, I seriously doubt this is a security issue\n> for PostgreSQL.\n\nIt's an easy DOS for things like mozilla, netscape. For postgres, using \nit internally? Nah.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n\n",
"msg_date": "Tue, 12 Mar 2002 15:22:45 -0500 (EST)",
"msg_from": "=?ISO-8859-1?Q?Trond_Eivind_Glomsr=F8d?= <teg@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Zlib vulnerability heads-up."
},
{
"msg_contents": "On Tuesday 12 March 2002 03:22 pm, Trond Eivind Glomsrød wrote:\n> On 12 Mar 2002, Greg Copeland wrote:\n> > While it may be worth noting, I seriously doubt this is a security issue\n> > for PostgreSQL.\n\n> It's an easy DOS for things like mozilla, netscape. For postgres, using\n> it internally? Nah.\n\nThus the subject line ending with the words 'heads-up' -- not a serious \nissue, but something to just take note of.\n\nNow, had it been that TOAST used it, it might have been possible, however \nremote it may seem, to craft something like a form item entry on a web page \nbackended by PostgreSQL that could end up being processed by that code. \nStranger things _have_ happened. And the non-script-kiddie malicious \ncrackers out there are that devious. You really can't be too careful.\n\nAnd maybe all of the people on HACKERS haven't seen the CERT advisory as yet; \na heads-up is just that.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 12 Mar 2002 15:50:01 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Re: Zlib vulnerability heads-up."
},
{
"msg_contents": "Lamar Owen wrote:\n> On Tuesday 12 March 2002 11:24 am, Trond Eivind Glomsrød wrote:\n> > Lamar Owen <lamar.owen@wgcr.org> writes:\n> > > Updating zlib is strongly recommended by many sources, and a patch is\n> > > available.\n>\n> > FWIW, I really doubt this is much of a problem for postgresql. It's\n> > mainly a problem for applications dealing with untrusted, compressed\n> > data (webbrowsers, imageviewers, programs with skins downloaded from\n> > the Internet) etc.\n>\n> It's probably NOT a big problem; but it IS a bug in an underlying library.\n\n In fact, it isn't a problem at all. The only data any\n PostgreSQL DBA would ever pump into a restore is something he\n built himself or something he got from a secure source,\n right? I mean, you don't feed some unknown script you found\n on the net into the DB as the PostgreSQL superuser. In that\n case, someone doesn't need to hand-craft such bad compressed\n data, he can simply use the \\! functionality of psql in his\n script to do whatever he wants as user postgres.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Tue, 12 Mar 2002 16:00:56 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Zlib vulnerability heads-up."
},
{
"msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n\n> Lamar Owen wrote:\n> > On Tuesday 12 March 2002 11:34 am, Jan Wieck wrote:\n> > > Lamar Owen wrote:\n> > > [Charset iso-8859-15 unsupported, filtering to ASCII...]\n> > > > As PostgreSQL uses the zlib library (for TOAST?), this is a headsup that\n> > > > a bug has been found in the zlib library that could cause data\n> > > > corruption or a security breach.\n> >\n> > > PostgreSQL does not use the zlib library for toast. The\n> > > algorithm used in toast is based on Adisak Pochanayon's SLZ.\n> >\n> > Good. I think.\n> >\n> > But what _does_ use zlib in PostgreSQL?\n> \n> On a quick search I can only see pg_backup using it.\n\nI see many more, including the postmaster:\n\n[teg@halden teg]$ ldd /usr/bin/postmaster \n\tlibssl.so.2 => /lib/libssl.so.2 (0x4002e000)\n\tlibcrypto.so.2 => /lib/libcrypto.so.2 (0x4005c000)\n\tlibkrb5.so.3 => /usr/kerberos/lib/libkrb5.so.3 (0x4011f000)\n\tlibk5crypto.so.3 => /usr/kerberos/lib/libk5crypto.so.3 (0x40176000)\n\tlibcom_err.so.3 => /usr/kerberos/lib/libcom_err.so.3 (0x40186000)\n\tlibz.so.1 => /usr/lib/libz.so.1 (0x40188000)\n\tlibcrypt.so.1 => /lib/libcrypt.so.1 (0x40196000)\n\tlibresolv.so.2 => /lib/libresolv.so.2 (0x401c4000)\n\tlibnsl.so.1 => /lib/libnsl.so.1 (0x401d5000)\n\tlibdl.so.2 => /lib/libdl.so.2 (0x401ea000)\n\tlibm.so.6 => /lib/i686/libm.so.6 (0x401ed000)\n\tlibreadline.so.4 => /usr/lib/libreadline.so.4 (0x4020f000)\n\tlibtermcap.so.2 => /lib/libtermcap.so.2 (0x40235000)\n\tlibc.so.6 => /lib/i686/libc.so.6 (0x4023a000)\n\t/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)\n[teg@halden teg]$ \n\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "12 Mar 2002 18:30:42 -0500",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Zlib vulnerability heads-up."
},
{
"msg_contents": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n>>> But what _does_ use zlib in PostgreSQL?\n>> \n>> On a quick search I can only see pg_backup using it.\n\n> I see many more, including the postmaster:\n\nAFAIK the only actual *use* of zlib is in pg_dump/pg_restore.\n\nPretty much all our executables will be *linked* to it, however.\nThis is because Autoconf doesn't conveniently support making different\nLIBS lists for every executable, and so we just use one one-size-fits-\nall list for all of 'em. (Perhaps AC 2.5* will make this better?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Mar 2002 09:50:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Zlib vulnerability heads-up. "
},
{
"msg_contents": "Tom Lane writes:\n\n> This is because Autoconf doesn't conveniently support making different\n> LIBS lists for every executable, and so we just use one one-size-fits-\n> all list for all of 'em. (Perhaps AC 2.5* will make this better?)\n\nAutoconf has no knowledge of what your build system looks like. It merely\ntests what libraries exist and stores that knowledge in a list. It's up\nto you what you do with that list.\n\nWe could probably replace $(LIBS) with $(filter {the libraries you really\nwant}, $(LIBS)) everywhere (see libpq Makefile). But it might be hard to\nmaintain. Not sure.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 13 Mar 2002 14:46:00 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Zlib vulnerability heads-up. "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> We could probably replace $(LIBS) with $(filter {the libraries you really\n> want}, $(LIBS)) everywhere (see libpq Makefile). But it might be hard to\n> maintain. Not sure.\n\nI'm concerned about cross-library dependencies with that sort of thing.\neg, on some platforms maybe -lcurses requires -ltermcap, on others no.\nThe existing process for building the LIBS list gets this right, but\nextracting a subset of the LIBS list isn't guaranteed to be right.\n\nI'm all for trimming the LIBS list for particular executables if we\ncan do it ... but as Peter says, it might be a headache.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Mar 2002 15:01:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Zlib vulnerability heads-up. "
}
] |
[
{
"msg_contents": "\n> Index Scan using foo_f1_key on foo (cost=0.00..17.08 rows=1 width=12)\n> indxqual: (f1 = 11)\n> qual: (f3 = 44)\n\nWow, that looks really nice. \nThe field headers could probably be more verbose, like:\n\nIndex Scan using foo_f1_key on foo (cost=0.00..17.08 rows=1 width=12)\n Index Filter: (f1 = 11)\n Filter: (f3 = 44)\n\nand for btree ranges: \n Lower Index Filter:\n Upper Index Filter:\n\n> Question for the group: does this seem valuable enough to put into the\n> standard EXPLAIN output, or should it be a special option? I can\n\nImho make it standard for EXPLAIN. Simply too useful to not show it :-)\n\nAndreas\n",
"msg_date": "Tue, 12 Mar 2002 17:23:09 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Adding qualification conditions to EXPLAIN output"
}
] |
[
{
"msg_contents": "> > > > > Personally, I think that Tom's code should go into standard EXPLAIN.\n> > > > \n> > > > I am confused. Which grammar do you like?\n> > > \n> > > Neither =).\n> > \n> > OK, would you suggest one?\n> \n> I don't think there needs to be a grammar change. I think that Tom's\n> qualification changes should go into non-verbose EXPLAIN and that pretty\n> vs. non-pretty debug just gets handled via debug_print_pretty.\n\ncount me in :-)\n\nAnd if I want it verbose I want it verbose (== gimme all you can tell).\nI would not really see a logic to different levels, what goes in which \nlevel ? Seems I would always want to see some detail in each of the\nlevels.\n\nAndreas\n",
"msg_date": "Tue, 12 Mar 2002 17:42:43 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Rationalizing EXPLAIN VERBOSE output"
}
] |
[
{
"msg_contents": "we're running on sgi powerchallenge, 8 r10000 4-way smp, and we're getting bad performance from postgres, throughput increases from 1 to 5 streams, but from 5 and above there is no further increase. Performance analysis shows high sleep waiting for resources; an example:\n\nrunning(user mode) =32.57\nrunning(system mode)=7.15\nrunning(graphics mode)=0.21\nwaiting(for block I/O)=0.03\nwaiting(paging)=0.00\nwaiting(for memory)=0.00\nwaiting(in select)=17.13\nwaiting(in cpu queue)=0.26\nsleep(for resource)=42.87\n\nany tip?\n\nthanks and regards\n\nalmost forget postgres7.2\n",
"msg_date": "Tue, 12 Mar 2002 18:45:12 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "bad performance on irix"
},
{
"msg_contents": "thanks for your answer, I sent three mails with details, you must have missed\none. Briefly:\nwe're running a TPC-H-like benchmark on an 8 processor machine. Performance grows from\n1 to 5 streams, but at 5 streams and up the results stay flat. The\nresults are measured in queries-per-hour, obtained by multiplying by the\nnumber of streams and dividing by real time, so above 5 streams\nperformance is supposed to keep growing up to 7 or 8 streams, but it stays\nflat.\nthanks and regards\n",
"msg_date": "Wed, 13 Mar 2002 08:49:18 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "Re: bad performance on irix"
},
{
"msg_contents": "part of postgres.conf\n\nfsync on\nwal_files=6\nwal_buffers=64\nshared_buffers=81940\nsort_mem=16384\ncheckpoint_segments=10\n\n\nthanks and regards\n",
"msg_date": "Wed, 13 Mar 2002 09:01:13 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "Re: bad performance on irix"
},
{
"msg_contents": "lamigo@atc.unican.es (\"Luis Alberto Amigo Navarro\") writes:\n\n > we're running on sgi powerchallenge, 8 r10000 4-way smp, and we're\n > getting bad performance from postgres, throughput increases from 1\n > to 5 streams, but from 5 and above there is no further increase,\n > performance analysis show high sleep waiting for resources\n\nIIRC there is a bottleneck on calls to sleep() or similar on IRIX\nSMP. All requests are dealt with on just one of the CPUs. I don't\nrecollect whether there is a way to work around that or whether\nprograms need to be rewritten.\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWesternGeco -./\\.- by myself and does not represent\npete.forman@westerngeco.com -./\\.- opinion of Schlumberger, Baker\nhttp://petef.port5.com (new) -./\\.- Hughes or their divisions.\n",
"msg_date": "25 Mar 2002 15:04:04 +0000",
"msg_from": "Pete Forman <pete.forman@westerngeco.com>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on irix"
}
] |
[
{
"msg_contents": "here is a graph with the output for 5 streams, remember it has 8 cpus\nRegards",
"msg_date": "Tue, 12 Mar 2002 19:21:11 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "again on bad performance"
},
{
"msg_contents": "yes, it is done with cvd, this tool is part of speedshop utilities that are\nincluded on Irix.\nThey use hardware counters and stack registers to obtain data, it drops\nabout 5% performance (all marked as system)\nthanks and regards\n\n",
"msg_date": "Wed, 13 Mar 2002 18:48:32 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "Re: again on bad performance"
}
] |
[
{
"msg_contents": "Hi,\n\nIt seems that the Linux kernel will very shortly acquire a lightweight\nuserlevel locking primitive (called futexes), thanks primarily to Rusty\nand Hubertus. It looks to be very useful for the sort of locking that\ndatabases of various types need to do.\n\nThey have a bunch of rather nice properties:\n\na) low overhead in the no-contention case - a single locked\n instruction on i386\nb) no kernel overhead for non-contended locks - make as\n many as you like, the kernel memory cost is only\n O(number of locks with waiters)\nc) are interruptible / restartable across signals\nd) the name :-)\n\nThey don't do:\n\na) deadlock detection\nb) cleanup on process exit -- the kernel doesn't know who\n had the lock, so it can't help here\n\nA reader/writer version is available, though it's currently implemented\nwith two futexes. Spin-for-a-while-before-sleeping versions are planned.\n\nThe API looks like this:\n\n\t/* these can be stored anywhere -- mmapped file,\n\t * sysv shm, shared anonymous memory */\n\tstruct futex lock;\n\n\t/* this does mprotect(.., ...|PROT_SEM, ...);\n\t * seemingly some architectures need to do odd\n\t * things to get atomic/coherent memory */\n\tif(futex_region(lock, sizeof(lock)))\n\t\tfail(\"futexes not available on this kernel\");\n\n\tfutex_init(&lock);\n\n\t...\n\n\t/* grab the lock -- we also have a futex_trydown */\n\tif(futex_down(&lock))\n\t\tfail(\"couldn't get lock\");\n\n\t/* critical section */\n\n\t/* release lock */\n\tfutex_up(&lock);\n\n\nWe're looking for interesting applications to try this stuff out on.\nAre there:\n\na) parts of postgresql which would like this, or\nb) changes to the interface (or feature set) which\n would make it more suited\n\n?\n\nThe LWLocks look not unlike this, but my impression is that they are\nlow-contention, so any improvement would be small.\n\nAny pointers into appropriate bits of source would be greatly appreciated.\n\nThanks,\n\nMatthew.\n\n",
"msg_date": "Tue, 12 Mar 2002 21:25:12 +0000 (GMT)",
"msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>",
"msg_from_op": true,
"msg_subject": "Lightweight locking primitive"
},
{
"msg_contents": "Matthew Kirkwood wrote:\n> \n> Hi,\n> \n> It seems that the Linux kernel will very shortly acquire a lightweight\n> userlevel locking primitive (called futexes), thanks primarily to Rusty\n> and Hubertus. It looks to be very useful for the sort of locking that\n> databases of various types need to do.\n> \n> They have a bunch of rather nice properties:\n\nI am curious how 'futexes' are different/better than POSIX (pthread\nstyle) mutexes?\n\n> \n> a) low overhead in the no-contention case - a single locked\n> instruction on i386\n\nshould be same for pthread_mutex_lock()\n\n> b) no kernel overhead for non-contended locks - make as\n> many as you like, the kernel memory cost is only\n> O(number of locks with waiters)\n\nWell it can't have kernel overhead for non-contended locks if a\nnon-contended lock is one opcode, can it?\n\n> c) are interruptible / restartable across signals\n\nNot sure what 'restartable' means? Do you mean locking primitives would\nrestarted by kernel when interrupted by signals? Like kernel calls with\nSA_RESTART set? How that would be possible if kernel does not even know\nabout non-contended locks?\n\n> d) the name :-)\n> \n> They don't do:\n> \n> a) deadlock detection\n> b) cleanup on process exit -- the kernel doesn't know who\n> had the lock, so it can't help here\n> \n> A reader/writer version is available, though it's currently implemented\n> with two futexes. Spin-for-a-while-before-sleeping versions are planned.\n> \n\nRW locks are defined by POSIX too and can be implemented by mutex +\ncondvar. I wonder what is wrong with those... At the same time Linux has\nPOSIX semaphores which can not be shared across processes, making them\nquite useless. Fixing that could help postgres quite a bit more I\nthink...\n\n-- igor\n",
"msg_date": "Tue, 12 Mar 2002 15:48:23 -0600",
"msg_from": "Igor Kovalenko <Igor.Kovalenko@motorola.com>",
"msg_from_op": false,
"msg_subject": "Re: Lightweight locking primitive"
},
{
"msg_contents": "Igor Kovalenko <Igor.Kovalenko@motorola.com> writes:\n\n> Matthew Kirkwood wrote:\n> > \n> > Hi,\n> > \n> > It seems that the Linux kernel will very shortly acquire a lightweight\n> > userlevel locking primitive (called futexes), thanks primarily to Rusty\n> > and Hubertus. It looks to be very useful for the sort of locking that\n> > databases of various types need to do.\n> > \n> > They have a bunch of rather nice properties:\n> \n> I am curious how 'futexes' are different/better than POSIX (pthread\n> style) mutexes?\n\nThey're basically the same thing. Currently, pthread_mutexes on Linux\n(implemented in glibc) are fairly gross in the contended case, since\nthere is no clean way to wait for lock release, and they interact\nfairly nastily with signal semantics. The futex patches create a new\nsystem call which lets you cleanly wait for a locked futex (an\nunlocking task checks for waiting lockers and calls into the kernel\nfor wakeups if it finds any).\n\nThere's no reason that POSIX mutextes and semaphores couldn't be\nimplemented on top of futexes, usable both in threaded and\nmultiprocess shared-memory environments. \n\n> Not sure what 'restartable' means? Do you mean locking primitives would\n> restarted by kernel when interrupted by signals? Like kernel calls with\n> SA_RESTART set? How that would be possible if kernel does not even know\n> about non-contended locks?\n\nI interpret the above as meaning: contended case (blocked in\nfutex_wait syscall or whatever it's called) can be cleanly interrupted\nand by a signal and restarted automatically.\n\n> RW locks are defined by POSIX too and can be implemented by mutex +\n> condvar. I wonder what is wrong with those...\n\nThere's no conflict between POSIX locks and futexes; the latter are\njust a good, new way to implement the former.\n\n> At the same time Linux has\n> POSIX semaphores which can not be shared across processes, making them\n> quite useless. 
Fixing that could help postgres quite a bit more I\n> think...\n\nYes, having mutexes and semaphores shareable by different processes\nis one of the benefits of the new locks as I understand them.\n\n-Doug\n-- \nDoug McNaught Wireboard Industries http://www.wireboard.com/\n\n Custom software development, systems and network consulting.\n Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...\n",
"msg_date": "12 Mar 2002 17:18:16 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Lightweight locking primitive"
},
{
"msg_contents": "Doug McNaught wrote:\n> They're basically the same thing. Currently, pthread_mutexes on Linux\n> (implemented in glibc) are fairly gross in the contended case, since\n> there is no clean way to wait for lock release, and they interact\n> fairly nastily with signal semantics. The futex patches create a new\n> system call which lets you cleanly wait for a locked futex (an\n> unlocking task checks for waiting lockers and calls into the kernel\n> for wakeups if it finds any).\n\nStrange that it doesn't wait for the lock. BSD/OS has:\n\n The pthread_mutex_lock() function locks the mutex pointed to by mutex. If\n mutex is already locked, the calling thread will block until the mutex\n becomes available. Upon success the pthread_mutex_lock() function re-\n turns with the mutex locked and the calling thread as its owner.\n\nIn fact, they have a pthread_mutex_trylock() version that doesn't block:\n\n The pthread_mutex_trylock() function performs a non-blocking mutex lock\n operation. It behaves exactly like pthread_mutex_lock() except that if\n any thread (including the calling thread) currently owns the mutex, an\n immediate error return is performed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Mar 2002 19:08:50 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Lightweight locking primitive"
},
{
"msg_contents": "On Tue, 12 Mar 2002, Bruce Momjian wrote:\n\n> > They're basically the same thing. Currently, pthread_mutexes on Linux\n> > (implemented in glibc) are fairly gross in the contended case, since\n> > there is no clean way to wait for lock release,\n\n> Strange that it doesn't wait for the lock.\n[..]\n\nIt does wait, in that the call will not return before or unless\nthe thread has acquired the lock. However, it waits in an ugly\nway, via spin-and-yield or some evil signal or pipe hackery via\na manager thread.\n\npthread_mutexes are fairly ugly, but they should still be\nlightweight. Until now, there was no way to do that under\nLinux. (I don't know how the other free Unixes do it, but I\nsuspect it is not much better.)\n\nMatthew.\n\n",
"msg_date": "Wed, 13 Mar 2002 00:51:37 +0000 (GMT)",
"msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>",
"msg_from_op": true,
"msg_subject": "Re: Lightweight locking primitive"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> Doug McNaught wrote:\n> > They're basically the same thing. Currently, pthread_mutexes on Linux\n> > (implemented in glibc) are fairly gross in the contended case, since\n> > there is no clean way to wait for lock release, and they interact\n> > fairly nastily with signal semantics. The futex patches create a new\n> > system call which lets you cleanly wait for a locked futex (an\n> > unlocking task checks for waiting lockers and calls into the kernel\n> > for wakeups if it finds any).\n> \n> Strange that it doesn't wait for the lock. BSD/OS has:\n\nIt does wait. If the lock is already locked (atomic test in\nuserspace) the process makes a futex_wait system call, which puts the\nprocess on a kernel waitqueue. \n\n From what I can see, the new Linux locks are not a replacement for\nPOSIX locks/semaphores, they're simply a fairly clean way of\nimplementing them. They also just went into the development kernel,\nso any appearance in production systems may take a few months at\nleast. \n\n-Doug\n-- \nDoug McNaught Wireboard Industries http://www.wireboard.com/\n\n Custom software development, systems and network consulting.\n Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...\n",
"msg_date": "12 Mar 2002 23:29:22 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Lightweight locking primitive"
},
{
"msg_contents": "Matthew Kirkwood wrote:\n> \n> On Tue, 12 Mar 2002, Bruce Momjian wrote:\n> \n> > > They're basically the same thing. Currently, pthread_mutexes on Linux\n> > > (implemented in glibc) are fairly gross in the contended case, since\n> > > there is no clean way to wait for lock release,\n> \n> > Strange that it doesn't wait for the lock.\n> [..]\n> \n> It does wait, in that the call will not return before or unless\n> the thread has acquired the lock. However, it waits in an ugly\n> way, via spin-and-yield or some evil signal or pipe hackery via\n> a manager thread.\n> \n> pthread_mutexes are fairly ugly, but they should still be\n> lightweight. Until now, there was no way to do that under\n> Linux. (I don't know how the other free Unixes do it, but I\n> suspect it is not much better.)\n\nIf all free Unixes do it in such an ugly way then well, what you get is\nwhat you paid for ;) \nI still would be surprized if all implementations were as bad as Linux\none is. Pthread mutexes are very lightweight and fast on Solaris and QNX\n(I mostly work with those). They can be shared across processes on both.\nImplementation-wise, QNX has corresponding blocking state so when some\nthread locks a mutex other contenders get blocked by kernel. They are\n(one of them) unblocked by kernel when mutex is released.\n\nSpeaking about ugliness, the only issue I see with pthread mutexes is\nthat they can get orphaned. There is no portable way to deal with that,\nbut again both Solaris and QNX have extended API which allows some\nthread to aqcuire ownership of an orphaned mutex. I guess that\neventually will make its way into POSIX.\n\n-- igor\n",
"msg_date": "Tue, 12 Mar 2002 22:41:05 -0600",
"msg_from": "Igor Kovalenko <Igor.Kovalenko@motorola.com>",
"msg_from_op": false,
"msg_subject": "Re: Lightweight locking primitive"
}
] |
[
{
"msg_contents": "\nBiggest difference, FUTEX work across address spaces, pthread_mutexes don't\n!\n\nHubertus Franke\nEnterprise Linux Group (Mgr), Linux Technology Center (Member Scalability)\n, OS-PIC (Chair)\nemail: frankeh@us.ibm.com\n(w) 914-945-2003 (fax) 914-945-4425 TL: 862-2003\n\n\n\nIgor Kovalenko <Igor.Kovalenko@motorola.com> on 03/12/2002 04:48:23 PM\n\nTo:\ncc: pgsql-hackers@postgresql.org, Hubertus Franke/Watson/IBM@IBMUS,\n rusty@rustcorp.com.au\nSubject: Re: [HACKERS] Lightweight locking primitive\n\n\n\nMatthew Kirkwood wrote:\n>\n> Hi,\n>\n> It seems that the Linux kernel will very shortly acquire a lightweight\n> userlevel locking primitive (called futexes), thanks primarily to Rusty\n> and Hubertus. It looks to be very useful for the sort of locking that\n> databases of various types need to do.\n>\n> They have a bunch of rather nice properties:\n\nI am curious how 'futexes' are different/better than POSIX (pthread\nstyle) mutexes?\n\n>\n> a) low overhead in the no-contention case - a single locked\n> instruction on i386\n\nshould be same for pthread_mutex_lock()\n\n> b) no kernel overhead for non-contended locks - make as\n> many as you like, the kernel memory cost is only\n> O(number of locks with waiters)\n\nWell it can't have kernel overhead for non-contended locks if a\nnon-contended lock is one opcode, can it?\n\n> c) are interruptible / restartable across signals\n\nNot sure what 'restartable' means? Do you mean locking primitives would\nrestarted by kernel when interrupted by signals? Like kernel calls with\nSA_RESTART set? How that would be possible if kernel does not even know\nabout non-contended locks?\n\n> d) the name :-)\n>\n> They don't do:\n>\n> a) deadlock detection\n> b) cleanup on process exit -- the kernel doesn't know who\n> had the lock, so it can't help here\n>\n> A reader/writer version is available, though it's currently implemented\n> with two futexes. 
Spin-for-a-while-before-sleeping versions are planned.\n>\n\nRW locks are defined by POSIX too and can be implemented by mutex +\ncondvar. I wonder what is wrong with those... At the same time Linux has\nPOSIX semaphores which can not be shared across processes, making them\nquite useless. Fixing that could help postgres quite a bit more I\nthink...\n\n-- igor\n\n\n\n",
"msg_date": "Tue, 12 Mar 2002 17:55:46 -0500",
"msg_from": "\"Hubertus Franke\" <frankeh@us.ibm.com>",
"msg_from_op": true,
"msg_subject": "Re: Lightweight locking primitive"
},
{
"msg_contents": "You should take a look at pthread_mutex_setpshared(). May be they don't\nin Linux, but that's just consequence of braindead implementation.\n\n-- igor\n\nHubertus Franke wrote:\n> \n> Biggest difference, FUTEX work across address spaces, pthread_mutexes don't\n> !\n> \n> Hubertus Franke\n> Enterprise Linux Group (Mgr), Linux Technology Center (Member Scalability)\n> , OS-PIC (Chair)\n> email: frankeh@us.ibm.com\n> (w) 914-945-2003 (fax) 914-945-4425 TL: 862-2003\n> \n> Igor Kovalenko <Igor.Kovalenko@motorola.com> on 03/12/2002 04:48:23 PM\n> \n> To:\n> cc: pgsql-hackers@postgresql.org, Hubertus Franke/Watson/IBM@IBMUS,\n> rusty@rustcorp.com.au\n> Subject: Re: [HACKERS] Lightweight locking primitive\n> \n> Matthew Kirkwood wrote:\n> >\n> > Hi,\n> >\n> > It seems that the Linux kernel will very shortly acquire a lightweight\n> > userlevel locking primitive (called futexes), thanks primarily to Rusty\n> > and Hubertus. It looks to be very useful for the sort of locking that\n> > databases of various types need to do.\n> >\n> > They have a bunch of rather nice properties:\n> \n> I am curious how 'futexes' are different/better than POSIX (pthread\n> style) mutexes?\n> \n> >\n> > a) low overhead in the no-contention case - a single locked\n> > instruction on i386\n> \n> should be same for pthread_mutex_lock()\n> \n> > b) no kernel overhead for non-contended locks - make as\n> > many as you like, the kernel memory cost is only\n> > O(number of locks with waiters)\n> \n> Well it can't have kernel overhead for non-contended locks if a\n> non-contended lock is one opcode, can it?\n> \n> > c) are interruptible / restartable across signals\n> \n> Not sure what 'restartable' means? Do you mean locking primitives would\n> restarted by kernel when interrupted by signals? Like kernel calls with\n> SA_RESTART set? 
How that would be possible if kernel does not even know\n> about non-contended locks?\n> \n> > d) the name :-)\n> >\n> > They don't do:\n> >\n> > a) deadlock detection\n> > b) cleanup on process exit -- the kernel doesn't know who\n> > had the lock, so it can't help here\n> >\n> > A reader/writer version is available, though it's currently implemented\n> > with two futexes. Spin-for-a-while-before-sleeping versions are planned.\n> >\n> \n> RW locks are defined by POSIX too and can be implemented by mutex +\n> condvar. I wonder what is wrong with those... At the same time Linux has\n> POSIX semaphores which can not be shared across processes, making them\n> quite useless. Fixing that could help postgres quite a bit more I\n> think...\n> \n> -- igor\n",
"msg_date": "Tue, 12 Mar 2002 17:52:37 -0600",
"msg_from": "Igor Kovalenko <Igor.Kovalenko@motorola.com>",
"msg_from_op": false,
"msg_subject": "Re: Lightweight locking primitive"
}
] |
[
{
"msg_contents": "The bad performance in Irix appears to be a lack of resources, most\nlikely system buffers for sockets and I/O. Try increasing the system\nparameter, nbuf, using systune, reboot, and see if it helps. Also,\nuse the \"par\" program with options \"-s -SS -i -u -p <pid>\"\nto monitor activity in the backend -- that may provide some clues.\n\n+-----------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | email: bruc@acm.org |\n| P.O. Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+-----------------------------+------------------------------------+\n",
"msg_date": "Tue, 12 Mar 2002 19:05:33 -0500 (EST)",
"msg_from": "\"Robert E. Bruccoleri\" <bruc@stone.congenomics.com>",
"msg_from_op": true,
"msg_subject": "Re: bad performance on irix"
},
{
"msg_contents": "nbuf is set to 6653, here is a excerpt from par, thanks and regards\n\n\n\n 0.000mS(+ 0uS)[ 6] postgres(54373): END-semctl() = 0\n 0.038mS(+ 37uS)[ 6] postgres(54373): semop(606, 0x7fff1b50, 1)\nOK\n 20.122mS(+20084uS)[ 6] postgres(54373): semop(606, 0x7fff1b10, 1)\n 27.747mS(+ 7624uS)[ 6] postgres(54373): END-semop() OK\n 27.772mS(+ 24uS)[ 6] postgres(54373): semop(606, 0x7fff1b50, 1)\nOK\n 30.772mS(+ 3000uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\n 35.681mS(+ 4908uS)[ 6] postgres(54373): END-semop() OK\n 35.703mS(+ 21uS)[ 6] postgres(54373): semop(606, 0x7fff1a00, 1)\nOK\n 40.219mS(+ 4516uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\n 58.859mS(+18640uS)[ 6] postgres(54373): END-semop() OK\n 58.882mS(+ 23uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\n 61.475mS(+ 2592uS)[ 6] postgres(54373): END-semop() OK\n 61.495mS(+ 20uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\nOK\n 61.967mS(+ 471uS)[ 6] postgres(54373): semop(606, 0x7fff1a00, 1)\nOK\n 62.839mS(+ 871uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\nOK\n 63.063mS(+ 224uS)[ 6] postgres(54373): semop(606, 0x7fff1a00, 1)\nOK\n 65.175mS(+ 2112uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\n 83.060mS(+17884uS)[ 6] postgres(54373): END-semop() OK\n 83.083mS(+ 22uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\n 85.848mS(+ 2764uS)[ 6] postgres(54373): END-semop() OK\n 85.869mS(+ 21uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\nOK\n 87.775mS(+ 1906uS)[ 6] postgres(54373): semop(606, 0x7fff1a00, 1)\nOK\n 87.898mS(+ 122uS)[ 6] postgres(54373): semop(606, 0x7fff1b10, 1)\nOK\n 89.822mS(+ 1924uS)[ 6] postgres(54373): semop(606, 0x7fff1b50, 1)\nOK\n 91.676mS(+ 1853uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\n 100.127mS(+ 8450uS)[ 6] postgres(54373): END-semop() OK\n 100.152mS(+ 25uS)[ 6] postgres(54373): semop(606, 0x7fff1a00, 1)\nOK\n 110.706mS(+10553uS)[ 6] postgres(54373): semop(606, 0x7fff1b10, 1)\nOK\n 111.109mS(+ 403uS)[ 6] postgres(54373): semop(606, 
0x7fff1b50, 1)\nOK\n 112.860mS(+ 1750uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\nOK\n 113.292mS(+ 432uS)[ 6] postgres(54373): semop(606, 0x7fff1a00, 1)\nOK\n 118.938mS(+ 5646uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\nOK\n 119.440mS(+ 502uS)[ 6] postgres(54373): semop(606, 0x7fff1a00, 1)\nOK\n 120.410mS(+ 969uS)[ 6] postgres(54373): semop(606, 0x7fff1a00, 1)\nOK\n 120.553mS(+ 142uS)[ 6] postgres(54373): semop(606, 0x7fff1b50, 1)\nOK\n 126.386mS(+ 5833uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\nOK\n 126.919mS(+ 533uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\nOK\n 127.574mS(+ 654uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\nOK\n 128.011mS(+ 436uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\nOK\n 128.489mS(+ 477uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\nOK\n 128.895mS(+ 405uS)[ 6] postgres(54373): semop(606, 0x7fff1a00, 1)\nOK\n 128.990mS(+ 95uS)[ 6] postgres(54373): semop(606, 0x7fff1b50, 1)\nOK\n 149.407mS(+20416uS)[ 6] postgres(54373): semop(606, 0x7fff1b10, 1)\nOK\n 149.969mS(+ 561uS)[ 6] postgres(54373): semop(606, 0x7fff1b10, 1)\nOK\n 150.364mS(+ 395uS)[ 6] postgres(54373): semop(606, 0x7fff1b50, 1)\nOK\n 151.462mS(+ 1097uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\n 156.185mS(+ 4723uS)[ 6] postgres(54373): END-semop() OK\n 156.204mS(+ 18uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\nOK\n 156.876mS(+ 671uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\nOK\n 158.145mS(+ 1269uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\nOK\n 158.873mS(+ 728uS)[ 6] postgres(54373): semop(606, 0x7fff1a00, 1)\nOK\n 159.773mS(+ 899uS)[ 6] postgres(54373): semop(606, 0x7fff1a10, 1)\nOK\n 160.309mS(+ 535uS)[ 6] postgres(54373): semop(606, 0x7fff1a00, 1)\nOK\n\n\n",
"msg_date": "Wed, 13 Mar 2002 08:54:50 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on irix"
},
{
"msg_contents": "Dear Luis,\n> \n> nbuf is set to 6653, here is a excerpt from par, thanks and regards\n\nWhat kind of SGI are you using, and how much memory does it have?\n\nI don't know what to make out of this par output. If this is from a running\nPostgres, then it's waiting for a lock. Try the following:\n\necho where | dbx -p <pid>\n\nwhere <pid> is for the Postgres backend.\n\n--Bob\n\n+-----------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | email: bruc@acm.org |\n| P.O. Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+-----------------------------+------------------------------------+\n",
"msg_date": "Wed, 13 Mar 2002 08:19:18 -0500 (EST)",
"msg_from": "\"Robert E. Bruccoleri\" <bruc@stone.congenomics.com>",
"msg_from_op": true,
"msg_subject": "Re: bad performance on irix"
},
{
"msg_contents": "sorry, I thought i 've posted it before:\n\nProcessor 0: 196 MHZ IP25 \nCPU: MIPS R10000 Processor Chip Revision: 2.5\nFPU: MIPS R10010 Floating Point Chip Revision: 2.5\nProcessor 1: 196 MHZ IP25 \nCPU: MIPS R10000 Processor Chip Revision: 2.5\nFPU: MIPS R10010 Floating Point Chip Revision: 2.5\nProcessor 2: 196 MHZ IP25 \nCPU: MIPS R10000 Processor Chip Revision: 2.5\nFPU: MIPS R10010 Floating Point Chip Revision: 2.5\nProcessor 3: 196 MHZ IP25 \nCPU: MIPS R10000 Processor Chip Revision: 2.5\nFPU: MIPS R10010 Floating Point Chip Revision: 2.5\nProcessor 4: 196 MHZ IP25 \nCPU: MIPS R10000 Processor Chip Revision: 2.6\nFPU: MIPS R10010 Floating Point Chip Revision: 2.6\nProcessor 5: 196 MHZ IP25 \nCPU: MIPS R10000 Processor Chip Revision: 2.6\nFPU: MIPS R10010 Floating Point Chip Revision: 2.6\nProcessor 6: 196 MHZ IP25 \nCPU: MIPS R10000 Processor Chip Revision: 2.6\nFPU: MIPS R10010 Floating Point Chip Revision: 2.6\nProcessor 7: 196 MHZ IP25 \nCPU: MIPS R10000 Processor Chip Revision: 2.6\nFPU: MIPS R10010 Floating Point Chip Revision: 2.6\nMain memory size: 1024 Mbytes, 2-way interleaved\nInstruction cache size: 32 Kbytes\nData cache size: 32 Kbytes\nSecondary unified instruction/data cache size: 2 Mbytes\nIntegral SCSI controller 0: Version WD33C95A, single ended, revision 0\n Tape drive: unit 4 on SCSI controller 0: DAT\n CDROM: unit 5 on SCSI controller 0\nIntegral SCSI controller 1: Version WD33C95A, differential, revision 0\n Disk drive: unit 1 on SCSI controller 1\n Disk drive: unit 2 on SCSI controller 1\n Disk drive: unit 3 on SCSI controller 1\n Disk drive: unit 4 on SCSI controller 1\nIntegral EPC serial ports: 4\nIntegral EPC parallel port: Ebus slot 5\nIntegral Ethernet controller: et0, Ebus slot 5\nI/O board, Ebus slot 5: IO4 revision 1\nVME bus: adapter 21\nVME bus: adapter 0 mapped to adapter 21\nEPC external interrupts\n\nthanks and regards\n\n",
"msg_date": "Wed, 13 Mar 2002 16:43:42 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on irix"
},
{
"msg_contents": "Yes, its waiting for locks, almost all orange area in the grafic is due to\nlock contention\nthanks and regards\n\n",
"msg_date": "Wed, 13 Mar 2002 16:45:17 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on irix"
},
{
"msg_contents": "Dear Luis,\n\tAfter looking at your system configuration, I would recommend\nbuying more RAM (it's very inexpensive for older systems like yours),\nand then allocating much more buffer space for PostgreSQL. It will\nhave a profound effect on overall performance, although not for this\nparticular problem where lock contention is an issue.\n\n+-----------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | email: bruc@acm.org |\n| P.O. Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+-----------------------------+------------------------------------+\n",
"msg_date": "Wed, 13 Mar 2002 11:04:35 -0500 (EST)",
"msg_from": "\"Robert E. Bruccoleri\" <bruc@stone.congenomics.com>",
"msg_from_op": true,
"msg_subject": "Re: bad performance on irix"
},
{
"msg_contents": "hi robert:\npostgres is not using all the ram it has allocated, our database is about\n100Mb, it grows up to 300 - 400 Mb on an execution, so i don't think it\nshould be lack of memory.\nthanks and regards\n\n",
"msg_date": "Wed, 13 Mar 2002 17:08:51 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on irix"
},
{
"msg_contents": "if you are interested, here is what dbx gives out, they are from 4 different\nbackends\nthanks and regards",
"msg_date": "Wed, 13 Mar 2002 17:10:26 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": false,
"msg_subject": "Re: bad performance on irix"
}
] |
[
{
"msg_contents": "I have been looking for a RDBMS to deploy with our solar energy\nsimulation.\n\nMySQL looks like it's free until you want to bundle it with a commercial\napplication, at which time license fees are required. What is the\nsituation with PostgreSQL? \n\nGiven that we have a Java application and don't have the manpower to\nport a DBMS, what are the platforms on which PostgreSQL is currently\nstable?\n\nIs it possible to deploy PostgreSQL via an installer program, then\nautomatically run a script to generate the tables and populate them with\ndata from flat files?\n\nThanks for any comments.\n-- \nRichard Chrenko, Informatik\nInstitut f�r Solartechnik SPF\nHochschule f�r Technik Rapperswil, Oberseestr.10, CH-8640 Rapperswil\nTel +41 55 222 48 33, Fax +41 55 222 48 44, http://www.solarenergy.ch\n",
"msg_date": "Wed, 13 Mar 2002 07:55:19 +0100",
"msg_from": "Richard Chrenko <richard@solarenergy.ch>",
"msg_from_op": true,
"msg_subject": "PostgreSQL the right choice?"
},
{
"msg_contents": "\"Richard Chrenko\" <richard@solarenergy.ch> wrote in message news:3C8EF7D7.1003ECC3@solarenergy.ch...\n>\n> MySQL looks like it's free until you want to bundle it with a commercial\n> application, at which time license fees are required. What is the\n> situation with PostgreSQL?\n\nThat's not why you want to avoid MySQL. You want to avoid MySQL\nbecause it's underpowered. No transactions, no foreign keys, no subselects,\netc. etc.\n\n\n> Given that we have a Java application and don't have the manpower to\n> port a DBMS, what are the platforms on which PostgreSQL is currently\n> stable?\n\nI think the short answer is \"all of them.\" It runs great on Windows (under cygwin)\nand many unix and linux.\n\nTry it out.\n\n\nMarshall\n\n\n\n",
"msg_date": "Tue, 12 Mar 2002 23:50:17 -0800",
"msg_from": "\"Marshall Spight\" <marshall@meetstheeye.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL the right choice?"
},
{
"msg_contents": "Marshall Spight wrote:\n> \"Richard Chrenko\" <richard@solarenergy.ch> wrote in message news:3C8EF7D7.1003ECC3@solarenergy.ch...\n> \n>>MySQL looks like it's free until you want to bundle it with a commercial\n>>application, at which time license fees are required. What is the\n>>situation with PostgreSQL?\n> \n> \n> That's not why you want to avoid MySQL. You want to avoid MySQL\n> because it's underpowered. No transactions, no foreign keys, no subselects,\n> etc. etc.\n> \n\n\n\nWhy is there so much mysql bashing?\nI like postgresql too but I don't have to put down mysql\nto justify using it.\nMysql is a simple, fast database that is quite solid from my experience.\nIt is usually my first choice as a web backend for php/jsp based websites.\n\nI usually use postgresql on system based projects (usually in perl )\nwhere I really need to use foreign keys and tranactions and the\nloss of data would be a very expensive thing.\n\nThe current version of mysql does have foreign keys and transactions\navailable via the new innodb table type. I have not used it production\nyet, but I am testing them.\n\n\n\n\n-- \nVincent Stoessel vincent@xaymaca.com\nLinux and Java Application Developer\n(301) 362-1750\nAIM, MSN: xaymaca2020 , Yahoo Messenger: vks_jamaica\n\n",
"msg_date": "Wed, 13 Mar 2002 22:38:13 -0500",
"msg_from": "Vincent Stoessel <vincent@xaymaca.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL the right choice?"
},
{
"msg_contents": "> That's not why you want to avoid MySQL. You want to avoid MySQL\n> because it's underpowered. No transactions, no foreign keys, no subselects,\n> etc. etc.\n\nNo Unicode, no large nested queries, no views, no triggers, no server-side \nlanguage : no PLpgSQL, no PLperl, etc... Poor ODBC support...\n\nIf your database is involved in a business, you need to migrate to \nPostgreSQL. One day or another, MySQL limited features will hinder you.\n\n/Jean-Michel\n",
"msg_date": "Thu, 14 Mar 2002 09:18:57 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL the right choice?"
},
{
"msg_contents": "Le Mercredi 13 Mars 2002 07:55, Richard Chrenko a écrit :\n> MySQL looks like it's free until you want to bundle it with a commercial\n> application, at which time license fees are required. What is the\n> situation with PostgreSQL?\n\nPostgreSQL is completely free for commercial and non-commercial use. pgAdmin2 \n(http://pgadmin.postgresql.org), PostgreSQL Windows administration interface \nis completely free.\n\n> Given that we have a Java application and don't have the manpower to\n> port a DBMS, what are the platforms on which PostgreSQL is currently\n> stable?\n\nPostgreSQL is the most stable Open-source database available.\n\n> Is it possible to deploy PostgreSQL via an installer program, then\n> automatically run a script to generate the tables and populate them with\n> data from flat files?\n\nLinux : PostgreSQL 7.2can be deployed via RPM (see PostgreSQL FTP in \n/binaries).\nWindows : PostgreSQL 7.2 is included in Cygwin installer \n(http://www.cygwin.com).\n\nAn interactive doc is available from http://www.postgresql.org/idocs/.\n\nCheers,\nJean-Michel POURE\n",
"msg_date": "Thu, 14 Mar 2002 09:25:33 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL the right choice?"
},
{
"msg_contents": "Do you really need an SQL Database for this? I am trying to figure out why\nyou would need a database server with a simulation? I have heard of a\ntinySQL server that is written all in java and might do what you need\nwithout having to install and setup a database server just to run a\nsimulation. How big is the data set you are using? couldn't you use a flat\nfile for this?\n\n-----Original Message-----\nFrom: pgsql-general-owner@postgresql.org\n[mailto:pgsql-general-owner@postgresql.org]On Behalf Of Richard Chrenko\nSent: Wednesday, March 13, 2002 1:55 AM\nTo: pgsql-general@postgresql.org\nSubject: [GENERAL] PostgreSQL the right choice?\n\n\nI have been looking for a RDBMS to deploy with our solar energy\nsimulation.\n\nMySQL looks like it's free until you want to bundle it with a commercial\napplication, at which time license fees are required. What is the\nsituation with PostgreSQL?\n\nGiven that we have a Java application and don't have the manpower to\nport a DBMS, what are the platforms on which PostgreSQL is currently\nstable?\n\nIs it possible to deploy PostgreSQL via an installer program, then\nautomatically run a script to generate the tables and populate them with\ndata from flat files?\n\nThanks for any comments.\n--\nRichard Chrenko, Informatik\nInstitut f�r Solartechnik SPF\nHochschule f�r Technik Rapperswil, Oberseestr.10, CH-8640 Rapperswil\nTel +41 55 222 48 33, Fax +41 55 222 48 44, http://www.solarenergy.ch\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Thu, 14 Mar 2002 14:42:36 -0500",
"msg_from": "\"David Siebert\" <david@eclipsecat.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL the right choice?"
},
{
"msg_contents": "Checkout Firebird as well - which appears to be the other alternative.\n\n\thttp://firebird.sourceforge.net/index.php\n\nq: How does open source affect the costs for companies which use\nInterbase or Firebird as an embedded server?\n\na: Firebird server and client are free of all licensing fees,\nregardless of whether you download a binary or build it yourself from\nthe source code.\n\n\nI agree that mySQL is too limited for serious use.\n\nOn Tue, 12 Mar 2002 23:50:17 -0800, \"Marshall Spight\"\n<marshall@meetstheeye.com> wrote:\n\n> \"Richard Chrenko\" <richard@solarenergy.ch> wrote in message news:3C8EF7D7.1003ECC3@solarenergy.ch...\n> >\n> > MySQL looks like it's free until you want to bundle it with a commercial\n> > application, at which time license fees are required. What is the\n> > situation with PostgreSQL?\n> \n> That's not why you want to avoid MySQL. You want to avoid MySQL\n> because it's underpowered. No transactions, no foreign keys, no subselects,\n> etc. etc.\n> \n> \n> > Given that we have a Java application and don't have the manpower to\n> > port a DBMS, what are the platforms on which PostgreSQL is currently\n> > stable?\n> \n> I think the short answer is \"all of them.\" It runs great on Windows (under cygwin)\n> and many unix and linux.\n> \n> Try it out.\n> \n> \n> Marshall\n> \n> \n\n",
"msg_date": "Fri, 15 Mar 2002 12:43:24 +0000",
"msg_from": "Jasbir D <jasbird@hushmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL the right choice?"
},
{
"msg_contents": "Jasbir D wrote:\n> Checkout Firebird as well - which appears to be the other alternative.\n> \n> \thttp://firebird.sourceforge.net/index.php\n> \n> q: How does open source affect the costs for companies which use\n> Interbase or Firebird as an embedded server?\n> \n> a: Firebird server and client are free of all licensing fees,\n> regardless of whether you download a binary or build it yourself from\n> the source code.\n\nI found it interesting that Firebird is moving to C++ for their base\ncode in 2.0. They have already started porting since releasing 1.0\nrecently:\n\t\n\tWhat Happens after Firebird 1.0?\n\t\n\tWell, work certainly won't stop. At the moment we have a Firebird 2.0\n\ttree within CVS. These tree has ported the original code to C++ and\n\tadded much improved exception handling and memory management.\n\nThere are clearly some C++ constructs that would be nice to use in the\nbackend code, but the extra baggage and inability to limit people to\njust a subset of the C++ features make such a move very questionable.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 20 Mar 2002 14:35:43 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Firebird 2.0 moving to C++"
}
] |
[
{
"msg_contents": "Hello! \n\nI have all my data in KoI8-R ecnoding and need to see this data in the MS Access Through PgSQL ODBC driver. \nBut Win doesn't know KoI8-R encoding and MS Access doesn't know how to recode it). May be there is another way\nto do it on the server side ??\n\nPGSQL 7.2 on solaris, all databases in koi8-r locale. \n\n--------------------\nregards\nkorshunov\n",
"msg_date": "Wed, 13 Mar 2002 10:30:59 +0300",
"msg_from": "Korshunov Ilya <kosha@kp.ru>",
"msg_from_op": true,
"msg_subject": "PgSQL & WIn ODBC driver"
}
] |
[
{
"msg_contents": "Hi everybody, and many thanks for the work you do!\n\n\nI would like to use PostgreSQL as an embedded database, but when I just\ncreate a new database, its size is already 20 MB and it is empty!!\nI have already asked in other goups but no good answer.\n\nSo, I ask the Hacking team. Why the db is so big at creating time, what are\nthe value I can play with, without risk. May be some features can be disable\nto gain some space?\n\nThanks\n\nYannick\n\n",
"msg_date": "Wed, 13 Mar 2002 00:19:02 -0800",
"msg_from": "\"Yannick ALLUSSE\" <yannick.allusse@kisbv.com>",
"msg_from_op": true,
"msg_subject": "Embedded PostgreSQL"
}
] |
[
{
"msg_contents": "I have been unable to successfully search the mailing list archives off of\narchives.postgresql.org for a few days now. Just me, or does anyone else\nexperience this behaviour as well?\n\nGavin\n\n",
"msg_date": "Thu, 14 Mar 2002 01:30:46 +1100 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": true,
"msg_subject": "Archives"
},
{
"msg_contents": "On Wed, 2002-03-13 at 09:30, Gavin Sherry wrote:\n> I have been unable to successfully search the mailing list archives off of\n> archives.postgresql.org for a few days now. Just me, or does anyone else\n> experience this behaviour as well?\n\nYes, it's extremely slow for me as well. I commented on this a couple\ndays ago, and Ian Barwick suggested Google:\n\nhttp://groups.google.com/groups?hl=en&group=comp.databases.postgresql.hackers\n\nIt seems to work quite well.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "13 Mar 2002 11:07:14 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: Archives"
},
{
"msg_contents": "Gavin Sherry wrote:\n> I have been unable to successfully search the mailing list archives off of\n> archives.postgresql.org for a few days now. Just me, or does anyone else\n> experience this behaviour as well?\n\nYes, it has been down for a long time, months. Marc knows and is\nworking on it. I use:\n\n\thttp://groups.google.com/groups?hl=en&group=comp.databases.postgresql\n\nIn fact, I recommended to Vince that we point to this as our official\narchives until we get ours fixed. Haven't heard back from him.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 13 Mar 2002 15:46:27 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Archives"
},
{
"msg_contents": "On Wednesday 13 March 2002 17:07, Neil Conway wrote:\n> On Wed, 2002-03-13 at 09:30, Gavin Sherry wrote:\n> > I have been unable to successfully search the mailing list archives off\n> > of archives.postgresql.org for a few days now. Just me, or does anyone\n> > else experience this behaviour as well?\n>\n> Yes, it's extremely slow for me as well. I commented on this a couple\n> days ago, and Ian Barwick suggested Google:\n>\n> http://groups.google.com/groups?hl=en&group=comp.databases.postgresql.hacke\n>rs\n>\n> It seems to work quite well.\n\nI should just mention that all the comp.databases.postgresql groups are listed\nhere:\n\nhttp://groups.google.com/groups?hl=en&group=comp.databases.postgresql\n\nalthough the newsgroups and mailing lists are not always in synch,\nso some posts may be missing or very late.\n\nA much better alternative which just occurred to me:\n\nhttp://geocrawler.com/lists/3/Databases/\n\n(if you can bear sharing the same page as some MySQL lists ;-)\n\n\nIan Barwick\n",
"msg_date": "Wed, 13 Mar 2002 22:33:56 +0100",
"msg_from": "Ian Barwick <barwick@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Archives"
},
{
"msg_contents": "On Wednesday 13 March 2002 21:46, you wrote:\n> Gavin Sherry wrote:\n> > I have been unable to successfully search the mailing list archives off\n> > of archives.postgresql.org for a few days now. Just me, or does anyone\n> > else experience this behaviour as well?\n>\n> Yes, it has been down for a long time, months. Marc knows and is\n> working on it. I use:\n>\n> \thttp://groups.google.com/groups?hl=en&group=comp.databases.postgresql\n>\n> In fact, I recommended to Vince that we point to this as our official\n> archives until we get ours fixed. Haven't heard back from him.\n\nViable alternative:\n\nhttp://geocrawler.com/lists/3/Databases/\n\nArchives the lists and not the newsgroups, so should be\nmore comprehensive and seems pretty up-to-date.\n\n(BTW, geocrawler seems to use PostgreSQL, see:\nhttp://www.phpbuilder.com/columns/tim20001112.php3 )\n\n\nIan Barwick\n",
"msg_date": "Wed, 13 Mar 2002 23:23:08 +0100",
"msg_from": "Ian Barwick <barwick@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Archives"
},
{
"msg_contents": "Hi Bruce,\n\nThanks for the tip. Just updated the links on the techdocs site with\nthe new url.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nBruce Momjian wrote:\n> \n> Gavin Sherry wrote:\n> > I have been unable to successfully search the mailing list archives off of\n> > archives.postgresql.org for a few days now. Just me, or does anyone else\n> > experience this behaviour as well?\n> \n> Yes, it has been down for a long time, months. Marc knows and is\n> working on it. I use:\n> \n> http://groups.google.com/groups?hl=en&group=comp.databases.postgresql\n> \n> In fact, I recommended to Vince that we point to this as our official\n> archives until we get ours fixed. Haven't heard back from him.\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Thu, 14 Mar 2002 11:38:11 +1100",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Archives"
},
{
"msg_contents": "On Wed, 13 Mar 2002, Bruce Momjian wrote:\n\n> Gavin Sherry wrote:\n> > I have been unable to successfully search the mailing list archives off of\n> > archives.postgresql.org for a few days now. Just me, or does anyone else\n> > experience this behaviour as well?\n>\n> Yes, it has been down for a long time, months. Marc knows and is\n> working on it. I use:\n>\n> \thttp://groups.google.com/groups?hl=en&group=comp.databases.postgresql\n>\n> In fact, I recommended to Vince that we point to this as our official\n> archives until we get ours fixed. Haven't heard back from him.\n\nOops! Sorry 'bout that. Just updated it.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 13 Mar 2002 20:51:48 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Archives"
}
] |
[
{
"msg_contents": "Bruce Momjian wrote:\n\n Digging into it now, I remember why it is there. In the\n Oracle world, someone can declare a trigger that references\n to NEW or OLD by other names. This RENAME was a workaround so\n one doesn't need to change the whole trigger body, but just\n adds a line in the DECLARE section doing the job.\n\n Therefore, I think removal is not such a good idea. Fixing it\n properly will take a little longer as I am a little busy at\n the moment.\n\n\nJan\n\n> Jan, seems no one has commented on this. Patch?\n>\n> Jan Wieck wrote:\n> > Tom Lane wrote:\n> > > \"Command Prompt, Inc.\" <pgsql-hackers@commandprompt.com> writes:\n> > > > Mainly, the existing documentation on the RENAME statement seems\n> > > > inaccurate; it states that you can re-name variables, records, or\n> > > > rowtypes. However, in practice, our tests show that attempting to RENAME\n> > > > valid variables with:\n> > > > RENAME varname TO newname;\n> > > > ...yeilds a PL/pgSQL parse error, inexplicably. If I try the same syntax\n> > > > on a non-declared variable, it actually says \"there is no variable\" with\n> > > > that name in the current block, so...I think something odd is happening. :)\n> > >\n> > > Yup, this is a bug. The plpgsql grammar expects varname to be a T_WORD,\n> > > but in fact the scanner will only return T_WORD for a name that is not\n> > > any known variable name. Thus RENAME cannot possibly work, and probably\n> > > never has worked.\n> > >\n> > > Looks like it should accept T_VARIABLE, T_RECORD, T_ROW (at least).\n> > > T_WORD ought to draw \"no such variable\". Jan, I think this is your turf...\n> >\n> > Sounds pretty much like that. Will take a look.\n> >\n> > >\n> > > > The RENAME statement seems kind of odd, since it seems that you could just\n> > > > as easily declare a general variable with the right name to begin with,\n> > >\n> > > It seems pretty useless to me too. 
Perhaps it's there because Oracle\n> > > has one?\n> >\n> > And I don't even remember why I've put it in. Maybe because\n> > it's an Oracle thing. This would be a cool fix, removing the\n> > damned thing completely. I like that solution :-)\n> >\n> > Anyone against removal?\n> >\n> >\n> > Jan\n> >\n> > --\n> >\n> > #======================================================================#\n> > # It's easier to get forgiveness for being wrong than for being right. #\n> > # Let's break this rule - forgive me. #\n> > #================================================== JanWieck@Yahoo.com #\n> >\n> >\n> >\n> > _________________________________________________________\n> > Do You Yahoo!?\n> > Get your free @yahoo.com address at http://mail.yahoo.com\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 13 Mar 2002 09:59:21 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: PL/pgSQL RENAME bug?"
},
{
"msg_contents": "\nAdded to TODO:\n\n o Fix PL/pgSQL RENAME to work on on variable names\n\n\n---------------------------------------------------------------------------\n\nJan Wieck wrote:\n> Bruce Momjian wrote:\n> \n> Digging into it now, I remember why it is there. In the\n> Oracle world, someone can declare a trigger that references\n> to NEW or OLD by other names. This RENAME was a workaround so\n> one doesn't need to change the whole trigger body, but just\n> adds a line in the DECLARE section doing the job.\n> \n> Therefore, I think removal is not such a good idea. Fixing it\n> properly will take a little longer as I am a little busy at\n> the moment.\n> \n> \n> Jan\n> \n> > Jan, seems no one has commented on this. Patch?\n> >\n> > Jan Wieck wrote:\n> > > Tom Lane wrote:\n> > > > \"Command Prompt, Inc.\" <pgsql-hackers@commandprompt.com> writes:\n> > > > > Mainly, the existing documentation on the RENAME statement seems\n> > > > > inaccurate; it states that you can re-name variables, records, or\n> > > > > rowtypes. However, in practice, our tests show that attempting to RENAME\n> > > > > valid variables with:\n> > > > > RENAME varname TO newname;\n> > > > > ...yeilds a PL/pgSQL parse error, inexplicably. If I try the same syntax\n> > > > > on a non-declared variable, it actually says \"there is no variable\" with\n> > > > > that name in the current block, so...I think something odd is happening. :)\n> > > >\n> > > > Yup, this is a bug. The plpgsql grammar expects varname to be a T_WORD,\n> > > > but in fact the scanner will only return T_WORD for a name that is not\n> > > > any known variable name. Thus RENAME cannot possibly work, and probably\n> > > > never has worked.\n> > > >\n> > > > Looks like it should accept T_VARIABLE, T_RECORD, T_ROW (at least).\n> > > > T_WORD ought to draw \"no such variable\". Jan, I think this is your turf...\n> > >\n> > > Sounds pretty much like that. 
Will take a look.\n> > >\n> > > >\n> > > > > The RENAME statement seems kind of odd, since it seems that you could just\n> > > > > as easily declare a general variable with the right name to begin with,\n> > > >\n> > > > It seems pretty useless to me too. Perhaps it's there because Oracle\n> > > > has one?\n> > >\n> > > And I don't even remember why I've put it in. Maybe because\n> > > it's an Oracle thing. This would be a cool fix, removing the\n> > > damned thing completely. I like that solution :-)\n> > >\n> > > Anyone against removal?\n> > >\n> > >\n> > > Jan\n> > >\n> > > --\n> > >\n> > > #======================================================================#\n> > > # It's easier to get forgiveness for being wrong than for being right. #\n> > > # Let's break this rule - forgive me. #\n> > > #================================================== JanWieck@Yahoo.com #\n> > >\n> > >\n> > >\n> > > _________________________________________________________\n> > > Do You Yahoo!?\n> > > Get your free @yahoo.com address at http://mail.yahoo.com\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 5: Have you checked our extensive FAQ?\n> > >\n> > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > >\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >\n> \n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. 
#\n> #================================================== JanWieck@Yahoo.com #\n> \n> \n> \n> _________________________________________________________\n> Do You Yahoo!?\n> Get your free @yahoo.com address at http://mail.yahoo.com\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Apr 2002 23:55:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL RENAME bug?"
}
] |
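Jan's Oracle-compatibility rationale above is easier to see in code. This is a hedged sketch of how the documented (but, per the thread, never-working) RENAME declaration would be used to alias NEW in a ported trigger; the table and function names are invented, and the 7.2-era `opaque` trigger return type is assumed:

```sql
CREATE FUNCTION log_update() RETURNS opaque AS '
DECLARE
    -- Oracle lets triggers reference the new row under another name;
    -- RENAME was meant to provide the same aliasing in one line
    -- instead of rewriting the whole ported trigger body
    RENAME new TO n;
BEGIN
    RAISE NOTICE ''updated row id %'', n.id;
    RETURN n;
END;
' LANGUAGE 'plpgsql';
```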
[
{
"msg_contents": "\n> On Thur, March 07, 2002, Thomas Zehetbauer spewed:\n> \n> I think you all should really buy the book 'Database \n> Development for Dummies'.\n> Postgresql is for sure the only database on this planet that \n> cannot optimize a select(max) using an index. Not even\n> Microsoft has implemented such a design deficiency yet and\n> even MySQL which you like to talk so bad about uses an\n> index to optimize select max() queries. Some of you should \n> really consider attending a programming course and all of\n> you should consider to stop working on this totally screwed\n> up monster!\n\nAnd perhaps you should pick up \"Open Source Development for Dummies\"...\n\nThe wonderful thing about open-source projects is that all-talk, no-action\ncritics such as yourself can be told:\n\n\"If you don't like the way the program works, feel free to download the src\nand put that programming course you attended to good use by implementing it\nyourself.\"\n\nI'm sure all of the postgres community welcome your patch for this\noptimization, Mr. Zehetbauer.\n\nDarren\n\n",
"msg_date": "Wed, 13 Mar 2002 11:15:11 -0500",
"msg_from": "Darren King <DarrenK@Routescape.com>",
"msg_from_op": true,
"msg_subject": "Re: select max(column) not using index"
}
] |
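For reference, the standard workaround in 7.x-era PostgreSQL, where min()/max() are ordinary aggregates the planner knows nothing special about, is to express the extreme value as an ordered, limited scan. Table and index names below are invented for illustration:

```sql
CREATE INDEX orders_amount_idx ON orders (amount);

-- Forces a full sequential scan in PostgreSQL 7.x:
SELECT max(amount) FROM orders;

-- Same answer, but the planner can satisfy it by reading a single
-- entry from the high end of the b-tree:
SELECT amount FROM orders ORDER BY amount DESC LIMIT 1;
```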
[
{
"msg_contents": "Hi all,\n\n\nHere are the results of our survey on a migration from Oracle 8.0 / W$\nNT4 SP5 to PostgreSQL 7.2 / Red Hat 7.2.\n\nYou'll probably remember a thread I initiated in this list a couple\nof weeks ago; this is the same survey for the same customer. Now, the\nsurvey is finished.\n\nSo, we migrated all Oracle's specific syntaxes successfully, including\nCONNECT BY statements (thanks to all hackers at the OpenACS project (visit\nhttp://www.openacs.org) for the good code!).\n\nWe migrated Oracle Pro*C successfully thanks to the fantastic ECPG (thanks\nMichael).\n\nThe overall performance of PostgreSQL is 33% slower than the Oracle/NT\nsolution. One must say we faced a well-tuned Oracle, tuned for best\nperformance. Even the SQL queries were very well tuned, using Oracle\npragmas for example (ex: %USE HASH foobar%).\n\nSince our customer accepted up to 50%, this is a success for us,\ntechnically, on this point.\n\nBUT, we faced a real problem. On some batches, in ECPG, the Pro*C\nstructures use CURSOR loops intensively. In Oracle, CURSORs\ncan be PREPARED. Thus, it seems Oracle only computes the query plan once\nfor the cursor, even if it is closed and re-opened. Maybe some kind of\nstored query plan / caching / whatever makes it possible.\n\nThis seems not to be the case in ECPG. On each COMMIT, the cursors are\nclosed (they don't even need to close cursors in Oracle!). And at each\nBEGIN TRANSACTION PostgreSQL seems to redo the parsing and query\nplanning..\n\nSo this finally makes the batch work take 300% of the time Oracle needs.\nWe clearly see our ECPG programs wait for PostgreSQL in the functions\nwhere CURSORs are opened. Then, we know the problem is not in ECPG but in\nthe PG backend.\n\nThis is unacceptable for our customer. Many batches are launched during\nthe night and have to be completed in 5h (between 0h and 5h). 
With a\nratio of 3, this is not worth thinking about migration anymore :-(\n\nWe know we could have much better performance with something other than\nECPG, for example, using C or TCL stored procedures, placing the\nSQL work much closer to the PG backend, using SPI, etc... But this is \nnot possible. We have to make it under ECPG; there are tons of Pro*C \ncode to migrate, and we must make it the same. With ECPG/Pro*C compiled \nprograms, we can stop executions, renice programs, etc., which we would \nlose by putting the work in stored procedures.\n\nSo, I'd really like some of you to validate this thing about cursors. We\nhave a strange feeling, blended of pride for only a 1.33 ratio against\nthe giant Oracle, and a feeling of something unfinished, only because \nof a feature not yet implemented...\n\nI read the current TODO list many times. I think our problem is\nsomewhere between the CURSOR thread and the CACHE thread in the TODO.\n\nWe would really appreciate some of you validating this behaviour about\nCURSORs; this would confirm we didn't spend 40 man-days for nothing, and\nthat we reached a good explanation of the problem, that we have\nnot dug just next to the treasure.\n\nThanks a lot anyway for such a good database.\n\nPS: Bad English, I know... :-)\n    I hope the customer will accept putting the survey in GNU/Linuxdoc, so \n    all PG folks can read it. It is still under an NDA.\n\n-- \nJean-Paul ARGUDO\n",
"msg_date": "Wed, 13 Mar 2002 17:18:29 +0100",
"msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>",
"msg_from_op": true,
"msg_subject": "Survey results on Oracle/M$NT4 to PG72/RH72 migration"
},
{
"msg_contents": "On Wed, 2002-03-13 at 21:18, Jean-Paul ARGUDO wrote:\n> Hi all,\n> \n> \n> Here are the results of our survey on a migration from Oracle 8.0 / W$\n> NT4 SP5 to PostgreSQL 7.2 / Red Hat 7.2.\n> \n> You'll probably remember of a thread I initiated in this list a couple\n> of weeks ago, this is the same survey for the same customer. Now, the\n> survey is finished.\n> \n> So, we migrated all Oracle's specific syntaxes succesfully, including\n> CONNECT BY statements (thanks to all hackers at OpenACS project (visit\n> http://www.openacs.org) for the good code!).\n\nCould you elaborate here ?\n\nI know they do some of it using triggers and bitmap indexes, do you mean\nthis ?\n\n> We migrated succesfully Oracle Pro*C thanks to fantastic ECPG (Thanks\n> Michael).\n> \n> The overall performance of PostgreSQL, is 33% slower than the Oracle/Nt\n> solution. One must say we faced a well tuned Oracle, tuned for best\n> performance. Even SQL queries were very well tuned, using Oracle\n> pragmas for example (ex: %USE HASH foobar%).\n> \n> Since our customer accepted up to 50%, this is a success for us,\n> technicaly on this point.\n> \n> BUT, we faced a real problem. On some batches, in ECPG, Pro*C\n> structures uses intensively CURSORs loops. In Oracle, CURSORs\n> can be PREPARED. Thus, it seems Oracle only computes once the query plan\n> for the cursor, even if it is closed and re-opened. Maybe some kind of\n> stored query plan / caching / whatever makes it possible.\n\nWhat kind of work do you do in these cursors ?\n\nIs it inserts, updates, deletes, complicated selects ...\n\n> This seems not be the case in ECPG. In each COMMIT, the cursors are\n> closed (they dont even need to close cursors in Oracle!). 
And at each\n> BEGIN TRANSACTION PostgreSQL seems to compute again parsing and query\n> plan..\n> \n> So this finaly makes the batch work taking 300% the time Oracle needs.\n> We clearly see our ECPG programs waits for PostgreSQL in the functions\n> were CURSORs are opened. Then, we know the problem is not in ECPG but in\n> PG backend.\n\nCould you make one sample test case with minimal schema/data that\ndemonstrates this behaviour so I can try to optimise it ?\n\n> This is unaceptable for our customer. Many batches are launched during\n> the night and have to be completed in 5h (between 0h and 5h). With a\n> ratio of 3, this is not worth think about migration anymore :-(\n> \n> We know we could have much better performances with something else than\n> ECPG, for example, using C or TCL stored procedures, placing the\n> SQL work wuch closer from the PG backend, using SPI, etc...\n\nDid you do any tests ?\n\nHow much faster did it get ?\n\n> But this is \n> not possible. We have to make it under ECPG, there are tons of Pro*C \n> code to migrate, and we must make it the same. With ECPG/Pro*C compiled \n> programs, we can stop executions, renice programs, etc, what we would \n> loose putting work in stored procedures.\n\nAFAIK some SQL/C type precompilers and other frontend tools for other\ndatabases do generate stored procedures for PREPAREd CURSORs.\n\nI'm afraid ECPG does not :(\n\nBut making ECPG do it might be one way to fix this until real prepared\nqueries are available to the frontend.\n\n> So, I'd really like some of you validate this thing about cursor. We\n> have a strange feeling blended of pride for only a 1,33 ratio face to\n> the giant Oracle, and a feeling of something unfinished, because only \n> of a feature not yet implemented...\n> \n> I read many times the current TODO list. 
I think our problem is\n> somewhere between the CURSOR thread and the CACHE thread in the TODO.\n> \n> We would really appreciate some of you validate this behaviour about\n> CURSORs, this would validate we didn't spent 40 day/man for nothing, and\n> that we reached a certain good explanation of the problem, that we have\n> not dug just next to the treasure.\n\nThe treasure is currently locked up in backend behind FE/BE protocol \n\n---------------------\nHannu\n\n\n\n",
"msg_date": "14 Mar 2002 09:08:55 +0500",
"msg_from": "Hannu Krosing <hannu@krosing.net>",
"msg_from_op": false,
"msg_subject": "Re: Survey results on Oracle/M$NT4 to PG72/RH72 migration"
},
{
"msg_contents": "On Thu, Mar 14, 2002 at 09:08:55AM +0500, Hannu Krosing wrote:\n> AFAIK some SQL/C type precompilers and other frontend tools for other\n> databases do generate stored procedures for PREPAREd CURSORs.\n\nYou mean ECPG should/could replace a PREPARE statement with a CREATE\nFUNCTION and then the usage of the cursor with the usage of that\nfunction?\n\nShould be possible, but needs some work.\n\n> I'm afraid ECPG does not :(\n\nThat's correct of course.\n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Fri, 15 Mar 2002 08:57:02 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Survey results on Oracle/M$NT4 to PG72/RH72 migration"
},
{
"msg_contents": "> > AFAIK some SQL/C type precompilers and other frontend tools for other\n> > databases do generate stored procedures for PREPAREd CURSORs.\n> \n> You mean ECPG should/could replace a PREPARE statement with a CREATE\n> FUNCTION and then the usage of the cursor with the usage of that\n> function?\n> \n> Should be possible, but needs some work.\n\nWow Michael, this would be much much much appreciated. :-)\n \n> > I'm afraid ECPG does not :(\n> \n> That's correct of course.\n> Michael\n\nThanks. Then we know our conclusions on the survey are right.\n\nWe hope functionality about prepared cursors, bind variables, etc. will\ncome soon in PG :-)\n\nWe actually think about solutions to patch PostgreSQL and contribute\nthis way, adding a feature we need for business. \n\nThanks.\n\n-- \nJean-Paul ARGUDO IDEALX S.A.S\nConsultant bases de données 15-17, av. de Ségur\nhttp://www.idealx.com F-75007 PARIS\n",
"msg_date": "Fri, 15 Mar 2002 10:25:09 +0100",
"msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@IDEALX.com>",
"msg_from_op": false,
"msg_subject": "Re: Survey results on Oracle/M$NT4 to PG72/RH72 migration"
},
{
"msg_contents": "On Fri, Mar 15, 2002 at 10:25:09AM +0100, Jean-Paul ARGUDO wrote:\n> > You mean ECPG should/could replace a PEPARE statement with a CREATE\n> > FUNCTION and then the usage of the cursor with the usage of that\n> > function?\n> > \n> > Should be possible, but needs some work.\n> \n> Wow Michael, this would be much much much appreciated. :-)\n\nProblem is I have no idea when I will find time to care about such an\naddition. It certainly won't be possible prior May or so. Sorry.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Fri, 15 Mar 2002 11:34:37 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Survey results on Oracle/M$NT4 to PG72/RH72 migration"
}
] |
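One way to picture the PREPARE-to-function idea Michael discusses above: move the hot statement into a PL/pgSQL function, since (unlike plain SQL statements re-sent from ECPG) PL/pgSQL keeps its prepared plans for the life of the session. A sketch with invented table/function names; whether ECPG could emit this automatically is exactly the open question in the thread:

```sql
-- Instead of the client re-sending (and the backend re-planning)
--   SELECT sum(amount) FROM receipts WHERE day = :d
-- on every transaction, plan it once server-side:
CREATE FUNCTION day_total(date) RETURNS numeric AS '
DECLARE
    t numeric;
BEGIN
    -- the plan for this query is cached after the first call
    SELECT INTO t sum(amount) FROM receipts WHERE day = $1;
    RETURN t;
END;
' LANGUAGE 'plpgsql';

-- The ECPG program then just executes:  SELECT day_total(:d);
```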
[
{
"msg_contents": "> AFAICS the only way that we could make the one-WAL-record-every-32-\n> nextvals idea really work would be if CHECKPOINT could nullify the\n> logged-in-advance state of each sequence (so that the first nextval\n> after a checkpoint would always generate a fresh WAL record, but\n> subsequent ones wouldn't have to). But I don't see any practical\n> way for CHECKPOINT to do that, especially not for sequences whose\n> disk block isn't even in memory at the instant of the CHECKPOINT.\n\nBut sequences can force a WAL record if the sequence page LSN is <= the\nsystem RedoRecPtr (XLogCtlInsert.RedoRecPtr), i.e. the previously made\nsequence WAL record is \"too old\" and would not be replayed during\nrestart. It seems safe to NOT write a WAL record if the sequence\nLSN > the system RedoRecPtr, because a checkpoint started after our\ncheck would finish only after writing to disk the sequence buffer with\nthe proper last_value and log_cnt (nextval keeps a lock on the sequence buffer).\n\nWhat is not good is that to read the system RedoRecPtr a backend has to\nacquire XLogInsertLock, but probably we can change the system RedoRecPtr\nread/write rules:\n\n- to update RedoRecPtr one has to hold not only XLogInsertLock\n  but also acquire XLogInfoLock (only the CheckPointer\n  updates RedoRecPtr);\n- to read RedoRecPtr one has to hold either XLogInsertLock or\n  XLogInfoLock.\n\nThis way nextval would only acquire XLogInfoLock to check the\nsystem RedoRecPtr.\n\n?\n\nVadim\n",
"msg_date": "Wed, 13 Mar 2002 13:03:44 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: Bug #613: Sequence values fall back to previously chec"
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> It seems safe to do NOT write WAL record if sequence\n> LSN > system RedoRecPtr because of checkpoint started after our\n> check would finish only after writing to disk sequence buffer with\n> proper last_value and log_cnt (nextval keeps lock on sequence buffer).\n\nMmm ... maybe. Is this safe if a checkpoint is currently in progress?\nSeems like you could look at RedoRecPtr and decide you are okay, but you\nreally are not if checkpointer has already dumped sequence' disk\nbuffer and will later set RedoRecPtr to a value beyond the old LSN.\nIn that case you should have emitted a WAL record ... but you didn't.\n\nConsidering that we've found two separate bugs in this stuff in the past\nweek, I think that we ought to move in the direction of making it\nsimpler and more reliable, not even-more-complicated. Is it really\nworth all this trouble to avoid making a WAL record for each nextval()\ncall?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Mar 2002 17:00:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug #613: Sequence values fall back to previously chec kpointed "
},
{
"msg_contents": "I said:\n> Mmm ... maybe. Is this safe if a checkpoint is currently in progress?\n> Seems like you could look at RedoRecPtr and decide you are okay, but you\n> really are not if checkpointer has already dumped sequence' disk\n> buffer and will later set RedoRecPtr to a value beyond the old LSN.\n\nOh, wait, I take that back: the checkpointer advances RedoRecPtr\n*before* it starts to dump disk buffers.\n\nI'm still worried about whether we shouldn't try to simplify, rather\nthan add complexity.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Mar 2002 17:29:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug #613: Sequence values fall back to previously chec kpointed "
}
] |
[
{
"msg_contents": "\n> This seems not be the case in ECPG. In each COMMIT, the cursors are\n> closed (they dont even need to close cursors in Oracle!). And at each\n> BEGIN TRANSACTION PostgreSQL seems to compute again parsing and query\n> plan..\n\nI am still convinced that there is room for interpretation of the \nstandard here, Michael.\n\nSince we have \"begin work\", all cursors that were opened outside\na tx block (before \"begin work\") should imho in no way be affected by a commit.\n(e.g. Informix does it like that)\nSomeone who wants more conformant behavior would need to use the mode\nof operation where you are always in a tx anyway, thus losing the\nabove feature :-)\n\nUnfortunately I think the backend currently lacks the necessary \nsupport for this, since commit does the cleanup work for the cursor ? \nSuch a cursor would need an explicit close or open on the prepared \nstatement to be cleaned up.\n\nAndreas\n",
"msg_date": "Wed, 13 Mar 2002 22:19:48 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Survey results on Oracle/M$NT4 to PG72/RH72 migration"
}
] |
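The cost Andreas describes can be seen in the shape of a typical ported ECPG batch loop. This is an invented sketch, not code from the survey (`more_receipts` and `process` are hypothetical driver functions): because COMMIT closes open cursors, the DECLARE/OPEN, and with it a fresh parse and plan, must be repeated on every iteration:

```c
EXEC SQL INCLUDE sqlca;

EXEC SQL BEGIN DECLARE SECTION;
int    id;
double total;
EXEC SQL END DECLARE SECTION;

/* one receipt per transaction, as in the daemon/batch design */
while (more_receipts()) {           /* hypothetical */
    /* re-declared after every COMMIT: the backend parses and
     * plans this query again from scratch each time */
    EXEC SQL DECLARE c CURSOR FOR
        SELECT id, total FROM receipts WHERE processed = 0;
    EXEC SQL OPEN c;
    for (;;) {
        EXEC SQL FETCH c INTO :id, :total;
        if (sqlca.sqlcode != 0)     /* no more rows (or error) */
            break;
        process(id, total);         /* hypothetical */
    }
    EXEC SQL CLOSE c;
    EXEC SQL COMMIT;                /* would close c anyway */
}
```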
[
{
"msg_contents": "\n> So this finaly makes the batch work taking 300% the time Oracle needs.\n> We clearly see our ECPG programs waits for PostgreSQL in the functions\n> were CURSORs are opened. Then, we know the problem is not in ECPG but in\n> PG backend.\n\n> This is unaceptable for our customer. Many batches are launched during\n> the night and have to be completed in 5h (between 0h and 5h). With a\n> ratio of 3, this is not worth think about migration anymore :-(\n\nSo why exactly can you not simply do the whole batch in one transaction ?\n\nUnless you need to run concurrent vacuums, or are low on disk space, or need \nto concurrently update the affected rows (thus fear deadlocks or locking out\ninteractive clients that update), there is no need to do frequent commits in \nPostgreSQL for batch work.\n\nAndreas\n\nPS: I know that coming from other DB's one fears \"snapshot too old\", filling\nrollback segments, or other \"deficiencies\" like long transaction aborted :-)\n",
"msg_date": "Wed, 13 Mar 2002 22:30:59 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Survey results on Oracle/M$NT4 to PG72/RH72 migration"
},
{
"msg_contents": "On Thu, 2002-03-14 at 02:30, Zeugswetter Andreas SB SD wrote:\n> \n> > So this finaly makes the batch work taking 300% the time Oracle needs.\n> > We clearly see our ECPG programs waits for PostgreSQL in the functions\n> > were CURSORs are opened. Then, we know the problem is not in ECPG but in\n> > PG backend.\n> \n> > This is unaceptable for our customer. Many batches are launched during\n> > the night and have to be completed in 5h (between 0h and 5h). With a\n> > ratio of 3, this is not worth think about migration anymore :-(\n> \n> So why exactly can you not simply do the whole batch in one transaction ?\n> \n> Unless you need to run concurrent vacuums,\n\nI ran some tests based on their earlier description and concurrent\nvacuums (the new, non-locking ones) are a must, best run every few\nseconds, as without them the ratio of dead/live tuples will be huge and\nthat will bog down the whole process.\n\n> or are low on disk space, or need \n> to concurrently update the affected rows (thus fear deadlocks or locking out\n> interactive clients that update), there is no need to do frequent commits in \n> PostgreSQL for batch work.\n\nI also suspect (from reading their description) that the main problem of\nparsing/optimising each and every similar query will remain even if they\ndo run in one transaction.\n\nIn my tests of simple updates I got 3/2 speed increase (from 1050 to\n1500 updates/sec) by using prepared statements inside a stored procedure\n--------------------\nHannu\n\n",
"msg_date": "14 Mar 2002 09:32:19 +0500",
"msg_from": "Hannu Krosing <hannu@krosing.net>",
"msg_from_op": false,
"msg_subject": "Re: Survey results on Oracle/M$NT4 to PG72/RH72 migration"
},
{
"msg_contents": "Le Thursday Mar 14, 2002 at 09:32:19AM +0500, Hannu Krosing a �crit :\n> On Thu, 2002-03-14 at 02:30, Zeugswetter Andreas SB SD wrote:\n> > \n> > > So this finaly makes the batch work taking 300% the time Oracle needs.\n> > > We clearly see our ECPG programs waits for PostgreSQL in the functions\n> > > were CURSORs are opened. Then, we know the problem is not in ECPG but in\n> > > PG backend.\n> > \n> > > This is unaceptable for our customer. Many batches are launched during\n> > > the night and have to be completed in 5h (between 0h and 5h). With a\n> > > ratio of 3, this is not worth think about migration anymore :-(\n> > \n> > So why exactly can you not simply do the whole batch in one transaction ?\n\nAh! Sorry forgot to say this work has double use. In fact, it can be\nused in batch and it is also used in a kind of \"daemon\".\n\nThis daemon wakes up every 5 seconds. It scans (SELECT...) for new\ninsert in a table (lika trigger). When new tuples are found, it launches\nthe work. The work consist in computing total sales of a big store...\n\nEach receipt as many items. The batch computes total sales for each\nsection/sector of the store.\n\nThe \"daemon\" mode permit having a total sales in \"real time\"...\n\nThe batch mode is here to compute final total sales in the end of the\nday. It can be also use to compute back previous days (up to 5).\n\nSo, putting \"the whole batch in one transaction\" is not possible, due to\ndaemon mode. A commit by receipt also permit to not loose previous\ndatabase work in the case the daemon goes down, for example.\n\n> > Unless you need to run concurrent vacuums,\n\nForgot to say too that de x3 ratio is based only on batch mode. Daemon\nmode is as faster as Oracle (wow!).\n\nForgot to say too that in batch mode we launch concurrent vacuum analyze\non the 2 tables constantly accessed (update/inserts only : updating\ntotal sale by sector/ sub-sector/ sub-sub-sector, etc.. 
the total sales\nhas a tree structure then).\n\nThe vacuum analyze on those 2 tables has a sleep of 10 s, in a \nwhile [ 1 ] loop in a .sh\n \n> I ran some tests based on their earlier description and concurrent\n> vacuums (the new, non-locking ones) are a must, best run every few\n> seconds, as without them the ratio of dead/live tuples will be huge and\n> that will bog down the whole process.\n\nYes, concurrent vaccums is really *GREAT* without it, the batch work is\ngoing slower and slower with time. Concurrent vaccum allows constant\nperformances.\n \n> I also suspect (from reading their description) that the main problem of\n> parsing/optimising each and every similar query will remain even if they\n> do run in one transaction.\n\nExactly.\n \nTo answer a question in this thread: the batch has really basic SQL\nstatments! CURSORS are really simple too, based on 1 to 2 \"bind\nvariables\" that unfortunately are not processed the same way has Oracle.\n:-(\n\nThanks for your support, much appreciated :-))\n\n-- \nJean-Paul ARGUDO\n",
"msg_date": "Thu, 14 Mar 2002 10:20:53 +0100",
"msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>",
"msg_from_op": false,
"msg_subject": "Re: Survey results on Oracle/M$NT4 to PG72/RH72 migration"
},
{
"msg_contents": "On Thu, 2002-03-14 at 11:20, Jean-Paul ARGUDO wrote:\n> > > Unless you need to run concurrent vacuums,\n> \n> Forgot to say too that de x3 ratio is based only on batch mode. Daemon\n> mode is as faster as Oracle (wow!).\n> \n> Forgot to say too that in batch mode we launch concurrent vacuum analyze\n> on the 2 tables constantly accessed (update/inserts only : updating\n> total sale by sector/ sub-sector/ sub-sub-sector, etc.. the total sales\n> has a tree structure then).\n> \n> The vacuum analyze on those 2 tables has a sleep of 10 s, in a \n> while [ 1 ] loop in a .sh\n\nIf the general distribution of values does not drastically change in\nthese tables then you can save some time by running just VACUUM, not\nVACUUM ANALYZE.\n\nVACUUM does all the old tuple removing work\n\nVACUUM ANALYZE does that + also gathers statistics which make it slower.\n \n> > I ran some tests based on their earlier description and concurrent\n> > vacuums (the new, non-locking ones) are a must, best run every few\n> > seconds, as without them the ratio of dead/live tuples will be huge and\n> > that will bog down the whole process.\n> \n> Yes, concurrent vaccums is really *GREAT* without it, the batch work is\n> going slower and slower with time. Concurrent vaccum allows constant\n> performances.\n> \n> > I also suspect (from reading their description) that the main problem of\n> > parsing/optimising each and every similar query will remain even if they\n> > do run in one transaction.\n> \n> Exactly.\n> \n> To answer a question in this thread: the batch has really basic SQL\n> statments! 
CURSORS are really simple too, based on 1 to 2 \"bind\n> variables\" that unfortunately are not processed the same way has Oracle.\n> :-(\n\ncan you give me a small made-up example and then tell me what\nperformance you get on Oracle/NT and what on PostgreSQL/Linux ?\n\nI'd like to try to move cursor -> backend proc and see \n\n1) if it is big enough gain to warrant further work\n\n2) if it can be done automatically, either by preprocessing ECPG or just\n changing it\n\n--------------\nHannu\n\n\n",
"msg_date": "14 Mar 2002 15:53:57 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Survey results on Oracle/M$NT4 to PG72/RH72 migration"
},
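Hannu's prepared-statements-inside-a-stored-procedure result can be sketched like this. The table and function names are hypothetical (not from the thread), and the syntax targets the PostgreSQL 7.2 era under discussion; the point is that PL/pgSQL saves the plan for each statement the first time a function runs in a session, so repeated calls skip the per-query parse/plan cost the ECPG cursors are paying:

```sql
-- Hypothetical running-totals table (illustration only).
CREATE TABLE sector_totals (
    sector_id integer PRIMARY KEY,
    total     numeric
);

-- PL/pgSQL caches the plan for this UPDATE after the first call,
-- so subsequent calls avoid re-parsing and re-planning.
CREATE FUNCTION add_sale(integer, numeric) RETURNS integer AS '
BEGIN
    UPDATE sector_totals
       SET total = total + $2
     WHERE sector_id = $1;
    RETURN 1;
END;
' LANGUAGE 'plpgsql';

-- The batch then issues one cheap call per receipt line:
SELECT add_sale(42, 19.95);
```

This is roughly the mechanism behind the 1050 → 1500 updates/sec figure Hannu reports above.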
{
"msg_contents": "On Thu, 14 Mar 2002, Jean-Paul ARGUDO wrote:\n\n> This daemon wakes up every 5 seconds. It scans (SELECT...) for new\n> insert in a table (lika trigger). When new tuples are found, it\n> launches the work. The work consist in computing total sales of a big\n> store...\n\nYou might find it worthwhile to investigate \"listen\" and\n\"notify\" -- combined with a rule or trigger, you can get\nthis effect in near-real-time\n\nYou'll probably still want a sleep(5) at the end of the\nloop so you can batch a reasonable number of updates if\nthere's a lot going on.\n\nMatthew.\n\n",
"msg_date": "Thu, 14 Mar 2002 16:15:25 +0000 (GMT)",
"msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>",
"msg_from_op": false,
"msg_subject": "Re: Survey results on Oracle/M$NT4 to PG72/RH72 migration"
}
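Matthew's listen/notify suggestion might look like the following sketch. The table, rule, and channel names are made up; a rule action is the classic way to fire NOTIFY on insert:

```sql
-- Hypothetical table standing in for the "new insert" table the
-- daemon currently polls with SELECT every 5 seconds.
CREATE TABLE receipts (
    receipt_id integer,
    amount     numeric
);

-- Send a notification on the new_receipt channel for every insert.
CREATE RULE receipts_notify AS
    ON INSERT TO receipts
    DO NOTIFY new_receipt;

-- The daemon subscribes once, instead of polling:
LISTEN new_receipt;
-- It then blocks until a notification arrives, optionally sleeping
-- a few seconds afterwards to batch up a burst of inserts.
```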
] |
[
{
"msg_contents": "\n> select max(foo) from bar where x = 'y';\n> \n> How is the index used in this query?\n\nInformix would use an index created on (x, foo) and I guess others too.\n\nBut I too usually find the \"select first 1 * from y order by x desc\" much more \nuseful than an optimized max, since it can also return other columns from that row\n(And is more performant than an optimally optimized subselect for this).\n\nAndreas\n",
"msg_date": "Wed, 13 Mar 2002 22:39:49 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: select max(column) not using index"
}
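Andreas's "first 1 ... order by desc" idiom translates to LIMIT in PostgreSQL. Assuming a hypothetical compound index on (x, foo), the rewrite lets an index drive the query and also returns the other columns of the winning row:

```sql
-- Hypothetical schema for illustration.
CREATE TABLE bar (x text, foo integer);
CREATE INDEX bar_x_foo_idx ON bar (x, foo);

-- Instead of:  SELECT max(foo) FROM bar WHERE x = 'y';
-- fetch the top row via the index:
SELECT *
  FROM bar
 WHERE x = 'y'
 ORDER BY x DESC, foo DESC
 LIMIT 1;
```

Ordering by both index columns (rather than foo alone) is what lets the planner match the sort order to the (x, foo) index.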
] |
[
{
"msg_contents": "> > It seems safe to do NOT write WAL record if sequence\n> > LSN > system RedoRecPtr because of checkpoint started after our\n> > check would finish only after writing to disk sequence buffer with\n> > proper last_value and log_cnt (nextval keeps lock on \n> > sequence buffer).\n> \n> Mmm ... maybe. Is this safe if a checkpoint is currently in\n> progress? Seems like you could look at RedoRecPtr and decide\n> you are okay, but you really are not if checkpointer has already\n> dumped sequence' disk buffer and will later set RedoRecPtr to a\n> value beyond the old LSN.\n\nCheckPointer updates system RedoRecPtr before doing anything else.\nSystem RedoRecPtr was introduced to force data buffers backup\nby future XLogInsert-s once CheckPointer started and it *must* be\nupdated *before* buffer flushing.\n\n> In that case you should have emitted a WAL record ... but you didn't.\n> \n> Considering that we've found two separate bugs in this stuff\n> in the past week, I think that we ought to move in the direction\n> of making it simpler and more reliable, not even-more-complicated.\n\nIsn't it too late, considering we have fixes for both bugs already? -:)\n(And it's not very-more-complicated - just simple check.)\n\n> Is it really worth all this trouble to avoid making a WAL record\n> for each nextval() call?\n\nIt's doable... why not do this?\n(Though I have no strong objection.)\n\nVadim\n",
"msg_date": "Wed, 13 Mar 2002 14:34:41 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: Bug #613: Sequence values fall back to previously chec"
}
] |
[
{
"msg_contents": "One of the reasons why I originally stated following the hackers list is\nbecause I wanted to implement bitmap indexes. I found in the archives,\nthe follow link, http://www.it.iitb.ernet.in/~rvijay/dbms/proj/, which\nwas extracted from this,\nhttp://groups.google.com/groups?hl=en&threadm=01C0EF67.5105D2E0.mascarm%40mascari.com&rnum=1&prev=/groups%3Fq%3Dbitmap%2Bindex%2Bgroup:comp.databases.postgresql.hackers%26hl%3Den%26selm%3D01C0EF67.5105D2E0.mascarm%2540mascari.com%26rnum%3D1, archive thread.\n\nAt any rate, that was some number of months ago. I've started looking\nat the results posted from their bitmap GiST efforts and found that they\nwere being tested rather poorly to be of real value (I also found it\nannoying that the project seems to quote other people's work without\ngiving credit). Nonetheless, I thought I'd post to find out if anyone\nfeels there is still a need for this? That is, I'm not really sure that\ndata warehousing or DDS systems are currently very common with Postgres.\n\nIf the group here still see value in adding various types of bitmap\nsupport, can someone please point me to some documentation. I had\nseveral bookmarked but lost then when X crashed. Anything that outlines\ncache strategy, index support, am overview, and any other documentation\nthat would help excel my understanding of the code as well as the\nvarious structure relationships would be wonderful?\n\nOh yes, one last question, is the required method for adding index\nsupport via GiST? I ask because it seems to me that inserts could be\nexceptionally expensive, though as usual, I still have more to look at.\n\nThanks,\n\tGreg\n\n\nP.S. And yes, I have been reading lots of code...including\nwalk-throughs! :P",
"msg_date": "13 Mar 2002 17:07:32 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Bitmap indexes?"
},
{
"msg_contents": "Greg Copeland wrote:\n\nChecking application/pgp-signature: FAILURE\n-- Start of PGP signed section.\n> One of the reasons why I originally stated following the hackers list is\n> because I wanted to implement bitmap indexes. I found in the archives,\n> the follow link, http://www.it.iitb.ernet.in/~rvijay/dbms/proj/, which\n> was extracted from this,\n> http://groups.google.com/groups?hl=en&threadm=01C0EF67.5105D2E0.mascarm%40mascari.com&rnum=1&prev=/groups%3Fq%3Dbitmap%2Bindex%2Bgroup:comp.databases.postgresql.hackers%26hl%3Den%26selm%3D01C0EF67.5105D2E0.mascarm%2540mascari.com%26rnum%3D1, archive thread.\n> \n> At any rate, that was some number of months ago. I've started looking\n> at the results posted from their bitmap GiST efforts and found that they\n> were being tested rather poorly to be of real value (I also found it\n> annoying that the project seems to quote other people's work without\n> giving credit). Nonetheless, I thought I'd post to find out if anyone\n> feels there is still a need for this? That is, I'm not really sure that\n> data warehousing or DDS systems are currently very common with Postgres.\n> \n> If the group here still see value in adding various types of bitmap\n> support, can someone please point me to some documentation. I had\n> several bookmarked but lost then when X crashed. Anything that outlines\n> cache strategy, index support, am overview, and any other documentation\n> that would help excel my understanding of the code as well as the\n> various structure relationships would be wonderful?\n\nThe only thing I know is that there is discussion of bitmap indexes on\nthe TODO list linked to from the 'bitmap index' item. I also remember\nthat the intarray code in /contrib sort of simulates bitmapped indexes,\nor something like that. :-)\n\n> Oh yes, one last question, is the required method for adding index\n> support via GiST? 
I ask because it seems to me that inserts could be\n> exceptionally expensive, though as usual, I still have more to look at.\n\nI think we would recommend GIST because it is easier.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 13 Mar 2002 21:07:41 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bitmap indexes?"
},
{
"msg_contents": "> > Oh yes, one last question, is the required method for adding index\n> > support via GiST? I ask because it seems to me that inserts could be\n> > exceptionally expensive, though as usual, I still have more to look at.\n>\n> I think we would recommend GIST because it is easier.\n\nWould someone be able to explain to me exactly what GIST is? I thought it\nwas just a _type_ of index, but is it actually a generalised index-creating\nframework? Do other DMBSs use it? Is it a cool thing I could talk about\nwhen I give my talk at UWA tommorrow?\n\nChris\n\n",
"msg_date": "Thu, 14 Mar 2002 10:48:09 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "GIST"
},
{
"msg_contents": "Here's a good spring board for information. They actually have some\npretty cool tools available to help develop. Using the libgist stuff\nmakes it pretty easy to prototype and play with various index schemes of\nyour own creation and the debugger lets you rapidly view the storage\nresults.\n\nhttp://gist.cs.berkeley.edu/\n\n\n\nOn Wed, 2002-03-13 at 20:48, Christopher Kings-Lynne wrote:\n> > > Oh yes, one last question, is the required method for adding index\n> > > support via GiST? I ask because it seems to me that inserts could be\n> > > exceptionally expensive, though as usual, I still have more to look at.\n> >\n> > I think we would recommend GIST because it is easier.\n> \n> Would someone be able to explain to me exactly what GIST is? I thought it\n> was just a _type_ of index, but is it actually a generalised index-creating\n> framework? Do other DMBSs use it? Is it a cool thing I could talk about\n> when I give my talk at UWA tommorrow?\n> \n> Chris\n>",
"msg_date": "13 Mar 2002 21:08:50 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Re: GIST"
},
{
"msg_contents": "Read a collect of articles at bottom of http://www.sai.msu.su/~megera/postgres/gist/\n\nChristopher Kings-Lynne wrote:\n>>>Oh yes, one last question, is the required method for adding index\n>>>support via GiST? I ask because it seems to me that inserts could be\n>>>exceptionally expensive, though as usual, I still have more to look at.\n>>>\n>>I think we would recommend GIST because it is easier.\n>>\n> \n> Would someone be able to explain to me exactly what GIST is? I thought it\n> was just a _type_ of index, but is it actually a generalised index-creating\n> framework? Do other DMBSs use it? Is it a cool thing I could talk about\n> when I give my talk at UWA tommorrow?\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n> \n\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n",
"msg_date": "Thu, 14 Mar 2002 11:47:58 +0300",
"msg_from": "Teodor Sigaev <teodor@stack.net>",
"msg_from_op": false,
"msg_subject": "Re: GIST"
},
{
"msg_contents": "Greg,\n\nif you're still in bitmap indices you may take a look on our\ncontrib/intarray module.\n\n\tRegards,\n\n\t\tOleg\n\nOn 13 Mar 2002, Greg Copeland wrote:\n\n> One of the reasons why I originally stated following the hackers list is\n> because I wanted to implement bitmap indexes. I found in the archives,\n> the follow link, http://www.it.iitb.ernet.in/~rvijay/dbms/proj/, which\n> was extracted from this,\n> http://groups.google.com/groups?hl=en&threadm=01C0EF67.5105D2E0.mascarm%40mascari.com&rnum=1&prev=/groups%3Fq%3Dbitmap%2Bindex%2Bgroup:comp.databases.postgresql.hackers%26hl%3Den%26selm%3D01C0EF67.5105D2E0.mascarm%2540mascari.com%26rnum%3D1, archive thread.\n>\n> At any rate, that was some number of months ago. I've started looking\n> at the results posted from their bitmap GiST efforts and found that they\n> were being tested rather poorly to be of real value (I also found it\n> annoying that the project seems to quote other people's work without\n> giving credit). Nonetheless, I thought I'd post to find out if anyone\n> feels there is still a need for this? That is, I'm not really sure that\n> data warehousing or DDS systems are currently very common with Postgres.\n>\n> If the group here still see value in adding various types of bitmap\n> support, can someone please point me to some documentation. I had\n> several bookmarked but lost then when X crashed. Anything that outlines\n> cache strategy, index support, am overview, and any other documentation\n> that would help excel my understanding of the code as well as the\n> various structure relationships would be wonderful?\n>\n> Oh yes, one last question, is the required method for adding index\n> support via GiST? I ask because it seems to me that inserts could be\n> exceptionally expensive, though as usual, I still have more to look at.\n>\n> Thanks,\n> \tGreg\n>\n>\n> P.S. And yes, I have been reading lots of code...including\n> walk-throughs! :P\n>\n>\n>\n\n",
"msg_date": "Tue, 19 Mar 2002 14:41:31 +0300 (MSK)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: Bitmap indexes?"
},
{
"msg_contents": "Christopher,\n\nI'm sorry it's too late, but I haven't receive any messages from\npostgres mailing lists for a month (don't know why). Just found your\nmessage in archives.\n\nGiST is a great thing, it's generalised search tree invented by\nHellerstein in 1995. It allows you to define custom data type,\nindex access to them and custom queries.\nI have a little intro about GiST (not finished yest)\nhttp://www.sai.msu.su/~megera/postgres/gist/doc/intro.html\nAlso, at the bottom of http://www.sai.msu.su/~megera/postgres/gist/\nthere are several seminal papers for reading.\n\n\tRegards,\n\n\t\tOleg\n\n\n\nOn Thu, 14 Mar 2002, Christopher Kings-Lynne wrote:\n\n> > > Oh yes, one last question, is the required method for adding index\n> > > support via GiST? I ask because it seems to me that inserts could be\n> > > exceptionally expensive, though as usual, I still have more to look at.\n> >\n> > I think we would recommend GIST because it is easier.\n>\n> Would someone be able to explain to me exactly what GIST is? I thought it\n> was just a _type_ of index, but is it actually a generalised index-creating\n> framework? Do other DMBSs use it? Is it a cool thing I could talk about\n> when I give my talk at UWA tommorrow?\n>\n> Chris\n>\n>\n\n",
"msg_date": "Tue, 19 Mar 2002 14:46:54 +0300 (MSK)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: GIST"
},
{
"msg_contents": "On Tue, 19 Mar 2002, Oleg Bartunov wrote:\n\nSorry to reply over you, Oleg.\n\n> On 13 Mar 2002, Greg Copeland wrote:\n>\n> > One of the reasons why I originally stated following the hackers list is\n> > because I wanted to implement bitmap indexes. I found in the archives,\n> > the follow link, http://www.it.iitb.ernet.in/~rvijay/dbms/proj/, which\n> > was extracted from this,\n> > http://groups.google.com/groups?hl=en&threadm=01C0EF67.5105D2E0.mascarm%40mascari.com&rnum=1&prev=/groups%3Fq%3Dbitmap%2Bindex%2Bgroup:comp.databases.postgresql.hackers%26hl%3Den%26selm%3D01C0EF67.5105D2E0.mascarm%2540mascari.com%26rnum%3D1, archive thread.\n\nFor every case I have used a bitmap index on Oracle, a\npartial index[0] made more sense (especialy since it\ncould usefully be compound).\n\nOur troublesome case (on Oracle) is a table of \"events\"\nwhere maybe fifty to a couple of hundred are \"published\"\n(ie. web-visible) at any time. The events are categorised\nby sport (about a dozen) and by \"event type\" (about five).\nWe never really query events except by PK or by sport/type/\npublished.\n\nWe make a bitmap index on \"published\", and trust Oracle to\nuse it correctly, and hope that our other indexes are also\nuseful.\n\nOn Postgres[1] we would make a partial compound index:\n\ncreate index ... on events(sport_id,event_type_id)\nwhere published='Y';\n\nMatthew.\n\n[0] Is this a postgres-only feature; my tame Oracle and\n Sybase DBAs had never heard of such a thing, but\n were rather impressed at the idea.\n[1] Disclaimer. Our system doesn't run on PG, though I\n do have a nearly equivalent prototype system which\n does. I'd love to hear any success (or otherwise)\n stories about PG partial indexes.\n\n",
"msg_date": "Tue, 19 Mar 2002 21:30:36 +0000 (GMT)",
"msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>",
"msg_from_op": false,
"msg_subject": "Re: Bitmap indexes?"
},
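Matthew's partial-index sketch, filled out with hypothetical table definitions so the predicate matching is visible:

```sql
-- Hypothetical events table matching the description above.
CREATE TABLE events (
    event_id      integer PRIMARY KEY,
    sport_id      integer,
    event_type_id integer,
    published     char(1)
);

-- The partial index stores only the ~50-200 published rows,
-- no matter how large the events table grows.
CREATE INDEX events_pub_idx
    ON events (sport_id, event_type_id)
    WHERE published = 'Y';

-- A query must repeat the predicate so the planner can prove
-- the partial index applies:
SELECT event_id
  FROM events
 WHERE sport_id = 3 AND event_type_id = 1 AND published = 'Y';
```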
{
"msg_contents": "On Tue, 2002-03-19 at 15:30, Matthew Kirkwood wrote:\n> On Tue, 19 Mar 2002, Oleg Bartunov wrote:\n> \n> Sorry to reply over you, Oleg.\n> \n> > On 13 Mar 2002, Greg Copeland wrote:\n> >\n> > > One of the reasons why I originally stated following the hackers list is\n> > > because I wanted to implement bitmap indexes. I found in the archives,\n> > > the follow link, http://www.it.iitb.ernet.in/~rvijay/dbms/proj/, which\n> > > was extracted from this,\n> > > http://groups.google.com/groups?hl=en&threadm=01C0EF67.5105D2E0.mascarm%40mascari.com&rnum=1&prev=/groups%3Fq%3Dbitmap%2Bindex%2Bgroup:comp.databases.postgresql.hackers%26hl%3Den%26selm%3D01C0EF67.5105D2E0.mascarm%2540mascari.com%26rnum%3D1, archive thread.\n> \n> For every case I have used a bitmap index on Oracle, a\n> partial index[0] made more sense (especialy since it\n> could usefully be compound).\n\nThat's very true, however, often bitmap indexes are used where partial\nindexes may not work well. It maybe you were trying to apply the cure\nfor the wrong disease. ;)\n\n> \n> Our troublesome case (on Oracle) is a table of \"events\"\n> where maybe fifty to a couple of hundred are \"published\"\n> (ie. web-visible) at any time. The events are categorised\n> by sport (about a dozen) and by \"event type\" (about five).\n> We never really query events except by PK or by sport/type/\n> published.\n\nThe reason why bitmap indexes are primarily used for DSS and data\nwherehousing applications is because they are best used on extremely\nlarge to very large tables which have low cardinality (e.g, 10,000,000\nrows having 200 distinct values). On top of that, bitmap indexes also\ntend to be much smaller than their \"standard\" cousins. On large and\nvery tables tables, this can sometimes save gigs in index space alone\n(serious space benefit). Plus, their small index size tends to result\nin much less I/O (serious speed benefit). 
This, of course, can result\nin several orders of magnitude speed improvements when index scans are\nrequired. As an added bonus, using AND, OR, XOR and NOT predicates are\nexceptionally fast and if implemented properly, can even take advantage\nof some 64-bit hardware for further speed improvements. This, of\ncourse, further speeds look ups. The primary down side is that inserts\nand updates to bitmap indexes are very costly (comparatively) which is,\nyet again, why they excel in read-only environments (DSS & data\nwherehousing).\n\nIt should also be noted that RDMS's, such as Oracle, often use multiple\ntypes of bitmap indexes. This further impedes insert/update\nperformance, however, the additional bitmap index types usually allow\nfor range predicates while still making use of the bitmap index. If I'm\nnot mistaken, several other types of bitmaps are available as well as\nmany ways to encode and compress (rle, quad compression, etc) bitmap\nindexes which further save on an already compact indexing scheme.\n\nGiven the proper problem domain, index bitmaps can be a big win.\n\n> \n> We make a bitmap index on \"published\", and trust Oracle to\n> use it correctly, and hope that our other indexes are also\n> useful.\n> \n> On Postgres[1] we would make a partial compound index:\n> \n> create index ... on events(sport_id,event_type_id)\n> where published='Y';\n\n\nGenerally speaking, bitmap indexes will not serve you very will on\ntables having a low row counts, high cardinality or where they are\nattached to tables which are primarily used in an OLTP capacity. \nSituations where you have a low row count and low cardinality or high\nrow count and high cardinality tend to be better addressed by partial\nindexes; which seem to make much more sense. In your example, it sounds\nlike you did \"the right thing\"(tm). ;)\n\n\nGreg",
"msg_date": "19 Mar 2002 17:00:53 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Re: Bitmap indexes?"
}
] |
[
{
"msg_contents": "\nI'm trying to use latest version of PostNuke which uses adodb for the\ndatabase layer. The problem is in the insert and update statements.\nFor example:\n\ninsert into foo(foo.a) values(1);\n\nfails because the table name is used. Update statements also include the\ntable name. Both fail. Does anyone know of a workaround?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 13 Mar 2002 19:32:33 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "insert statements"
},
{
"msg_contents": "Vince Vielhaber writes:\n\n> For example:\n>\n> insert into foo(foo.a) values(1);\n>\n> fails because the table name is used. Update statements also include the\n> table name. Both fail. Does anyone know of a workaround?\n\nCompletely loudly to whomever wrote that SQL. It's completely\nnon-standard.\n\n(The implication I'm trying to make is that there's no way to make\nPostgreSQL accept that statement. Adding this as an extension has been\nrejected in the past.)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 13 Mar 2002 21:14:25 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: insert statements"
},
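For reference, the standard-conforming rewrite Peter is pointing at simply drops the table qualifier from the column list:

```sql
CREATE TABLE foo (a integer);

-- Non-standard form, rejected by PostgreSQL:
--   INSERT INTO foo (foo.a) VALUES (1);

-- SQL-standard form, accepted by PostgreSQL (and by Sybase,
-- MySQL, etc. as well):
INSERT INTO foo (a) VALUES (1);
```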
{
"msg_contents": "On Wed, 13 Mar 2002, Peter Eisentraut wrote:\n\n> Vince Vielhaber writes:\n>\n> > For example:\n> >\n> > insert into foo(foo.a) values(1);\n> >\n> > fails because the table name is used. Update statements also include the\n> > table name. Both fail. Does anyone know of a workaround?\n>\n> Completely loudly to whomever wrote that SQL. It's completely\n> non-standard.\n>\n> (The implication I'm trying to make is that there's no way to make\n> PostgreSQL accept that statement. Adding this as an extension has been\n> rejected in the past.)\n\nYeah, that's kinda what I expected. There's just under 1700 insert and\nupdate statements but only 1200 selects. Neither option sounds good at\nthis point.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 13 Mar 2002 21:25:39 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "Re: insert statements"
},
{
"msg_contents": "On Wed, 13 Mar 2002, Peter Eisentraut wrote:\n\n> Vince Vielhaber writes:\n>\n> > For example:\n> >\n> > insert into foo(foo.a) values(1);\n> >\n> > fails because the table name is used. Update statements also include the\n> > table name. Both fail. Does anyone know of a workaround?\n>\n> Completely loudly to whomever wrote that SQL. It's completely\n> non-standard.\n>\n> (The implication I'm trying to make is that there's no way to make\n> PostgreSQL accept that statement. Adding this as an extension has been\n> rejected in the past.)\n\nI'm now wondering why it was rejected. I couldn't try this last nite\nso I just tried it now. Here's with Sybase 11.0.3.3 :\n\n1> create table foo(a int)\n2> go\n1> insert into foo(a) values(1)\n2> go\n(1 row affected)\n1> insert into foo(foo.a) values(2)\n2> go\n(1 row affected)\n1>\n\nAnd I suspect more than just mysql and sybase accept either syntax.\nRight now I'm modifying postnuke but that's only a short term solution,\nand I wouldn't want to add it to PostgreSQL either 'cuze if it remains\nrejected that would hamper upgrades. ROCK --> ME <-- HARD PLACE :)\nThere are really no other decent CMSs available that support PostgreSQL.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 14 Mar 2002 08:29:27 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "Re: insert statements"
},
{
"msg_contents": "Why not send in your changes to PostNuke along with the appropriate\nsection from the SQL specs?\n\nSurely they'll apply a reasoned patch which improves conformance to\nthe SQL standard and doesn't break anything in the process. I'd\nsuspect both SyBase, and MySQL can also take insert into foo (a) as\nwell.\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Vince Vielhaber\" <vev@michvhf.com>\nTo: \"Peter Eisentraut\" <peter_e@gmx.net>\nCc: <pgsql-hackers@postgreSQL.org>\nSent: Thursday, March 14, 2002 8:29 AM\nSubject: Re: [HACKERS] insert statements\n\n\n> On Wed, 13 Mar 2002, Peter Eisentraut wrote:\n>\n> > Vince Vielhaber writes:\n> >\n> > > For example:\n> > >\n> > > insert into foo(foo.a) values(1);\n> > >\n> > > fails because the table name is used. Update statements also\ninclude the\n> > > table name. Both fail. Does anyone know of a workaround?\n> >\n> > Completely loudly to whomever wrote that SQL. It's completely\n> > non-standard.\n> >\n> > (The implication I'm trying to make is that there's no way to make\n> > PostgreSQL accept that statement. Adding this as an extension has\nbeen\n> > rejected in the past.)\n>\n> I'm now wondering why it was rejected. I couldn't try this last\nnite\n> so I just tried it now. Here's with Sybase 11.0.3.3 :\n>\n> 1> create table foo(a int)\n> 2> go\n> 1> insert into foo(a) values(1)\n> 2> go\n> (1 row affected)\n> 1> insert into foo(foo.a) values(2)\n> 2> go\n> (1 row affected)\n> 1>\n>\n> And I suspect more than just mysql and sybase accept either syntax.\n> Right now I'm modifying postnuke but that's only a short term\nsolution,\n> and I wouldn't want to add it to PostgreSQL either 'cuze if it\nremains\n> rejected that would hamper upgrades. 
ROCK --> ME <-- HARD PLACE\n:)\n> There are really no other decent CMSs available that support\nPostgreSQL.\n>\n> Vince.\n> --\n>\n======================================================================\n====\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com\nhttp://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n>\n======================================================================\n====\n>\n>\n>\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Thu, 14 Mar 2002 09:01:20 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: insert statements"
},
{
"msg_contents": "On Thu, 14 Mar 2002, Rod Taylor wrote:\n\n> Why not send in your changes to PostNuke along with the appropriate\n> section from the SQL specs?\n>\n> Surely they'll apply a reasoned patch which improves conformance to\n> the SQL standard and doesn't break anything in the process. I'd\n> suspect both SyBase, and MySQL can also take insert into foo (a) as\n> well.\n\nLook below, I showed both syntaxes with Sybase. Since I don't have a\ncopy of the SQL specs I can't send them the appropriate section or I\nwould have already. Care to forward that appropriate section?\n\n\n> --\n> Rod Taylor\n>\n> This message represents the official view of the voices in my head\n>\n> ----- Original Message -----\n> From: \"Vince Vielhaber\" <vev@michvhf.com>\n> To: \"Peter Eisentraut\" <peter_e@gmx.net>\n> Cc: <pgsql-hackers@postgreSQL.org>\n> Sent: Thursday, March 14, 2002 8:29 AM\n> Subject: Re: [HACKERS] insert statements\n>\n>\n> > On Wed, 13 Mar 2002, Peter Eisentraut wrote:\n> >\n> > > Vince Vielhaber writes:\n> > >\n> > > > For example:\n> > > >\n> > > > insert into foo(foo.a) values(1);\n> > > >\n> > > > fails because the table name is used. Update statements also\n> include the\n> > > > table name. Both fail. Does anyone know of a workaround?\n> > >\n> > > Completely loudly to whomever wrote that SQL. It's completely\n> > > non-standard.\n> > >\n> > > (The implication I'm trying to make is that there's no way to make\n> > > PostgreSQL accept that statement. Adding this as an extension has\n> been\n> > > rejected in the past.)\n> >\n> > I'm now wondering why it was rejected. I couldn't try this last\n> nite\n> > so I just tried it now. 
Here's with Sybase 11.0.3.3 :\n> >\n> > 1> create table foo(a int)\n> > 2> go\n> > 1> insert into foo(a) values(1)\n> > 2> go\n> > (1 row affected)\n> > 1> insert into foo(foo.a) values(2)\n> > 2> go\n> > (1 row affected)\n> > 1>\n> >\n> > And I suspect more than just mysql and sybase accept either syntax.\n> > Right now I'm modifying postnuke but that's only a short term\n> solution,\n> > and I wouldn't want to add it to PostgreSQL either 'cuze if it\n> remains\n> > rejected that would hamper upgrades. ROCK --> ME <-- HARD PLACE\n> :)\n> > There are really no other decent CMSs available that support\n> PostgreSQL.\n> >\n> > Vince.\n> > --\n> >\n> ======================================================================\n> ====\n> > Vince Vielhaber -- KA8CSH email: vev@michvhf.com\n> http://www.pop4.net\n> > 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> > Online Campground Directory http://www.camping-usa.com\n> > Online Giftshop Superstore http://www.cloudninegifts.com\n> >\n> ======================================================================\n> ====\n> >\n> >\n> >\n> >\n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n> >\n>\n>\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 14 Mar 2002 09:08:21 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "Re: insert statements"
},
{
"msg_contents": " As snipped from:\nhttp://archives.postgresql.org/pgsql-bugs/2000-10/msg00030.php (All\nmy stuff is in paper form)\nWhat's your definition of \"other dbs\"? The above statement is quite\nclearly in violation of the SQL92 and SQL99 specifications:\n\n <insert statement> ::=\n INSERT INTO <table name>\n <insert columns and source>\n\n <insert columns and source> ::=\n [ <left paren> <insert column list> <right paren> ]\n <query expression>\n | DEFAULT VALUES\n\n <insert column list> ::= <column name list>\n\n <column name list> ::=\n <column name> [ { <comma> <column name> }... ]\n\n <column name> ::= <identifier>\n\nI'm not particularly excited about supporting non-SQL variant syntaxes\nthat add no functionality.\n\n\t\t\tregards, tom lane\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Vince Vielhaber\" <vev@michvhf.com>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Peter Eisentraut\" <peter_e@gmx.net>;\n<pgsql-hackers@postgreSQL.org>\nSent: Thursday, March 14, 2002 9:08 AM\nSubject: Re: [HACKERS] insert statements\n\n\n> On Thu, 14 Mar 2002, Rod Taylor wrote:\n>\n> > Why not send in your changes to PostNuke along with the\nappropriate\n> > section from the SQL specs?\n> >\n> > Surely they'll apply a reasoned patch which improves conformance\nto\n> > the SQL standard and doesn't break anything in the process. I'd\n> > suspect both SyBase, and MySQL can also take insert into foo (a)\nas\n> > well.\n>\n> Look below, I showed both syntaxes with Sybase. Since I don't have\na\n> copy of the SQL specs I can't send them the appropriate section or I\n> would have already. 
Care to forward that appropriate section?\n>\n>\n> > --\n> > Rod Taylor\n> >\n> > This message represents the official view of the voices in my head\n> >\n> > ----- Original Message -----\n> > From: \"Vince Vielhaber\" <vev@michvhf.com>\n> > To: \"Peter Eisentraut\" <peter_e@gmx.net>\n> > Cc: <pgsql-hackers@postgreSQL.org>\n> > Sent: Thursday, March 14, 2002 8:29 AM\n> > Subject: Re: [HACKERS] insert statements\n> >\n> >\n> > > On Wed, 13 Mar 2002, Peter Eisentraut wrote:\n> > >\n> > > > Vince Vielhaber writes:\n> > > >\n> > > > > For example:\n> > > > >\n> > > > > insert into foo(foo.a) values(1);\n> > > > >\n> > > > > fails because the table name is used. Update statements\nalso\n> > include the\n> > > > > table name. Both fail. Does anyone know of a workaround?\n> > > >\n> > > > Completely loudly to whomever wrote that SQL. It's completely\n> > > > non-standard.\n> > > >\n> > > > (The implication I'm trying to make is that there's no way to\nmake\n> > > > PostgreSQL accept that statement. Adding this as an extension\nhas\n> > been\n> > > > rejected in the past.)\n> > >\n> > > I'm now wondering why it was rejected. I couldn't try this last\n> > nite\n> > > so I just tried it now. Here's with Sybase 11.0.3.3 :\n> > >\n> > > 1> create table foo(a int)\n> > > 2> go\n> > > 1> insert into foo(a) values(1)\n> > > 2> go\n> > > (1 row affected)\n> > > 1> insert into foo(foo.a) values(2)\n> > > 2> go\n> > > (1 row affected)\n> > > 1>\n> > >\n> > > And I suspect more than just mysql and sybase accept either\nsyntax.\n> > > Right now I'm modifying postnuke but that's only a short term\n> > solution,\n> > > and I wouldn't want to add it to PostgreSQL either 'cuze if it\n> > remains\n> > > rejected that would hamper upgrades. 
ROCK --> ME <-- HARD PLACE\n> > :)\n> > > There are really no other decent CMSs available that support\n> > PostgreSQL.\n> > >\n> > > Vince.\n> > > --\n> > >\n> >\n======================================================================\n> > ====\n> > > Vince Vielhaber -- KA8CSH email: vev@michvhf.com\n> > http://www.pop4.net\n> > > 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> > > Online Campground Directory\nhttp://www.camping-usa.com\n> > > Online Giftshop Superstore\nhttp://www.cloudninegifts.com\n> > >\n> >\n======================================================================\n> > ====\n> > >\n> > >\n> > >\n> > >\n> > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > TIP 6: Have you searched our list archives?\n> > >\n> > > http://archives.postgresql.org\n> > >\n> >\n> >\n>\n>\n> Vince.\n> --\n>\n======================================================================\n====\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com\nhttp://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n>\n======================================================================\n====\n>\n>\n>\n>\n\n",
"msg_date": "Thu, 14 Mar 2002 09:24:28 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: insert statements"
},
{
"msg_contents": "On Thu, 14 Mar 2002, Rod Taylor wrote:\n\n> As snipped from:\n> http://archives.postgresql.org/pgsql-bugs/2000-10/msg00030.php (All\n> my stuff is in paper form)\n> What's your definition of \"other dbs\"? The above statement is quite\n> clearly in violation of the SQL92 and SQL99 specifications:\n\nAnd nowhere does it say that <column name> cannot be qualified with\nthe table name in front of it. Looking at the entire message noted\nabove the list of other dbs that support it is now Oracle, Sybase,\nMS-SQL and mysql. If \"other dbs\" ends up the equivilent of \"everything\nbut PostgreSQL\" then which one is non-standard?\n\n\n\n\n>\n> <insert statement> ::=\n> INSERT INTO <table name>\n> <insert columns and source>\n>\n> <insert columns and source> ::=\n> [ <left paren> <insert column list> <right paren> ]\n> <query expression>\n> | DEFAULT VALUES\n>\n> <insert column list> ::= <column name list>\n>\n> <column name list> ::=\n> <column name> [ { <comma> <column name> }... ]\n>\n> <column name> ::= <identifier>\n>\n> I'm not particularly excited about supporting non-SQL variant syntaxes\n> that add no functionality.\n>\n> \t\t\tregards, tom lane\n> --\n> Rod Taylor\n>\n> This message represents the official view of the voices in my head\n>\n> ----- Original Message -----\n> From: \"Vince Vielhaber\" <vev@michvhf.com>\n> To: \"Rod Taylor\" <rbt@zort.ca>\n> Cc: \"Peter Eisentraut\" <peter_e@gmx.net>;\n> <pgsql-hackers@postgreSQL.org>\n> Sent: Thursday, March 14, 2002 9:08 AM\n> Subject: Re: [HACKERS] insert statements\n>\n>\n> > On Thu, 14 Mar 2002, Rod Taylor wrote:\n> >\n> > > Why not send in your changes to PostNuke along with the\n> appropriate\n> > > section from the SQL specs?\n> > >\n> > > Surely they'll apply a reasoned patch which improves conformance\n> to\n> > > the SQL standard and doesn't break anything in the process. 
I'd\n> > > suspect both SyBase, and MySQL can also take insert into foo (a)\n> as\n> > > well.\n> >\n> > Look below, I showed both syntaxes with Sybase. Since I don't have\n> a\n> > copy of the SQL specs I can't send them the appropriate section or I\n> > would have already. Care to forward that appropriate section?\n> >\n> >\n> > > --\n> > > Rod Taylor\n> > >\n> > > This message represents the official view of the voices in my head\n> > >\n> > > ----- Original Message -----\n> > > From: \"Vince Vielhaber\" <vev@michvhf.com>\n> > > To: \"Peter Eisentraut\" <peter_e@gmx.net>\n> > > Cc: <pgsql-hackers@postgreSQL.org>\n> > > Sent: Thursday, March 14, 2002 8:29 AM\n> > > Subject: Re: [HACKERS] insert statements\n> > >\n> > >\n> > > > On Wed, 13 Mar 2002, Peter Eisentraut wrote:\n> > > >\n> > > > > Vince Vielhaber writes:\n> > > > >\n> > > > > > For example:\n> > > > > >\n> > > > > > insert into foo(foo.a) values(1);\n> > > > > >\n> > > > > > fails because the table name is used. Update statements\n> also\n> > > include the\n> > > > > > table name. Both fail. Does anyone know of a workaround?\n> > > > >\n> > > > > Completely loudly to whomever wrote that SQL. It's completely\n> > > > > non-standard.\n> > > > >\n> > > > > (The implication I'm trying to make is that there's no way to\n> make\n> > > > > PostgreSQL accept that statement. Adding this as an extension\n> has\n> > > been\n> > > > > rejected in the past.)\n> > > >\n> > > > I'm now wondering why it was rejected. I couldn't try this last\n> > > nite\n> > > > so I just tried it now. 
Here's with Sybase 11.0.3.3 :\n> > > >\n> > > > 1> create table foo(a int)\n> > > > 2> go\n> > > > 1> insert into foo(a) values(1)\n> > > > 2> go\n> > > > (1 row affected)\n> > > > 1> insert into foo(foo.a) values(2)\n> > > > 2> go\n> > > > (1 row affected)\n> > > > 1>\n> > > >\n> > > > And I suspect more than just mysql and sybase accept either\n> syntax.\n> > > > Right now I'm modifying postnuke but that's only a short term\n> > > solution,\n> > > > and I wouldn't want to add it to PostgreSQL either 'cuze if it\n> > > remains\n> > > > rejected that would hamper upgrades. ROCK --> ME <-- HARD PLACE\n> > > :)\n> > > > There are really no other decent CMSs available that support\n> > > PostgreSQL.\n> > > >\n> > > > Vince.\n> > > > --\n> > > >\n> > >\n> ======================================================================\n> > > ====\n> > > > Vince Vielhaber -- KA8CSH email: vev@michvhf.com\n> > > http://www.pop4.net\n> > > > 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> > > > Online Campground Directory\n> http://www.camping-usa.com\n> > > > Online Giftshop Superstore\n> http://www.cloudninegifts.com\n> > > >\n> > >\n> ======================================================================\n> > > ====\n> > > >\n> > > >\n> > > >\n> > > >\n> > > > ---------------------------(end of\n> > > broadcast)---------------------------\n> > > > TIP 6: Have you searched our list archives?\n> > > >\n> > > > http://archives.postgresql.org\n> > > >\n> > >\n> > >\n> >\n> >\n> > Vince.\n> > --\n> >\n> ======================================================================\n> ====\n> > Vince Vielhaber -- KA8CSH email: vev@michvhf.com\n> http://www.pop4.net\n> > 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> > Online Campground Directory http://www.camping-usa.com\n> > Online Giftshop Superstore http://www.cloudninegifts.com\n> >\n> ======================================================================\n> ====\n> >\n> >\n> >\n> >\n>\n>\n\n\nVince.\n-- 
\n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 14 Mar 2002 09:39:26 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "Re: insert statements"
},
{
"msg_contents": "Out of curiosity, does SyBase allow you to qualify it with\nschema.table.column?\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Vince Vielhaber\" <vev@michvhf.com>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Peter Eisentraut\" <peter_e@gmx.net>;\n<pgsql-hackers@postgreSQL.org>\nSent: Thursday, March 14, 2002 9:39 AM\nSubject: Re: [HACKERS] insert statements\n\n\n> On Thu, 14 Mar 2002, Rod Taylor wrote:\n>\n> > As snipped from:\n> > http://archives.postgresql.org/pgsql-bugs/2000-10/msg00030.php\n(All\n> > my stuff is in paper form)\n> > What's your definition of \"other dbs\"? The above statement is\nquite\n> > clearly in violation of the SQL92 and SQL99 specifications:\n>\n> And nowhere does it say that <column name> cannot be qualified with\n> the table name in front of it. Looking at the entire message noted\n> above the list of other dbs that support it is now Oracle, Sybase,\n> MS-SQL and mysql. If \"other dbs\" ends up the equivilent of\n\"everything\n> but PostgreSQL\" then which one is non-standard?\n>\n>\n>\n>\n> >\n> > <insert statement> ::=\n> > INSERT INTO <table name>\n> > <insert columns and source>\n> >\n> > <insert columns and source> ::=\n> > [ <left paren> <insert column list> <right\nparen> ]\n> > <query expression>\n> > | DEFAULT VALUES\n> >\n> > <insert column list> ::= <column name list>\n> >\n> > <column name list> ::=\n> > <column name> [ { <comma> <column name> }... 
]\n> >\n> > <column name> ::= <identifier>\n> >\n> > I'm not particularly excited about supporting non-SQL variant\nsyntaxes\n> > that add no functionality.\n> >\n> > regards, tom lane\n> > --\n> > Rod Taylor\n> >\n> > This message represents the official view of the voices in my head\n> >\n> > ----- Original Message -----\n> > From: \"Vince Vielhaber\" <vev@michvhf.com>\n> > To: \"Rod Taylor\" <rbt@zort.ca>\n> > Cc: \"Peter Eisentraut\" <peter_e@gmx.net>;\n> > <pgsql-hackers@postgreSQL.org>\n> > Sent: Thursday, March 14, 2002 9:08 AM\n> > Subject: Re: [HACKERS] insert statements\n> >\n> >\n> > > On Thu, 14 Mar 2002, Rod Taylor wrote:\n> > >\n> > > > Why not send in your changes to PostNuke along with the\n> > appropriate\n> > > > section from the SQL specs?\n> > > >\n> > > > Surely they'll apply a reasoned patch which improves\nconformance\n> > to\n> > > > the SQL standard and doesn't break anything in the process.\nI'd\n> > > > suspect both SyBase, and MySQL can also take insert into foo\n(a)\n> > as\n> > > > well.\n> > >\n> > > Look below, I showed both syntaxes with Sybase. Since I don't\nhave\n> > a\n> > > copy of the SQL specs I can't send them the appropriate section\nor I\n> > > would have already. Care to forward that appropriate section?\n> > >\n> > >\n> > > > --\n> > > > Rod Taylor\n> > > >\n> > > > This message represents the official view of the voices in my\nhead\n> > > >\n> > > > ----- Original Message -----\n> > > > From: \"Vince Vielhaber\" <vev@michvhf.com>\n> > > > To: \"Peter Eisentraut\" <peter_e@gmx.net>\n> > > > Cc: <pgsql-hackers@postgreSQL.org>\n> > > > Sent: Thursday, March 14, 2002 8:29 AM\n> > > > Subject: Re: [HACKERS] insert statements\n> > > >\n> > > >\n> > > > > On Wed, 13 Mar 2002, Peter Eisentraut wrote:\n> > > > >\n> > > > > > Vince Vielhaber writes:\n> > > > > >\n> > > > > > > For example:\n> > > > > > >\n> > > > > > > insert into foo(foo.a) values(1);\n> > > > > > >\n> > > > > > > fails because the table name is used. 
Update statements\n> > also\n> > > > include the\n> > > > > > > table name. Both fail. Does anyone know of a\nworkaround?\n> > > > > >\n> > > > > > Completely loudly to whomever wrote that SQL. It's\ncompletely\n> > > > > > non-standard.\n> > > > > >\n> > > > > > (The implication I'm trying to make is that there's no way\nto\n> > make\n> > > > > > PostgreSQL accept that statement. Adding this as an\nextension\n> > has\n> > > > been\n> > > > > > rejected in the past.)\n> > > > >\n> > > > > I'm now wondering why it was rejected. I couldn't try this\nlast\n> > > > nite\n> > > > > so I just tried it now. Here's with Sybase 11.0.3.3 :\n> > > > >\n> > > > > 1> create table foo(a int)\n> > > > > 2> go\n> > > > > 1> insert into foo(a) values(1)\n> > > > > 2> go\n> > > > > (1 row affected)\n> > > > > 1> insert into foo(foo.a) values(2)\n> > > > > 2> go\n> > > > > (1 row affected)\n> > > > > 1>\n> > > > >\n> > > > > And I suspect more than just mysql and sybase accept either\n> > syntax.\n> > > > > Right now I'm modifying postnuke but that's only a short\nterm\n> > > > solution,\n> > > > > and I wouldn't want to add it to PostgreSQL either 'cuze if\nit\n> > > > remains\n> > > > > rejected that would hamper upgrades. 
ROCK --> ME <-- HARD\nPLACE\n> > > > :)\n> > > > > There are really no other decent CMSs available that support\n> > > > PostgreSQL.\n> > > > >\n> > > > > Vince.\n> > > > > --\n> > > > >\n> > > >\n> >\n======================================================================\n> > > > ====\n> > > > > Vince Vielhaber -- KA8CSH email: vev@michvhf.com\n> > > > http://www.pop4.net\n> > > > > 56K Nationwide Dialup from $16.00/mo at Pop4\nNetworking\n> > > > > Online Campground Directory\n> > http://www.camping-usa.com\n> > > > > Online Giftshop Superstore\n> > http://www.cloudninegifts.com\n> > > > >\n> > > >\n> >\n======================================================================\n> > > > ====\n> > > > >\n> > > > >\n> > > > >\n> > > > >\n> > > > > ---------------------------(end of\n> > > > broadcast)---------------------------\n> > > > > TIP 6: Have you searched our list archives?\n> > > > >\n> > > > > http://archives.postgresql.org\n> > > > >\n> > > >\n> > > >\n> > >\n> > >\n> > > Vince.\n> > > --\n> > >\n> >\n======================================================================\n> > ====\n> > > Vince Vielhaber -- KA8CSH email: vev@michvhf.com\n> > http://www.pop4.net\n> > > 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> > > Online Campground Directory\nhttp://www.camping-usa.com\n> > > Online Giftshop Superstore\nhttp://www.cloudninegifts.com\n> > >\n> >\n======================================================================\n> > ====\n> > >\n> > >\n> > >\n> > >\n> >\n> >\n>\n>\n> Vince.\n> --\n>\n======================================================================\n====\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com\nhttp://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n>\n======================================================================\n====\n>\n>\n>\n>\n\n",
"msg_date": "Thu, 14 Mar 2002 09:51:40 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: insert statements"
},
{
"msg_contents": "On Thu, 14 Mar 2002, Rod Taylor wrote:\n\n> Out of curiosity, does SyBase allow you to qualify it with\n> schema.table.column?\n\nJust tried it... Yes.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 14 Mar 2002 09:57:24 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "Re: insert statements"
},
{
"msg_contents": "\nOn Thu, 14 Mar 2002, Vince Vielhaber wrote:\n\n> On Thu, 14 Mar 2002, Rod Taylor wrote:\n>\n> > As snipped from:\n> > http://archives.postgresql.org/pgsql-bugs/2000-10/msg00030.php (All\n> > my stuff is in paper form)\n> > What's your definition of \"other dbs\"? The above statement is quite\n> > clearly in violation of the SQL92 and SQL99 specifications:\n>\n> And nowhere does it say that <column name> cannot be qualified with\n> the table name in front of it. Looking at the entire message noted\n\nAFAICS periods are not valid in identifiers that are not double\nquoted (section 5.2 has the rules on regular identifiers and delimited\nones)\n\n <regular identifier> ::= <identifier body>\n\n <identifier body> ::=\n <identifier start> [ { <underscore> | <identifier part> }... ]\n\n\n <identifier start> ::= !! See the Syntax Rules\n\n <identifier part> ::=\n <identifier start>\n | <digit>\nidentifier start is a simple latin letter, a letter in the character\nrepertoire that's in use, a syllable in the repertoire or an ideograph in\nthe repertoire.\n\nidentifier is defined as either a regular identifier or a delimited one\n(ie double quoted). So column name cannot contain periods.\n\nThat being said, is this something that's worth adding due to general\nusage by other systems?\n\n\n",
"msg_date": "Thu, 14 Mar 2002 08:18:35 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: insert statements"
},
{
"msg_contents": "On Thu, 14 Mar 2002, Stephan Szabo wrote:\n\n>\n> <identifier start> ::= !! See the Syntax Rules\n>\n> <identifier part> ::=\n> <identifier start>\n> | <digit>\n> identifier start is a simple latin letter, a letter in the character\n> repertoire that's in use, a syllable in the repertoire or an ideograph in\n> the repertoire.\n>\n> identifier is defined as either a regular identifier or a delimited one\n> (ie double quoted). So column name cannot contain periods.\n>\n> That being said, is this something that's worth adding due to general\n> usage by other systems?\n\nIn an odd way, I guess that's what I'm asking. At what point is it us\nthat's non-standard?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 14 Mar 2002 11:27:42 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "Re: insert statements"
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n>> What's your definition of \"other dbs\"? The above statement is quite\n>> clearly in violation of the SQL92 and SQL99 specifications:\n\n> And nowhere does it say that <column name> cannot be qualified with\n> the table name in front of it.\n\nAu contraire, that is EXACTLY what that bit of BNF is saying. If\nthey'd meant to allow this construction then the BNF would refer to\n<qualified name>, not just <identifier>.\n\n> Looking at the entire message noted\n> above the list of other dbs that support it is now Oracle, Sybase,\n> MS-SQL and mysql. If \"other dbs\" ends up the equivilent of \"everything\n> but PostgreSQL\" then which one is non-standard?\n\nOut of curiosity, what do these guys do if I try the obvious\n\n\tinsert into foo (bar.col) ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Mar 2002 12:39:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: insert statements "
},
{
"msg_contents": "On Thu, 14 Mar 2002, Tom Lane wrote:\n\n> Vince Vielhaber <vev@michvhf.com> writes:\n> >> What's your definition of \"other dbs\"? The above statement is quite\n> >> clearly in violation of the SQL92 and SQL99 specifications:\n>\n> > And nowhere does it say that <column name> cannot be qualified with\n> > the table name in front of it.\n>\n> Au contraire, that is EXACTLY what that bit of BNF is saying. If\n> they'd meant to allow this construction then the BNF would refer to\n> <qualified name>, not just <identifier>.\n>\n> > Looking at the entire message noted\n> > above the list of other dbs that support it is now Oracle, Sybase,\n> > MS-SQL and mysql. If \"other dbs\" ends up the equivilent of \"everything\n> > but PostgreSQL\" then which one is non-standard?\n>\n> Out of curiosity, what do these guys do if I try the obvious\n>\n> \tinsert into foo (bar.col) ...\n\nLooks like Sybase ignores the bar:\n\n1> create table foo(a int)\n2> go\n1> insert into foo(bar.a) values(1)\n2> go\n(1 row affected)\n1> select * from foo\n2> go\n a\n -----------\n 1\n\n(1 row affected)\n1>\n\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 14 Mar 2002 13:20:06 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "Re: insert statements "
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> There are really no other decent CMSs available that support\n> PostgreSQL.\n\nbricolage.thepirtgroup.com/\n\nMike.\n",
"msg_date": "14 Mar 2002 21:33:27 -0500",
"msg_from": "Michael Alan Dorman <mdorman@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: insert statements"
},
{
"msg_contents": "> > > Looking at the entire message noted\n> > > above the list of other dbs that support it is now Oracle, Sybase,\n> > > MS-SQL and mysql. If \"other dbs\" ends up the equivilent of \"everything\n> > > but PostgreSQL\" then which one is non-standard?\n\nThe one(s) that intentionally violate or gratuitously extend published\nlanguage standards? ;)\n\n> Looks like Sybase ignores the bar:\n\n:)\n\nSo would you like to write the specification for this \"standard\nbehavior\"? We'll submit it for SQL200x :)\n\n - Thomas\n",
"msg_date": "Fri, 15 Mar 2002 09:36:50 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: insert statements"
},
{
"msg_contents": "Sorry for the previous sarcastic response.\n\nBut I *really* don't see the benefit of that <table>(<table>.<col>)\nsyntax. Especially when it cannot (?? we need a counterexample) lead to\nany additional interesting beneficial behavior.\n\n - Thomas\n",
"msg_date": "Fri, 15 Mar 2002 11:45:01 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: insert statements"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> \n> Looks like Sybase ignores the bar:\n> \n> 1> create table foo(a int)\n> 2> go\n> 1> insert into foo(bar.a) values(1)\n> 2> go\n> (1 row affected)\n> 1> select * from foo\n> 2> go\n> a\n> -----------\n> 1\n> \n> (1 row affected)\n> 1>\n> \n\nThis looks like a parser error to me. It probably only takes the\nlast bit of the name and ignores all the qualifiers...\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Mon, 18 Mar 2002 11:53:32 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: insert statements"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> \n> On Thu, 14 Mar 2002, Rod Taylor wrote:\n> \n> > Out of curiosity, does SyBase allow you to qualify it with\n> > schema.table.column?\n> \n> Just tried it... Yes.\n> \n\nWhat if you give it a bogus schema name? Does it error out or just\nignore it?\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Mon, 18 Mar 2002 12:00:10 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: insert statements"
},
{
"msg_contents": "On Mon, 18 Mar 2002, Fernando Nasser wrote:\n\n> Vince Vielhaber wrote:\n> >\n> > On Thu, 14 Mar 2002, Rod Taylor wrote:\n> >\n> > > Out of curiosity, does SyBase allow you to qualify it with\n> > > schema.table.column?\n> >\n> > Just tried it... Yes.\n> >\n>\n> What if you give it a bogus schema name? Does it error out or just\n> ignore it?\n\nIf I get a few mins before I leave I'll try it, but I would guess\nthat it ignores it because when I tried INSERT INTO foo(bar.a), bar\ndidn't exist and Sybase still accepted it.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 18 Mar 2002 12:09:11 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "Re: insert statements"
}
] |
[
{
"msg_contents": "Hi All,\n\nI remember Tom doing a benchmark that showed postgres spends a lot of \ntime parsing SQL statements. There was some mention of implementing a \nsort of ociparse and allowing the pre-parsing and binding of values to \nplaceholders in the SQL statement.\n\nI can't seem to find this on the TODO list. Is anyone working on such a \nthing? I seem to remember someone saying that the FE-BE protocol would \nhave to change, so maybe the discussion was stopped there? (*shrug*)\n\nThank you\n\nAshley Cambrell\n\n",
"msg_date": "Thu, 14 Mar 2002 16:16:19 +1100",
"msg_from": "Ashley Cambrell <ash@freaky-namuh.com>",
"msg_from_op": true,
"msg_subject": "Pre-preparing / parsing SQL statements"
},
{
"msg_contents": "> I remember Tom doing a benchmark that showed postgres spends a lot of \n> time parsing SQL statements. There was some mention of implementing a \n> sort of ociparse and allowing the pre-parsing and binding of values to \n> placeholders in the SQL statement.\n\nI'm very interested to read more about this. Does anyone have a URL for this?\nThanks.\n \n> I can't seem to find this on the TODO list. Is anyone working on such a \n> thing? I seem to remember some saying that the FE-BE protocol would \n> have to change, so maybe the discussion was stopped there? (*shurg*)\n\nSame remark as above.\n\n-- \nJean-Paul ARGUDO\n",
"msg_date": "Thu, 14 Mar 2002 10:23:11 +0100",
"msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>",
"msg_from_op": false,
"msg_subject": "Re: Pre-preparing / parsing SQL statements"
}
] |
[
{
"msg_contents": "Hi, all.\n\nI am experimenting with performance evaluation for some queries based on\nPostgreSQL.\nTo give fair conditions to each query, I try to clear the buffer of PostgreSQL\nbefore running each query.\nI think the following function in .../backend/storage/buffer/bufmgr.c seems\nto be designed\nfor such a purpose.\nBut the function seems to have a logical error in my opinion.\n\nvoid BufferPoolBlowaway()\n{\n1: int i;\n\n2: BufferSync();\n3: for (i = 1; i <= NBuffers; i++)\n4: {\n5: if (BufferIsValid(i))\n6: {\n7: while (BufferIsValid(i)) ReleaseBuffer(i);\n8: }\n9: BufTableDelete(&BufferDescriptors[i - 1]);\n }\n}\n\nLine 7 causes an infinite loop, I think.\nSo, what I did instead is the following:\n\nvoid BufferPoolBlowaway()\n{\n1: BufferDesc *bufHdr;\n2: int i;\n\n3: BufferSync();\n4: for (i = 1; i <= NBuffers; i++)\n5: {\n6: if (BufferIsValid(i))\n7: {\n8: bufHdr = &BufferDescriptors[i - 1];\n9: while (bufHdr->refcount > 0) ReleaseBuffer(i);\n10: }\n11: BufTableDelete(&BufferDescriptors[i - 1]);\n12: }\n}\n\nLines 1, 8, and 9 were added instead of the original to release buffers.\nIt works without any infinite loop, but I am not quite sure that my\nmodification is reasonable.\nCan anybody advise me about the modification?\n\nIn addition, I wonder whether the disk read/write operations via the buffer manager\nin PostgreSQL\nare free from the linux system buffer cache.\nIf not, does anyone know how to flush and initialize the linux system buffer\ncache?\n\nCheers.\n",
"msg_date": "Thu, 14 Mar 2002 11:59:48 -0000",
"msg_from": "\"Seung Hyun Jeong\" <jeongs@cs.man.ac.uk>",
"msg_from_op": true,
"msg_subject": "about BufferPoolBlowaway()"
},
{
"msg_contents": "\"Seung Hyun Jeong\" <jeongs@cs.man.ac.uk> writes:\n> I am experimenting on performance evaluation for some queries based on\n> PostgreSQL.\n> To give fair conditions to each queries, I try to clear buffer of PostgreSQL\n> before running each queries.\n> I think the following function in .../backend/storage/buffer/bufmgr.c seems\n> to be designed\n> for such a purpose.\n> But the function seems to have a logical error in my opinion.\n\nActually, BufferPoolBlowaway is so completely wrong-headed that it\nshould be removed entirely. You can't go around arbitrarily releasing\npins on buffers. The holder of the pin is going to crash or corrupt\ndata if you do.\n\nI'm not convinced that starting from an empty disk cache is a\nparticularly interesting performance measurement, but if you insist\non it: reboot and start the postmaster for each measurement. (Anything\nless than a reboot is an exercise in self-deception, since Postgres\nrelies on the kernel's disk cache quite as much as its own buffers.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Mar 2002 10:54:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: about BufferPoolBlowaway() "
}
] |
[
{
"msg_contents": "With '\\d table' I get the columns, types and modifiers. Also\nthe Primary key, Indexes etc are shown.\n\nBut if I want to know WHAT the primary key 'is pointing to',\nhow would I do that (ie, what is the primary key)?\n\nI saw an example now to show a foreign key on the 'Net but that's\nnot applicable here...\n-- \ncryptographic Albanian Nazi killed Ortega SDI $400 million in gold\nbullion NORAD [Hello to all my fans in domestic surveillance] Delta\nForce Treasury Uzi FSF nitrate quiche\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n",
"msg_date": "14 Mar 2002 14:00:32 +0100",
"msg_from": "Turbo Fredriksson <turbo@bayour.com>",
"msg_from_op": true,
"msg_subject": "'Following' the Primary key"
},
{
"msg_contents": "On Thu, 2002-03-14 at 13:00, Turbo Fredriksson wrote:\n> With '\\d table' I get the columns, types and modifiers. Also\n> the Primary key, Indexes etc are shown.\n> \n> But if I want to know WHAT the primary key 'is pointing to',\n> how would I do that (ie, what is the primary key)?\n\nJust do \\d again on the key index name:\n\nbray=# \\d org_contact\n Table \"org_contact\"\n Column | Type | Modifiers \n---------+-----------------------+-----------\n org | character varying(10) | not null\n contact | character varying(10) | not null\n role | text | not null\n address | integer | \nPrimary key: org_contact_pkey\nTriggers: RI_ConstraintTrigger_6933120,\n RI_ConstraintTrigger_6933114,\n RI_ConstraintTrigger_6933108\n\nbray=# \\d org_contact_pkey\n Index \"org_contact_pkey\"\n Column | Type \n---------+-----------------------\n org | character varying(10)\n contact | character varying(10)\nunique btree (primary key)\n\n\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"Let your light so shine before men, that they may see \n your good works, and glorify your Father which is in \n heaven.\" Matthew 5:16 \n\n",
"msg_date": "14 Mar 2002 13:19:19 +0000",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: 'Following' the Primary key"
},
{
"msg_contents": ">>>>> \"Oliver\" == Oliver Elphick <olly@lfix.co.uk> writes:\n\n Oliver> On Thu, 2002-03-14 at 13:00, Turbo Fredriksson wrote:\n >> With '\\d table' I get the columns, types and modifiers. Also\n >> the Primary key, Indexes etc are shown.\n >> \n >> But if I want to know WHAT the primary key 'is pointing to',\n >> how would I do that (ie, what is the primary key)?\n\n Oliver> Just do \\d again on the key index name:\n\n Oliver> bray=# \\d org_contact\n Oliver> bray=# \\d org_contact_pkey\n\nCool. Works fine in 7.2, but not 7.1.3 (which we're running on our\nproduction systems)...\n\nAny idea how to do this on 7.1.3?\n-- \njihad iodine subway arrangements 767 Cocaine 747 Waco, Texas [Hello to\nall my fans in domestic surveillance] terrorist security radar North\nKorea plutonium Semtex\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n",
"msg_date": "14 Mar 2002 14:28:37 +0100",
"msg_from": "Turbo Fredriksson <turbo@bayour.com>",
"msg_from_op": true,
"msg_subject": "Re: 'Following' the Primary key"
},
{
"msg_contents": "> Cool. Works fine in 7.2, but not 7.1.3 (which we're running on our\n> production systems)...\n> Any idea how to do this on 7.1.3?\n\ncontact=# \\d t_operation\n Table \"t_operation\"\n Attribute | Type | Modifier \n-----------+------------------------+----------------------------------------------------\n op_id | integer | not null default nextval('operation_id_seq'::text)\n op_date | date | not null\n op_dpt | character varying(50) | \n op_typ | character varying(50) | \n op_dsc | character varying(500) | \n cnx_id | integer | not null\nIndex: t_operation_pkey\n ^^^^^\n Default primary key index\n\n\ncontact=# \\d t_operation_pkey\nIndex \"t_operation_pkey\"\n Attribute | Type \n-----------+---------\n op_id | integer\nunique btree (primary key)\n^^^^^^ ^^^^^^^^^^^^^\n \nWatch for unique indices created with CREATE UNIQUE INDEX ...\n\nCheers,\n\n-- \nJean-Paul ARGUDO\n",
"msg_date": "Thu, 14 Mar 2002 14:57:10 +0100",
"msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>",
"msg_from_op": false,
"msg_subject": "Re: 'Following' the Primary key"
},
{
"msg_contents": "Quoting Oliver Elphick <olly@lfix.co.uk>:\n\n> On Thu, 2002-03-14 at 13:28, Turbo Fredriksson wrote:\n> > Oliver> Just do \\d again on the key index name:\n> > \n> > Oliver> bray=# \\d org_contact\n> > Oliver> bray=# \\d org_contact_pkey\n> > \n> > Cool. Works fine in 7.2, but not 7.1.3 (which we're running on our\n> > production systems)...\n> > \n> > Any idea how to do this on 7.1.3?\n> \n> psql -E tells me that the queries include this:\n\nI thought it was '-e', and that didn't give any output,\nso I never figured this out myself...\n\n> SELECT a.attname, format_type(a.atttypid, a.atttypmod),\n> a.attnotnull, a.atthasdef, a.attnum\n> FROM pg_class c, pg_attribute a\n> WHERE c.relname = 'org_contact_pkey'\n> AND a.attnum > 0 AND a.attrelid = c.oid\n> ORDER BY a.attnum;\n\nWorks like a charm, thanx!!\n-- \n$400 million in gold bullion Soviet Saddam Hussein supercomputer Waco,\nTexas Iran munitions PLO explosion Cuba congress Semtex BATF Treasury\nNSA\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n",
"msg_date": "14 Mar 2002 16:00:36 +0100",
"msg_from": "Turbo Fredriksson <turbo@bayour.com>",
"msg_from_op": true,
"msg_subject": "Re: 'Following' the Primary key"
},
{
"msg_contents": "Turbo Fredriksson wrote:\n>>>>>>\"Oliver\" == Oliver Elphick <olly@lfix.co.uk> writes:\n>>>>>>\n> \n> Oliver> On Thu, 2002-03-14 at 13:00, Turbo Fredriksson wrote:\n> >> With '\\d table' I get the columns, types and modifiers. Also\n> >> the Primary key, Indexes etc are shown.\n> >> \n> >> But if I want to know WHAT the primary key 'is pointing to',\n> >> how would I do that (ie, what is the primary key)?\n> \n> Oliver> Just do \\d again on the key index name:\n> \n> Oliver> bray=# \\d org_contact\n> Oliver> bray=# \\d org_contact_pkey\n> \n> Cool. Works fine in 7.2, but not 7.1.3 (which we're running on our\n> production systems)...\n> \n> Any idea how to do this on 7.1.3?\n> \n\nSee:\nhttp://www.brasileiro.net/postgres/cookbook/view-one-recipe.adp?recipe_id=36\n\nfor one possible way.\n\nJoe\n\n",
"msg_date": "Thu, 14 Mar 2002 09:40:36 -0800",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: 'Following' the Primary key"
}
] |
[
{
"msg_contents": "Just curious, and honestly I haven't looked, but is there any form of\ncompression between clients and servers? Has this been looked at?\n\nGreg",
"msg_date": "14 Mar 2002 08:43:58 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Client/Server compression?"
},
{
"msg_contents": "Greg Copeland wrote:\n> Just curious, and honestly I haven't looked, but is there any form of\n> compression between clients and servers? Has this been looked at?\n\nThis issue has never come up before. It is sort of like compressing an\nFTP session. No one really does that. Is there value in trying it with\nPostgreSQL?\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 14 Mar 2002 13:20:01 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Client/Server compression?"
},
{
"msg_contents": "Well, it occurred to me that if a large result set were to be identified\nbefore transport between a client and server, a significant amount of\nbandwidth may be saved by using a moderate level of compression. \nEspecially with something like result sets, which I tend to believe may\nlend itself well toward compression.\n\nUnlike FTP which may be transferring (and often is) previously\ncompressed data, raw result sets being transferred between the server and\na remote client, IMHO, would tend to compress rather well as I doubt\nmuch of it would be true random data.\n\nThis may be of value for users with low bandwidth connectivity to their\nservers or where bandwidth may already be at a premium.\n\nThe zlib exploit posting got me thinking about this.\n\nGreg\n\n\nOn Thu, 2002-03-14 at 12:20, Bruce Momjian wrote:\n> Greg Copeland wrote:\n> > Just curious, and honestly I haven't looked, but is there any form of\n> > compression between clients and servers? Has this been looked at?\n> \n> This issues has never come up before. It is sort of like compressing an\n> FTP session. No one really does that. Is there value in trying it with\n> PostgreSQL?\n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026",
"msg_date": "14 Mar 2002 12:32:04 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Re: Client/Server compression?"
},
{
"msg_contents": "Greg Copeland wrote:\n\nChecking application/pgp-signature: FAILURE\n-- Start of PGP signed section.\n> Well, it occurred to me that if a large result set were to be identified\n> before transport between a client and server, a significant amount of\n> bandwidth may be saved by using a moderate level of compression. \n> Especially with something like result sets, which I tend to believe may\n> lend it self well toward compression.\n> \n> Unlike FTP which may be transferring (and often is) previously\n> compressed data, raw result sets being transfered between the server and\n> a remote client, IMOHO, would tend to compress rather well as I doubt\n> much of it would be true random data.\n> \n\nI should have said compressing the HTTP protocol, not FTP.\n\n> This may be of value for users with low bandwidth connectivity to their\n> servers or where bandwidth may already be at a premium.\n\nBut don't slow links do the compression themselves, like PPP over a\nmodem?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 14 Mar 2002 14:35:38 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Client/Server compression?"
},
{
"msg_contents": "Bruce Momjian wrote:\n>\n> Greg Copeland wrote:\n> > Well, it occurred to me that if a large result set were to be identified\n> > before transport between a client and server, a significant amount of\n> > bandwidth may be saved by using a moderate level of compression.\n> > Especially with something like result sets, which I tend to believe may\n> > lend it self well toward compression.\n>\n> I should have said compressing the HTTP protocol, not FTP.\n>\n> > This may be of value for users with low bandwidth connectivity to their\n> > servers or where bandwidth may already be at a premium.\n>\n> But don't slow links do the compression themselves, like PPP over a\n> modem?\n\nYes, but that's packet level compression. You'll never get even close to the\nresult you can achieve compressing the set as a whole.\n\nSpeaking of HTTP, it's fairly common for web servers (Apache has mod_gzip)\nto gzip content before sending it to the client (which unzips it silently);\nespecially when dealing with somewhat static content (so it can be cached\nzipped). This can provide great bandwidth savings.\n\nI'm sceptical of the benefit such compression would provide in this setting\nthough. We're dealing with sets that would have to be compressed every time\n(no caching) which might be a bit expensive on a database server. Having it\nas a default off option for psql might be nice, but I wonder if it's worth\nthe time, effort, and cpu cycles.\n\n\n",
"msg_date": "Thu, 14 Mar 2002 15:03:39 -0500",
"msg_from": "\"Arguile\" <arguile@lucentstudios.com>",
"msg_from_op": false,
"msg_subject": "Re: Client/Server compression?"
},
{
"msg_contents": "\n\nBruce Momjian wrote:\n> \n> Greg Copeland wrote:\n> \n> Checking application/pgp-signature: FAILURE\n> -- Start of PGP signed section.\n> > Well, it occurred to me that if a large result set were to be identified\n> > before transport between a client and server, a significant amount of\n> > bandwidth may be saved by using a moderate level of compression.\n> > Especially with something like result sets, which I tend to believe may\n> > lend it self well toward compression.\n> >\n> > Unlike FTP which may be transferring (and often is) previously\n> > compressed data, raw result sets being transfered between the server and\n> > a remote client, IMOHO, would tend to compress rather well as I doubt\n> > much of it would be true random data.\n> >\n> \n> I should have said compressing the HTTP protocol, not FTP.\n> \n> > This may be of value for users with low bandwidth connectivity to their\n> > servers or where bandwidth may already be at a premium.\n> \n> But don't slow links do the compression themselves, like PPP over a\n> modem?\n\nYes, and not really. Modems have very very very small buffers, so the\ncompression is extremely ineffectual. Link-level compression can be\n*highly* effective in making client/server communication snappy, since\nfaster processors are tending to push the speed bottleneck onto the\nwire. We use HTTP Content-Encoding of gzip for our company and the\npostgis.refractions.net site, and save about 60% on all the text content\non the wire. For highly redundant data (like result sets) the savings\nwould be even greater. I have nothing but good things to say about\nclient/server compression.\n\n\n-- \n __\n /\n | Paul Ramsey\n | Refractions Research\n | Email: pramsey@refractions.net\n | Phone: (250) 885-0632\n \\_\n",
"msg_date": "Thu, 14 Mar 2002 12:08:11 -0800",
"msg_from": "Paul Ramsey <pramsey@refractions.net>",
"msg_from_op": false,
"msg_subject": "Re: Client/Server compression?"
},
{
"msg_contents": "On Thu, 2002-03-14 at 14:35, Bruce Momjian wrote:\n> Greg Copeland wrote:\n> \n> Checking application/pgp-signature: FAILURE\n> -- Start of PGP signed section.\n> > Well, it occurred to me that if a large result set were to be identified\n> > before transport between a client and server, a significant amount of\n> > bandwidth may be saved by using a moderate level of compression. \n> > Especially with something like result sets, which I tend to believe may\n> > lend it self well toward compression.\n> > \n> > Unlike FTP which may be transferring (and often is) previously\n> > compressed data, raw result sets being transfered between the server and\n> > a remote client, IMOHO, would tend to compress rather well as I doubt\n> > much of it would be true random data.\n> \n> I should have said compressing the HTTP protocol, not FTP.\n\nExcept that lots of people compress HTTP traffic (or rather should, if\nthey were smart). Bandwidth is much more expensive than CPU time, and\nmost browsers have built-in support for gzip-encoded data. Take a look\nat mod_gzip or mod_deflate (2 Apache modules) for more info on this.\n\nIMHO, compressing data would be valuable iff there are lots of people\nwith a low-bandwidth link between Postgres and their database clients.\nIn my experience, that is rarely the case. For example, people using\nPostgres as a backend for a dynamically generated website usually have\ntheir database on the same server (for a low-end site), or on a separate\nserver connected via 100mbit ethernet to a bunch of webservers. In this\nsituation, compressing the data between the database and the webservers\nwill just add more latency and increase the load on the database.\n\nPerhaps I'm incorrect though -- are there lots of people using Postgres\nwith a slow link between the database server and the clients?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "14 Mar 2002 15:14:31 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: Client/Server compression?"
},
{
"msg_contents": "You can get some tremendous gains by compressing HTTP sessions - mod_gzip\nfor Apache does this very well.\n\nI believe SlashDot saves in the order of 30% of their bandwidth by using\ncompression, as do sites like http://www.whitepages.com.au/ and\nhttp://www.yellowpages.com.au/\n\nThe mod_gzip trick is effectively very similar to what Greg is proposing. Of\ncourse, how often would you connect to your database over anything less than\na fast (100mbit+) LAN connection?\n\nIn any case the conversation regarding FE/BE protocol changes occurs\nfrequently, and this thread would certainly impact that protocol. Has any\nthought ever been put into using an existing standard such as HTTP instead\nof the current postgres proprietary protocol? There are a lot of advantages:\n\n* You could leverage the existing client libraries (java.net.URL etc) to\nmake writing PG clients (JDBC/ODBC/custom) an absolute breeze.\n\n* Result sets / server responses could be returned in XML.\n\n* The protocol handles extensions well (X-* headers)\n\n* Load balancing across a postgres cluster would be trivial with any number\nof software/hardware http load balancers.\n\n* The prepared statement work needs to hit the FE/BE protocol anyway...\n\nIf the project gurus thought this was worthwhile, I would certainly like to\nhave a crack at it.\n\nRegards,\n\nMark\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> Sent: Friday, 15 March 2002 6:36 AM\n> To: Greg Copeland\n> Cc: PostgresSQL Hackers Mailing List\n> Subject: Re: [HACKERS] Client/Server compression?\n>\n>\n> Greg Copeland wrote:\n>\n> Checking application/pgp-signature: FAILURE\n> -- Start of PGP signed section.\n> > Well, it occurred to me that if a large result set were to be identified\n> > before transport between a client and server, a significant amount of\n> > bandwidth may be saved by using a moderate level of 
compression.\n> > Especially with something like result sets, which I tend to believe may\n> > lend it self well toward compression.\n> >\n> > Unlike FTP which may be transferring (and often is) previously\n> > compressed data, raw result sets being transfered between the server and\n> > a remote client, IMOHO, would tend to compress rather well as I doubt\n> > much of it would be true random data.\n> >\n>\n> I should have said compressing the HTTP protocol, not FTP.\n>\n> > This may be of value for users with low bandwidth connectivity to their\n> > servers or where bandwidth may already be at a premium.\n>\n> But don't slow links do the compression themselves, like PPP over a\n> modem?\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n",
"msg_date": "Fri, 15 Mar 2002 07:26:44 +1100",
"msg_from": "\"Mark Pritchard\" <mark@tangent.net.au>",
"msg_from_op": false,
"msg_subject": "Re: Client/Server compression?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> This may be of value for users with low bandwidth connectivity to their\n>> servers or where bandwidth may already be at a premium.\n\n> But don't slow links do the compression themselves, like PPP over a\n> modem?\n\nEven if the link doesn't compress, shoving the feature into PG itself\nisn't necessarily the answer. I'd suggest running such a connection\nthrough an ssh tunnel, which would give you encryption as well as\ncompression.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Mar 2002 15:29:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Client/Server compression? "
},
{
"msg_contents": "On Thu, 2002-03-14 at 13:35, Bruce Momjian wrote:\n> Greg Copeland wrote:\n> \n> Checking application/pgp-signature: FAILURE\n> -- Start of PGP signed section.\n> > Well, it occurred to me that if a large result set were to be identified\n> > before transport between a client and server, a significant amount of\n> > bandwidth may be saved by using a moderate level of compression. \n> > Especially with something like result sets, which I tend to believe may\n> > lend it self well toward compression.\n> > \n> > Unlike FTP which may be transferring (and often is) previously\n> > compressed data, raw result sets being transfered between the server and\n> > a remote client, IMOHO, would tend to compress rather well as I doubt\n> > much of it would be true random data.\n> > \n> \n> I should have said compressing the HTTP protocol, not FTP.\n> \n> > This may be of value for users with low bandwidth connectivity to their\n> > servers or where bandwidth may already be at a premium.\n> \n> But don't slow links do the compression themselves, like PPP over a\n> modem?\n\n\nYes and no. Modem compression doesn't understand the nature of the data\nthat is actually flowing through it. As a result, a modem is going to\nspend an equal amount of time trying to compress the PPP/IP/NETBEUI\nprotocols as it does trying to compress the data contained within those\nprotocol envelopes. Furthermore, modems tend to have a very limited\namount of time to even attempt to compress, combined with the fact that\nthey have very limited buffer space, usually limits its ability to\nprovide effective compression. 
Because of these issues, it is not uncommon\nfor a modem to actually yield a larger compressed block than was the\ninput.\n\nI'd also like to point out that there are also other low speed\nconnections available which are in use which do not make use of modems\nas well as modems which do not support compression (long haul modems for\nexample).\n\nAs for your specific example of HTTP versus FTP, I would also like to\npoint out that it is becoming more and more common for gzip'd data to be\ntransported within the HTTP protocol whereby each end is explicitly\naware of the compression taking place on the link with knowledge of what\nto do with it.\n\nAlso, believe it or not, one of the common uses of SSH is to provide\nsession compression. It is not unheard of for people to disable the\nencryption to simply use it for a compression tunnel which also provides\nfor modest session obscurantism.\n\nGreg",
"msg_date": "14 Mar 2002 14:39:43 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Re: Client/Server compression?"
},
{
"msg_contents": "On Thu, 2002-03-14 at 14:14, Neil Conway wrote:\n> On Thu, 2002-03-14 at 14:35, Bruce Momjian wrote:\n> > Greg Copeland wrote:\n> > \n> > Checking application/pgp-signature: FAILURE\n> > -- Start of PGP signed section.\n> > > Well, it occurred to me that if a large result set were to be identified\n> > > before transport between a client and server, a significant amount of\n> > > bandwidth may be saved by using a moderate level of compression. \n> > > Especially with something like result sets, which I tend to believe may\n> > > lend it self well toward compression.\n> > > \n> > > Unlike FTP which may be transferring (and often is) previously\n> > > compressed data, raw result sets being transfered between the server and\n> > > a remote client, IMOHO, would tend to compress rather well as I doubt\n> > > much of it would be true random data.\n> > \n> > I should have said compressing the HTTP protocol, not FTP.\n> \n> Except that lots of people compress HTTP traffic (or rather should, if\n> they were smart). Bandwidth is much more expensive than CPU time, and\n> most browsers have built-in support for gzip-encoded data. Take a look\n> at mod_gzip or mod_deflate (2 Apache modules) for more info on this.\n> \n> IMHO, compressing data would be valuable iff there are lots of people\n> with a low-bandwidth link between Postgres and their database clients.\n> In my experience, that is rarely the case. For example, people using\n> Postgres as a backend for a dynamically generated website usually have\n> their database on the same server (for a low-end site), or on a separate\n> server connected via 100mbit ethernet to a bunch of webservers. 
In this\n> situation, compressing the data between the database and the webservers\n> will just add more latency and increase the load on the database.\n> \n> Perhaps I'm incorrect though -- are there lots of people using Postgres\n> with a slow link between the database server and the clients?\n> \n\n\nWhat about remote support of these databases where a VPN may not be\navailable? In my past experience, this was very common as many\ncompanies do not want to expose their database, even via a VPN, to the\noutside world, while allowing only modem access. Not to mention, road\nwarriors that may need to remotely support their databases may find\nvalue here too. Would they not?\n\n...I think I'm pretty well coming to the conclusion that it may be of\nsome value...even if only for a limited number of users.\n\n\nGreg",
"msg_date": "14 Mar 2002 14:43:50 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Re: Client/Server compression?"
},
{
"msg_contents": "On Thu, 2002-03-14 at 14:29, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> This may be of value for users with low bandwidth connectivity to their\n> >> servers or where bandwidth may already be at a premium.\n> \n> > But don't slow links do the compression themselves, like PPP over a\n> > modem?\n> \n> Even if the link doesn't compress, shoving the feature into PG itself\n> isn't necessarily the answer. I'd suggest running such a connection\n> through an ssh tunnel, which would give you encryption as well as\n> compression.\n> \n> \t\t\tregards, tom lane\n\nCouldn't the same be said for SSL support?\n\nI'd also like to point out that it's *possible* that this could also be\na speed boost under certain work loads where extra CPU is available as\nless data would have to be transfered through the OS, networking layers,\nand device drivers. Until zero copy transfers becomes common on all\nplatforms for all devices, I would think that it's certainly *possible*\nthat this *could* offer a possible improvement...well, perhaps a break\neven at any rate...\n\nSuch claims, again, given specific workloads for compressed file systems\nare not unheard off as less device I/O has to take place.\n\nGreg",
"msg_date": "14 Mar 2002 14:52:31 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Re: Client/Server compression?"
},
{
"msg_contents": "On the subject of client/server compression, does the server\ndecompress toast data before sending it to the client? If so, why\n(other than requiring modifications to the protocol)?\n\nOn the flip side, does/could the client toast insert/update data\nbefore sending it to the server?\n\n-Kyle\n",
"msg_date": "Thu, 14 Mar 2002 16:52:36 -0800",
"msg_from": "Kyle <kaf@nwlink.com>",
"msg_from_op": false,
"msg_subject": "Re: Client/Server compression? "
},
{
"msg_contents": "Kyle wrote:\n> On the subject of client/server compression, does the server\n> decompress toast data before sending it to the client? If so, why\n> (other than requiring modifications to the protocol)?\n> \n> On the flip side, does/could the client toast insert/update data\n> before sending it to the server?\n\nIt has to decompress it so the server functions can process it too. Hard\nto avoid that. Of course, in some cases, it doesn't need to be\nprocessed on the server, just passed, so it would have to be done\nconditionally.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 14 Mar 2002 20:43:55 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Client/Server compression?"
},
{
"msg_contents": "On Thu, 2002-03-14 at 14:03, Arguile wrote:\n\n[snip]\n\n> I'm sceptical of the benefit such compressions would provide in this setting\n> though. We're dealing with sets that would have to be compressed every time\n> (no caching) which might be a bit expensive on a database server. Having it\n> as a default off option for psql might be nice, but I wonder if it's worth\n> the time, effort, and cpu cycles.\n> \n\nI dunno. That's a good question. For now, I'm making what tends to be\na safe assumption (oops...that word), that most database servers will be\nI/O bound rather than CPU bound. *IF* that assumption holds true, it\nsounds like it may make even more sense to implement this. I do know\nthat in the past, I've seen 90+% compression ratios on many databases\nand 50% - 90% compression ratios on result sets using tunneled\ncompression schemes (which were compressing things other than datasets,\nwhich probably hurt overall compression ratios). Depending on the\nworkload and the available resources on a database system, it's possible\nthat latency could actually be reduced depending on where you measure\nthis. That is, do you measure latency as first packet back to remote or\nlast packet back to remote? If you use last packet, compression may\nactually win.\n\nMy current thoughts are to allow for enabled/disabled compression and\nvariable compression settings (1-9) within a database configuration. \nWorst case, it may be fun to implement, and I'm thinking there may\nactually be some surprises as an end result if it's done properly.\n\nIn looking at the communication code, it looks like only an 8k buffer is\nused. I'm currently looking to bump this up to 32k as most OS's tend to\nhave a sweet throughput spot with buffer sizes between 32k and 64k. \nOthers, depending on the devices in use, like even bigger buffers. 
\nBecause of the fact that this may be a minor optimization, especially on\na heavily loaded server, we may want to consider making this a\nconfigurable parameter.\n\nGreg",
"msg_date": "15 Mar 2002 12:47:20 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Re: Client/Server compression?"
},
{
"msg_contents": "On Thu, 2002-03-14 at 19:43, Bruce Momjian wrote:\n> Kyle wrote:\n> > On the subject on client/server compression, does the server\n> > decompress toast data before sending it to the client? Is so, why\n> > (other than requiring modifications to the protocol)?\n> > \n> > On the flip side, does/could the client toast insert/update data\n> > before sending it to the server?\n> \n> It has to decrypt it so the server functions can process it too. Hard\n> to avoid that. Of course, in some cases, it doesn't need to be\n> processed on the server, just passed, so it would have to be done\n> conditionally.\n> \n\nAlong those lines, it occurred to me if the compressor somehow knew the\ncardinality of the data rows involved with the result set being\nreturned, a compressor data dictionary (...think of it as a heads up on\npatterns to be looking for) could be created using the unique\ncardinality values which, I'm thinking, could dramatically improve the\nlevel of compression for data being transmitted.\n\nJust some food for thought. After all, these two seem to be somewhat\nrelated as you wouldn't want the communication layer attempting to\nrecompress data which was natively compressed and needed to be\ntransparently transmitted.\n\nGreg",
"msg_date": "15 Mar 2002 13:04:38 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Re: Client/Server compression?"
},
{
"msg_contents": "Greg Copeland wrote:\n> On Thu, 2002-03-14 at 14:03, Arguile wrote:\n>\n> [snip]\n>\n> > I'm sceptical of the benefit such compressions would provide in this setting\n> > though. We're dealing with sets that would have to be compressed every time\n> > (no caching) which might be a bit expensive on a database server. Having it\n> > as a default off option for psql might be nice, but I wonder if it's worth\n> > the time, effort, and cpu cycles.\n> >\n>\n> I dunno. That's a good question. For now, I'm making what tends to be\n> a safe assumption (oops...that word), that most database servers will be\n> I/O bound rather than CPU bound. *IF* that assumption holds true, it\n\n    If you have too much CPU idle time you wasted money by\n    oversizing the machine. And as soon as you add SORT BY to\n    your queries, you'll see some CPU used.\n\n    I only make the assumption that whenever there is a database\n    server, there is an application server as well (or multiple\n    of them). Scenarios that require direct end-user connectivity\n    to the database server (alas Access->MSSQL) should NOT be\n    encouraged.\n\n    The db and app should be very close together, coupled with a\n    dedicated backbone net. No need for encryption, and if volume\n    is a problem, gigabit is the answer.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#================================================== JanWieck@Yahoo.com #\n\n",
"msg_date": "Fri, 15 Mar 2002 14:18:34 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Client/Server compression?"
},
{
"msg_contents": "Greg Copeland wrote:\n> [cut]\n> My current thoughts are to allow for enabled/disabled compression and\n> variable compression settings (1-9) within a database configuration. \n> Worse case, it may be fun to implement and I'm thinking there may\n> actually be some surprises as an end result if it's done properly.\n> \n> [cut]\n>\n> Greg\n\n\nWouldn't Tom's suggestion of riding on top of ssh give similar\nresults? Anyway, it'd probably be a good proof of concept of whether\nor not it's worth the effort. And that brings up the question: how\nwould you measure the benefit? I'd assume you'd get a good cut in\nnetwork traffic, but you'll take a hit in cpu time. What's an\nacceptable tradeoff?\n\nThat's one reason I was thinking about the toast stuff. If the\nbackend could serve toast, you'd get an improvement in server to\nclient network traffic without the server spending cpu time on\ncompression since the data has previously been compressed.\n\nLet me know if this is feasible (or slap me if this is how things\nalready are): when the backend detoasts data, keep both copies in\nmemory. When it comes time to put data on the wire, instead of\nputting the whole enchilada down give the client the compressed toast\ninstead. And yeah, I guess this would require a protocol change to\nflag the compressed data. But it seems like a way to leverage work\nalready done.\n\n-kf\n\n",
"msg_date": "Fri, 15 Mar 2002 17:44:09 -0800",
"msg_from": "Kyle <kaf@nwlink.com>",
"msg_from_op": false,
"msg_subject": "Re: Client/Server compression?"
},
{
"msg_contents": "On Fri, 2002-03-15 at 19:44, Kyle wrote:\n[snip]\n\n> Wouldn't Tom's suggestion of riding on top of ssh would give similar\n> results? Anyway, it'd probably be a good proof of concept of whether\n> or not it's worth the effort. And that brings up the question: how\n> would you measure the benefit? I'd assume you'd get a good cut in\n> network traffic, but you'll take a hit in cpu time. What's an\n> acceptable tradeoff?\n\nGood question. I've been trying to think of meaningful testing methods,\nhowever, I can still think of reasons all day long where it's not an\nissue of a \"tradeoff\". Simply put, if you have a low bandwidth\nconnection, as long as there are extra cycles available on the server,\nwho really cares...except for the guy at the end of the slow connection.\n\nAs for SSH, well, that should be rather obvious. It often is simply not\navailable. While SSH is nice, I can think of many situations this is a\nwin/win. At least in business settings...where I'm assuming the goal is\nto get Postgres into. Also, along those lines, if SSH is the answer,\nthen surely the SSL support should be removed too...as SSH provides for\nencryption too. Simply put, removing SSL support makes about as much\nsense as asserting that SSH is the final compression solution.\n\nAlso, it keeps being stated that a tangible tradeoff between CPU and\nbandwidth must be realized. This is, of course, a false assumption. \nSimply put, if you need bandwidth, you need bandwidth. Its need is not\na function of CPU, rather, it's a lack of bandwidth. Having said that,\nI of course would still like to have something meaningful which reveals\nthe impact on CPU and bandwidth.\n\nI'm talking about something that would be optional. So, what's the cost\nof having a little extra optional code in place? The only issue, best I\ncan tell, is can it be implemented in a backward compatible manner.\n\n> \n> That's one reason I was thinking about the toast stuff. 
If the\n> backend could serve toast, you'd get an improvement in server to\n> client network traffic without the server spending cpu time on\n> compression since the data has previously been compressed.\n> \n> Let me know if this is feasible (or slap me if this is how things\n> already are): when the backend detoasts data, keep both copies in\n> memory. When it comes time to put data on the wire, instead of\n> putting the whole enchilada down give the client the compressed toast\n> instead. And yeah, I guess this would require a protocol change to\n> flag the compressed data. But it seems like a way to leverage work\n> already done.\n> \n\nI agree with that, however, I'm guessing that implementation would\nrequire a significantly larger effort than what I'm suggesting...then\nagain, probably because I'm not aware of all the code yet. Pretty much,\nthe basic implementation could be in place by the end of this weekend\nwith only a couple hours worth of work...and then, mostly because I\nstill don't know lots of the code. The changes you are talking about are\ngoing to require not only protocol changes but changes at several layers\nwithin the engine.\n\nOf course, something else to keep in mind is that using the TOAST\nsolution requires that TOAST already be in use. What I'm suggesting\nbenefits (size wise) all types of data being sent back to a client.\n\nGreg",
"msg_date": "15 Mar 2002 22:09:20 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Re: Client/Server compression?"
},
{
"msg_contents": "Greg Copeland <greg@CopelandConsulting.Net> writes:\n> I'm talking about something that would be optional. So, what's the cost\n> of having a little extra optional code in place?\n\nIt costs just as much in maintenance effort even if hardly anyone uses\nit. Actually, probably it costs *more*, since seldom-used features\ntend to break without being noticed until late beta or post-release,\nwhen it's a lot more painful to fix 'em.\n\nFWIW, I was not in favor of the SSL addition either, since (just as you\nsay) it does nothing that couldn't be done with an SSH tunnel. If I had\nsole control of this project I would rip out the SSL code, in preference\nto fixing its many problems. For your entertainment I will attach the\nsection of my private TODO list that deals with SSL problems, and you\nmay ask yourself whether you'd rather see that development time expended\non fixing a feature that really adds zero functionality, or on fixing\nthings that are part of Postgres' core functionality. (Also note that\nthis list covers *only* problems in libpq's SSL support. Multiply this\nby jdbc, odbc, etc to get an idea of what we'd be buying into to support\nour own encryption handling across-the-board.)\n\nThe short answer: we should be standing on the shoulders of the SSH\npeople, not reimplementing (probably badly) what they do well.\n\n\t\t\tregards, tom lane\n\n\nSSL support problems\n--------------------\n\nFix USE_SSL code in fe-connect: move to CONNECTION_MADE case, always\ndo initial connect() in nonblock mode. Per my msg 10/26/01 21:43\n\nEven better would be to be able to do the SSL negotiation in nonblock mode.\nSeems like it should be possible from looking at openssl man pages:\nSSL_connect is documented to work on a nonblock socket. 
Need to pay attention\nto SSL_WANT_READ vs WANT_WRITE return codes, however, to determine how to set\npolling flag.\n\nError handling for SSL connections is a joke in general, not just lack\nof attention to WANT READ/WRITE.\n\nNonblock socket operations are somewhat broken by SSL because of assumption\nthat library will only block waiting for read-ready. Under SSL it could\ntheoretically block waiting for write-ready, though that should be a\nrelatively small problem normally. Possibly add some API to distinguish which\ncase applies? Not clear that it's needed, since worst possible penalty is a\nbusy-wait loop, and it doesn't seem probable that we could ever so block.\n(Sure? COPY IN could well block that way ... of course COPY IN hardly works\nin nonblock mode anyway ...)\n\nFix docs that probably say SSL-enabled lib doesn't support nonblock.\nNote extreme sloppiness of SSL docs in general, eg the PQREQUIRESSL env var\nis not docd...\n\nOught to add API to set allow_ssl_try = FALSE to suppress initial SSL try in\nan SSL-enabled lib. (Perhaps requiressl = -1? Probably a separate var is\nbetter.)\n\nAlso fix connectDB so that params are accepted but ignored if no SSL support\n--- or perhaps better, should requiressl=1 fail in that case?\n\nConnection restart after protocol error is a tad ugly: closing/reopening sock\nis bad for callers, cf note at end of PQconnectPoll, if the sock # should\nhappen to have changed. Fortunately that's just a legacy-server case\n(pre-7.0)\n",
"msg_date": "Sat, 16 Mar 2002 15:38:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Client/Server compression? "
},
{
"msg_contents": "Some questions for you at the end of this Tom...which I'd been thinking\nabout...and you touched on...hey, you did tell me to ask! :)\n\nOn Sat, 2002-03-16 at 14:38, Tom Lane wrote:\n> Greg Copeland <greg@CopelandConsulting.Net> writes:\n> > I'm talking about something that would be optional. So, what's the cost\n> > of having a little extra optional code in place?\n> \n> It costs just as much in maintenance effort even if hardly anyone uses\n> it. Actually, probably it costs *more*, since seldom-used features\n> tend to break without being noticed until late beta or post-release,\n> when it's a lot more painful to fix 'em.\n\nThat wasn't really what I was asking...\n\n\n> \n> FWIW, I was not in favor of the SSL addition either, since (just as you\n> say) it does nothing that couldn't be done with an SSH tunnel. If I had\n> sole control of this project I would rip out the SSL code, in preference\n\n\nExcept we seemingly don't see eye to eye on it. SSH just is not very\nuseful in many situations simply because it may not always be\navailable. Now, bring Win32 platforms into the mix and SSH really isn't\nan option at all...not without bringing extra boxes to the mix. Ack!\n\nI guess I don't really understand why you seem to feel that items such\nas compression and encryption don't belong...compression I can sorta\nsee, however, without supporting evidence one way or another, I guess I\ndon't understand resistance without knowing the whole picture. I would\ncertainly hope the jury would be out on this until some facts to paint a\npicture are at least available. Encryption, on the other hand, clearly\nDOES belong in the database (and not just I think so) and should not be\nthrust onto other applications, such as SSH, when it may not be\navailable or politically risky to use. That of course, doesn't even\naddress the issues of where it may be impractical for some users, types\nof applications or platforms. 
SSH is a fine application which addresses\nmany issues, however, it certainly is not an end-all do all\nencryption/compression solution. Does that mean SSL should be the\nnative encryption solution? I'm not sure I have an answer to that,\nhowever, encryption should be natively available IMHO.\n\nAs for the laundry list of items...those are simply issues that should\nhave been worked out prior to it being merged into the code...it migrated\nto being a maintenance issue. That's not really applicable to most\nsituations if an implementation is well coded and complete prior to it\nbeing merged into the code base. Lastly, stating that a maintenance\ncost of one implementation is a shared cost for all unrelated sections\nof code is naive at best. Generally speaking, the level of maintenance\nis inversely proportional to the quality of a specific design and\nimplementation.\n\nAt this point in time, I'm fairly sure I'm going to code up a\ncompression layer to play with. If it never gets accepted, I'm pretty\nsure I'm okay with that. I guess if it's truly worthy, it can always\nreside in the contributed section. On the other hand, if value can be\nfound in such an implementation and all things being equal, I guess I\nwouldn't understand why it wouldn't be accepted.\n\n\n================================\nquestions\n================================\n\nIf I implement compression between the BE and the FE libpq, does that\nmean that it needs to be added to the other interfaces as well? Do all\ninterfaces (JDBC, ODBC, etc) receive the same BE messages?\n\nIs there any documentation which covers the current protocol\nimplementation? Specifically, I'm interested in the negotiation\nsection...I have been reading the code already.\n\nHave you never had to support a database via modem? I have and I can\ntell you, compression was God-sent. You do realize that this situation\nis more common than you seem to think it is? 
Maybe not for Postgres\ndatabases now...but for databases in general.\n\nGreg",
"msg_date": "16 Mar 2002 15:17:41 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": true,
"msg_subject": "Re: Client/Server compression?"
},
{
"msg_contents": "You can also use stunnel for SSL. Preferable to having SSL in postgresql \nI'd think.\n\nCheerio,\nLink.\n\nAt 03:38 PM 3/16/02 -0500, Tom Lane wrote:\n\n>FWIW, I was not in favor of the SSL addition either, since (just as you\n>say) it does nothing that couldn't be done with an SSH tunnel. If I had\n\n\n",
"msg_date": "Sun, 17 Mar 2002 19:47:33 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: Client/Server compression? "
},
{
"msg_contents": "Greg Copeland <greg@copelandconsulting.net> writes:\n> Except we seemingly don't see eye to eye on it. SSH just is not very\n> useful in many situations simply because it may not always be\n> available. Now, bring Win32 platforms into the mix and SSH really isn't\n> an option at all...not without bringing extra boxes to the mix. Ack!\n\nNot so. See http://www.openssh.org/windows.html.\n\n> If I implement compression between the BE and the FE libpq, does that\n> mean that it needs to be added to the other interfaces as well?\n\nYes.\n\n> Is there any documentation which covers the current protocol\n> implementation?\n\nYes. See the protocol chapter in the developer's guide.\n\n> Have you never had to support a database via modem?\n\nYes. ssh has always worked fine for me ;-)\n\n> You do realize that this situation\n> if more common that you seem to think it is?\n\nI was not the person claiming that low-bandwidth situations are of no\ninterest. I was the person claiming that the Postgres project should\nnot expend effort on coding and maintaining our own solutions, when\nthere are perfectly good solutions available that we can sit on top of.\n\nYes, a solution integrated into Postgres would be easier to use and\nperhaps a bit more efficient --- but do the incremental advantages of\nan integrated solution justify the incremental cost? I don't think so.\nThe advantages seem small to me, and the long-term costs not so small.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 17 Mar 2002 12:47:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Client/Server compression? "
}
] |
[
{
"msg_contents": "I don't fully understand the xlog files or WAL records but...\n\nWhy isn't the writing of the WAL record based on the CACHE value of the\nsequence? If a request to nextval() can't be satisfied by the cache,\nthe sequence on disk should be updated resulting in a WAL record being\nwritten.\n\nIf two sessions are accessing the sequence, they will likely end up not\nwriting sequential values as they have both taken a chunk of values by\ncalling nextval() but the effect of this could be controlled by the user\nby selecting an acceptable value for CACHE. If I don't mind having\nsession 1 write records 1-10 while session 2 interleaves those with\nrecords 11-20, I should set my cache to 10. If I want my id's to map to\ninsertion time as closely as possible I should set the cache lower (or\nNOCACHE, is that an option?).\n\nI'm concerned that the discussion here has been of the opinion that\nsince no records were written to the database using the value retrieved\nfrom the sequence that no damage has been done. We are using database\nsequences to get unique identifiers for things outside the database. If\na sequence could ever under any circumstances reissue a value, this\ncould be damaging to the integrity of our software.\n",
"msg_date": "Thu, 14 Mar 2002 10:26:50 -0500",
"msg_from": "\"Tom Pfau\" <T.Pfau@emCrit.com>",
"msg_from_op": true,
"msg_subject": "Re: Bug #613: Sequence values fall back to previously chec"
},
{
"msg_contents": "On Thu, 14 Mar 2002, Tom Pfau wrote:\n\n> I don't fully understand the xlog files or WAL records but...\n> \n> Why isn't the writing of the WAL record based on the CACHE value of the\n> sequence? If a request to nextval() can't be satisfied by the cache,\n> the sequence on disk should be updated resulting in a WAL record being\n> written.\n>\n> If two sessions are accessing the sequence, they will likely end up not\n> writing sequential values as they have both taken a chunk of values by\n> calling nextval() but the effect of this could be controlled by the user\n> by selecting an acceptable value for CACHE. \n\n\nI was thinking that too, just leave it up to the end user to decide the\nlevel of performance gain they want. But the cool thing about writing\nahead to the WAL when compared to CACHE is that the on disk copy is\nadvanced ahead of the cached value so that if you have a cache value of\n1 you still get time ordered sequences from multiple backends AND you're\nonly writing to the WAL once every 32 nextval's -- though the write\nahead should really be based on a multiple of CACHE since if your cache\nvalue is 32 then you're not getting any benefit from the WAL savings.\n\n> If I don't mind having\n> session 1 write records 1-10 while session 2 interleaves those with\n> records 11-20, I should set my cache to 10. If I want my id's to map to\n> insertion time as closely as possible I should set the cache lower (or\n> NOCACHE, is that an option?).\n\nThe CACHE value defaults to 1 which means no cache (each time nextval is \ncalled it only grabs 1 value). \n\n> I'm concerned that the discussion here has been of the opinion that\n> since no records were written to the database using the value retrieved\n> from the sequence that no damage has been done. We are using database\n> sequences to get unique identifiers for things outside the database. 
If\n> a sequence could ever under any circumstances reissue a value, this\n> could be damaging to the integrity of our software.\n\nAbsolutely, we use sequences the same way. And the problem exhibits\nitself regardless of whether data is being inserted or not, and\nindependently of CACHE value. So this has to be fixed for both\nscenarios.\n\n-- Ben\n\n",
"msg_date": "Thu, 14 Mar 2002 09:29:18 -0600",
"msg_from": "<bgrimm@zaeon.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug #613: Sequence values fall back to previously chec"
},
{
"msg_contents": "\"Tom Pfau\" <T.Pfau@emCrit.com> writes:\n> I'm concerned that the discussion here has been of the opinion that\n> since no records were written to the database using the value retrieved\n> from the sequence that no damage has been done.\n\nUm, you certainly didn't hear me saying that ;-)\n\nThere are two different bugs involved here. One is the no-WAL-flush-\nif-transaction-is-only-nextval problem. AFAIK everyone agrees we must\nfix that. The other issue is this business about \"logging ahead\"\n(to reduce the number of WAL records written) not interacting correctly\nwith checkpoints. What we're arguing about is exactly how to fix that\npart.\n\n> We are using database\n> sequences to get unique identifiers for things outside the database. If\n> a sequence could ever under any circumstances reissue a value, this\n> could be damaging to the integrity of our software.\n\nIf you do a SELECT nextval() and then use the returned value externally\n*without waiting for a commit acknowledgement*, then I think you are\nrisking trouble; there's no guarantee that the WAL record (if one is\nneeded) has hit disk yet, and so a crash could roll back the sequence.\n\nThis isn't an issue for a SELECT nextval() standing on its own ---\nAFAIK the result will not be transmitted to the client until after the\ncommit happens. But it would be an issue for a select executed inside\na transaction block (begin/commit).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Mar 2002 12:27:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug #613: Sequence values fall back to previously chec "
},
{
"msg_contents": "I noticed a message asking if this scenario was consistent with the\nother reports, and yes it is. We have seen this occurring on our system\nwith versions as old as 7.0.\n\nGlad to see someone has finally nailed this one.\n\nDave\n\n",
"msg_date": "Thu, 14 Mar 2002 12:54:45 -0500",
"msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug #613: Sequence values fall back to previously chec "
},
{
"msg_contents": "\"Dave Cramer\" <dave@fastcrypt.com> writes:\n> I noticed a message asking if this scenario was consistent with the\n> other reports, and yes it is. We have seen this occuring on our system\n> with versions as old as 7.0.\n\nGiven that these are WAL bugs, they could not predate 7.1.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Mar 2002 14:33:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug #613: Sequence values fall back to previously chec "
},
{
"msg_contents": "On Thu, 14 Mar 2002, Tom Lane wrote:\n> \n> If you do a SELECT nextval() and then use the returned value externally\n> *without waiting for a commit acknowledgement*, then I think you are\n> risking trouble; there's no guarantee that the WAL record (if one is\n> needed) has hit disk yet, and so a crash could roll back the sequence.\n> \n> This isn't an issue for a SELECT nextval() standing on its own ---\n> AFAIK the result will not be transmitted to the client until after the\n> commit happens. But it would be an issue for a select executed inside\n> a transaction block (begin/commit).\n> \n\nThe behavior of SELECT nextval() should not be conditional on being in or \nout of a transaction block. What you're implying by saying that is \nthat it would be possible to roll back an uncommitted call to nextval().\nAm I missing some terminology? \n\nI think I finally realized why my old patch that forces a log right off \nthe bat works to fix at least part of the problem. When the database \nis shut down properly all of the sequences that are in memory are written \nback to disk in their state at that time. But the problem with that is\nthat their state at that time can have log_cnt > 0. This is why, after\nstartup, the sequence in memory is 'behind' the one on disk: the \ncode sees log > fetch and doesn't log. When you really think about it,\nlog_cnt should not be part of the sequence record at all since there\nis never a valid case for storing a log_cnt on disk with a value other \nthan 0. \n\nMaybe the purpose for the on disk value of log_cnt should be changed?\nIt could be the value used in place of the static SEQ_LOG_VALS, which \ncould then be definable on a per sequence basis. And then log_cnt \ncould be moved into elm->log_cnt. Anyway, that's just a thought.\n\nHere's my latest patch to work around the problem. If there is a way \nto prevent log_cnt from being written out with a value greater than\nzero, that would be better than this. 
With this behavior log_cnt is \nreset to 0 each time a backend accesses a sequence for the first time.\nThat's probably overkill... But I still believe that the XLogFlush \nafter XLogInsert is necessary to ensure that the WAL value is written \nto disk immediately. In my testing this patch works fine, YMMV.\n\n-- Ben\n\n\n\n*** src/backend/commands/sequence.c.orig Tue Mar 12 18:58:55 2002\n--- src/backend/commands/sequence.c Thu Mar 14 17:34:25 2002\n***************\n*** 62,67 ****\n--- 62,68 ----\n int64 cached;\n int64 last;\n int64 increment;\n+ bool reset_logcnt;\n struct SeqTableData *next;\n } SeqTableData;\n\n***************\n*** 270,275 ****\n--- 271,277 ----\n\n PageSetLSN(page, recptr);\n PageSetSUI(page, ThisStartUpID);\n+ XLogFlush(recptr);\n }\n END_CRIT_SECTION();\n\n***************\n*** 314,321 ****\n PG_RETURN_INT64(elm->last);\n }\n\n! seq = read_info(\"nextval\", elm, &buf); /* lock page' buffer and\n! * read tuple */\n\n last = next = result = seq->last_value;\n incby = seq->increment_by;\n--- 316,322 ----\n PG_RETURN_INT64(elm->last);\n }\n\n! seq = read_info(\"nextval\", elm, &buf); /* lock page' buffer and read tuple */\n\n last = next = result = seq->last_value;\n incby = seq->increment_by;\n***************\n*** 331,339 ****\n log--;\n }\n\n! if (log < fetch)\n {\n! fetch = log = fetch - log + SEQ_LOG_VALS;\n logit = true;\n }\n\n--- 332,340 ----\n log--;\n }\n\n! if (log < fetch)\n {\n! 
fetch = log = fetch - log + SEQ_LOG_VALS * cache;\n logit = true;\n }\n\n***************\n*** 403,409 ****\n rdata[0].data = (char *) &xlrec;\n rdata[0].len = sizeof(xl_seq_rec);\n rdata[0].next = &(rdata[1]);\n!\n seq->last_value = next;\n seq->is_called = true;\n seq->log_cnt = 0;\n--- 404,410 ----\n rdata[0].data = (char *) &xlrec;\n rdata[0].len = sizeof(xl_seq_rec);\n rdata[0].next = &(rdata[1]);\n!\n seq->last_value = next;\n seq->is_called = true;\n seq->log_cnt = 0;\n***************\n*** 417,423 ****\n\n PageSetLSN(page, recptr);\n PageSetSUI(page, ThisStartUpID);\n!\n if (fetch) /* not all numbers were fetched */\n log -= fetch;\n }\n--- 418,425 ----\n\n PageSetLSN(page, recptr);\n PageSetSUI(page, ThisStartUpID);\n! XLogFlush(recptr);\n!\n if (fetch) /* not all numbers were fetched */\n log -= fetch;\n }\n***************\n*** 507,513 ****\n XLogRecPtr recptr;\n XLogRecData rdata[2];\n Page page = BufferGetPage(buf);\n!\n xlrec.node = elm->rel->rd_node;\n rdata[0].buffer = InvalidBuffer;\n rdata[0].data = (char *) &xlrec;\n--- 509,516 ----\n XLogRecPtr recptr;\n XLogRecData rdata[2];\n Page page = BufferGetPage(buf);\n!\n!\n xlrec.node = elm->rel->rd_node;\n rdata[0].buffer = InvalidBuffer;\n rdata[0].data = (char *) &xlrec;\n***************\n*** 527,532 ****\n--- 530,536 ----\n\n PageSetLSN(page, recptr);\n PageSetSUI(page, ThisStartUpID);\n+ XLogFlush(recptr);\n }\n /* save info in sequence relation */\n seq->last_value = next; /* last fetched number */\n***************\n*** 660,665 ****\n--- 664,674 ----\n\n seq = (Form_pg_sequence) GETSTRUCT(&tuple);\n\n+ if (elm->reset_logcnt)\n+ {\n+ seq->log_cnt = 0;\n+ elm->reset_logcnt = false;\n+ }\n elm->increment = seq->increment_by;\n\n return seq;\n***************\n*** 703,708 ****\n--- 712,718 ----\n name, caller);\n elm->relid = RelationGetRelid(seqrel);\n elm->cached = elm->last = elm->increment = 0;\n+ elm->reset_logcnt = true;\n }\n }\n else\n***************\n*** 721,726 ****\n--- 731,737 ----\n 
elm->relid = RelationGetRelid(seqrel);\n elm->cached = elm->last = elm->increment = 0;\n elm->next = (SeqTable) NULL;\n+ elm->reset_logcnt = true;\n\n if (seqtab == (SeqTable) NULL)\n seqtab = elm;\n\n\n\n",
"msg_date": "Thu, 14 Mar 2002 16:47:39 -0600",
"msg_from": "Ben Grimm <bgrimm@zaeon.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug #613: Sequence values fall back to previously chec"
},
{
"msg_contents": "Ben Grimm <bgrimm@zaeon.com> writes:\n> The behavior of SELECT nextval() should not be conditional on being in or \n> out of a transaction block.\n\nNonsense. The behavior of INSERT or UPDATE is \"conditional\" in exactly\nthe same way: you should not rely on the reported result until it's\ncommitted.\n\nGiven Vadim's performance concerns, I doubt he'll hold still for forcing\nan XLogFlush immediately every time a sequence XLOG record is written\n-- but AFAICS that'd be the only way to guarantee durability of a\nnextval result in advance of commit. Since I don't think that's an\nappropriate goal for the system to have, I don't care for it either.\n\nI'm planning to try coding up Vadim's approach (pay attention to page's\nold LSN to see if a WAL record must be generated) tonight or tomorrow\nand see if it seems reasonable.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Mar 2002 18:58:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug #613: Sequence values fall back to previously chec "
}
] |
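The failure mode Ben describes in this thread — a clean shutdown writes the sequence tuple back with log_cnt > 0, so the first nextval() calls after a restart never emit a WAL record — can be sketched outside the server. The following Python model is a toy (names like `MiniSequence` and the simplified recovery rule are illustrative assumptions, not sequence.c's actual code), but it reproduces the fall-back and shows why resetting log_cnt on first access avoids it:

```python
SEQ_LOG_VALS = 32  # how many fetches ahead a single WAL record covers

class MiniSequence:
    """Toy model of nextval()'s log_cnt bookkeeping; not real server code."""

    def __init__(self, reset_logcnt_on_open=False):
        self.disk_last = 0       # last_value in the on-disk sequence tuple
        self.disk_log_cnt = 0    # log_cnt in the on-disk sequence tuple
        self.wal_last = 0        # highest value covered by WAL since last checkpoint
        self.reset_logcnt_on_open = reset_logcnt_on_open
        self._open()

    def _open(self):
        # A backend reads the sequence tuple into memory on first access.
        self.mem_last = self.disk_last
        self.mem_log_cnt = 0 if self.reset_logcnt_on_open else self.disk_log_cnt

    def nextval(self):
        value = self.mem_last + 1
        if self.mem_log_cnt == 0:
            # Pre-log a batch of values; a crash may "consume" up to
            # wal_last harmlessly (a gap, never a duplicate).
            self.wal_last = value + SEQ_LOG_VALS
            self.mem_log_cnt = SEQ_LOG_VALS
        self.mem_last = value
        self.mem_log_cnt -= 1
        return value

    def clean_shutdown(self):
        # Shutdown writes the in-memory state back -- including log_cnt > 0.
        # The shutdown checkpoint also means older WAL is no longer replayed.
        self.disk_last = self.mem_last
        self.disk_log_cnt = self.mem_log_cnt
        self.wal_last = self.disk_last
        self._open()  # next startup re-reads the tuple

    def crash_recover(self):
        # The dirty buffer is lost; recovery replays WAL over the disk tuple.
        return max(self.disk_last, self.wal_last)
```

In the buggy run, values 11..15 are handed out after the restart with no WAL record (stale log_cnt = 22 suppresses logging), so an unclean crash recovers to 10 and the same values would be handed out again; with the reset, the first post-restart nextval() logs ahead and recovery lands past everything handed out.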
[
{
"msg_contents": "\nHello!\n\nI have run into a problem with the array of boxes datatype. Here is a \nsimple example:\n\ntestdb=# CREATE TABLE boxarray_test (col1 BOX[2]);\nCREATE\n\ntestdb=# INSERT INTO boxarray_test VALUES ('{\"(3,3),(1,1)\",\"(4,4),(2,2)\"}');\nINSERT 32957 1\n\ntestdb=# SELECT * FROM boxarray_test;\n col1\n---------------\n {(4,4),(2,2)}\n(1 row)\n\nInstead of the above, I expected the result of the SELECT to be:\n\n{\"(3,3),(1,1)\",\"(4,4),(2,2)\"}\n\nArrays of other geometric types worked like I expected them to do.\n\nIs this a bug?\n\nI'm running PostgreSQL 7.2 on Mac OS X 10.1.3 \n(powerpc-apple-darwin5.3), compiled by GCC 2.95.2. I ran the \nregression tests against my installation and all tests were completed \nsuccessfully. (The tests don't seem to cover arrays of geometric \ntypes, though.)\n\n-Andre\n\n\n-- \nAndre Radke + mailto:lists@spicynoodles.net + http://www.spicynoodles.net/\n",
"msg_date": "Thu, 14 Mar 2002 17:00:20 +0100",
"msg_from": "Andre Radke <lists@spicynoodles.net>",
"msg_from_op": true,
"msg_subject": "problem with array of boxes"
},
{
"msg_contents": "Andre Radke <lists@spicynoodles.net> writes:\n> I have run into a problem with the array of boxes datatype.\n\nAfter a little poking at this, it seems that some parts of the array\nsupport code may be failing to pay attention to \"typdelim\". Type box\nhas typdelim set to ';' (it's the only standard datatype whose typdelim\nis not ','). Changing that to ',' made the behavior less unexpected.\nHaven't dug into the code yet for a proper fix.\n\nThis does beg the question of whether box's typdelim should be ';'\nrather than the standard ','. I can see why that was done: box likes\nto use commas in its text representation. But I really wonder how\nmuch client code will be prepared to cope with arrays represented\nwith ';' not ',' between items ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Mar 2002 14:45:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problem with array of boxes "
},
{
"msg_contents": "Andre Radke <lists@spicynoodles.net> writes:\n> testdb=# CREATE TABLE boxarray_test (col1 BOX[2]);\n> testdb=# INSERT INTO boxarray_test VALUES ('{\"(3,3),(1,1)\",\"(4,4),(2,2)\"}');\n> testdb=# SELECT * FROM boxarray_test;\n> col1\n> ---------------\n> {(4,4),(2,2)}\n> (1 row)\n\nI've finished looking into this, and the short answer is that your input\nis not syntactically correct. Because type box has typdelim = ';', the\ncorrect input would have been\n\nINSERT INTO boxarray_test VALUES ('{\"(3,3),(1,1)\";\"(4,4),(2,2)\"}');\n\n(btw, you could omit the double-quote marks here.) There is indeed a\nbug here: since the array parser didn't think the comma was an item\ndelimiter, IMHO it should have considered the array to contain one item\n\t(3,3),(1,1),(4,4),(2,2)\nwhich would have provoked an error when handed to the box-datatype input\nparser. Instead the array parser messed up and passed only the second\ndouble-quoted substring to the box input routine.\n\nI have fixed this for 7.3: with the just-committed code, I get\n\nboxes=# INSERT INTO boxarray_test VALUES ('{\"(3,3),(1,1)\",\"(4,4),(2,2)\"}');\nERROR: Bad box external representation '(3,3),(1,1),(4,4),(2,2)'\nboxes=# INSERT INTO boxarray_test VALUES ('{\"(3,3),(1,1)\";\"(4,4),(2,2)\"}');\nINSERT 533436 1\nboxes=# INSERT INTO boxarray_test VALUES ('{(3,3),(1,1);(4,4),(2,2)}');\nINSERT 533437 1\nboxes=# select * from boxarray_test;\n col1\n---------------------------\n {(3,3),(1,1);(4,4),(2,2)}\n {(3,3),(1,1);(4,4),(2,2)}\n(2 rows)\n\n\nThis still leaves us with the question of whether it's really a good\nidea that type box has typdelim ';' and not ',' like everything else\nuses. Anyone have a strong feeling about changing it or not? 
If we\nchange it, we'd instead get this behavior:\n\nboxes=# update pg_type set typdelim = ',' where typname = 'box';\nUPDATE 1\nboxes=# select * from boxarray_test;\n col1\n-------------------------------\n {\"(3,3),(1,1)\",\"(4,4),(2,2)\"}\n {\"(3,3),(1,1)\",\"(4,4),(2,2)\"}\n(2 rows)\n\nboxes=# INSERT INTO boxarray_test VALUES ('{\"(3,3),(1,1)\",\"(4,4),(2,2)\"}');\nINSERT 533438 1\n\nand the double quotes would be required.\n\nOne argument against changing is that it'd break pg_dump output for\nexisting tables containing arrays of boxes ... if any there be.\nGiven that this hasn't come up before, I wonder if anyone's using 'em.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 16 Mar 2002 18:01:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problem with array of boxes "
},
{
"msg_contents": "At 18:01h -0500 16.03.2002, Tom Lane wrote:\n>I've finished looking into this, and the short answer is that your input\n>is not syntactically correct. Because type box has typdelim = ';', the\n>correct input would have been\n>\n>INSERT INTO boxarray_test VALUES ('{\"(3,3),(1,1)\";\"(4,4),(2,2)\"}');\n\nThanks! I changed my code to use a semi-colon instead of a comma as \ndelimiter and that indeed solved my problem.\n\n>This still leaves us with the question of whether it's really a good\n>idea that type box has typdelim ';' and not ',' like everything else\n>uses. Anyone have a strong feeling about changing it or not?\n\nI'm relatively new to PostgreSQL, so I don't have a qualified opinion on this.\n\n-Andre\n\n\n-- \nAndre Radke + mailto:lists@spicynoodles.net + http://www.spicynoodles.net/\n",
"msg_date": "Sun, 17 Mar 2002 16:15:29 +0100",
"msg_from": "Andre Radke <lists@spicynoodles.net>",
"msg_from_op": true,
"msg_subject": "Re: problem with array of boxes"
}
] |
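Tom's fix hinges on the array parser honoring pg_type.typdelim (';' for box) when splitting items. A simplified Python splitter — one-dimensional, double-quote aware, no backslash escapes or nesting, so only a sketch of what the real array input code does — makes the behavior concrete:

```python
def parse_array_literal(s, typdelim=','):
    """Split a PostgreSQL-style array literal like {a,b} into element strings.

    Minimal sketch: handles one dimension, double-quoted elements, and a
    configurable element delimiter (pg_type.typdelim). No escape handling.
    """
    s = s.strip()
    if not (s.startswith('{') and s.endswith('}')):
        raise ValueError('not an array literal')
    items, buf, in_quotes = [], [], False
    for ch in s[1:-1]:
        if ch == '"':
            in_quotes = not in_quotes          # quotes protect delimiters
        elif ch == typdelim and not in_quotes:
            items.append(''.join(buf))         # delimiter ends an element
            buf = []
        else:
            buf.append(ch)
    items.append(''.join(buf))
    return items
```

With typdelim=';' the literal from the thread splits into the two box strings (and, as Tom notes, the quotes are optional); with the default ',' the whole body stays one item containing a ';', which would then fail the box input routine — matching the 7.3 behavior he shows.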
[
{
"msg_contents": "I did everything as you did, however, when start the postmaster,\nI got following:\nFATAL 1:\t'syslog' is not a valid option name.\n\nJie\n\n-----Original Message-----\nFrom: Joe Conway [mailto:mail@joeconway.com]\nSent: Thursday, March 14, 2002 11:27 AM\nTo: Jie Liang\nCc: 'pgsql-admin@postgresql.org'; pgsql-sql\nSubject: Re: [ADMIN] Syslog\n\n\nJie Liang wrote:\n> I did, it didn't work.\n> \n> Jie Liang\n\nWorks for me. Did you change postgresql.conf? Here's what mine looks like.\n\n#\n# Syslog\n#\n# requires ENABLE_SYSLOG\nsyslog = 1 # range 0-2\nsyslog_facility = 'LOCAL0'\nsyslog_ident = 'postgres'\n\n From the online docs:\nSYSLOG (integer)\n\n PostgreSQL allows the use of syslog for logging. If this option is \nset to 1, messages go both to syslog and the standard output. A setting \nof 2 sends output only to syslog. (Some messages will still go to the \nstandard output/error.) The default is 0, which means syslog is off. \nThis option must be set at server start.\n\n To use syslog, the build of PostgreSQL must be configured with the \n--enable-syslog option.\n\n\nSee:\nhttp://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/runtime-config.h\ntml\n\nJoe\n\n",
"msg_date": "Thu, 14 Mar 2002 12:13:49 -0800",
"msg_from": "Jie Liang <jie@stbernard.com>",
"msg_from_op": true,
"msg_subject": "Re: Syslog"
},
{
"msg_contents": "On Thu, 2002-03-14 at 20:13, Jie Liang wrote:\n> I did everything as you did, however, when start the postmaster,\n> I got following:\n> FATAL 1:\t'syslog' is not a valid option name.\n\nThen you haven't configured postgresql with --enable-syslog. (That\nmessage comes from src/backend/utils/misc/guc.c, if you want\nconfirmation.)\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"Let your light so shine before men, that they may see \n your good works, and glorify your Father which is in \n heaven.\" Matthew 5:16 \n\n",
"msg_date": "14 Mar 2002 21:17:05 +0000",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Syslog"
},
{
"msg_contents": "On 14 Mar 2002 21:17:05 +0000\n\"Oliver Elphick\" <olly@lfix.co.uk> wrote:\n\n> On Thu, 2002-03-14 at 20:13, Jie Liang wrote:\n> > I did everything as you did, however, when start the postmaster,\n> > I got following:\n> > FATAL 1:\t'syslog' is not a valid option name.\n> \n> Then you haven't configured postgresql with --enable-syslog. (That\n> message comes from src/backend/utils/misc/guc.c, if you want\n> confirmation.)\nHackers: Is there any reason to NOT make --enable-syslog the default\nany more? \n\nI.E. can we change the sense of it to be --disable-syslog and have\nUSE_SYSLOG defined by default? \n\n\n> \n> -- \n> Oliver Elphick Oliver.Elphick@lfix.co.uk\n> Isle of Wight http://www.lfix.co.uk/oliver\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> \n> \"Let your light so shine before men, that they may see \n> your good works, and glorify your Father which is in \n> heaven.\" Matthew 5:16 \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Thu, 14 Mar 2002 17:35:23 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Syslog"
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> Hackers: Is there any reason to NOT make --enable-syslog the default\n> any more? \n> I.E. can we change the sense of it to be --disable-syslog and have\n> USE_SYSLOG defined by default? \n\nI thought we'd agreed to do that already; at least Peter had indicated\nhe planned to do it. I guess he didn't get around to it for 7.2.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Mar 2002 19:03:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] Syslog "
}
] |
[
{
"msg_contents": "\nOK, no one can seem to come up with an improved file format for\npg_hba.conf, so I am going to continue in the direction outlined in this\nemail --- basically remove the auth_argument column and make it\n'auth_type=auth_arg' and add a username column, plus add the ability for\nthe username and database columns to use a secondary file if the column\nvalue starts with @.\n\n---------------------------------------------------------------------------\n\npgman wrote:\n> > This is definitely stressing pg_hba past its design limits --- heck, the\n> > name of the file isn't even appropriate anymore, if usernames are part\n> > of the match criteria. Rather than contorting things to maintain a\n> > pretense of backwards compatibility, it's time to abandon the current\n> > file format, change the name, and start over. (I believe there are\n> > traces in the code of this having been done before.) We could probably\n> > arrange to read and convert the existing pg_hba format if we don't see\n> > a new-style authentication config file out there.\n> > \n> > My first thoughts are (a) add a column outright for matching username;\n> > (b) for both database and username columns, allow a filename reference\n> > so that a bunch of names can be stored separately from the master\n> > authentication file. I don't much care for sticking large lists of\n> > names into the auth file itself.\n> \n> OK, I have an idea. I was never happy with the AUTH_ARGUMENT column. \n> What I propose is adding an optional auth_type=val capability to the\n> file, so an AUTH_ARGUMENT column isn't needed. If a username column\n> starts with @, it is a file name containing user names. The same can be\n> done with the database column. Seems very backward compatible.. If you\n> don't use auth_argument, it is totally compatible. 
If you do, you need\n> to use the new format auth_type=val:\n> \n> TYPE DATABASE IP_ADDRESS MASK AUTH_TYPE USERNAMES\n> local all trust\t fred\n> host all 127.0.0.1 255.255.255.255 trust\t @staff\n> host all 127.0.0.1 255.255.255.255 ident=sales jimmy\n> \n> I have thought about a redesign of the file, but I can't come up with\n> something that is as powerful, and cleaner. Do others have ideas?\n> \n> As far as missing features, I can't think of other things people have\n> asked for in pg_hba.conf except usernames.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 14 Mar 2002 16:33:44 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Allowing usernames in pg_hba.conf"
}
] |
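Bruce's proposed format can be prototyped with a small parser. This Python sketch makes several assumptions beyond the message above (whitespace-separated columns, a dict standing in for the @-referenced secondary files, and the 'local' form simply omitting the address columns), so treat it as an illustration of the rules, not the eventual pg_hba code:

```python
def parse_hba_line(line, file_lookup=None):
    """Parse one line of the proposed pg_hba.conf format (sketch only).

    Supports an optional 'auth_type=arg' auth column and '@name' references
    in the database/username columns, expanded via file_lookup (a dict used
    here as a stand-in for reading the secondary files).
    """
    file_lookup = file_lookup or {}

    def expand(field):
        # '@staff' -> contents of the secondary file named 'staff'
        return file_lookup[field[1:]] if field.startswith('@') else [field]

    cols = line.split()
    rec = {'type': cols[0]}
    if rec['type'] == 'local':
        db, auth, users = cols[1], cols[2], cols[3:]
    else:  # 'host': IP address and mask come before the auth column
        db, rec['ip'], rec['mask'], auth = cols[1], cols[2], cols[3], cols[4]
        users = cols[5:]
    rec['databases'] = expand(db)
    auth_type, _, auth_arg = auth.partition('=')      # 'ident=sales' etc.
    rec['auth_type'], rec['auth_arg'] = auth_type, auth_arg or None
    rec['users'] = [u for field in users for u in expand(field)]
    return rec
```

Run against the three example rows from the message, this yields trust-for-fred on local connections, the @staff list expanded for the second row, and auth_type 'ident' with argument 'sales' for the third.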
[
{
"msg_contents": "I'm working on an update to contrib/dblink which would allow \nINSERT/UPDATE/DELETE statements in addition to SELECT statements against \na remote database.\n\nIn the current version, only SELECT is possible because the SQL \nstatement passed to the function gets \"DECLARE mycursor CURSOR FOR \" \nprepended to it, and the result set is obtained with \"res = \nPQexec(conn, \"FETCH ALL in mycursor\");\".\n\nMy question is, what is the downside (if any) of eliminating the use of \na cursor in this context? I have locally made the changes, and don't see \nany negative impact. I'd appreciate any thoughts.\n\nThanks,\n\nJoe\n\n",
"msg_date": "Thu, 14 Mar 2002 14:52:07 -0800",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "libpq usage question"
}
] |
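At the string level, the choice Joe describes is just whether to wrap the user's SQL in a cursor or pass it through unchanged. A hypothetical Python helper (dblink itself is C; the name `build_remote_command` and the single-cursor name `mycursor` are illustrative, following the message above) shows why the cursor path only fits SELECT:

```python
def build_remote_command(sql, use_cursor=True, cursor_name='mycursor'):
    """Sketch of the statement rewriting described above, not dblink's C code.

    With use_cursor, the SQL is wrapped the way contrib/dblink currently
    does it, which only makes sense for SELECT; without it, the statement
    is sent as-is, which is what lets INSERT/UPDATE/DELETE pass through.
    Returns the command strings to hand to PQexec, in order.
    """
    first_word = sql.lstrip().split(None, 1)[0].upper()
    if use_cursor:
        if first_word != 'SELECT':
            raise ValueError('only SELECT can run through a cursor here')
        return ('DECLARE %s CURSOR FOR %s' % (cursor_name, sql),
                'FETCH ALL in %s' % cursor_name)
    return (sql,)
```

The trade-off the thread is asking about is on the server side: without a cursor the whole result set is materialized by one PQexec, whereas the cursor form allows fetching in pieces — irrelevant for DML statements, which return no rows anyway.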
[
{
"msg_contents": "> > This isn't an issue for a SELECT nextval() standing on\n> > its own AFAIK the result will not be transmitted to the\n> > client until after the commit happens. But it would be\n> > an issue for a select executed inside a transaction\n> > block (begin/commit).\n> \n> The behavior of SELECT nextval() should not be conditional\n> on being in or out of a transaction block.\n\nAnd it's not. But the behaviour of the application *must* be\nconditional on whether the transaction was committed or not.\n\nWhat's the problem for an application that needs nextval() for\nexternal (out-of-database) purposes to use sequence values\nonly after transaction commit? What's *wrong* for such an application\nto behave the same way as when dealing with other database objects\nwhich are under transaction control (eg, only after commit can you\nreport to the user that $100 was successfully added to his/her account)?\n\n---\n\nI agree that if nextval-s were only \"write\" actions in transaction\nand they made some XLogInsert-s then WAL must be flushed at commit\ntime. But that's it. Was this fixed? Very easy.\n\nVadim\n",
"msg_date": "Thu, 14 Mar 2002 16:17:39 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: Bug #613: Sequence values fall back to previously chec"
},
{
"msg_contents": "On Thu, 14 Mar 2002, Mikheev, Vadim wrote:\n\n> And it's not. But behaviour of application *must* be\n> conditional on was transaction committed or not.\n> \n> What's the problem for application that need nextval() for\n> external (out-of-database) purposes to use sequence values\n> only after transaction commit? What's *wrong* for such application\n> to behave the same way as when dealing with other database objects\n> which are under transaction control (eg only after commit you can\n> report to user that $100 was successfully added to his/her account)?\n\nBut sequences should not be under transaction control. Can you \nsafely rollback a sequence? No! The only way to ensure that would\nbe to lock the sequence for the duration of the transaction. If \nyou want an ACID compliant sequential value, you implement it using\na transaction safe method (e.g. a table with rows you can lock for\nthe duration of a transaction). If you want a number that is \nguaranteed to always move in one direction, return the next value\nwithout requiring locks, and may have gaps in the numbers returned,\nyou choose a sequence.\n\nPlacing a restriction on an application that says it must treat \nthe values returned from a sequence as if they might not be committed\nis absurd. What about applications that don't use explicit \ntransactions? As soon as a result comes back it should be considered\n'live', on disk, never think about it again. \n \n> I agree that if nextval-s were only \"write\" actions in transaction\n> and they made some XLogInsert-s then WAL must be flushed at commit\n> time. But that's it. Was this fixed? Very easy.\n\nBut aren't the nextval's always going to be the only write actions\nin their transactions since the nextval isn't really a part of the \ntransaction that called it? If it were, then it could be rolled \nback along with that transaction. 
This is why you can, right now, \ninsert data into a table with a serial column, committing after\neach row, crash the database and STILL have the sequence fall back\nto its initial state. The XLogInserts that occur from the table\ninserts must not happen in the same xact as the nextval's \nXLogInserts. I can demonstrate the behavior quite easily, and \nBruce posted results that confirmed it.\n\n-- Ben\n",
"msg_date": "Thu, 14 Mar 2002 19:55:23 -0600",
"msg_from": "'Ben Grimm' <bgrimm@zaeon.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug #613: Sequence values fall back to previously chec"
},
{
"msg_contents": "> But sequences should not be under transaction control. Can you\n> safely rollback a sequence? No! The only way to ensure that would\n...\n> Placing a restriction on an application that says it must treat the values\n> returned from a sequence as if they might not be committed is absurd.\n\nWhy? The fact that you are not able to rollback sequences does not\nnecessarily mean that you are not required to perform a commit to ensure\npermanent storage of changes made to the database.\n\nAnd isn't it absurd to do more XLogFlush-es for non-transactional objects\nthan we do for transactional ones? And why? Just for convenience of\n<< 1% applications which need to use sequences in their own,\nnon-database, external objects? We are not required to care about those\nobjects, but we'd better care about performance of our operations over our\nobjects.\n\n> > I agree that if nextval-s were only \"write\" actions in transaction\n> > and they made some XLogInsert-s then WAL must be flushed at commit\n> > time. But that's it. Was this fixed? Very easy.\n>\n> But aren't the nextval's always going to be the only write actions\n> in their transactions since the nextval isn't really a part of the\n> transaction that called it? If it were, then it could be rolled\n\nThere are no nextval transactions. See how the XLOG_NO_TRAN flag\nis used in XLogInsert and you'll see why there is no XLogFlush\nafter a transaction-with-nextval-only (which causes reported problem #1).\n\n> back along with that transaction. This is why you can, right now,\n> insert data into a table with a serial column, committing after\n> each row, crash the database and STILL have the sequence fall back\n> to its initial state. The XLogInserts that occur from the table\n> inserts must not happen in the same xact as the nextval's\n> XLogInserts. 
I can demonstrate the behavior quite easily, and\n> Bruce posted results that confirmed it.\n\nJust wait until Tom adds a check for the system RedoRecPtr in nextval()\nand try to reproduce this behaviour (reported problem #2)\nafter that.\n\nVadim\n\n\n",
"msg_date": "Fri, 15 Mar 2002 01:05:33 -0800",
"msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug #613: Sequence values fall back to previously chec"
},
{
"msg_contents": "On Fri, 15 Mar 2002, Vadim Mikheev wrote:\n\n> > But sequences should not be under transaction control. Can you\n> > safely rollback a sequence? No! The only way to ensure that would\n> ...\n> > Placing a restriction on an application that says it must treat the values\n> > returned from a sequence as if they might not be committed is absurd.\n> \n> Why? The fact that you are not able to rollback sequences does not\n> necessary mean that you are not required to perform commit to ensure\n> permanent storage of changes made to database.\n\nI'm not sure I agree, but I'll wait to see the behavior of the db after\nthe changes are made.\n\n> And isn't it absurd to do more XLogFlush-es for non-transactional objects\n> than we do for transactional ones? And why? Just for convenience of\n> << 1% applications which need to use sequences in their own,\n> non-database, external objects? We are not required to care about those\n> objects, but we'd better care about performance of our operations over our\n> objects.\n\nYes, absolutely - if there's a better way, which apparently there is, \nthen sure, eliminate the calls to XLogFlush. It's a workaround, a hack.\nI am much more concerned with getting the behavior correct than I am \nabout getting some code with my name on it into a release. My workarounds \nonly served to point out flaws in the design, even if I didn't quite\nunderstand at the time why they worked :-)\n\n> There are no nextval' transactions. See how XLOG_NO_TRAN flag\n> is used in XLogInsert and you'll see why there is no XLogFlush\n> after transaction-with-nextval-only (which causes N1 reported problem).\n> \n> Just wait until Tom adds check for system RedoRecPtr in nextval()\n> and try to reproduce this behaviour (N2 reported problem)\n> after that.\n> \n\nThank you! I think I have much better understanding of how this works \nnow. \n\nWhen these bugs are fixed there is still the issue of bug #3 that I \ncame across. 
The one that I work around by resetting log_cnt to 0 when a \nbackend initializes a sequence. It's this third bug that made the other \ntwo so apparent. Fixing them does not obviate the need to fix this one.\n\nIs there a way to intercept writes or reads such that, when a sequence is\ngoing to or from disk, we can force log_cnt = 0? Right now that's \nworked around by my 'reset_logcnt' flag in the patch, but I know that it \nmay not be an ideal solution. But, since sequences are just tuples like \neverything else, I don't see an obvious way to prevent it. \n\n-- Ben\n",
"msg_date": "Fri, 15 Mar 2002 07:23:58 -0600",
"msg_from": "\"'Ben Grimm'\" <bgrimm@zaeon.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug #613: Sequence values fall back to previously chec"
},
{
"msg_contents": "\"'Ben Grimm'\" <bgrimm@zaeon.com> writes:\n> When these bugs are fixed there is still the issue of bug #3 that I \n> came across. The one that I work around by resetting log_cnt to 0 when a \n> backend initializes a sequence. It's this third bug that made the other \n> two so apparent. Fixing them does not obviate the need to fix this one.\n\nWhat's bug #3? I don't recall a third issue.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Mar 2002 09:34:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug #613: Sequence values fall back to previously chec "
},
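Tom's patch message that follows also flags the line `fetch = log = fetch - log + SEQ_LOG_VALS;` (the same line Ben's earlier patch touched). The arithmetic alone shows the problem: when the cache setting exceeds the values already pre-logged, subtracting `log` can produce a WAL record that covers fewer values than the batch being cached. A toy Python calculation — just the formula from the quoted hunks, not sequence.c itself:

```python
SEQ_LOG_VALS = 32  # extra values pre-logged beyond the fetched batch

def values_covered(cache, log_cnt, fixed=False):
    """How many values the new WAL record covers from the current point.

    'fixed' switches from the original 'fetch - log + SEQ_LOG_VALS' formula
    to the 'fetch + SEQ_LOG_VALS' form proposed in this thread (sketch of
    the arithmetic only).
    """
    fetch, log = cache, log_cnt
    if log < fetch:  # not enough pre-logged values left: must log a new batch
        if fixed:
            fetch = log = fetch + SEQ_LOG_VALS
        else:
            fetch = log = fetch - log + SEQ_LOG_VALS
    return fetch
```

With cache = 100 and log_cnt = 80, the original formula covers only 52 values even though 100 are about to be handed to the cache, while dropping the `- log` term covers 132; for the common cache = 1 case the two formulas only differ when log_cnt is nonzero, which is why the bug stayed hidden.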
{
"msg_contents": "Attached is a patch against current CVS that fixes both of the known\nproblems with sequences: failure to flush XLOG after a transaction\nthat only does \"SELECT nextval()\", and failure to force a new WAL\nrecord to be written on the first nextval after a checkpoint.\n(The latter uses Vadim's idea of looking at the sequence page LSN.)\nI haven't tested it really extensively, but it seems to cure the\nreported problems.\n\nSome notes:\n\n1. I found what I believe is another bug in the sequence logic:\n\t\tfetch = log = fetch - log + SEQ_LOG_VALS;\nshould be\n\t\tfetch = log = fetch + SEQ_LOG_VALS;\nI can't see any reason to reduce the number of values prefetched\nby the number formerly prefetched. Also, if the sequence's \"cache\"\nsetting is large (more than SEQ_LOG_VALS), the original code could\neasily fail to fetch as many values as it was supposed to cache,\nlet alone additional ones to be prefetched and logged.\n\n2. I renamed XLogCtl->RedoRecPtr to SavedRedoRecPtr, and renamed\nthe associated routines to SetSavedRedoRecPtr/GetSavedRedoRecPtr,\nin hopes of reducing confusion.\n\n3. I believe it'd now be possible to remove SavedRedoRecPtr and\nSetSavedRedoRecPtr/GetSavedRedoRecPtr entirely, in favor of letting\nthe postmaster fetch the updated pointer with GetRedoRecPtr just\nlike a backend would. This would be cleaner and less code ... but\nsomeone might object that it introduces a risk of postmaster hangup,\nif some backend crashes whilst holding info_lck. I consider that\nrisk minuscule given the short intervals in which info_lck is held,\nbut it can't be denied that the risk is not zero. Thoughts?\n\nComments? Unless I hear objections I will patch this in current\nand the 7.2 branch. 
(If we agree to remove SavedRedoRecPtr,\nthough, I don't think we should back-patch that change.)\n\n\t\t\tregards, tom lane\n\n\n*** src/backend/access/transam/xact.c.orig\tTue Mar 12 07:56:31 2002\n--- src/backend/access/transam/xact.c\tThu Mar 14 20:00:50 2002\n***************\n*** 546,577 ****\n \txid = GetCurrentTransactionId();\n \n \t/*\n! \t * We needn't write anything in xlog or clog if the transaction was\n! \t * read-only, which we check by testing if it made any xlog entries.\n \t */\n! \tif (MyLastRecPtr.xrecoff != 0)\n \t{\n- \t\tXLogRecData rdata;\n- \t\txl_xact_commit xlrec;\n \t\tXLogRecPtr\trecptr;\n \n \t\tBufmgrCommit();\n \n- \t\txlrec.xtime = time(NULL);\n- \t\trdata.buffer = InvalidBuffer;\n- \t\trdata.data = (char *) (&xlrec);\n- \t\trdata.len = SizeOfXactCommit;\n- \t\trdata.next = NULL;\n- \n \t\tSTART_CRIT_SECTION();\n \n! \t\t/*\n! \t\t * SHOULD SAVE ARRAY OF RELFILENODE-s TO DROP\n! \t\t */\n! \t\trecptr = XLogInsert(RM_XACT_ID, XLOG_XACT_COMMIT, &rdata);\n \n \t\t/*\n! \t\t * Sleep before commit! So we can flush more than one commit\n \t\t * records per single fsync. (The idea is some other backend may\n \t\t * do the XLogFlush while we're sleeping. This needs work still,\n \t\t * because on most Unixen, the minimum select() delay is 10msec or\n--- 546,593 ----\n \txid = GetCurrentTransactionId();\n \n \t/*\n! \t * We only need to log the commit in xlog and clog if the transaction made\n! \t * any transaction-controlled XLOG entries. (Otherwise, its XID appears\n! \t * nowhere in permanent storage, so no one will ever care if it\n! \t * committed.) However, we must flush XLOG to disk if we made any XLOG\n! \t * entries, whether in or out of transaction control. For example, if we\n! \t * reported a nextval() result to the client, this ensures that any XLOG\n! \t * record generated by nextval will hit the disk before we report the\n! \t * transaction committed.\n \t */\n! 
\tif (MyXactMadeXLogEntry)\n \t{\n \t\tXLogRecPtr\trecptr;\n \n \t\tBufmgrCommit();\n \n \t\tSTART_CRIT_SECTION();\n \n! \t\tif (MyLastRecPtr.xrecoff != 0)\n! \t\t{\n! \t\t\t/* Need to emit a commit record */\n! \t\t\tXLogRecData rdata;\n! \t\t\txl_xact_commit xlrec;\n! \n! \t\t\txlrec.xtime = time(NULL);\n! \t\t\trdata.buffer = InvalidBuffer;\n! \t\t\trdata.data = (char *) (&xlrec);\n! \t\t\trdata.len = SizeOfXactCommit;\n! \t\t\trdata.next = NULL;\n! \n! \t\t\t/*\n! \t\t\t * XXX SHOULD SAVE ARRAY OF RELFILENODE-s TO DROP\n! \t\t\t */\n! \t\t\trecptr = XLogInsert(RM_XACT_ID, XLOG_XACT_COMMIT, &rdata);\n! \t\t}\n! \t\telse\n! \t\t{\n! \t\t\t/* Just flush through last record written by me */\n! \t\t\trecptr = ProcLastRecEnd;\n! \t\t}\n \n \t\t/*\n! \t\t * Sleep before flush! So we can flush more than one commit\n \t\t * records per single fsync. (The idea is some other backend may\n \t\t * do the XLogFlush while we're sleeping. This needs work still,\n \t\t * because on most Unixen, the minimum select() delay is 10msec or\n***************\n*** 593,607 ****\n \n \t\tXLogFlush(recptr);\n \n! \t\t/* Break the chain of back-links in the XLOG records I output */\n! \t\tMyLastRecPtr.xrecoff = 0;\n! \n! \t\t/* Mark the transaction committed in clog */\n! \t\tTransactionIdCommit(xid);\n \n \t\tEND_CRIT_SECTION();\n \t}\n \n \t/* Show myself as out of the transaction in PROC array */\n \tMyProc->logRec.xrecoff = 0;\n \n--- 609,625 ----\n \n \t\tXLogFlush(recptr);\n \n! \t\t/* Mark the transaction committed in clog, if needed */\n! \t\tif (MyLastRecPtr.xrecoff != 0)\n! \t\t\tTransactionIdCommit(xid);\n \n \t\tEND_CRIT_SECTION();\n \t}\n \n+ \t/* Break the chain of back-links in the XLOG records I output */\n+ \tMyLastRecPtr.xrecoff = 0;\n+ \tMyXactMadeXLogEntry = false;\n+ \n \t/* Show myself as out of the transaction in PROC array */\n \tMyProc->logRec.xrecoff = 0;\n \n***************\n*** 689,696 ****\n \tTransactionId xid = GetCurrentTransactionId();\n \n \t/*\n! 
\t * We needn't write anything in xlog or clog if the transaction was\n! \t * read-only, which we check by testing if it made any xlog entries.\n \t *\n \t * Extra check here is to catch case that we aborted partway through\n \t * RecordTransactionCommit ...\n--- 707,717 ----\n \tTransactionId xid = GetCurrentTransactionId();\n \n \t/*\n! \t * We only need to log the abort in xlog and clog if the transaction made\n! \t * any transaction-controlled XLOG entries. (Otherwise, its XID appears\n! \t * nowhere in permanent storage, so no one will ever care if it\n! \t * committed.) We do not flush XLOG to disk in any case, since the\n! \t * default assumption after a crash would be that we aborted, anyway.\n \t *\n \t * Extra check here is to catch case that we aborted partway through\n \t * RecordTransactionCommit ...\n***************\n*** 714,724 ****\n \t\t */\n \t\trecptr = XLogInsert(RM_XACT_ID, XLOG_XACT_ABORT, &rdata);\n \n- \t\t/*\n- \t\t * There's no need for XLogFlush here, since the default\n- \t\t * assumption would be that we aborted, anyway.\n- \t\t */\n- \n \t\t/* Mark the transaction aborted in clog */\n \t\tTransactionIdAbort(xid);\n \n--- 735,740 ----\n***************\n*** 727,732 ****\n--- 743,750 ----\n \n \t/* Break the chain of back-links in the XLOG records I output */\n \tMyLastRecPtr.xrecoff = 0;\n+ \tMyXactMadeXLogEntry = false;\n+ \n \t/* Show myself as out of the transaction in PROC array */\n \tMyProc->logRec.xrecoff = 0;\n \n*** src/backend/access/transam/xlog.c.orig\tTue Mar 12 07:56:31 2002\n--- src/backend/access/transam/xlog.c\tThu Mar 14 20:29:51 2002\n***************\n*** 131,157 ****\n \n /*\n * MyLastRecPtr points to the start of the last XLOG record inserted by the\n! * current transaction. If MyLastRecPtr.xrecoff == 0, then we are not in\n! * a transaction or the transaction has not yet made any loggable changes.\n *\n * Note that XLOG records inserted outside transaction control are not\n! 
* reflected into MyLastRecPtr.\n */\n XLogRecPtr\tMyLastRecPtr = {0, 0};\n \n /*\n * ProcLastRecPtr points to the start of the last XLOG record inserted by the\n * current backend. It is updated for all inserts, transaction-controlled\n! * or not.\n */\n static XLogRecPtr ProcLastRecPtr = {0, 0};\n \n /*\n * RedoRecPtr is this backend's local copy of the REDO record pointer\n * (which is almost but not quite the same as a pointer to the most recent\n * CHECKPOINT record).\tWe update this from the shared-memory copy,\n * XLogCtl->Insert.RedoRecPtr, whenever we can safely do so (ie, when we\n! * hold the Insert lock). See XLogInsert for details.\n */\n static XLogRecPtr RedoRecPtr;\n \n--- 131,166 ----\n \n /*\n * MyLastRecPtr points to the start of the last XLOG record inserted by the\n! * current transaction. If MyLastRecPtr.xrecoff == 0, then the current\n! * xact hasn't yet inserted any transaction-controlled XLOG records.\n *\n * Note that XLOG records inserted outside transaction control are not\n! * reflected into MyLastRecPtr. They do, however, cause MyXactMadeXLogEntry\n! * to be set true. The latter can be used to test whether the current xact\n! * made any loggable changes (including out-of-xact changes, such as\n! * sequence updates).\n */\n XLogRecPtr\tMyLastRecPtr = {0, 0};\n \n+ bool\t\tMyXactMadeXLogEntry = false;\n+ \n /*\n * ProcLastRecPtr points to the start of the last XLOG record inserted by the\n * current backend. It is updated for all inserts, transaction-controlled\n! * or not. ProcLastRecEnd is similar but points to end+1 of last record.\n */\n static XLogRecPtr ProcLastRecPtr = {0, 0};\n \n+ XLogRecPtr\tProcLastRecEnd = {0, 0};\n+ \n /*\n * RedoRecPtr is this backend's local copy of the REDO record pointer\n * (which is almost but not quite the same as a pointer to the most recent\n * CHECKPOINT record).\tWe update this from the shared-memory copy,\n * XLogCtl->Insert.RedoRecPtr, whenever we can safely do so (ie, when we\n! 
* hold the Insert lock). See XLogInsert for details. We are also allowed\n! * to update from XLogCtl->Insert.RedoRecPtr if we hold the info_lck;\n! * see GetRedoRecPtr.\n */\n static XLogRecPtr RedoRecPtr;\n \n***************\n*** 272,278 ****\n \tStartUpID\tThisStartUpID;\n \n \t/* This value is not protected by *any* lock... */\n! \tXLogRecPtr\tRedoRecPtr;\t\t/* see SetRedoRecPtr/GetRedoRecPtr */\n \n \tslock_t\t\tinfo_lck;\t\t/* locks shared LogwrtRqst/LogwrtResult */\n } XLogCtlData;\n--- 281,288 ----\n \tStartUpID\tThisStartUpID;\n \n \t/* This value is not protected by *any* lock... */\n! \t/* see SetSavedRedoRecPtr/GetSavedRedoRecPtr */\n! \tXLogRecPtr\tSavedRedoRecPtr;\n \n \tslock_t\t\tinfo_lck;\t\t/* locks shared LogwrtRqst/LogwrtResult */\n } XLogCtlData;\n***************\n*** 777,782 ****\n--- 787,793 ----\n \t\tMyLastRecPtr = RecPtr;\n \tProcLastRecPtr = RecPtr;\n \tInsert->PrevRecord = RecPtr;\n+ \tMyXactMadeXLogEntry = true;\n \n \tInsert->currpos += SizeOfXLogRecord;\n \tfreespace -= SizeOfXLogRecord;\n***************\n*** 855,860 ****\n--- 866,873 ----\n \t\tSpinLockRelease_NoHoldoff(&xlogctl->info_lck);\n \t}\n \n+ \tProcLastRecEnd = RecPtr;\n+ \n \tEND_CRIT_SECTION();\n \n \treturn (RecPtr);\n***************\n*** 2538,2544 ****\n \n \tThisStartUpID = checkPoint.ThisStartUpID;\n \tRedoRecPtr = XLogCtl->Insert.RedoRecPtr =\n! \t\tXLogCtl->RedoRecPtr = checkPoint.redo;\n \n \tif (XLByteLT(RecPtr, checkPoint.redo))\n \t\telog(PANIC, \"invalid redo in checkpoint record\");\n--- 2551,2557 ----\n \n \tThisStartUpID = checkPoint.ThisStartUpID;\n \tRedoRecPtr = XLogCtl->Insert.RedoRecPtr =\n! \t\tXLogCtl->SavedRedoRecPtr = checkPoint.redo;\n \n \tif (XLByteLT(RecPtr, checkPoint.redo))\n \t\telog(PANIC, \"invalid redo in checkpoint record\");\n***************\n*** 2824,2855 ****\n SetThisStartUpID(void)\n {\n \tThisStartUpID = XLogCtl->ThisStartUpID;\n! 
\tRedoRecPtr = XLogCtl->RedoRecPtr;\n }\n \n /*\n * CheckPoint process called by postmaster saves copy of new RedoRecPtr\n! * in shmem (using SetRedoRecPtr).\tWhen checkpointer completes, postmaster\n! * calls GetRedoRecPtr to update its own copy of RedoRecPtr, so that\n! * subsequently-spawned backends will start out with a reasonably up-to-date\n! * local RedoRecPtr. Since these operations are not protected by any lock\n! * and copying an XLogRecPtr isn't atomic, it's unsafe to use either of these\n! * routines at other times!\n! *\n! * Note: once spawned, a backend must update its local RedoRecPtr from\n! * XLogCtl->Insert.RedoRecPtr while holding the insert lock. This is\n! * done in XLogInsert().\n */\n void\n! SetRedoRecPtr(void)\n {\n! \tXLogCtl->RedoRecPtr = RedoRecPtr;\n }\n \n void\n GetRedoRecPtr(void)\n {\n! \tRedoRecPtr = XLogCtl->RedoRecPtr;\n }\n \n /*\n--- 2837,2883 ----\n SetThisStartUpID(void)\n {\n \tThisStartUpID = XLogCtl->ThisStartUpID;\n! \tRedoRecPtr = XLogCtl->SavedRedoRecPtr;\n }\n \n /*\n * CheckPoint process called by postmaster saves copy of new RedoRecPtr\n! * in shmem (using SetSavedRedoRecPtr). When checkpointer completes,\n! * postmaster calls GetSavedRedoRecPtr to update its own copy of RedoRecPtr,\n! * so that subsequently-spawned backends will start out with a reasonably\n! * up-to-date local RedoRecPtr. Since these operations are not protected by\n! * any lock and copying an XLogRecPtr isn't atomic, it's unsafe to use either\n! * of these routines at other times!\n */\n void\n! SetSavedRedoRecPtr(void)\n {\n! \tXLogCtl->SavedRedoRecPtr = RedoRecPtr;\n }\n \n void\n+ GetSavedRedoRecPtr(void)\n+ {\n+ \tRedoRecPtr = XLogCtl->SavedRedoRecPtr;\n+ }\n+ \n+ /*\n+ * Once spawned, a backend may update its local RedoRecPtr from\n+ * XLogCtl->Insert.RedoRecPtr; it must hold the insert lock or info_lck\n+ * to do so. This is done in XLogInsert() or GetRedoRecPtr().\n+ */\n+ XLogRecPtr\n GetRedoRecPtr(void)\n {\n! 
\t/* use volatile pointer to prevent code rearrangement */\n! \tvolatile XLogCtlData *xlogctl = XLogCtl;\n! \n! \tSpinLockAcquire_NoHoldoff(&xlogctl->info_lck);\n! \tAssert(XLByteLE(RedoRecPtr, xlogctl->Insert.RedoRecPtr));\n! \tRedoRecPtr = xlogctl->Insert.RedoRecPtr;\n! \tSpinLockRelease_NoHoldoff(&xlogctl->info_lck);\n! \n! \treturn RedoRecPtr;\n }\n \n /*\n***************\n*** 2862,2867 ****\n--- 2890,2896 ----\n \n \t/* suppress in-transaction check in CreateCheckPoint */\n \tMyLastRecPtr.xrecoff = 0;\n+ \tMyXactMadeXLogEntry = false;\n \n \tCritSectionCount++;\n \tCreateDummyCaches();\n***************\n*** 2886,2892 ****\n \tuint32\t\t_logId;\n \tuint32\t\t_logSeg;\n \n! \tif (MyLastRecPtr.xrecoff != 0)\n \t\telog(ERROR, \"CreateCheckPoint: cannot be called inside transaction block\");\n \n \t/*\n--- 2915,2921 ----\n \tuint32\t\t_logId;\n \tuint32\t\t_logSeg;\n \n! \tif (MyXactMadeXLogEntry)\n \t\telog(ERROR, \"CreateCheckPoint: cannot be called inside transaction block\");\n \n \t/*\n***************\n*** 2972,2980 ****\n \n \t/*\n \t * Here we update the shared RedoRecPtr for future XLogInsert calls;\n! \t * this must be done while holding the insert lock.\n \t */\n! \tRedoRecPtr = XLogCtl->Insert.RedoRecPtr = checkPoint.redo;\n \n \t/*\n \t * Get UNDO record ptr - this is oldest of PROC->logRec values. We do\n--- 3001,3016 ----\n \n \t/*\n \t * Here we update the shared RedoRecPtr for future XLogInsert calls;\n! \t * this must be done while holding the insert lock AND the info_lck.\n \t */\n! \t{\n! \t\t/* use volatile pointer to prevent code rearrangement */\n! \t\tvolatile XLogCtlData *xlogctl = XLogCtl;\n! \n! \t\tSpinLockAcquire_NoHoldoff(&xlogctl->info_lck);\n! \t\tRedoRecPtr = xlogctl->Insert.RedoRecPtr = checkPoint.redo;\n! \t\tSpinLockRelease_NoHoldoff(&xlogctl->info_lck);\n! \t}\n \n \t/*\n \t * Get UNDO record ptr - this is oldest of PROC->logRec values. 
We do\n*** src/backend/bootstrap/bootstrap.c.orig\tTue Mar 12 07:56:32 2002\n--- src/backend/bootstrap/bootstrap.c\tThu Mar 14 20:19:51 2002\n***************\n*** 386,392 ****\n \t\t\t\tInitDummyProcess();\t\t/* needed to get LWLocks */\n \t\t\tCreateDummyCaches();\n \t\t\tCreateCheckPoint(false);\n! \t\t\tSetRedoRecPtr();\n \t\t\tproc_exit(0);\t\t/* done */\n \n \t\tcase BS_XLOG_STARTUP:\n--- 386,392 ----\n \t\t\t\tInitDummyProcess();\t\t/* needed to get LWLocks */\n \t\t\tCreateDummyCaches();\n \t\t\tCreateCheckPoint(false);\n! \t\t\tSetSavedRedoRecPtr(); /* pass redo ptr back to postmaster */\n \t\t\tproc_exit(0);\t\t/* done */\n \n \t\tcase BS_XLOG_STARTUP:\n*** src/backend/commands/sequence.c.orig\tTue Mar 12 07:56:35 2002\n--- src/backend/commands/sequence.c\tThu Mar 14 22:06:22 2002\n***************\n*** 286,291 ****\n--- 286,292 ----\n \tchar\t *seqname = get_seq_name(seqin);\n \tSeqTable\telm;\n \tBuffer\t\tbuf;\n+ \tPage\t\tpage;\n \tForm_pg_sequence seq;\n \tint64\t\tincby,\n \t\t\t\tmaxv,\n***************\n*** 316,321 ****\n--- 317,323 ----\n \n \tseq = read_info(\"nextval\", elm, &buf);\t\t/* lock page' buffer and\n \t\t\t\t\t\t\t\t\t\t\t\t * read tuple */\n+ \tpage = BufferGetPage(buf);\n \n \tlast = next = result = seq->last_value;\n \tincby = seq->increment_by;\n***************\n*** 331,341 ****\n \t\tlog--;\n \t}\n \n \tif (log < fetch)\n \t{\n! \t\tfetch = log = fetch - log + SEQ_LOG_VALS;\n \t\tlogit = true;\n \t}\n \n \twhile (fetch)\t\t\t\t/* try to fetch cache [+ log ] numbers */\n \t{\n--- 333,365 ----\n \t\tlog--;\n \t}\n \n+ \t/*\n+ \t * Decide whether we should emit a WAL log record. If so, force up\n+ \t * the fetch count to grab SEQ_LOG_VALS more values than we actually\n+ \t * need to cache. 
(These will then be usable without logging.)\n+ \t *\n+ \t * If this is the first nextval after a checkpoint, we must force\n+ \t * a new WAL record to be written anyway, else replay starting from the\n+ \t * checkpoint would fail to advance the sequence past the logged\n+ \t * values. In this case we may as well fetch extra values.\n+ \t */\n \tif (log < fetch)\n \t{\n! \t\t/* forced log to satisfy local demand for values */\n! \t\tfetch = log = fetch + SEQ_LOG_VALS;\n \t\tlogit = true;\n \t}\n+ \telse\n+ \t{\n+ \t\tXLogRecPtr\tredoptr = GetRedoRecPtr();\n+ \n+ \t\tif (XLByteLE(PageGetLSN(page), redoptr))\n+ \t\t{\n+ \t\t\t/* last update of seq was before checkpoint */\n+ \t\t\tfetch = log = fetch + SEQ_LOG_VALS;\n+ \t\t\tlogit = true;\n+ \t\t}\n+ \t}\n \n \twhile (fetch)\t\t\t\t/* try to fetch cache [+ log ] numbers */\n \t{\n***************\n*** 386,391 ****\n--- 410,418 ----\n \t\t}\n \t}\n \n+ \tlog -= fetch;\t\t\t\t/* adjust for any unfetched numbers */\n+ \tAssert(log >= 0);\n+ \n \t/* save info in local cache */\n \telm->last = result;\t\t\t/* last returned number */\n \telm->cached = last;\t\t\t/* last fetched number */\n***************\n*** 396,402 ****\n \t\txl_seq_rec\txlrec;\n \t\tXLogRecPtr\trecptr;\n \t\tXLogRecData rdata[2];\n- \t\tPage\t\tpage = BufferGetPage(buf);\n \n \t\txlrec.node = elm->rel->rd_node;\n \t\trdata[0].buffer = InvalidBuffer;\n--- 423,428 ----\n***************\n*** 417,431 ****\n \n \t\tPageSetLSN(page, recptr);\n \t\tPageSetSUI(page, ThisStartUpID);\n- \n- \t\tif (fetch)\t\t\t\t/* not all numbers were fetched */\n- \t\t\tlog -= fetch;\n \t}\n \n \t/* update on-disk data */\n \tseq->last_value = last;\t\t/* last fetched number */\n \tseq->is_called = true;\n- \tAssert(log >= 0);\n \tseq->log_cnt = log;\t\t\t/* how much is logged */\n \tEND_CRIT_SECTION();\n \n--- 443,453 ----\n*** src/backend/postmaster/postmaster.c.orig\tTue Mar 12 07:57:01 2002\n--- src/backend/postmaster/postmaster.c\tThu Mar 14 20:20:01 
2002\n***************\n*** 1683,1689 ****\n \t\t\t{\n \t\t\t\tcheckpointed = time(NULL);\n \t\t\t\t/* Update RedoRecPtr for future child backends */\n! \t\t\t\tGetRedoRecPtr();\n \t\t\t}\n \t\t}\n \t\telse\n--- 1683,1689 ----\n \t\t\t{\n \t\t\t\tcheckpointed = time(NULL);\n \t\t\t\t/* Update RedoRecPtr for future child backends */\n! \t\t\t\tGetSavedRedoRecPtr();\n \t\t\t}\n \t\t}\n \t\telse\n*** src/include/access/xlog.h.orig\tFri Nov 16 12:17:19 2001\n--- src/include/access/xlog.h\tThu Mar 14 20:20:51 2002\n***************\n*** 178,183 ****\n--- 178,185 ----\n extern StartUpID ThisStartUpID; /* current SUI */\n extern bool InRecovery;\n extern XLogRecPtr MyLastRecPtr;\n+ extern bool MyXactMadeXLogEntry;\n+ extern XLogRecPtr ProcLastRecEnd;\n \n /* these variables are GUC parameters related to XLOG */\n extern int\tCheckPointSegments;\n***************\n*** 205,212 ****\n extern void CreateCheckPoint(bool shutdown);\n extern void SetThisStartUpID(void);\n extern void XLogPutNextOid(Oid nextOid);\n! extern void SetRedoRecPtr(void);\n! extern void GetRedoRecPtr(void);\n \n /* in storage/ipc/sinval.c, but don't want to declare in sinval.h because\n * we'd have to include xlog.h into that ...\n--- 207,215 ----\n extern void CreateCheckPoint(bool shutdown);\n extern void SetThisStartUpID(void);\n extern void XLogPutNextOid(Oid nextOid);\n! extern void SetSavedRedoRecPtr(void);\n! extern void GetSavedRedoRecPtr(void);\n! extern XLogRecPtr GetRedoRecPtr(void);\n \n /* in storage/ipc/sinval.c, but don't want to declare in sinval.h because\n * we'd have to include xlog.h into that ...",
"msg_date": "Fri, 15 Mar 2002 09:39:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug #613: Sequence values fall back to previously chec "
},
{
"msg_contents": "Tom Lane wrote:\n> Attached is a patch against current CVS that fixes both of the known\n> problems with sequences: failure to flush XLOG after a transaction\n> that only does \"SELECT nextval()\", and failure to force a new WAL\n> record to be written on the first nextval after a checkpoint.\n> (The latter uses Vadim's idea of looking at the sequence page LSN.)\n> I haven't tested it really extensively, but it seems to cure the\n> reported problems.\n\nI can confirm that the patch fixes the problem shown in my simple test:\n\ntest=> create table test (x serial, y varchar(255));\nNOTICE: CREATE TABLE will create implicit sequence 'test_x_seq' for SERIAL column 'test.x'\nNOTICE: CREATE TABLE / UNIQUE will create implicit index 'test_x_key' for table 'test'\nCREATE\ntest=> insert into test (y) values ('lkjasdflkja sdfl;kj asdfl;kjasdf');\nINSERT 16561 1\ntest=> insert into test (y) values ('lkjasdflkja sdfl;kj asdfl;kjasdf');\nINSERT 16562 1\ntest=> insert into test (y) values ('lkjasdflkja sdfl;kj asdfl;kjasdf');\nINSERT 16563 1\n...\n\ntest=> select nextval('test_x_seq');\n nextval \n---------\n 22\n(1 row)\n\ntest=> checkpoint;\nCHECKPOINT\ntest=> insert into test (y) values ('lkjasdflkja sdfl;kj asdfl;kjasdf');\nINSERT 16582 1\ntest=> insert into test (y) values ('lkjasdflkja sdfl;kj asdfl;kjasdf');\nINSERT 16583 1\ntest=> insert into test (y) values ('lkjasdflkja sdfl;kj asdfl;kjasdf');\nINSERT 16584 1\n\n[ kill -9 to backend ]\n\n#$ sql test\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\ntest=> select nextval('test_x_seq');\n nextval \n---------\n 56\n(1 row)\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 15 Mar 2002 10:43:25 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug #613: Sequence values fall back to previously"
},
{
"msg_contents": "Tom Lane wrote:\n> 2. I renamed XLogCtl->RedoRecPtr to SavedRedoRecPtr, and renamed\n> the associated routines to SetSavedRedoRecPtr/GetSavedRedoRecPtr,\n> in hopes of reducing confusion.\n\nGood.\n\n> 3. I believe it'd now be possible to remove SavedRedoRecPtr and\n> SetSavedRedoRecPtr/GetSavedRedoRecPtr entirely, in favor of letting\n> the postmaster fetch the updated pointer with GetRedoRecPtr just\n> like a backend would. This would be cleaner and less code ... but\n> someone might object that it introduces a risk of postmaster hangup,\n> if some backend crashes whilst holding info_lck. I consider that\n> risk minuscule given the short intervals in which info_lck is held,\n> but it can't be denied that the risk is not zero. Thoughts?\n\nThe change sounds good to me.\n\n> Comments? Unless I hear objections I will patch this in current\n> and the 7.2 branch. (If we agree to remove SavedRedoRecPtr,\n> though, I don't think we should back-patch that change.)\n\nTotally agree.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 15 Mar 2002 10:44:16 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bug #613: Sequence values fall back to previously"
},
{
"msg_contents": "On Fri, 15 Mar 2002, Tom Lane wrote:\n\n> \"'Ben Grimm'\" <bgrimm@zaeon.com> writes:\n> > When these bugs are fixed there is still the issue of bug #3 that I \n> > came across. The one that I work around by resetting log_cnt to 0 when a \n> > backend initializes a sequence. It's this third bug that made the other \n> > two so apparent. Fixing them does not obviate the need to fix this one.\n> \n> What's bug #3? I don't recall a third issue.\n> \n\nThe problem I was seeing before is that when the postmaster was shutdown \nproperly, log_cnt in the sequence record was saved with whatever value it \nhad at the time. So when it loaded from disk it would have a value greater \nthan zero resulting in no XLogInsert until you'd exceded log_cnt calls to\nnextval. \n\nAFAICT, your patch fixes this problem, as I can't reproduce it now. \n\nThanks!\n\n-- Ben\n",
"msg_date": "Fri, 15 Mar 2002 09:44:35 -0600",
"msg_from": "\"'Ben Grimm'\" <bgrimm@zaeon.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug #613: Sequence values fall back to previously chec"
},
{
"msg_contents": "\"'Ben Grimm'\" <bgrimm@zaeon.com> writes:\n> On Fri, 15 Mar 2002, Tom Lane wrote:\n>> What's bug #3? I don't recall a third issue.\n\n> The problem I was seeing before is that when the postmaster was shutdown \n> properly, log_cnt in the sequence record was saved with whatever value it \n> had at the time.\n\nRight, it's supposed to do that.\n\n> So when it loaded from disk it would have a value greater \n> than zero resulting in no XLogInsert until you'd exceded log_cnt calls to\n> nextval. \n\nThis is the same as the post-checkpoint issue: we fix it by forcing an\nXLogInsert on the first nextval after a checkpoint (or system startup).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Mar 2002 11:03:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug #613: Sequence values fall back to previously chec "
},
{
"msg_contents": "(userland comment)\n\nOn Fri, Mar 15, 2002 at 01:05:33AM -0800, Vadim Mikheev wrote:\n| > But sequences should not be under transaction control. Can you\n| > safely rollback a sequence? No! The only way to ensure that would\n| ...\n| > Placing a restriction on an application that says it must treat the values\n| > returned from a sequence as if they might not be committed is absurd.\n| \n| Why? The fact that you are not able to rollback sequences does not\n| necessary mean that you are not required to perform commit to ensure\n| permanent storage of changes made to database.\n\nI use sequences to generate message identifiers for a simple\nexternal-to-database message passing system. I also use\nthem for file upload identifiers. In both cases, if the\nexternal action (message or file upload) succeeds, I commit; \notherwise I roll-back. I assume that the datbase won't give\nme a duplicate sequence... otherwise I'd have to find some\nother way go get sequences or I'd have duplicate messages\nor non-unique file identifiers.\n\nWith these changes is this assumption no longer valid? If\nso, this change will break alot of user programs.\n\n| And why? Just for convenience of << 1% applications which need\n| to use sequences in their own, non-database, external objects?\n\nI think you may be underestimating the amount of \"external resources\"\nwhich may be associated with a datbase object. Regardless, may of the\ndatabase features in PostgreSQL are there for 1% or less of the\nuser base... \n\nBest,\n\nClark\n\n-- \nClark C. Evans Axista, Inc.\nhttp://www.axista.com 800.926.5525\nXCOLLA Collaborative Project Management Software\n",
"msg_date": "Fri, 15 Mar 2002 20:54:02 -0500",
"msg_from": "\"Clark C . Evans\" <cce@clarkevans.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] Bug #613: Sequence values fall back to previously chec"
},
{
"msg_contents": "I do basically the same thing for files. Except I md5 a 4 character\nrandom string, and the sequence ID just incase I get the same one\ntwice -- as it's never been written in stone that I wouldn't -- not to\nmention the high number of requests for returning a sequence ID back\nto the pool on a rollback.\n\nAnyway, you might try using the OID rather than a sequence ID but if\nyou rollback the database commit due to failure of an action\nexternally, shouldn't you be cleaning up that useless external stuff\nas well?\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Clark C . Evans\" <cce@clarkevans.com>\nTo: \"Vadim Mikheev\" <vmikheev@sectorbase.com>\nCc: <pgsql-hackers@postgresql.org>\nSent: Friday, March 15, 2002 8:54 PM\nSubject: Re: [HACKERS] [BUGS] Bug #613: Sequence values fall back to\npreviously chec\n\n\n> (userland comment)\n>\n> On Fri, Mar 15, 2002 at 01:05:33AM -0800, Vadim Mikheev wrote:\n> | > But sequences should not be under transaction control. Can you\n> | > safely rollback a sequence? No! The only way to ensure that\nwould\n> | ...\n> | > Placing a restriction on an application that says it must treat\nthe values\n> | > returned from a sequence as if they might not be committed is\nabsurd.\n> |\n> | Why? The fact that you are not able to rollback sequences does not\n> | necessary mean that you are not required to perform commit to\nensure\n> | permanent storage of changes made to database.\n>\n> I use sequences to generate message identifiers for a simple\n> external-to-database message passing system. I also use\n> them for file upload identifiers. In both cases, if the\n> external action (message or file upload) succeeds, I commit;\n> otherwise I roll-back. I assume that the datbase won't give\n> me a duplicate sequence... 
otherwise I'd have to find some\n> other way go get sequences or I'd have duplicate messages\n> or non-unique file identifiers.\n>\n> With these changes is this assumption no longer valid? If\n> so, this change will break alot of user programs.\n>\n> | And why? Just for convenience of << 1% applications which need\n> | to use sequences in their own, non-database, external objects?\n>\n> I think you may be underestimating the amount of \"external\nresources\"\n> which may be associated with a datbase object. Regardless, may of\nthe\n> database features in PostgreSQL are there for 1% or less of the\n> user base...\n>\n> Best,\n>\n> Clark\n>\n> --\n> Clark C. Evans Axista, Inc.\n> http://www.axista.com 800.926.5525\n> XCOLLA Collaborative Project Management Software\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to\nmajordomo@postgresql.org\n>\n\n",
"msg_date": "Fri, 15 Mar 2002 21:02:25 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] Bug #613: Sequence values fall back to previously chec"
},
{
"msg_contents": "> | > Placing a restriction on an application that says it must treat the\nvalues\n> | > returned from a sequence as if they might not be committed is absurd.\n> |\n> | Why? The fact that you are not able to rollback sequences does not\n> | necessary mean that you are not required to perform commit to ensure\n> | permanent storage of changes made to database.\n>\n> I use sequences to generate message identifiers for a simple\n> external-to-database message passing system. I also use\n> them for file upload identifiers. In both cases, if the\n> external action (message or file upload) succeeds, I commit;\n> otherwise I roll-back. I assume that the datbase won't give\n> me a duplicate sequence... otherwise I'd have to find some\n\nSo can you do \"select nextval()\" in *separate* (committed)\ntransaction *before* external action and \"real\" transaction where\nyou store information (with sequence number) about external\naction in database?\n\nBEGIN;\nSELECT NEXTVAL();\nCOMMIT;\nBEGIN;\n-- Do external actions and store info in DB --\nCOMMIT/ROLLBACK;\n\nIs this totally unacceptable? Is it really *required* to call nextval()\nin *the same* transaction where you store info in DB? Why?\n\n> other way go get sequences or I'd have duplicate messages\n> or non-unique file identifiers.\n>\n> With these changes is this assumption no longer valid? If\n\n1. It's not valid to assume that sequences will not return duplicate\n numbers if there was no commit after nextval.\n2. It doesn't matter when sequence numbers are stored in\n database objects only.\n3. But if you're going to use sequence numbers in external objects\n you must (pre)fetch those numbers in separate committed\n transaction.\n\n(Can we have this in FAQ?)\n\n> so, this change will break alot of user programs.\n>\n> | And why? 
Just for convenience of << 1% applications which need\n> | to use sequences in their own, non-database, external objects?\n>\n> I think you may be underestimating the amount of \"external resources\"\n> which may be associated with a datbase object. Regardless, may of the\n> database features in PostgreSQL are there for 1% or less of the\n> user base...\n\nPlease note that I was talking about some *inconvenience*, not about\n*inability* of using sequence numbers externally (seems my words were\ntoo short). Above is how to do this. And though I agreed that it's not\nvery convenient/handy/cosy to *take care* and fetch numbers in\nseparate committed transaction, but it's required only in those special\ncases and I think it's better than do fsync() per each nextval() call what\nwould affect other users/applications where storing sequence numbers\noutside of database is not required.\n\nVadim\n\n\n",
"msg_date": "Fri, 15 Mar 2002 22:04:09 -0800",
"msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] Bug #613: Sequence values fall back to previously chec"
}
] |
[
{
"msg_contents": "Is there an easy way to test the lock on a user level lock without actually\nissuing the lock?\n\nI would like to use them, but there is only a LockAcquire() and\nLockRelease().. There is no LockTest()..\n\nI guess I could do:\n\nIF LockAcquire() == 0:\n \"locked\" do whatever if it is locked...\nELSE:\n LockRelease()\n \"unlocked\" do whatever since it was not locked in the first place..\n\nThis just seems to be an inefficient way of doing this...\n\nThanks,\nLance Ellinghaus\n\n",
"msg_date": "Fri, 15 Mar 2002 00:43:07 -0600",
"msg_from": "\"Lance Ellinghaus\" <lellinghaus@yahoo.com>",
"msg_from_op": true,
"msg_subject": "User Level Lock question"
},
{
"msg_contents": "\"Lance Ellinghaus\" <lellinghaus@yahoo.com> writes:\n> Is there an easy way to test the lock on a user level lock without actually\n> issuing the lock?\n\nWhy would you ever want to do such a thing? If you \"test\" the lock but\ndon't actually acquire it, someone else might acquire the lock half a\nmicrosecond after you look at it --- and then what does your test result\nmean? It's certainly unsafe to take any action based on assuming that\nthe lock is free.\n\nI suspect what you really want is a conditional acquire, which you can\nget (in recent versions) using the dontWait parameter to LockAcquire.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Mar 2002 10:11:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: User Level Lock question "
},
{
"msg_contents": "I know it does not sound like something that would need to be done, but here\nis why I am looking at doing this...\n\nI am trying to replace a low level ISAM database with PostgreSQL. The low\nlevel ISAM db allows locking a record during a read to allow Exclusive\naccess to the record for that process. If someone tries to do a READ\noperation on that record, it is skipped. I have to duplicate this\nfunctionality. The application also allows locking multiple records and then\nunlocking individual records or unlocking all of them at once. This cannot\nbe done easily with PostgreSQL unless I add a \"status\" field to the records\nand manage them. This can be done, but User Level Locks seem like a much\nbetter solution as they provide faster locking, no writes to the database,\nwhen the backend quits all locks are released automatically, and I could\nlock multiple records and then clear them as needed. They also exist outside\nof transactions!\n\nSo my idea was to use User Level Locks on records and then include a test on\nthe lock status in my SELECT statements to filter out any records that have\na User Level Lock on it. I don't need to set it during the query, just test\nif there is a lock to remove them from the query. When I need to do a true\nlock during the SELECT, I can do it with the supplied routines.\n\nDoes this make any more sense now or have I made it that much more\nconfusing?\n\nLance\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Lance Ellinghaus\" <lellinghaus@yahoo.com>\nCc: <pgsql-hackers@postgresql.org>\nSent: Friday, March 15, 2002 9:11 AM\nSubject: Re: [HACKERS] User Level Lock question\n\n\n> \"Lance Ellinghaus\" <lellinghaus@yahoo.com> writes:\n> > Is there an easy way to test the lock on a user level lock without\nactually\n> > issuing the lock?\n>\n> Why would you ever want to do such a thing? 
If you \"test\" the lock but\n> don't actually acquire it, someone else might acquire the lock half a\n> microsecond after you look at it --- and then what does your test result\n> mean? It's certainly unsafe to take any action based on assuming that\n> the lock is free.\n>\n> I suspect what you really want is a conditional acquire, which you can\n> get (in recent versions) using the dontWait parameter to LockAcquire.\n>\n> regards, tom lane\n\n",
"msg_date": "Fri, 15 Mar 2002 13:54:30 -0600",
"msg_from": "\"Lance Ellinghaus\" <lellinghaus@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: User Level Lock question "
},
{
"msg_contents": "Are you trying to do a select for update?\n\nGreg\n\n\nOn Fri, 2002-03-15 at 13:54, Lance Ellinghaus wrote:\n> I know it does not sound like something that would need to be done, but here\n> is why I am looking at doing this...\n> \n> I am trying to replace a low level ISAM database with PostgreSQL. The low\n> level ISAM db allows locking a record during a read to allow Exclusive\n> access to the record for that process. If someone tries to do a READ\n> operation on that record, it is skipped. I have to duplicate this\n> functionality. The application also allows locking multiple records and then\n> unlocking individual records or unlocking all of them at once. This cannot\n> be done easily with PostgreSQL unless I add a \"status\" field to the records\n> and manage them. This can be done, but User Level Locks seem like a much\n> better solution as they provide faster locking, no writes to the database,\n> when the backend quits all locks are released automatically, and I could\n> lock multiple records and then clear them as needed. They also exist outside\n> of transactions!\n> \n> So my idea was to use User Level Locks on records and then include a test on\n> the lock status in my SELECT statements to filter out any records that have\n> a User Level Lock on it. I don't need to set it during the query, just test\n> if there is a lock to remove them from the query. 
When I need to do a true\n> lock during the SELECT, I can do it with the supplied routines.\n> \n> Does this make any more sense now or have I made it that much more\n> confusing?\n> \n> Lance\n> \n> ----- Original Message -----\n> From: \"Tom Lane\" <tgl@sss.pgh.pa.us>\n> To: \"Lance Ellinghaus\" <lellinghaus@yahoo.com>\n> Cc: <pgsql-hackers@postgresql.org>\n> Sent: Friday, March 15, 2002 9:11 AM\n> Subject: Re: [HACKERS] User Level Lock question\n> \n> \n> > \"Lance Ellinghaus\" <lellinghaus@yahoo.com> writes:\n> > > Is there an easy way to test the lock on a user level lock without\n> actually\n> > > issuing the lock?\n> >\n> > Why would you ever want to do such a thing? If you \"test\" the lock but\n> > don't actually acquire it, someone else might acquire the lock half a\n> > microsecond after you look at it --- and then what does your test result\n> > mean? It's certainly unsafe to take any action based on assuming that\n> > the lock is free.\n> >\n> > I suspect what you really want is a conditional acquire, which you can\n> > get (in recent versions) using the dontWait parameter to LockAcquire.\n> >\n> > regards, tom lane\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html",
"msg_date": "15 Mar 2002 15:03:11 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: User Level Lock question"
},
{
"msg_contents": "On Fri, 2002-03-15 at 14:54, Lance Ellinghaus wrote:\n> I know it does not sound like something that would need to be done, but here\n> is why I am looking at doing this...\n> \n> I am trying to replace a low level ISAM database with PostgreSQL. The low\n> level ISAM db allows locking a record during a read to allow Exclusive\n> access to the record for that process. If someone tries to do a READ\n> operation on that record, it is skipped.\n\nIf the locked record is skipped, how can the application be sure it is\ngetting a consistent view of the data?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "15 Mar 2002 17:24:02 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: User Level Lock question"
},
{
"msg_contents": "On Fri, 2002-03-15 at 16:24, Neil Conway wrote:\n> On Fri, 2002-03-15 at 14:54, Lance Ellinghaus wrote:\n> > I know it does not sound like something that would need to be done, but here\n> > is why I am looking at doing this...\n> > \n> > I am trying to replace a low level ISAM database with PostgreSQL. The low\n> > level ISAM db allows locking a record during a read to allow Exclusive\n> > access to the record for that process. If someone tries to do a READ\n> > operation on that record, it is skipped.\n> \n> If the locked record is skipped, how can the application be sure it is\n> getting a consistent view of the data?\n> \n> Cheers,\n> \n> Neil\n> \n\n\nYa, that's what I'm trying to figure out.\n\nIt sounds like either he's doing what equates to a select for update or\nmore or less needs a visibility attribute for the row in question. \nEither way, perhaps he should share more information on what the end\ngoal is so we can better address any changes in idiom that better\nreflect a relational database.\n\nGreg",
"msg_date": "15 Mar 2002 16:41:32 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: User Level Lock question"
},
{
"msg_contents": "The application actually does not want nor need a consistent view of the\ndata. It is expecting that records that are locked will not be viewed at\nall. The locks are normally held for VERY short periods of time. The fact\nthat the application is expecting locked records not to be viewed is causing\nme problems because under PostgreSQL this is not easy to do. Even if I lock\na record using \"SELECT ... FOR UPDATE\", I can still do a SELECT and read it.\nI need to effectively do a \"SELECT ... FOR UPDATE\" and make the other\nreading clients skip that record completely.\n\nI can do this with a flag column, but this requires the disk access to do\nthe UPDATE and if the client/backend quits/crashes with outstanding records\nmarked, they are locked.\n\nThe User Level Locks look like a great way to do this as I can set a lock\nvery quickly without disk access and if the client/backend quits/crashes,\nthe locks are automatically removed.\n\nI can set the User Level Lock on a record using the supplied routines in the\ncontrib directory when I do a SELECT, and can reset the lock by doing an\nUPDATE or SELECT as well.\nBut without the ability to test for an existing lock (without ever setting\nit) I cannot skip the locked records.\n\nI would set up all the SELECTs in thunking layer (I cannot rewrite the\napplication, only replace the ISAM library with a thunking library that\nconverts the ISAM calls to PostgreSQL calls) to look like the following:\n\nSELECT col1, col2, col3\nFROM table\nWHERE\n col1 = 'whatever'\n AND\n col2 = 'whatever'\n AND\n user_lock_test(oid) = 0;\n\nuser_lock_test() would return 0 if there is no current lock, and 1 if there\nis.\n\nDoes this clear it up a little more or make it more complicated. The big\nproblem is the way that the ISAM code acts compared to a REAL RDBMS. 
If this\napplication was coded with a RDBMS in mind, things would be much easier.\n\nLance\n\n----- Original Message -----\nFrom: \"Neil Conway\" <nconway@klamath.dyndns.org>\nTo: \"Lance Ellinghaus\" <lellinghaus@yahoo.com>\nCc: <pgsql-hackers@postgresql.org>\nSent: Friday, March 15, 2002 4:24 PM\nSubject: Re: [HACKERS] User Level Lock question\n\n\n> On Fri, 2002-03-15 at 14:54, Lance Ellinghaus wrote:\n> > I know it does not sound like something that would need to be done, but\nhere\n> > is why I am looking at doing this...\n> >\n> > I am trying to replace a low level ISAM database with PostgreSQL. The\nlow\n> > level ISAM db allows locking a record during a read to allow Exclusive\n> > access to the record for that process. If someone tries to do a READ\n> > operation on that record, it is skipped.\n>\n> If the locked record is skipped, how can the application be sure it is\n> getting a consistent view of the data?\n>\n> Cheers,\n>\n> Neil\n>\n> --\n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n\n",
"msg_date": "Fri, 15 Mar 2002 21:45:19 -0600",
"msg_from": "\"Lance Ellinghaus\" <lellinghaus@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: User Level Lock question"
},
{
"msg_contents": "On Fri, 2002-03-15 at 21:45, Lance Ellinghaus wrote:\n> The application actually does not want nor need a consistent view of the\n> data. It is expecting that records that are locked will not be viewed at\n> all. The locks are normally held for VERY short periods of time. The fact\n> that the application is expecting locked records not to be viewed is causing\n\nYou keep asserting that these \"viewed\" records qualify as being called\nlocked. It sounds like a record attribute to me. Furthermore, it\nsounds like that attribute reflects a record's visibility and not if\nit's locked. Locks are generally used to limit accessibility rather\nthan visibility. This, I think, seems like the primary source of issue\nyou're having with your desired implementation.\n\n> me problems because under PostgreSQL this is not easy to do. Even if I lock\n> a record using \"SELECT ... FOR UPDATE\", I can still do a SELECT and read it.\n> I need to effectively do a \"SELECT ... FOR UPDATE\" and make the other\n> reading clients skip that record completely.\n> \n> I can do this with a flag column, but this requires the disk access to do\n> the UPDATE and if the client/backend quits/crashes with outstanding records\n> marked, they are locked.\n\nThat's what transactions are for. If you have a failure, the\ntransaction should be rolled back. The visibility marker would be\nrestored to its original visible state.\n\n> \n> The User Level Locks look like a great way to do this as I can set a lock\n> very quickly without disk access and if the client/backend quits/crashes,\n> the locks are automatically removed.\n\nBut do you really need to lock it or hide it or both? If both, you may\nwant to consider doing an update inside of a transaction or even a\nselect for update if it fits your needs. Transactions are your friend. \n:) I'm assuming you're needing to lock it because you are needing to\nupdate the row at some point in time. 
If you are not wanting to update\nit, then you are really needing to hide it, not lock it.\n\n> \n> I can set the User Level Lock on a record using the supplied routines in the\n> contrib directory when I do a SELECT, and can reset the lock by doing an\n> UPDATE or SELECT as well.\n> But without the ability to test for an existing lock (without ever setting\n> it) I cannot skip the locked records.\n> \n> I would set up all the SELECTs in thunking layer (I cannot rewrite the\n> application, only replace the ISAM library with a thunking library that\n> converts the ISAM calls to PostgreSQL calls) to look like the following:\n> \n> SELECT col1, col2, col3\n> FROM table\n> WHERE\n> col1 = 'whatever'\n> AND\n> col2 = 'whatever'\n> AND\n> user_lock_test(oid) = 0;\n> \n> user_lock_test() would return 0 if there is no current lock, and 1 if there\n> is.\n\n\nSELECT col1, col2, col3\nFROM table\nWHERE\n\tcol1 = 'whatever'\n\tAND\n\tcol2 = 'whatever'\n\tAND\n\tvisible = '1' ;\n\n\n> \n> Does this clear it up a little more or make it more complicated. The big\n> problem is the way that the ISAM code acts compared to a REAL RDBMS. If this\n> application was coded with a RDBMS in mind, things would be much easier.\n> \n\nI understand that...and that can be hard...but sometimes semantics and\nidioms have to be adjusted to allow for an ISAM to RDBMS migration.\n\n\nGreg",
"msg_date": "15 Mar 2002 22:35:27 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: User Level Lock question"
},
{
"msg_contents": "\n----- Original Message -----\nFrom: \"Lance Ellinghaus\" <lellinghaus@yahoo.com>\nTo: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nCc: <pgsql-hackers@postgresql.org>\nSent: Saturday, March 16, 2002 6:54 AM\nSubject: Re: [HACKERS] User Level Lock question\n\n\n> I know it does not sound like something that would need to be done, but\nhere\n> is why I am looking at doing this...\n>\n> I am trying to replace a low level ISAM database with PostgreSQL. The low\n> level ISAM db allows locking a record during a read to allow Exclusive\n> access to the record for that process. If someone tries to do a READ\n> operation on that record, it is skipped. I have to duplicate this\n> functionality. The application also allows locking multiple records and\nthen\n> unlocking individual records or unlocking all of them at once. This cannot\n> be done easily with PostgreSQL unless I add a \"status\" field to the\nrecords\n> and manage them. This can be done, but User Level Locks seem like a much\n> better solution as they provide faster locking, no writes to the database,\n> when the backend quits all locks are released automatically, and I could\n> lock multiple records and then clear them as needed. They also exist\noutside\n> of transactions!\n>\n> So my idea was to use User Level Locks on records and then include a test\non\n> the lock status in my SELECT statements to filter out any records that\nhave\n> a User Level Lock on it. I don't need to set it during the query, just\ntest\n> if there is a lock to remove them from the query. When I need to do a true\n> lock during the SELECT, I can do it with the supplied routines.\n>\nIn INFORMIX you have a similar option except that you have the choice to\ndecide whether the other client blocks or continues, but in any case it\nreturns an error status. You even can set a delay while you accept to be\nblocked, and the lock can be set on database, table or record level. 
We use\ntable locking to speed up some time consuming processings.\nI guess it would be better to have at least an error code returned. The\napplication can then choose to ignore the error code.\n\n> Does this make any more sense now or have I made it that much more\n> confusing?\n>\n> Lance\n>\n> ----- Original Message -----\n> From: \"Tom Lane\" <tgl@sss.pgh.pa.us>\n> To: \"Lance Ellinghaus\" <lellinghaus@yahoo.com>\n> Cc: <pgsql-hackers@postgresql.org>\n> Sent: Friday, March 15, 2002 9:11 AM\n> Subject: Re: [HACKERS] User Level Lock question\n>\n>\n> > \"Lance Ellinghaus\" <lellinghaus@yahoo.com> writes:\n> > > Is there an easy way to test the lock on a user level lock without\n> actually\n> > > issuing the lock?\n> >\n> > Why would you ever want to do such a thing? If you \"test\" the lock but\n> > don't actually acquire it, someone else might acquire the lock half a\n> > microsecond after you look at it --- and then what does your test result\n> > mean? It's certainly unsafe to take any action based on assuming that\n> > the lock is free.\n> >\n> > I suspect what you really want is a conditional acquire, which you can\n> > get (in recent versions) using the dontWait parameter to LockAcquire.\n> >\n> > regards, tom lane\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\n",
"msg_date": "Mon, 18 Mar 2002 10:34:30 +1100",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": false,
"msg_subject": "Re: User Level Lock question"
}
] |
[
{
"msg_contents": "Do U know if pgSQL supports XML ?\n4 example : BROWSER->APACHE JSERV->SERVLET->DB->xsl+xml->HTML\nDo U know any open source DB doing that?\nThanks a lot\n\n",
"msg_date": "Fri, 15 Mar 2002 14:44:09 +0100",
"msg_from": "longjohn <longjohn@katamail.com>",
"msg_from_op": true,
"msg_subject": "XML"
},
{
"msg_contents": "Please search the archives and then direct these sorts of questions to the\npgsql-general list. This list relates to the development of PostgreSQL.\n\nGavin\n\nOn Fri, 15 Mar 2002, longjohn wrote:\n\n> Do U know if pgSQL supports XML ?\n> 4 example : BROWSER->APACHE JSERV->SERVLET->DB->xsl+xml->HTML\n> Do U know any open source DB doing that?\n> Thanks a lot\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n",
"msg_date": "Thu, 21 Mar 2002 01:05:14 +1100 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: XML"
},
{
"msg_contents": "If it's a servlet calling the database in your example, the below can\nbe accomplished through the use of procedures which create the XML in\nquestion and return it.\n\nIe. select xmlGetUser(userid);\n\nYou have to write xmlGetUser() to take in the userid and return the\nxml required for it. I see no advantage to generating the xml in the\ndb rather than in the servlet.\n\nSoap or XML-RPC on the other hand could be a useful tool for the\ndatabase to understand directly -- but thats certainly not going to be\nfast.\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"longjohn\" <longjohn@katamail.com>\nTo: <pgsql-hackers@postgresql.org>\nSent: Friday, March 15, 2002 8:44 AM\nSubject: [HACKERS] XML\n\n\n> Do U know if pgSQL supports XML ?\n> 4 example : BROWSER->APACHE JSERV->SERVLET->DB->xsl+xml->HTML\n> Do U know any open source DB doing that?\n> Thanks a lot\n>\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Wed, 20 Mar 2002 09:20:47 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: XML"
},
{
"msg_contents": "On Fri, 2002-03-15 at 15:44, longjohn wrote:\n> Do U know if pgSQL supports XML ?\n> 4 example : BROWSER->APACHE JSERV->SERVLET->DB->xsl+xml->HTML\n\nNot natively, you need an extra tier that will do (DB->xml)\n\n> Do U know any open source DB doing that?\n\nCheck http://dmoz.org/Computers/Software/Databases/XML/\n\n--------------\nHannu\n\n\n",
"msg_date": "20 Mar 2002 18:40:16 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: XML"
},
{
"msg_contents": "On Fri, 15 Mar 2002, longjohn wrote:\n\n> Do U know if pgSQL supports XML ?\n> 4 example : BROWSER->APACHE JSERV->SERVLET->DB->xsl+xml->HTML\n\nThere's some xml stuff in the contrib directory of the PostgreSQL\ndistribution.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 20 Mar 2002 11:56:08 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: XML"
},
{
"msg_contents": "> You have to write xmlGetUser() to take in the userid and return the\n> xml required for it. I see no advantage to generating the xml in the\n> db rather than in the servlet.\n\nAs a counterexample, my PKIX extensions defined an \"XML\" datatype\nthat could be used to generate XML instead of the standard format.\nE.g.,\n\n select cert as xml from certs where ....;\n\nBut this was an exceptional case - for many of the objects the\n\"standard\" format is a base-64 encoded ASN.1 string, but with\nXML I need to extract the fields *and* still include the object\nas a base-64 encoded ASN.1 string. It was *much* easier to just \ngenerate it in the backend than to do it at the db->xml level.\n\nMore generally, I think we need to keep in mind an important \ndistinction here. Most of the time XML represents the contents\nof an entire tuple, and each field corresponds to an attribute.\nIn these cases an external db->xml layer makes the most sense.\n\nBut with extensions it's common to have complex objects in a\nsingle attribute, and there may be a standard way to represent\nthe object in XML. (E.g., all of my XML conversions are extracted\nfrom the proposed \"Signature\" schema at W3C.) In these cases\nit may make more sense for the extension to provide its own\nXML mechanisms, but it would be nice if there was a standard \nway of handling this.\n\nMy suggestion was mentioned above. Just make \"xml\" a standard\ndata type. It can be transparently converted to/from text, but\nyou can define functions that return \"xml\" (or accept xml) and\nuse casts to specify when you want XML instead of the normal\nformat for that attribute.\n\n",
"msg_date": "Wed, 20 Mar 2002 10:40:49 -0700 (MST)",
"msg_from": "Bear Giles <bgiles@coyotesong.com>",
"msg_from_op": false,
"msg_subject": "Re: XML"
},
{
"msg_contents": "Hi,\n\nI have something that do exactly what you are looking for (and more)\nbut damned it's perl stuff !\n\nYou just have to set your SQL queries into XML files, associate\nXSL template to them and run a cgi perl script througth apache (with or\nwithout mod_perl). You can also add perl procedures to process your data\nbefore output. I also parse cgi parameters... and more.\n\nIf you're interested let me know !\n\nlongjohn wrote:\n\n> Do U know if pgSQL supports XML ?\n> 4 example : BROWSER->APACHE JSERV->SERVLET->DB->xsl+xml->HTML\n> Do U know any open source DB doing that?\n> Thanks a lot\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n",
"msg_date": "Wed, 20 Mar 2002 20:14:23 +0100",
"msg_from": "Gilles DAROLD <gilles@darold.net>",
"msg_from_op": false,
"msg_subject": "Re: XML"
}
] |
[
{
"msg_contents": "> Attached is a patch against current CVS that fixes both of the known\n> problems with sequences: failure to flush XLOG after a transaction\n\nGreat! Thanks... and sorry for missing these cases year ago -:)\n\nVadim\n",
"msg_date": "Fri, 15 Mar 2002 10:36:16 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Re: Bug #613: Sequence values fall back to previously chec"
}
] |
[
{
"msg_contents": "I'm inclined to go to that thesis defense. Sounds\nquite interesting :)\n\n-s\n\n----- Original Message ----- \nFrom: \"MONKIEWICZ Halina\" <halina@cs.concordia.ca>\nTo: <csseminar@cs.concordia.ca>; <khendek@ece.concordia.ca>\nSent: Friday, March 15, 2002 1:14 PM\nSubject: [General] master thesis defence, Xin Shen, Wed. March 27, 16:00, H 601\n\n> MASTER THESIS DEFENCE\n> \n> \n> SPEAKER: Xin Shen\n> \n> TITLE: An Architecture Tradeoff Analysis of PostgreSQL \n> System\n> \n> DATE: Wednesday, March 27, 2002\n> \n> TIME: 16:00\n> \n> PLACE: H 601\n> \n> \n> ABSTRACT\n> \n> \n> The Architecture Tradeoff Analysis Method(ATAM) was developed by R.Kazman, M.Klein \n> and P.Clements to evaluate early architectural decisions of software development \n> in terms of quality attributes to avoid expensive architectural mistakes. \n> \n> PostgreSQL developed in the University of California was a pioneer of many modern \n> RDBMS systems. It is now an open source project. \n> \n> We applied ATAM to the postgreSQL project in light of identifying the \n> architectural features and possible pitfalls in the architecture of similar DBMS \n> systems. The result shows that we identified some sensitive points, tradeoff \n> points and risks of postgreSQL although we did have some difficulties when \n> extracting precise quality attributes. This work proves the effectiveness of \n> ATAM method and explores the possible way to use ATAM on general purpose software.\n\n",
"msg_date": "Fri, 15 Mar 2002 13:49:21 -0500",
"msg_from": "\"Serguei Mokhov\" <mokhov@cs.concordia.ca>",
"msg_from_op": true,
"msg_subject": "FYI: Fw: [General] master thesis defence, Xin Shen, Wed. March 27,\n\t16:00, H 601"
}
] |
[
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> On Fri, 15 Mar 2002, Thomas Lockhart wrote:\n>> But I *really* don't see the benefit of that <table>(<table>.<col>)\n>> syntax. Especially when it cannot (?? we need a counterexample) lead to\n>> any additional interesting beneficial behavior.\n\n> The only benefit I can come up with is existing stuff written under\n> the impression that it's acceptable.\n\nThat's the only benefit I can see either --- but it's not negligible.\nEspecially not if the majority of other DBMSes will take this syntax.\n\nI was originally against adding any such thing, but I'm starting to\nlean in the other direction.\n\nI'd want it to error out on \"INSERT foo (bar.col)\", though ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Mar 2002 15:06:45 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "Re: insert statements "
},
{
"msg_contents": "On Fri, 15 Mar 2002, Tom Lane wrote:\n\n> Vince Vielhaber <vev@michvhf.com> writes:\n> > On Fri, 15 Mar 2002, Thomas Lockhart wrote:\n> >> But I *really* don't see the benefit of that <table>(<table>.<col>)\n> >> syntax. Especially when it cannot (?? we need a counterexample) lead to\n> >> any additional interesting beneficial behavior.\n>\n> > The only benefit I can come up with is existing stuff written under\n> > the impression that it's acceptable.\n>\n> That's the only benefit I can see either --- but it's not negligible.\n> Especially not if the majority of other DBMSes will take this syntax.\n>\n> I was originally against adding any such thing, but I'm starting to\n> lean in the other direction.\n>\n> I'd want it to error out on \"INSERT foo (bar.col)\", though ;-)\n\nSo would I.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Sun, 17 Mar 2002 22:36:54 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "Re: insert statements "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> I'd want it to error out on \"INSERT foo (bar.col)\", though ;-)\n> \n\nAnd on \"INSERT foo (bar.foo.col)\" as well.\n\nThis means we will have to take this check down to the analyze\nphase (where the schema where foo is located is finally known,\nif it was not specified explicitly).\n\nWe could easily take \"INSERT bar.foo (bar.foo.col)\" but the\nabove one is trouble.\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Mon, 18 Mar 2002 11:58:23 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: insert statements"
}
] |
[
{
"msg_contents": "Right now, we support a secondary password file reference in\npg_hba.conf.\n\nIf the file contains only usernames, we assume that it is the list of\nvalid usernames for the connection. If it contains usernames and\npasswords, like /etc/passwd, we assume these are the passwords to be\nused for the connection. Such connections must pass the unencrypted\npasswords over the wire so they can be matched against the file, hence\n'password' encryption in pg_hba.conf.\n\nIs it worth keeping this password capability in 7.3? It requires\n'password' in pg_hba.conf, which is not secure, and I am not sure how\nmany OS's still use crypt in /etc/passwd anyway. Removing the feature\nwould clear up pg_hba.conf options a little.\n\nThe ability to specify usernames in pg_hba.conf or in a secondary file\nis being added to pg_hba.conf anyway, so it is really only the password\npart that we have to decide to keep or remove.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 15 Mar 2002 17:46:09 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "pg_hba.conf and secondary password file"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Right now, we support a secondary password file reference in\n> pg_hba.conf.\n> Is it worth keeping this password capability in 7.3?\n\nI'd not cry if it went away. We could get rid of pg_passwd, which\nis an ugly mess...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Mar 2002 18:18:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_hba.conf and secondary password file "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Right now, we support a secondary password file reference in\n> > pg_hba.conf.\n> > Is it worth keeping this password capability in 7.3?\n> \n> I'd not cry if it went away. We could get rid of pg_passwd, which\n> is an ugly mess...\n\nYes, that was my thinking too. Seems like a good time for housecleaning\npg_hba.conf.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 15 Mar 2002 18:22:35 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_hba.conf and secondary password file"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > Is it worth keeping this password capability in 7.3? It requires\n> > 'password' in pg_hba.conf, which is not secure, and I am not sure how\n> > many OS's still use crypt in /etc/passwd anyway. Removing the feature\n> > would clear up pg_hba.conf options a little.\n> \n> Personally, I don't care. But I'm concerned that some people might use\n> this to support different passwords for different databases. Not sure why\n> you'd want that. Maybe send an advisory to -general to see.\n\nYes, I will send to general. I wanted to get feedback from hackers\nfirst --- I will send now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 15 Mar 2002 19:50:57 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_hba.conf and secondary password file"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Is it worth keeping this password capability in 7.3? It requires\n> 'password' in pg_hba.conf, which is not secure, and I am not sure how\n> many OS's still use crypt in /etc/passwd anyway. Removing the feature\n> would clear up pg_hba.conf options a little.\n\nPersonally, I don't care. But I'm concerned that some people might use\nthis to support different passwords for different databases. Not sure why\nyou'd want that. Maybe send an advisory to -general to see.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 15 Mar 2002 19:54:03 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_hba.conf and secondary password file"
}
] |
[
{
"msg_contents": "I had an idea on a possible way to increase performance under query\nloads with a lot of short-term locking. I haven't looked at the\nimplementation of this at all, so if someone could tell me why this\nwouldn't work, that would save me some time ;-)\n\nAFAIK, current Postgres behavior when processing SELECT queries is like\nthis:\n\n\t(1) for each tuple in the result set, try to get an\n AccessShareLock on it\n\t\n\t(2) if it can't acquire the lock, wait until it can\n\n\t(3) read the data on the previously locked row and continue\n onward\n\ni.e. when it encounters a locked row, it waits for the lock to be\nreleased and then continues the scan.\n\nInstead, why not modify the behavior in (2) so that instead of waiting\nfor the lock to be released, Postgres would instead continue the scan,\nkeeping a note that it has skipped over the locked tuple. When it has\nfinished the scan (and so it has the entire result set, except for the\nlocked tuples), it should return to each of the previously locked\ntuples. Since most locks are relatively short-term (AFAIK), there's a\ngood chance that during the time it took to scan the rest of the table,\nthe lock on the tuple has been released -- so it can read the value and\nadd it into the result set at the appropriate place without sitting\nidle while waiting for the lock to be released.\n\nThis is probably stupid for some reason: can someone let me know what\nthat reason is? ;-)\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "15 Mar 2002 17:55:24 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": true,
"msg_subject": "question on index access"
},
{
"msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> AFAIK, current Postgres behavior when processing SELECT queries is like\n> this:\n> \t(1) for each tuple in the result set, try to get an\n> AccessShareLock on it\n\nUh, no. There are no per-tuple locks, other than SELECT FOR UPDATE\nwhich doesn't affect SELECT at all. AccessShareLock is taken on the\nentire table, mainly as a means of ensuring the table doesn't disappear\nfrom under us.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Mar 2002 18:23:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: question on index access "
},
{
"msg_contents": "On Fri, 2002-03-15 at 18:23, Tom Lane wrote:\n> Neil Conway <nconway@klamath.dyndns.org> writes:\n> > AFAIK, current Postgres behavior when processing SELECT queries is like\n> > this:\n> > \t(1) for each tuple in the result set, try to get an\n> > AccessShareLock on it\n> \n> Uh, no. There are no per-tuple locks, other than SELECT FOR UPDATE\n> which doesn't affect SELECT at all. AccessShareLock is taken on the\n> entire table, mainly as a means of ensuring the table doesn't disappear\n> from under us.\n\nAh, that makes sense. My mistake -- thanks for the info.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "15 Mar 2002 18:32:26 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": true,
"msg_subject": "Re: question on index access"
}
] |
[
{
"msg_contents": "I know it's probably a long shot, but has anyone coded statistical\ndistributions as functions in PostgreSQL? Specifically, I'm looking\nfor a function to calculate the cumulative F distribution.\n\nBy the way, I know that I can do /df at the psql command line to list\nthe available functions. Is there a help function or better\ndescription for a given function? Specifically, I'd like to know what\narray_in and array_out do.\n\nThanks\n-Tony\n",
"msg_date": "15 Mar 2002 17:20:09 -0800",
"msg_from": "reina@nsi.edu (Tony Reina)",
"msg_from_op": true,
"msg_subject": "Anyone have a SQL code for cumulative F distribution function?"
},
{
"msg_contents": "Tony Reina wrote:\n> I know it's probably a long shot, but has anyone coded statistical\n> distributions as functions in PostgreSQL? Specifically, I'm looking\n> for a function to calculate the cumulative F distribution.\n> \n> By the way, I know that I can do /df at the psql command line to list\n> the available functions. Is there a help function or better\n> description for a given function? Specifically, I'd like to know what\n> array_in and array_out do.\n> \n> Thanks\n> -Tony\n> \n\nNot quite what you asked for, but there *is* a library which allows you \nto query data from a PostgreSQL database from within R, called RPgSQL. \nIts available on Sourceforge and from the R Archive: \nhttp://cran.r-project.org/\n\nJoe\n\n\n\n",
"msg_date": "Fri, 15 Mar 2002 18:30:21 -0800",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Anyone have a SQL code for cumulative F distribution function?"
}
] |
[
{
"msg_contents": "Last year we had a drawn out discussion about this and I created a patch \nfor it. I never noticed that the patch didn't go in until I installed \n7.2 the other day and realised that fe-connect.c never was fixed.\n\nHere is the patch again. It is against CVS 3/16/2002. This time I only \nrewrote the connect procedure at line 912, I leave it up to the regular \nhackers to copy it's functionality to the SSL procedure just below it.\n\nIn summary, if a software writer implements timer events or other events \nwhich generate a signal with a timing fast enough to occur while libpq \nis inside connect(), then connect returns -EINTR. The code following \nthe connect call does not handle this and generates an error message. \n The sum result is that the pg_connect() fails. If the timer or other \nevent is right on the window of the connect() completion time, the \npg_connect() may appear to work sporadically. If the event is too slow, \npg_connect() will appear to always work and if the event is too fast, \npg_connect() will always fail.\n\nDavid",
"msg_date": "Sat, 16 Mar 2002 00:44:25 -0500",
"msg_from": "David Ford <david+cert@blue-labs.org>",
"msg_from_op": true,
"msg_subject": "[patch] fe-connect.c doesn't handle EINTR correctly"
},
{
"msg_contents": "David, sorry you patch didn't make it into 7.2.X. That whole EINTR\ndiscussion was quite complicated so I am not surprised we missed it.\n\nThe attached patch implements your ENITR test in cases that seems to\nneed it. I have followed the method we used for ENITRY in fe-misc.c.\n\n\n---------------------------------------------------------------------------\n\nDavid Ford wrote:\n> Last year we had a drawn out discussion about this and I created a patch \n> for it. I never noticed that the patch didn't go in until I installed \n> 7.2 the other day and realised that fe-connect.c never was fixed.\n> \n> Here is the patch again. It is against CVS 3/16/2002. This time I only \n> rewrote the connect procedure at line 912, I leave it up to the regular \n> hackers to copy it's functionality to the SSL procedure just below it.\n> \n> In summary, if a software writer implements timer events or other events \n> which generate a signal with a timing fast enough to occur while libpq \n> is inside connect(), then connect returns -EINTR. The code following \n> the connect call does not handle this and generates an error message. \n> The sum result is that the pg_connect() fails. If the timer or other \n> event is right on the window of the connect() completion time, the \n> pg_connect() may appear to work sporadically. 
If the event is too slow, \n> pg_connect() will appear to always work and if the event is too fast, \n> pg_connect() will always fail.\n> \n> David\n> \n\n> Index: src/interfaces/libpq/fe-connect.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/interfaces/libpq/fe-connect.c,v\n> retrieving revision 1.181\n> diff -u -r1.181 fe-connect.c\n> --- src/interfaces/libpq/fe-connect.c\t2001/11/11 02:09:05\t1.181\n> +++ src/interfaces/libpq/fe-connect.c\t2002/03/16 05:17:47\n> @@ -909,29 +909,48 @@\n> \t * Thus, we have to make arrangements for all eventualities.\n> \t * ----------\n> \t */\n> -\tif (connect(conn->sock, &conn->raddr.sa, conn->raddr_len) < 0)\n> -\t{\n> -\t\tif (SOCK_ERRNO == EINPROGRESS || SOCK_ERRNO == EWOULDBLOCK || SOCK_ERRNO == 0)\n> -\t\t{\n> -\t\t\t/*\n> -\t\t\t * This is fine - we're in non-blocking mode, and the\n> -\t\t\t * connection is in progress.\n> -\t\t\t */\n> -\t\t\tconn->status = CONNECTION_STARTED;\n> -\t\t}\n> -\t\telse\n> -\t\t{\n> -\t\t\t/* Something's gone wrong */\n> -\t\t\tconnectFailureMessage(conn, SOCK_ERRNO);\n> -\t\t\tgoto connect_errReturn;\n> +\tdo {\n> +\t\tint e;\n> +\t\te=connect(conn->sock, &conn->raddr.sa, conn->raddr_len)\n> +\n> +\t\tif(e < 0) {\n> +\t\t\tswitch (e) {\n> +\t\t\t\tcase EINTR:\n> +\t\t\t\t\t/*\n> +\t\t\t\t\t * Interrupted by a signal, keep trying. This handling is\n> +\t\t\t\t\t * required because the user may have turned on signals in\n> +\t\t\t\t\t * his program. Previously, libpq would erronously fail to\n> +\t\t\t\t\t * connect if the user's timer event fired and interrupted\n> +\t\t\t\t\t * this syscall. 
It is important that we don't try to sleep\n> +\t\t\t\t\t * here because this may cause havoc with the user program.\n> +\t\t\t\t\t */\n> +\t\t\t\t\tcontinue;\n> +\t\t\t\t\tbreak;\n> +\t\t\t\tcase 0:\n> +\t\t\t\tcase EINPROGRESS:\n> +\t\t\t\tcase EWOULDBLOCK:\n> +\t\t\t\t\t/*\n> +\t\t\t\t\t * This is fine - we're in non-blocking mode, and the\n> +\t\t\t\t\t * connection is in progress.\n> +\t\t\t\t\t */\n> +\t\t\t\t\tconn->status = CONNECTION_STARTED;\n> +\t\t\t\t\tbreak;\n> +\t\t\t\tdefault:\n> +\t\t\t\t\t/* Something's gone wrong */\n> +\t\t\t\t\tconnectFailureMessage(conn, SOCK_ERRNO);\n> +\t\t\t\t\tgoto connect_errReturn;\n> +\t\t\t\t\tbreak;\n> +\t\t\t}\n> +\t\t} else {\n> +\t\t\t/* We're connected now */\n> +\t\t\tconn->status = CONNECTION_MADE;\n> \t\t}\n> -\t}\n> -\telse\n> -\t{\n> -\t\t/* We're connected already */\n> -\t\tconn->status = CONNECTION_MADE;\n> -\t}\n> +\t\t\n> +\t\tif(conn->status == CONNECTION_STARTED || conn->status == CONNECTION_MADE)\n> +\t\t\tbreak;\n> \n> +\t} while(1);\n> +\t\n> #ifdef USE_SSL\n> \t/* Attempt to negotiate SSL usage */\n> \tif (conn->allow_ssl_try)\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: src/interfaces/libpq/fe-connect.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/interfaces/libpq/fe-connect.c,v\nretrieving revision 1.182\ndiff -c -r1.182 fe-connect.c\n*** src/interfaces/libpq/fe-connect.c\t2 Mar 2002 00:49:22 -0000\t1.182\n--- src/interfaces/libpq/fe-connect.c\t14 Apr 2002 04:40:24 -0000\n***************\n*** 913,920 ****\n--- 913,925 ----\n \t * Thus, we have to make arrangements for all eventualities.\n \t * ----------\n \t */\n+ retry1:\n \tif (connect(conn->sock, &conn->raddr.sa, conn->raddr_len) < 0)\n \t{\n+ \t\tif (SOCK_ERRNO == EINTR)\n+ \t\t\t/* Interrupted system call - we'll just try again */\n+ \t\t\tgoto retry1;\n+ \n \t\tif (SOCK_ERRNO == EINPROGRESS || SOCK_ERRNO == EWOULDBLOCK || SOCK_ERRNO == 0)\n \t\t{\n \t\t\t/*\n***************\n*** 949,957 ****\n--- 954,967 ----\n \t\t\t\t\t\t\t SOCK_STRERROR(SOCK_ERRNO));\n \t\t\tgoto connect_errReturn;\n \t\t}\n+ retry2:\n \t\t/* Now receive the postmasters response */\n \t\tif (recv(conn->sock, &SSLok, 1, 0) != 1)\n \t\t{\n+ \t\t\tif (SOCK_ERRNO == EINTR)\n+ \t\t\t\t/* Interrupted system call - we'll just try again */\n+ \t\t\t\tgoto retry2;\n+ \n \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t\t libpq_gettext(\"could not receive server response to SSL negotiation packet: %s\\n\"),\n \t\t\t\t\t\t\t SOCK_STRERROR(SOCK_ERRNO));\n***************\n*** 2132,2139 ****\n--- 2142,2153 ----\n \t\t\t \"PQrequestCancel() -- socket() failed: \");\n \t\tgoto cancel_errReturn;\n \t}\n+ retry3:\n \tif (connect(tmpsock, &conn->raddr.sa, conn->raddr_len) < 0)\n \t{\n+ \t\tif (SOCK_ERRNO == EINTR)\n+ \t\t\t/* Interrupted system call - we'll just try again */\n+ \t\t\tgoto retry3;\n \t\tstrcpy(conn->errorMessage.data,\n \t\t\t \"PQrequestCancel() -- connect() failed: \");\n \t\tgoto cancel_errReturn;\n***************\n*** 2150,2157 ****\n--- 2164,2175 ----\n \tcrp.cp.backendPID = 
htonl(conn->be_pid);\n \tcrp.cp.cancelAuthCode = htonl(conn->be_key);\n \n+ retry4:\n \tif (send(tmpsock, (char *) &crp, sizeof(crp), 0) != (int) sizeof(crp))\n \t{\n+ \t\tif (SOCK_ERRNO == EINTR)\n+ \t\t\t/* Interrupted system call - we'll just try again */\n+ \t\t\tgoto retry4;\n \t\tstrcpy(conn->errorMessage.data,\n \t\t\t \"PQrequestCancel() -- send() failed: \");\n \t\tgoto cancel_errReturn;\nIndex: src/interfaces/libpq/fe-misc.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/interfaces/libpq/fe-misc.c,v\nretrieving revision 1.68\ndiff -c -r1.68 fe-misc.c\n*** src/interfaces/libpq/fe-misc.c\t6 Mar 2002 06:10:42 -0000\t1.68\n--- src/interfaces/libpq/fe-misc.c\t14 Apr 2002 04:40:25 -0000\n***************\n*** 361,367 ****\n \tif (!conn || conn->sock < 0)\n \t\treturn -1;\n \n! retry:\n \tFD_ZERO(&input_mask);\n \tFD_SET(conn->sock, &input_mask);\n \ttimeout.tv_sec = 0;\n--- 361,367 ----\n \tif (!conn || conn->sock < 0)\n \t\treturn -1;\n \n! retry1:\n \tFD_ZERO(&input_mask);\n \tFD_SET(conn->sock, &input_mask);\n \ttimeout.tv_sec = 0;\n***************\n*** 371,377 ****\n \t{\n \t\tif (SOCK_ERRNO == EINTR)\n \t\t\t/* Interrupted system call - we'll just try again */\n! \t\t\tgoto retry;\n \n \t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n--- 371,377 ----\n \t{\n \t\tif (SOCK_ERRNO == EINTR)\n \t\t\t/* Interrupted system call - we'll just try again */\n! \t\t\tgoto retry1;\n \n \t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n***************\n*** 395,401 ****\n \tif (!conn || conn->sock < 0)\n \t\treturn -1;\n \n! retry:\n \tFD_ZERO(&input_mask);\n \tFD_SET(conn->sock, &input_mask);\n \ttimeout.tv_sec = 0;\n--- 395,401 ----\n \tif (!conn || conn->sock < 0)\n \t\treturn -1;\n \n! 
retry2:\n \tFD_ZERO(&input_mask);\n \tFD_SET(conn->sock, &input_mask);\n \ttimeout.tv_sec = 0;\n***************\n*** 405,411 ****\n \t{\n \t\tif (SOCK_ERRNO == EINTR)\n \t\t\t/* Interrupted system call - we'll just try again */\n! \t\t\tgoto retry;\n \n \t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n--- 405,411 ----\n \t{\n \t\tif (SOCK_ERRNO == EINTR)\n \t\t\t/* Interrupted system call - we'll just try again */\n! \t\t\tgoto retry2;\n \n \t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n***************\n*** 478,484 ****\n \t}\n \n \t/* OK, try to read some data */\n! tryAgain:\n #ifdef USE_SSL\n \tif (conn->ssl)\n \t\tnread = SSL_read(conn->ssl, conn->inBuffer + conn->inEnd,\n--- 478,484 ----\n \t}\n \n \t/* OK, try to read some data */\n! retry3:\n #ifdef USE_SSL\n \tif (conn->ssl)\n \t\tnread = SSL_read(conn->ssl, conn->inBuffer + conn->inEnd,\n***************\n*** 490,496 ****\n \tif (nread < 0)\n \t{\n \t\tif (SOCK_ERRNO == EINTR)\n! \t\t\tgoto tryAgain;\n \t\t/* Some systems return EAGAIN/EWOULDBLOCK for no data */\n #ifdef EAGAIN\n \t\tif (SOCK_ERRNO == EAGAIN)\n--- 490,496 ----\n \tif (nread < 0)\n \t{\n \t\tif (SOCK_ERRNO == EINTR)\n! \t\t\tgoto retry3;\n \t\t/* Some systems return EAGAIN/EWOULDBLOCK for no data */\n #ifdef EAGAIN\n \t\tif (SOCK_ERRNO == EAGAIN)\n***************\n*** 531,537 ****\n \t\t\t(conn->inBufSize - conn->inEnd) >= 8192)\n \t\t{\n \t\t\tsomeread = 1;\n! \t\t\tgoto tryAgain;\n \t\t}\n \t\treturn 1;\n \t}\n--- 531,537 ----\n \t\t\t(conn->inBufSize - conn->inEnd) >= 8192)\n \t\t{\n \t\t\tsomeread = 1;\n! \t\t\tgoto retry3;\n \t\t}\n \t\treturn 1;\n \t}\n***************\n*** 564,570 ****\n \t * Still not sure that it's EOF, because some data could have just\n \t * arrived.\n \t */\n! 
tryAgain2:\n #ifdef USE_SSL\n \tif (conn->ssl)\n \t\tnread = SSL_read(conn->ssl, conn->inBuffer + conn->inEnd,\n--- 564,570 ----\n \t * Still not sure that it's EOF, because some data could have just\n \t * arrived.\n \t */\n! retry4:\n #ifdef USE_SSL\n \tif (conn->ssl)\n \t\tnread = SSL_read(conn->ssl, conn->inBuffer + conn->inEnd,\n***************\n*** 576,582 ****\n \tif (nread < 0)\n \t{\n \t\tif (SOCK_ERRNO == EINTR)\n! \t\t\tgoto tryAgain2;\n \t\t/* Some systems return EAGAIN/EWOULDBLOCK for no data */\n #ifdef EAGAIN\n \t\tif (SOCK_ERRNO == EAGAIN)\n--- 576,582 ----\n \tif (nread < 0)\n \t{\n \t\tif (SOCK_ERRNO == EINTR)\n! \t\t\tgoto retry4;\n \t\t/* Some systems return EAGAIN/EWOULDBLOCK for no data */\n #ifdef EAGAIN\n \t\tif (SOCK_ERRNO == EAGAIN)\n***************\n*** 804,810 ****\n \n \tif (forRead || forWrite)\n \t{\n! retry:\n \t\tFD_ZERO(&input_mask);\n \t\tFD_ZERO(&output_mask);\n \t\tFD_ZERO(&except_mask);\n--- 804,810 ----\n \n \tif (forRead || forWrite)\n \t{\n! retry5:\n \t\tFD_ZERO(&input_mask);\n \t\tFD_ZERO(&output_mask);\n \t\tFD_ZERO(&except_mask);\n***************\n*** 817,823 ****\n \t\t\t\t (struct timeval *) NULL) < 0)\n \t\t{\n \t\t\tif (SOCK_ERRNO == EINTR)\n! \t\t\t\tgoto retry;\n \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n \t\t\t\t\t\t\t SOCK_STRERROR(SOCK_ERRNO));\n--- 817,823 ----\n \t\t\t\t (struct timeval *) NULL) < 0)\n \t\t{\n \t\t\tif (SOCK_ERRNO == EINTR)\n! \t\t\t\tgoto retry5;\n \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n \t\t\t\t\t\t\t SOCK_STRERROR(SOCK_ERRNO));",
"msg_date": "Sun, 14 Apr 2002 00:54:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] [patch] fe-connect.c doesn't handle EINTR correctly"
},
{
"msg_contents": "\nFix applied. Thanks.\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> \n> David, sorry you patch didn't make it into 7.2.X. That whole EINTR\n> discussion was quite complicated so I am not surprised we missed it.\n> \n> The attached patch implements your ENITR test in cases that seems to\n> need it. I have followed the method we used for ENITRY in fe-misc.c.\n> \n> \n> ---------------------------------------------------------------------------\n> \n> David Ford wrote:\n> > Last year we had a drawn out discussion about this and I created a patch \n> > for it. I never noticed that the patch didn't go in until I installed \n> > 7.2 the other day and realised that fe-connect.c never was fixed.\n> > \n> > Here is the patch again. It is against CVS 3/16/2002. This time I only \n> > rewrote the connect procedure at line 912, I leave it up to the regular \n> > hackers to copy it's functionality to the SSL procedure just below it.\n> > \n> > In summary, if a software writer implements timer events or other events \n> > which generate a signal with a timing fast enough to occur while libpq \n> > is inside connect(), then connect returns -EINTR. The code following \n> > the connect call does not handle this and generates an error message. \n> > The sum result is that the pg_connect() fails. If the timer or other \n> > event is right on the window of the connect() completion time, the \n> > pg_connect() may appear to work sporadically. 
If the event is too slow, \n> > pg_connect() will appear to always work and if the event is too fast, \n> > pg_connect() will always fail.\n> > \n> > David\n> > \n> \n> > Index: src/interfaces/libpq/fe-connect.c\n> > ===================================================================\n> > RCS file: /projects/cvsroot/pgsql/src/interfaces/libpq/fe-connect.c,v\n> > retrieving revision 1.181\n> > diff -u -r1.181 fe-connect.c\n> > --- src/interfaces/libpq/fe-connect.c\t2001/11/11 02:09:05\t1.181\n> > +++ src/interfaces/libpq/fe-connect.c\t2002/03/16 05:17:47\n> > @@ -909,29 +909,48 @@\n> > \t * Thus, we have to make arrangements for all eventualities.\n> > \t * ----------\n> > \t */\n> > -\tif (connect(conn->sock, &conn->raddr.sa, conn->raddr_len) < 0)\n> > -\t{\n> > -\t\tif (SOCK_ERRNO == EINPROGRESS || SOCK_ERRNO == EWOULDBLOCK || SOCK_ERRNO == 0)\n> > -\t\t{\n> > -\t\t\t/*\n> > -\t\t\t * This is fine - we're in non-blocking mode, and the\n> > -\t\t\t * connection is in progress.\n> > -\t\t\t */\n> > -\t\t\tconn->status = CONNECTION_STARTED;\n> > -\t\t}\n> > -\t\telse\n> > -\t\t{\n> > -\t\t\t/* Something's gone wrong */\n> > -\t\t\tconnectFailureMessage(conn, SOCK_ERRNO);\n> > -\t\t\tgoto connect_errReturn;\n> > +\tdo {\n> > +\t\tint e;\n> > +\t\te=connect(conn->sock, &conn->raddr.sa, conn->raddr_len)\n> > +\n> > +\t\tif(e < 0) {\n> > +\t\t\tswitch (e) {\n> > +\t\t\t\tcase EINTR:\n> > +\t\t\t\t\t/*\n> > +\t\t\t\t\t * Interrupted by a signal, keep trying. This handling is\n> > +\t\t\t\t\t * required because the user may have turned on signals in\n> > +\t\t\t\t\t * his program. Previously, libpq would erronously fail to\n> > +\t\t\t\t\t * connect if the user's timer event fired and interrupted\n> > +\t\t\t\t\t * this syscall. 
It is important that we don't try to sleep\n> > +\t\t\t\t\t * here because this may cause havoc with the user program.\n> > +\t\t\t\t\t */\n> > +\t\t\t\t\tcontinue;\n> > +\t\t\t\t\tbreak;\n> > +\t\t\t\tcase 0:\n> > +\t\t\t\tcase EINPROGRESS:\n> > +\t\t\t\tcase EWOULDBLOCK:\n> > +\t\t\t\t\t/*\n> > +\t\t\t\t\t * This is fine - we're in non-blocking mode, and the\n> > +\t\t\t\t\t * connection is in progress.\n> > +\t\t\t\t\t */\n> > +\t\t\t\t\tconn->status = CONNECTION_STARTED;\n> > +\t\t\t\t\tbreak;\n> > +\t\t\t\tdefault:\n> > +\t\t\t\t\t/* Something's gone wrong */\n> > +\t\t\t\t\tconnectFailureMessage(conn, SOCK_ERRNO);\n> > +\t\t\t\t\tgoto connect_errReturn;\n> > +\t\t\t\t\tbreak;\n> > +\t\t\t}\n> > +\t\t} else {\n> > +\t\t\t/* We're connected now */\n> > +\t\t\tconn->status = CONNECTION_MADE;\n> > \t\t}\n> > -\t}\n> > -\telse\n> > -\t{\n> > -\t\t/* We're connected already */\n> > -\t\tconn->status = CONNECTION_MADE;\n> > -\t}\n> > +\t\t\n> > +\t\tif(conn->status == CONNECTION_STARTED || conn->status == CONNECTION_MADE)\n> > +\t\t\tbreak;\n> > \n> > +\t} while(1);\n> > +\t\n> > #ifdef USE_SSL\n> > \t/* Attempt to negotiate SSL usage */\n> > \tif (conn->allow_ssl_try)\n> \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n> Index: src/interfaces/libpq/fe-connect.c\n> ===================================================================\n> RCS file: /cvsroot/pgsql/src/interfaces/libpq/fe-connect.c,v\n> retrieving revision 1.182\n> diff -c -r1.182 fe-connect.c\n> *** src/interfaces/libpq/fe-connect.c\t2 Mar 2002 00:49:22 -0000\t1.182\n> --- src/interfaces/libpq/fe-connect.c\t14 Apr 2002 04:40:24 -0000\n> ***************\n> *** 913,920 ****\n> --- 913,925 ----\n> \t * Thus, we have to make arrangements for all eventualities.\n> \t * ----------\n> \t */\n> + retry1:\n> \tif (connect(conn->sock, &conn->raddr.sa, conn->raddr_len) < 0)\n> \t{\n> + \t\tif (SOCK_ERRNO == EINTR)\n> + \t\t\t/* Interrupted system call - we'll just try again */\n> + \t\t\tgoto retry1;\n> + \n> \t\tif (SOCK_ERRNO == EINPROGRESS || SOCK_ERRNO == EWOULDBLOCK || SOCK_ERRNO == 0)\n> \t\t{\n> \t\t\t/*\n> ***************\n> *** 949,957 ****\n> --- 954,967 ----\n> \t\t\t\t\t\t\t SOCK_STRERROR(SOCK_ERRNO));\n> \t\t\tgoto connect_errReturn;\n> \t\t}\n> + retry2:\n> \t\t/* Now receive the postmasters response */\n> \t\tif (recv(conn->sock, &SSLok, 1, 0) != 1)\n> \t\t{\n> + \t\t\tif (SOCK_ERRNO == EINTR)\n> + \t\t\t\t/* Interrupted system call - we'll just try again */\n> + \t\t\t\tgoto retry2;\n> + \n> \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t\t libpq_gettext(\"could not receive server response to SSL negotiation packet: %s\\n\"),\n> \t\t\t\t\t\t\t SOCK_STRERROR(SOCK_ERRNO));\n> ***************\n> *** 2132,2139 ****\n> --- 2142,2153 ----\n> \t\t\t \"PQrequestCancel() -- socket() failed: \");\n> \t\tgoto cancel_errReturn;\n> \t}\n> + retry3:\n> \tif (connect(tmpsock, &conn->raddr.sa, conn->raddr_len) < 0)\n> \t{\n> + \t\tif (SOCK_ERRNO == EINTR)\n> + \t\t\t/* Interrupted system call - we'll just try again */\n> + \t\t\tgoto retry3;\n> \t\tstrcpy(conn->errorMessage.data,\n> \t\t\t \"PQrequestCancel() -- connect() failed: \");\n> \t\tgoto cancel_errReturn;\n> 
***************\n> *** 2150,2157 ****\n> --- 2164,2175 ----\n> \tcrp.cp.backendPID = htonl(conn->be_pid);\n> \tcrp.cp.cancelAuthCode = htonl(conn->be_key);\n> \n> + retry4:\n> \tif (send(tmpsock, (char *) &crp, sizeof(crp), 0) != (int) sizeof(crp))\n> \t{\n> + \t\tif (SOCK_ERRNO == EINTR)\n> + \t\t\t/* Interrupted system call - we'll just try again */\n> + \t\t\tgoto retry4;\n> \t\tstrcpy(conn->errorMessage.data,\n> \t\t\t \"PQrequestCancel() -- send() failed: \");\n> \t\tgoto cancel_errReturn;\n> Index: src/interfaces/libpq/fe-misc.c\n> ===================================================================\n> RCS file: /cvsroot/pgsql/src/interfaces/libpq/fe-misc.c,v\n> retrieving revision 1.68\n> diff -c -r1.68 fe-misc.c\n> *** src/interfaces/libpq/fe-misc.c\t6 Mar 2002 06:10:42 -0000\t1.68\n> --- src/interfaces/libpq/fe-misc.c\t14 Apr 2002 04:40:25 -0000\n> ***************\n> *** 361,367 ****\n> \tif (!conn || conn->sock < 0)\n> \t\treturn -1;\n> \n> ! retry:\n> \tFD_ZERO(&input_mask);\n> \tFD_SET(conn->sock, &input_mask);\n> \ttimeout.tv_sec = 0;\n> --- 361,367 ----\n> \tif (!conn || conn->sock < 0)\n> \t\treturn -1;\n> \n> ! retry1:\n> \tFD_ZERO(&input_mask);\n> \tFD_SET(conn->sock, &input_mask);\n> \ttimeout.tv_sec = 0;\n> ***************\n> *** 371,377 ****\n> \t{\n> \t\tif (SOCK_ERRNO == EINTR)\n> \t\t\t/* Interrupted system call - we'll just try again */\n> ! \t\t\tgoto retry;\n> \n> \t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n> --- 371,377 ----\n> \t{\n> \t\tif (SOCK_ERRNO == EINTR)\n> \t\t\t/* Interrupted system call - we'll just try again */\n> ! \t\t\tgoto retry1;\n> \n> \t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n> ***************\n> *** 395,401 ****\n> \tif (!conn || conn->sock < 0)\n> \t\treturn -1;\n> \n> ! 
retry:\n> \tFD_ZERO(&input_mask);\n> \tFD_SET(conn->sock, &input_mask);\n> \ttimeout.tv_sec = 0;\n> --- 395,401 ----\n> \tif (!conn || conn->sock < 0)\n> \t\treturn -1;\n> \n> ! retry2:\n> \tFD_ZERO(&input_mask);\n> \tFD_SET(conn->sock, &input_mask);\n> \ttimeout.tv_sec = 0;\n> ***************\n> *** 405,411 ****\n> \t{\n> \t\tif (SOCK_ERRNO == EINTR)\n> \t\t\t/* Interrupted system call - we'll just try again */\n> ! \t\t\tgoto retry;\n> \n> \t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n> --- 405,411 ----\n> \t{\n> \t\tif (SOCK_ERRNO == EINTR)\n> \t\t\t/* Interrupted system call - we'll just try again */\n> ! \t\t\tgoto retry2;\n> \n> \t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n> ***************\n> *** 478,484 ****\n> \t}\n> \n> \t/* OK, try to read some data */\n> ! tryAgain:\n> #ifdef USE_SSL\n> \tif (conn->ssl)\n> \t\tnread = SSL_read(conn->ssl, conn->inBuffer + conn->inEnd,\n> --- 478,484 ----\n> \t}\n> \n> \t/* OK, try to read some data */\n> ! retry3:\n> #ifdef USE_SSL\n> \tif (conn->ssl)\n> \t\tnread = SSL_read(conn->ssl, conn->inBuffer + conn->inEnd,\n> ***************\n> *** 490,496 ****\n> \tif (nread < 0)\n> \t{\n> \t\tif (SOCK_ERRNO == EINTR)\n> ! \t\t\tgoto tryAgain;\n> \t\t/* Some systems return EAGAIN/EWOULDBLOCK for no data */\n> #ifdef EAGAIN\n> \t\tif (SOCK_ERRNO == EAGAIN)\n> --- 490,496 ----\n> \tif (nread < 0)\n> \t{\n> \t\tif (SOCK_ERRNO == EINTR)\n> ! \t\t\tgoto retry3;\n> \t\t/* Some systems return EAGAIN/EWOULDBLOCK for no data */\n> #ifdef EAGAIN\n> \t\tif (SOCK_ERRNO == EAGAIN)\n> ***************\n> *** 531,537 ****\n> \t\t\t(conn->inBufSize - conn->inEnd) >= 8192)\n> \t\t{\n> \t\t\tsomeread = 1;\n> ! \t\t\tgoto tryAgain;\n> \t\t}\n> \t\treturn 1;\n> \t}\n> --- 531,537 ----\n> \t\t\t(conn->inBufSize - conn->inEnd) >= 8192)\n> \t\t{\n> \t\t\tsomeread = 1;\n> ! 
\t\t\tgoto retry3;\n> \t\t}\n> \t\treturn 1;\n> \t}\n> ***************\n> *** 564,570 ****\n> \t * Still not sure that it's EOF, because some data could have just\n> \t * arrived.\n> \t */\n> ! tryAgain2:\n> #ifdef USE_SSL\n> \tif (conn->ssl)\n> \t\tnread = SSL_read(conn->ssl, conn->inBuffer + conn->inEnd,\n> --- 564,570 ----\n> \t * Still not sure that it's EOF, because some data could have just\n> \t * arrived.\n> \t */\n> ! retry4:\n> #ifdef USE_SSL\n> \tif (conn->ssl)\n> \t\tnread = SSL_read(conn->ssl, conn->inBuffer + conn->inEnd,\n> ***************\n> *** 576,582 ****\n> \tif (nread < 0)\n> \t{\n> \t\tif (SOCK_ERRNO == EINTR)\n> ! \t\t\tgoto tryAgain2;\n> \t\t/* Some systems return EAGAIN/EWOULDBLOCK for no data */\n> #ifdef EAGAIN\n> \t\tif (SOCK_ERRNO == EAGAIN)\n> --- 576,582 ----\n> \tif (nread < 0)\n> \t{\n> \t\tif (SOCK_ERRNO == EINTR)\n> ! \t\t\tgoto retry4;\n> \t\t/* Some systems return EAGAIN/EWOULDBLOCK for no data */\n> #ifdef EAGAIN\n> \t\tif (SOCK_ERRNO == EAGAIN)\n> ***************\n> *** 804,810 ****\n> \n> \tif (forRead || forWrite)\n> \t{\n> ! retry:\n> \t\tFD_ZERO(&input_mask);\n> \t\tFD_ZERO(&output_mask);\n> \t\tFD_ZERO(&except_mask);\n> --- 804,810 ----\n> \n> \tif (forRead || forWrite)\n> \t{\n> ! retry5:\n> \t\tFD_ZERO(&input_mask);\n> \t\tFD_ZERO(&output_mask);\n> \t\tFD_ZERO(&except_mask);\n> ***************\n> *** 817,823 ****\n> \t\t\t\t (struct timeval *) NULL) < 0)\n> \t\t{\n> \t\t\tif (SOCK_ERRNO == EINTR)\n> ! \t\t\t\tgoto retry;\n> \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n> \t\t\t\t\t\t\t SOCK_STRERROR(SOCK_ERRNO));\n> --- 817,823 ----\n> \t\t\t\t (struct timeval *) NULL) < 0)\n> \t\t{\n> \t\t\tif (SOCK_ERRNO == EINTR)\n> ! 
\t\t\t\tgoto retry5;\n> \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n> \t\t\t\t\t\t\t SOCK_STRERROR(SOCK_ERRNO));\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 15 Apr 2002 19:34:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] [patch] fe-connect.c doesn't handle EINTR correctly"
}
] |
[
{
"msg_contents": "I traded a couple emails with a guy using one of my open source projects. To\nmake a long story short, he is going to the new version of MySQL for his\nwebsite because of the new caching feature. He is convinced that it will speed\nup his web site, and he is probably right.\n\nOn a web site, a few specific queries get executed, unchanged, repeatedly.\nThink about an ecommerce site, most of the time it is just a handful of basic\nqueries. These basic queries are usually against pretty large product tables. A\ncaching mechanism would make these queries pretty light weight.\n\nThe arguments against caching:\n\n\"It is an application issue\"\nThis is completely wrong. Caching can not be done against a database without\nknowledge of the database, i.e. when the data changes.\n\n\"If it is mostly static data, why not just make it a static page?\"\nBecause a static page is a maintenance nightmare. One uses a database in a web\nsite to allow content to be changed and upgraded dynamically and with a minimum\nof work.\n\n\"It isn't very useful\"\nI disagree completely. A cache of most frequently used queries, or specific\nones, could make for REALLY good performance in some very specific, but very\ncommon, applications. Any system that has a hierarchical \"drill down\" interface\nto a data set, ecommerce, libraries, document management systems, etc. will\ngreatly benefit from a query cache.\n\nI was thinking that it could be implemented as a keyword or comment in a query.\nSuch as:\n\nselect * from table where column = 'foo' cacheable\nor\nselect * from table where column = 'bar' /* cacheable */\n\nEither way, it would speed up a lot of common application types. It would even\nbe very cool if you could just cache the results of sub queries, such as:\n\nselect * from (select * from table where col1 = 'foo' cacheable) as subset\nwhere subset.col2 = 'bar' ;\n\nWhich would mean that the subquery gets cached, but the greater select need not\nbe. 
The cache could be like a global temp table. Perhaps the user could even\nname the cache entry:\n\nselect * from table where column = 'foo' cache on foo\n\nWhere one could also do:\n\nselect * from cache_foo\n\nUsing a keyword is probably a better idea; it can be picked up by the parser\nto instruct PostgreSQL to use the cache, otherwise there will be no additional\noverhead.\n\nHaving caching within PostgreSQL will be good for data integrity. Application\ncaches can't tell when an update/delete/insert happens, so they often have to use\na time-out mechanism.\n\nOK, let me have it, tell me how terrible an idea this is. Tell me how wrong I\nam.\n",
"msg_date": "Sat, 16 Mar 2002 09:01:28 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Again, sorry, caching."
},
{
"msg_contents": "I previously replied to you vaguely describing a way you could implement\nthis by using a combination of client side caching and database tables\nand triggers to allow you to determine if your cache is still valid. \nSomeone came right behind me, Tom maybe??, and indicated that the\nproper/ideal way to do this would be to use postgres' asynchronous\ndatabase notification mechanisms (listen/notify I believe were the\nsemantics) to alert your application that your cache has become\ninvalid. Basically, a couple of triggers and the use of the listen/notify\nmodel, and you should be all set.\n\nDone properly, a client side cache which is asynchronously notified by\nthe database when its contents become invalid should be faster than\nrelying on MySQL's database caching scheme. Basically, a strong client\nside cache is going to prevent your database from even having to return\na cached result set, while a database side cache is going to always\nreturn a result set. Of course, one of the extra cool things you can do\nis to cache a gzip'd copy of the data contents, which would further act\nas an optimization preventing the client or web server (in case they are\ndifferent) from having to recompress every result set.\n\nIn the long run, again, if properly done, you should be able to beat\nMySQL's implementation without too much extra effort. Why? Because a\nclient side cache can be much smarter in the way that it uses its\ncached contents, much in the same way an application is able to better\ncache its data than the file system is able to do. This is why a\nclient side cache should be preferred over a database result set\ncache.\n\nGreg\n\nReferences:\nhttp://www.postgresql.org/idocs/index.php?sql-notify.html\nhttp://www.postgresql.org/idocs/index.php?sql-listen.html\n\n\nOn Sat, 2002-03-16 at 08:01, mlw wrote:\n> I traded a couple emails with a guy using one of my open source projects. 
To\n> make a long story short, he is going to the new version of MySQL for his\n> website because of the new caching feature. He is convinced that it will speed\n> up his web site, and he is probably right.\n> \n> On a web site, a few specific queries get executed, unchanged, repeatedly.\n> Think about an ecommerce site, most of the time it is just a handful of basic\n> queries. These basic queries are usually against pretty large product tables. A\n> caching mechanism would make these queries pretty light weight.\n> \n> The arguments against caching:\n> \n> \"It is an application issue\"\n> This is completely wrong. Caching can not be done against a database without\n> knowledge of the database, i.e. when the data changes.\n> \n> \"If it is mostly static data, why not just make it a static page?\"\n> Because a static page is a maintenance nightmare. One uses a database in a web\n> site to allow content to be changed and upgraded dynamically and with a minimum\n> of work.\n> \n> \"It isn't very useful\"\n> I disagree completely. A cache of most frequently used queries, or specific\n> ones, could make for REALLY good performance in some very specific, but very\n> common, applications. Any system that has a hierarchical \"drill down\" interface\n> to a data set, ecommerce, libraries, document management systems, etc. will\n> greatly benefit from a query cache.\n> \n> I was thinking that it could be implemented as a keyword or comment in a query.\n> Such as:\n> \n> select * from table where column = 'foo' cacheable\n> or\n> select * from table where column = 'bar' /* cacheable */\n> \n> Either way, it would speed up a lot of common application types. It would even\n> be very cool if you could just cache the results of sub queries, such as:\n> \n> select * from (select * from table where col1 = 'foo' cacheable) as subset\n> where subset.col2 = 'bar' ;\n> \n> Which would mean that the subquery gets cached, but the greater select need not\n> be. 
The cache could be like a global temp table. Perhaps the user could even\n> name the cache entry:\n> \n> select * from table where column = 'foo' cache on foo\n> \n> Where one could also do:\n> \n> select * from cache_foo\n> \n> Using a keyword is probably a better idea, it can be picked up by the parser\n> and instruct PostgreSQL to use the cache, otherwise there will be no additional\n> overhead.\n> \n> Having caching within PostgreSQL will be good for data integrity. Application\n> caches can't tell when an update/delete/insert happens, they often have to use\n> a time-out mechanism.\n> \n> OK, let me have it, tell me how terrible an idea this is. tell me how wrong I\n> am.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly",
"msg_date": "16 Mar 2002 08:28:33 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
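The listen/notify pattern Greg describes above can be sketched in miniature. On the server side this would be a trigger whose function issues NOTIFY on a channel; below, a small in-process stand-in for the notification bus shows the client-side control flow: serve from the cache until a notification arrives, then drop the cache. All names here (the channel "products_changed", the fake table, etc.) are hypothetical illustrations, not anything from the thread.

```python
# Sketch of the client-side cache pattern described above: the database
# would NOTIFY a channel from a trigger; this in-process stand-in models
# the same invalidate-on-write control flow without a live server.

class NotifyBus:
    """Stand-in for PostgreSQL's LISTEN/NOTIFY delivery."""
    def __init__(self):
        self.listeners = {}

    def listen(self, channel, callback):
        self.listeners.setdefault(channel, []).append(callback)

    def notify(self, channel):
        for callback in self.listeners.get(channel, []):
            callback()

class QueryCache:
    """Client-side result cache, dropped wholesale when notified."""
    def __init__(self, bus, channel, run_query):
        self.cache = {}
        self.run_query = run_query          # the real database round trip
        bus.listen(channel, self.cache.clear)

    def select(self, sql):
        if sql not in self.cache:
            self.cache[sql] = self.run_query(sql)
        return self.cache[sql]

# Usage with a pretend "table" standing in for the database:
table = [("widget", 10)]
bus = NotifyBus()
cache = QueryCache(bus, "products_changed", lambda sql: list(table))

first = cache.select("select * from products")
table.append(("gadget", 20))                     # a write happens...
stale = cache.select("select * from products")   # ...cache still serves old rows
bus.notify("products_changed")                   # the trigger would fire NOTIFY here
fresh = cache.select("select * from products")
```

The window between the write and the notification is exactly the consistency gap mlw objects to later in the thread: until NOTIFY is delivered, the cache serves stale rows.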
{
"msg_contents": "Triggers and asynchronous notification are not substitutes for real hard ACID\ncompliant caching. The way you suggest implies only one access model. Take the\nnotion of a library: it has both web and application access. These should\nboth be able to use the cache.\n\nAlso, your suggestion does not address the sub-select case, which I think is\nmuch bigger, performance wise, and more efficient than MySQL's cache.\n\nThis whole discussion could be moot, and this could be developed as an\nextension, if there were a function API that could return sets of whole rows.\n\n\n\nGreg Copeland wrote:\n> \n> I previously replied to you vaguely describing a way you could implement\n> this by using a combination of client side caching and database tables\n> and triggers to allow you to determine if your cache is still valid.\n> Someone came right behind me, Tom maybe??, and indicated that the\n> proper/ideal way to do this would be to using postgres' asynchronous\n> database notification mechanisms (listen/notify I believe were the\n> semantics) to alert your application that your cache has become\n> invalid. Basically, a couple of triggers and the use of the list/notify\n> model, and you should be all set.\n> \n> Done properly, a client side cache which is asynchronously notified by\n> the database when it's contents become invalid should be faster than\n> relying on MySQL's database caching scheme. Basically, a strong client\n> side cache is going to prevent your database from even having to return\n> a cached result set while a database side cache is going to always\n> return a result set. 
Of course, one of the extra cool things you can do\n> is to cache a gzip'd copy of the data contents which would further act\n> as an optimization preventing the client or web server (in case they are\n> different) from having to recompress every result set.\n> \n> In the long run, again, if properly done, you should be able to beat\n> MySQL's implementation without too extra much effort. Why? Because a\n> client side cache can be much smarter in the way that it uses it's\n> cached contents much in the same way an application is able to better\n> cache it's data then what the file system is able to do. This is why an\n> client side cache should be preferred over that of a database result set\n> cache.\n> \n> Greg\n> \n> References:\n> http://www.postgresql.org/idocs/index.php?sql-notify.html\n> http://www.postgresql.org/idocs/index.php?sql-listen.html\n> \n> On Sat, 2002-03-16 at 08:01, mlw wrote:\n> > I traded a couple emails with a guy using one of my open source projects. To\n> > make a long story short, he is going to the new version of MySQL for his\n> > website because of the new caching feature. He is convinced that it will speed\n> > up his web site, and he is probably right.\n> >\n> > On a web site, a few specific queries get executed, unchanged, repeatedly.\n> > Think about an ecommerce site, most of the time it is just a handful of basic\n> > queries. These basic queries are usually against pretty large product tables. A\n> > caching mechanism would make these queries pretty light weight.\n> >\n> > The arguments against caching:\n> >\n> > \"It is an application issue\"\n> > This is completely wrong. Caching can not be done against a database without\n> > knowledge of the database, i.e. when the data changes.\n> >\n> > \"If it is mostly static data, why not just make it a static page?\"\n> > Because a static page is a maintenance nightmare. 
One uses a database in a web\n> > site to allow content to be changed and upgraded dynamically and with a minimum\n> > of work.\n> >\n> > \"It isn't very useful\"\n> > I disagree completely. A cache of most frequently used queries, or specific\n> > ones, could make for REALLY good performance in some very specific, but very\n> > common, applications. Any system that has a hierarchical \"drill down\" interface\n> > to a data set, ecommerce, libraries, document management systems, etc. will\n> > greatly benefit from a query cache.\n> >\n> > I was thinking that it could be implemented as a keyword or comment in a query.\n> > Such as:\n> >\n> > select * from table where column = 'foo' cacheable\n> > or\n> > select * from table where column = 'bar' /* cacheable */\n> >\n> > Either way, it would speed up a lot of common application types. It would even\n> > be very cool if you could just cache the results of sub queries, such as:\n> >\n> > select * from (select * from table where col1 = 'foo' cacheable) as subset\n> > where subset.col2 = 'bar' ;\n> >\n> > Which would mean that the subquery gets cached, but the greater select need not\n> > be. The cache could be like a global temp table. Perhaps the user could even\n> > name the cache entry:\n> >\n> > select * from table where column = 'foo' cache on foo\n> >\n> > Where one could also do:\n> >\n> > select * from cache_foo\n> >\n> > Using a keyword is probably a better idea, it can be picked up by the parser\n> > and instruct PostgreSQL to use the cache, otherwise there will be no additional\n> > overhead.\n> >\n> > Having caching within PostgreSQL will be good for data integrity. Application\n> > caches can't tell when an update/delete/insert happens, they often have to use\n> > a time-out mechanism.\n> >\n> > OK, let me have it, tell me how terrible an idea this is. 
tell me how wrong I\n> > am.\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> \n> -------------------------------------------------------------------------------\n> Name: signature.asc\n> signature.asc Type: application/pgp-signature\n> Description: This is a digitally signed message part\n",
"msg_date": "Sat, 16 Mar 2002 09:36:12 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "On Sat, 2002-03-16 at 08:01, mlw wrote:\n[snip]\n\n> \"If it is mostly static data, why not just make it a static page?\"\n> Because a static page is a maintenance nightmare. One uses a database in a web\n> site to allow content to be changed and upgraded dynamically and with a minimum\n> of work.\n> \n\n\nOh yeah, I forgot to reply to that part. I think you are forgetting\nthat you can use a database to generate a static page. That is, only\nregenerate the static page when the data within the database changes. \nAgain, this is another example of efficient application caching. If you\nhave an application which listens for your cache invalidation event, you\ncan then recreate your static page. Again, database result set caching\nis not required. And again, this should be significantly faster than\nMySQL's result set caching. It is also worth noting that you could then gzip\nyour static page (keeping both static pages -- compressed and\nuncompressed), resulting in yet another optimization for most web servers\nand browsers.\n\nGreg",
"msg_date": "16 Mar 2002 08:39:31 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "On Sat, 2002-03-16 at 08:36, mlw wrote:\n> Triggers and asynchronous notification are not substitutes for real hard ACID\n> complient caching. The way you suggest implies only one access model. Take the\n> notion of a library, they have both web and application access. These should\n> both be able to use the cache.\n> \n\nWell, obviously, you'd need to re-implement the client side cache in\neach implementation of the client. That is a downside and I certainly\nwon't argue that. As for the \"no substitute\" comment, I guess I'll\nplead ignorance because I'm not sure what I'm missing here. What am I\nmissing that would not be properly covered by that model?\n\n> Also, your suggestion does not address the sub-select case, which I think is\n> much bigger, performance wise, and more efficient than MySQL's cache.\n\nI'm really not sure what you mean by that. Doesn't address it but is\nmore efficient? Maybe it's because I've not had my morning coffee\nyet... ;)\n\n> \n> This whole discussion could be moot, and this could be developed as an\n> extension, if there were a function API that could return sets of whole rows.\n> \n\nMaybe...but you did ask for feedback. :)\n\nGreg",
"msg_date": "16 Mar 2002 08:48:02 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "> I was thinking that it could be implemented as a keyword or comment in a query.\n> Such as:\n>\n> select * from table where column = 'foo' cacheable\n> or\n> select * from table where column = 'bar' /* cacheable */\n\n\n> Having caching within PostgreSQL will be good for data integrity. Application\n> caches can't tell when an update/delete/insert happens, they often have to use\n> a time-out mechanism.\n>\n> OK, let me have it, tell me how terrible an idea this is. tell me how wrong I\n> am.\n\nI don't think it's a bad idea, but a cache that takes a query string (or\nsubquery string) and looks for a match based on that is flawed without\nspecial consideration of non-cacheable functions and constructs\n(CURRENT_USER, things that depend on timezone, things that depend on\ndatestyle). We'd also need to work out an appropriate mechanism to deal\nwith cache invalidation and adding things to the cache.\n\n",
"msg_date": "Sat, 16 Mar 2002 09:26:43 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
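Stephan's caveat above is easy to demonstrate: a cache keyed only on the raw query string must refuse (or key differently) anything whose result depends on session state or time. A minimal sketch of such a gate follows; the marker list is illustrative and deliberately incomplete, not an inventory of every volatile construct.

```python
# Minimal sketch of a cacheability gate for a query-string-keyed cache.
# The marker list is illustrative only; a real implementation would work
# from the parse tree and the function volatility catalog, not the text.

VOLATILE_MARKERS = (
    "current_user", "session_user", "current_timestamp",
    "current_date", "now()", "random(",
)

def is_cacheable(sql: str) -> bool:
    """Conservatively reject queries whose results may vary per session or run."""
    lowered = sql.lower()
    return not any(marker in lowered for marker in VOLATILE_MARKERS)
```

A production version would also have to fold session settings such as timezone and datestyle into the cache key, since even a textually identical query can render dates differently for two sessions.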
{
"msg_contents": "Andrew Sullivan wrote:\n> \n> On Sat, Mar 16, 2002 at 09:01:28AM -0500, mlw wrote:\n> \n> > \"If it is mostly static data, why not just make it a static page?\"\n> > Because a static page is a maintenance nightmare. One uses a\n> > database in a web site to allow content to be changed and upgraded\n> > dynamically and with a minimum of work.\n> \n> This seems wrong to me. Why not build an extra bit of functionality\n> so that when the admin makes a static-data change, the new static\n> data gets pushed into the static files?\n> \n> I was originally intrigued by the suggestion you made, but the more I\n> thought about it (and read the arguments of others) the more\n> convinced I became that the MySQL approach is a mistake. It's\n> probably worth it for their users, who seem not to care that much\n> about ACID anyway. But I think for a system that really wants to\n> play in the big leagues, the cache is a big feature that requires a\n> lot of development, but which is not adequately useful for most\n> cases. If we had infinite developer resources, it might be worth it.\n> In the actual case, I think it's too low a priority.\n\nAgain, I can't speak to priority, but I can name a few common applications where\ncaching would be a great benefit. The more I think about it, the more I like\nthe idea of a 'cacheable' keyword in the select statement.\n\nMy big problem with putting the cache outside of the database is that it is now\nincumbent on the applications programmer to write a cache. A database should\nmanage the data; the application should handle how the data is presented.\nForcing the application to implement a cache feels wrong.\n",
"msg_date": "Mon, 18 Mar 2002 01:01:09 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "Greg Copeland wrote:\n> \n> On Sat, 2002-03-16 at 08:36, mlw wrote:\n> > Triggers and asynchronous notification are not substitutes for real hard ACID\n> > complient caching. The way you suggest implies only one access model. Take\nthe\n> > notion of a library, they have both web and application access. These should\n> > both be able to use the cache.\n> >\n> \n> Well, obviously, you'd need to re-implement the client side cache in\n> each implementation of the client. That is a down side and I certainly\n> won't argue that. As for the \"no substitute\" comment, I'm guess I'll\n> plead ignorance because I'm not sure what I'm missing here. What am I\n> missing that would not be properly covered by that model?\n\nIt would not be guaranteed to be up to date with the state of the database. By\nimplementing the cache within the database, PostgreSQL could maintain the\nconsistency.\n\n> \n> > Also, your suggestion does not address the sub-select case, which I think is\n> > much bigger, performance wise, and more efficient than MySQL's cache.\n> \n> I'm really not sure what you mean by that. Doesn't address it but is\n> more efficient? Maybe it's because I've not had my morning coffee\n> yet... ;)\n\nIf an internal caching system can be implemented within PostgreSQL (and trust\nme, I understand what a hairball it would be with multiversion concurrency),\nconsider complex queries such as:\n\nselect * from (select * from mytable where foo = 'bar' cacheable) as subset\nwhere subset.col = 'value'\n\nThe 'cacheable' keyword applied to the query would mean that PostgreSQL could\nkeep that result set handy for later use. If that subselect always does a\ntable scan of mytable, no one can deny that this subquery caching could be a huge\nwin.\n\nAs a side note, I REALLY like the idea of a keyword for caching as opposed to\nautomated caching. 
It would allow the DBA or developer more control over\nPostgreSQL's behavior, and potentially make the feature easier to implement.\n\n> \n> >\n> > This whole discussion could be moot, and this could be developed as an\n> > extension, if there were a function API that could return sets of whole rows.\n> >\n> \nCurrently a function can only return one value or a setof a single type,\nimplemented as one function call for each entry in a set. If there were a\nfunction interface which could return a row, and multiple rows similar to the\n'setof' return, that would be very cool. That way caching could be implemented\nas:\n\nselect * from pgcache('select * from mytable where foo=''bar''') as subset where\nsubset.col = 'value';\n",
"msg_date": "Mon, 18 Mar 2002 01:03:01 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Again, sorry, caching."
},
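The pgcache('...') function sketched above can be modeled client-side to show the intended division of labor: only the inner query's result set is memoized, keyed by its text, while the outer predicate still runs on every call. This is a sketch under assumptions; run_query is a hypothetical stand-in for a real database round trip, not an existing API.

```python
# Client-side model of the hypothetical pgcache() set-returning function:
# the inner query executes once and its rows are memoized by query text;
# the outer filter (the "subset.col = 'value'" part) is applied per call.

_result_sets = {}

def pgcache(run_query, sql):
    """Return the memoized result set for sql, executing it only once."""
    if sql not in _result_sets:
        _result_sets[sql] = run_query(sql)
    return _result_sets[sql]

calls = []
def run_query(sql):                    # stand-in for the database round trip
    calls.append(sql)
    return [("bar", "value"), ("bar", "other")]

inner = "select * from mytable where foo='bar'"
rows_a = pgcache(run_query, inner)
rows_b = pgcache(run_query, inner)     # second call served from the cache
subset = [row for row in rows_b if row[1] == "value"]
```

Note what this model leaves out, which is exactly mlw's point in the thread: nothing here invalidates the memoized rows when mytable changes, so a server-side version would need the table-modification bookkeeping discussed later.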
{
"msg_contents": "I think the notion that this data should be managed outside of the database is bogus. Query\ncaching can improve performance in some specific, but popular, scenarios.\nSaying it does not belong within the database and is the job of the\napplication is like saying file caching is not the job of the file system but\nthe job of the application.\n\nThis is functionality many users want, and it can be justified by some very\nspecific, but very common, scenarios. It is not for me to say if it is worth the\nwork, or if it should be done. But from the perspective of the user, having this\ncapability within the database is an important feature, and I want to make that\nargument.\n\nGreg Copeland wrote:\n> \n> I previously replied to you vaguely describing a way you could implement\n> this by using a combination of client side caching and database tables\n> and triggers to allow you to determine if your cache is still valid.\n> Someone came right behind me, Tom maybe??, and indicated that the\n> proper/ideal way to do this would be to using postgres' asynchronous\n> database notification mechanisms (listen/notify I believe were the\n> semantics) to alert your application that your cache has become\n> invalid. Basically, a couple of triggers and the use of the list/notify\n> model, and you should be all set.\n> \n> Done properly, a client side cache which is asynchronously notified by\n> the database when it's contents become invalid should be faster than\n> relying on MySQL's database caching scheme. Basically, a strong client\n> side cache is going to prevent your database from even having to return\n> a cached result set while a database side cache is going to always\n> return a result set. 
Of course, one of the extra cool things you can do\n> is to cache a gzip'd copy of the data contents which would further act\n> as an optimization preventing the client or web server (in case they are\n> different) from having to recompress every result set.\n> \n> In the long run, again, if properly done, you should be able to beat\n> MySQL's implementation without too extra much effort. Why? Because a\n> client side cache can be much smarter in the way that it uses it's\n> cached contents much in the same way an application is able to better\n> cache it's data then what the file system is able to do. This is why an\n> client side cache should be preferred over that of a database result set\n> cache.\n> \n> Greg\n>\n",
"msg_date": "Mon, 18 Mar 2002 01:04:25 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "On Sat, Mar 16, 2002 at 09:01:28AM -0500, mlw wrote:\n\n> \"If it is mostly static data, why not just make it a static page?\"\n> Because a static page is a maintenance nightmare. One uses a database in a web\n> site to allow content to be changed and upgraded dynamically and with a minimum\n> of work.\n\n That's a weak argument for a DB cache. Why not generate the web page after a\n data change and serve it as static from then on?\n\n> I was thinking that it could be implemented as a keyword or comment in a query.\n> Such as:\n> \n> select * from table where column = 'foo' cacheable\n\n You can insert the \"mostly static data\" into a temp table and use that temp\n table in subsequent queries. After an update/delete/insert your application\n can rebuild the temp table (or a trigger can).\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Mon, 18 Mar 2002 11:01:37 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "Karel Zak wrote:\n> \n> On Sat, Mar 16, 2002 at 09:01:28AM -0500, mlw wrote:\n> \n> > \"If it is mostly static data, why not just make it a static page?\"\n> > Because a static page is a maintenance nightmare. One uses a database in a web\n> > site to allow content to be changed and upgraded dynamically and with a minimum\n> > of work.\n> \n> It's ugly argumentation for DB cache. What generate web page after data\n> change and next time use it as static?\n> \n> > I was thinking that it could be implemented as a keyword or comment in a query.\n> >\n> > select * from table where column = 'foo' cacheable\n> \n> You can insert \"mostly static data\" into temp table and in next queries\n> use this temp table. After update/delete/insert can your application\n> rebuild temp table (or by trigger?).\n\nYes, I could, as could most of the guys reading these messages. I am thinking\nabout a feature in PostgreSQL that would make that easier for the average DBA\nor web producer.\n\nLet's face it, MySQL wins a lot of people because they put in features that\npeople want. All the ways people have suggested to \"compete\" with MySQL's\ncaching have been ugly kludges. \n\nI understand that there is an amount of work involved in doing caching, and\nthat the value of caching is debated by some; however, it is demonstrable that\ncaching can improve a very common, albeit specific, set of deployments. Also,\nmanaging data is the job of the database, not the application. It does belong\nin PostgreSQL; if someone is forced to write a caching scheme around\nPostgreSQL, it is because PostgreSQL lacks that feature.\n",
"msg_date": "Mon, 18 Mar 2002 07:23:30 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "Le Lundi 18 Mars 2002 13:23, mlw a écrit :\n> Lets face it, MySQL wins a lot of people because they put in features that\n> people want.\n\nMySQL is very interested in benchmarks.\nIt does not really care for data consistency.\n\nCheers, Jean-Michel POURE\n",
"msg_date": "Mon, 18 Mar 2002 14:32:40 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "Yes. EVERY person that I've ever known who runs MySQL runs it for two\nvery simple reasons. First, they believe it to be wicked fast. Second,\nthey don't understand what ACID is, what a transaction is, or why\nrunning a single session against a database to perform a benchmark is a\ncompletely bogus concept. In case it's not obvious, these are usually\npeople that are trying to take a step up from Access. While I do\nbelieve MySQL, from a performance perspective, is a step up from Access,\nI always tell my clients: if you wouldn't use an Access database for\nthis project, you shouldn't use MySQL either.\n\nTo me, this means we need better advertising, PR, and education rather\nthan a result set cache. :P\n\nSpeaking of which, I'm wondering if there are any design patterns we can\nlook at which would address client side caching...well, at least make it\neasier to implement as well as implement it in a consistent manner.\n\nGreg\n\n\nOn Mon, 2002-03-18 at 07:32, Jean-Michel POURE wrote:\n> Le Lundi 18 Mars 2002 13:23, mlw a écrit :\n> > Lets face it, MySQL wins a lot of people because they put in features that\n> > people want.\n> \n> MySQL is very interested in benchmarks.\n> It does not really care for data consistency.\n> \n> Cheers, Jean-Michel POURE\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org",
"msg_date": "18 Mar 2002 07:58:29 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "Jean-Michel POURE wrote:\n> \n> Le Lundi 18 Mars 2002 13:23, mlw a écrit :\n> > Lets face it, MySQL wins a lot of people because they put in features that\n> > people want.\n> \n> MySQL is very interested in benchmarks.\n> It does not really care for data consistency.\n\nIn no way am I suggesting we avoid ACID compliance. In no way am I suggesting\nthat PostgreSQL change. All I am suggesting is that tables which change\ninfrequently can and should be cached.\n\nselect * from table where foo = 'bar'\n\nneed not be executed twice if the table has not changed. \n\nselect * from table1, (select * from table2 where foo='bar' cacheable) as\nsubset where subset.col1 = table1.col1;\n\nIn the above query, if table2 changes 4 times a day, and it is queried a couple\nof times a minute or second, the caching of the subset could save a huge amount of\ndisk I/O.\n\nThis sort of query could improve many catalog based implementations, from\nmusic, to movies, to books. A library could implement a SQL query for book\nlookups like this:\n\nselect * from authors, (select * from books where genre = 'scifi' cacheable) as\nsubset where authors.id = subset.authorid and authors.id in (....)\n\nYes, it is arguable that index scans may work better and, obviously, summary\ntables may help, etc., but imagine a more complex join which produces fewer\nrecords but is executed frequently. Caching could help the performance of\nPostgreSQL in some very real applications.\n\nMySQL's quest for benchmarking numbers, I agree, is shameful because they\ncreate numbers which are not really applicable in the real world. 
This time,\nhowever, I think they may be on to something.\n\n(1) PostgreSQL uses a \"cacheable\" or \"iscacheable\" keyword.\n(2) If the query uses functions which are not marked as \"iscacheable,\" then it\nis not cached.\n(3) If any table contained within the cacheable portion of the query is\nmodified, the cache is marked as dirty.\n(4) No provisions are made to recreate the cache after an insert/update/delete.\n(5) The first query marked as \"iscacheable\" that encounters a \"dirty\" flag in a\ntable does an exhaustive search on the cache and removes all entries that are\naffected.\n\n\nAs far as I can see, if the above parameters are used to define caching, it\ncould improve performance on sites where a high number of transactions are\nmade but where there is also a large amount of static data, i.e. an ecommerce site,\nlibrary, etc. If the \"iscacheable\" keyword is not used, PostgreSQL will not\nincur any performance degradation. However, if the \"iscacheable\" keyword is\nused, the performance loss could very well be made up by the benefits of\ncaching.\n",
"msg_date": "Mon, 18 Mar 2002 09:10:04 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Again, sorry, caching."
},
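The five rules listed above can be sketched as a table-level dirty-flag cache. This is a toy model of the proposal, not PostgreSQL code: writes only flag tables (rules 3 and 4), the next cacheable query performs the sweep (rule 5), and rule 2's check for non-cacheable functions is omitted. All names are illustrative.

```python
# Toy model of the five-rule design: cache entries remember which tables
# they read; insert/update/delete merely marks tables dirty; the first
# cacheable query after a write evicts every entry touching a dirty table.

class DirtyFlagCache:
    def __init__(self):
        self.entries = {}        # sql -> (frozenset of tables read, rows)
        self.dirty = set()       # tables modified since the last sweep

    def mark_write(self, table):
        # Rules 3/4: nothing is recomputed at write time.
        self.dirty.add(table)

    def select_cacheable(self, sql, tables, run_query):
        # Rule 5: sweep out entries that read any dirty table.
        if self.dirty:
            self.entries = {q: (t, r) for q, (t, r) in self.entries.items()
                            if not (t & self.dirty)}
            self.dirty.clear()
        if sql not in self.entries:
            self.entries[sql] = (frozenset(tables), run_query(sql))
        return self.entries[sql][1]

cache = DirtyFlagCache()
executed = []
def run_query(sql):                    # stand-in for a real table scan
    executed.append(sql)
    return ["scifi rows"]

q = "select * from books where genre = 'scifi' cacheable"
cache.select_cacheable(q, {"books"}, run_query)
cache.select_cacheable(q, {"books"}, run_query)   # served from the cache
cache.mark_write("authors")                       # write to an unrelated table
cache.select_cacheable(q, {"books"}, run_query)   # entry survives the sweep
cache.mark_write("books")
cache.select_cacheable(q, {"books"}, run_query)   # evicted, re-executed
```

Deferring eviction to read time is what keeps writes cheap in this design: an update touches one set, and the full cache scan is paid only by the next cacheable reader.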
{
"msg_contents": "\"Mattew T. O'Connor\" wrote:\n> \n> > My big problem with putting the cache outside of the database is that it is\n> > now incumbent on the applications programmer to write a cache. A database\n> > should manage the data, the application should handle how the data is\n> > presented. Forcing the application to implement a cache feels wrong.\n> \n> I believe someone suggested a possible solution that was in the pg client\n> using NOTICE and triggers. The argument given against it, was that\n> it would not be ACID compliant. I say, who cares. I would think that the\n> \"select cachable\" would only be allowed for simple selects, it would not be\n> used for select for update or anything else. Anytime you are given the\n> result of a simple select, you are not guaranteed that the data won't change\n> underneath you. \n\nNot true: if you begin a transaction, you can be isolated from changes made to\nthe database.\n\n>The primary use that you have suggested is for web sites,\n> and they certainly won't mind of the cache is 0.3seconds out of date.\n\nAgain, if they don't care about accuracy, then they will use MySQL. PostgreSQL\nis a far better system. Making PostgreSQL less accurate, less \"correct\", takes\naway, IMHO, the very reasons to use it.\n",
"msg_date": "Mon, 18 Mar 2002 09:15:24 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "On Mon, 2002-03-18 at 08:15, mlw wrote:\n> \"Mattew T. O'Connor\" wrote:\n> > \n[snip]\n\n> \n> >The primary use that you have suggested is for web sites,\n> > and they certainly won't mind of the cache is 0.3seconds out of date.\n> \n> Again, if they don't care about accuracy, then they will use MySQL. PostgreSQL\n> is a far better system. Making PostgreSQL less accurate, less \"correct\" takes\n> away, IMHO, the very reasons to use it.\n> \n\nIf you are using a web site and you need real time data within 0.3s,\nyou've implemented on the wrong platform. It's as simple as that. In\nthe web world, there are few applications where a \"0.3s\" window is\nnotable. After all, that \"0.3s\" window can be anywhere within the\nsystem, including the web server, network, any front end caches, dns\nresolutions, etc.\n\nI tend to agree with Matthew. Granted, there are some application\ndomains where this can be critical...generally speaking, web serving\nisn't one of them.\n\nThat's why all of the solutions I offered were pointedly addressing a\nweb server scenario and not a generalized database cache. I completely\nagree with you on that. In a generalized situation, the database should\nbe managing and caching the data (which it already does).\n\nGreg",
"msg_date": "18 Mar 2002 09:40:05 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "Greg Copeland wrote:\n> \n> On Mon, 2002-03-18 at 08:15, mlw wrote:\n> > \"Mattew T. O'Connor\" wrote:\n> > >\n> [snip]\n> \n> >\n> > >The primary use that you have suggested is for web sites,\n> > > and they certainly won't mind of the cache is 0.3seconds out of date.\n> >\n> > Again, if they don't care about accuracy, then they will use MySQL. PostgreSQL\n> > is a far better system. Making PostgreSQL less accurate, less \"correct\" takes\n> > away, IMHO, the very reasons to use it.\n> >\n> \n> If you are using a web site and you need real time data within 0.3s,\n> you've implemented on the wrong platform. It's as simple as that. In\n> the web world, there are few applications where a \"0.3s\" of a window is\n> notable. After all, that \"0.3s\" of a window can be anywhere within the\n> system, including the web server, network, any front end caches, dns\n> resolutions, etc.\n\nThis is totally wrong! An out of date cache can cause errors by returning\nresults that are no longer valid, thus causing lookup issues. That is what ACID\ncompliance is all about.\n\n> \n> I tend to agree with Mettew. Granted, there are some application\n> domains where this can be critical...generally speaking, web serving\n> isn't one of them.\n> \n> That's why all of the solutions I offered were pointedly addressing a\n> web server scenario and not a generalized database cache. I completely\n> agree with you on that. In a generalized situation, the database should\n> be managing and caching the data (which it already does).\n\nBut it does not cache a query. An expensive query which does an index range\nscan and filters by a where clause could invalidate a good number of buffers in\nthe buffer cache. If this or a number of queries like it are frequently\nrepeated, verbatim, in a seldom changed table, why not cache them within\nPostgreSQL? 
It would improve overall performance by preserving more blocks in\nthe buffer cache and eliminate a number of queries being executed.\n\nI don't see how caching can be an argument of applicability. I can understand\nit from a time/work point of view, but to debate that it is a useful feature\nseems ludicrous.\n",
"msg_date": "Mon, 18 Mar 2002 11:08:11 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "On Mon, 2002-03-18 at 10:08, mlw wrote:\n> Greg Copeland wrote:\n> > \n> > On Mon, 2002-03-18 at 08:15, mlw wrote:\n> > > \"Mattew T. O'Connor\" wrote:\n> > > >\n[snip]\n\n> > \n> > If you are using a web site and you need real time data within 0.3s,\n> > you've implemented on the wrong platform. It's as simple as that. In\n> > the web world, there are few applications where a \"0.3s\" of a window is\n> > notable. After all, that \"0.3s\" of a window can be anywhere within the\n> > system, including the web server, network, any front end caches, dns\n> > resolutions, etc.\n> \n> This is totally wrong! An out of date cache can cause errors by returning\n> results that are no longer valid, thus causing lookup issues. That is what ACID\n> compliance is all about.\n\nI understand what ACID is about. Question. Was the result set valid\nwhen it was cached? Yes. So will it be valid when it's returned as a\ncached result set? Yes. Might it be an out of date view. Sure...with\na horribly small window for becoming \"out of date\". Will it cause look\nup problems? Might. No more than what you are proposing. In the mean\ntime, the FE cached result set, performance wise, is beating the pants\noff of the database cached solution on both a specific work load and\nover all system performance.\n\nI should point out that once the FE cache has been notified that it's\ncache is invalid, the FE would no longer return the invalidated result\nset. I consider that to be a given, however, from some of your comments\nI get the impression that you think the invalid result set would\ncontinue to be served. Another way of thinking about that is...it's\nreally not any different from the notification acting as the result\nreturned result set...from a validity perspective. 
That is...if that\nhad been the returned result set (the notification) from the\ndatabase...it would be accurate (which in the case means the FE cache is\nnow dirty and treated as such)...if the query is refreshed because it is\nnow invalid..the result set is once again accurate and reflective of the\ndatabase.\n\nExample...\n\n\nDatabase cache\nQuery result set\n\t\tResult set returned (cached on database)\n\t\tlocal change to database (result set cache invalid)\nnew query based on out of date queried result set\n\n\nApplication cache\nQuery result set (cached)\n\t\tResult set returned\n\t\tlocal change to database (app cache invalid and signaled)\nnew query based on out of date queried result set\n\nBoth have that problem since transactional boundaries are hard to keep\nacross HTTP requests. This again, is why for web applications, a FE\ncache is perfectly acceptable for *most* needs. Also notice that your\nmargin for error is more or less the same.\n\n[snip]\n\n> I don't see how caching can be an argument of applicability. I can understand\n> it from a time/work point of view, but to debate that it is a useful feature\n> seems ludicrous.\n\nI don't think I'm arguing if it's applicable or useful. Rather, I'm\nsaying that faster results can be yielded by implementing it in the\nclient with far less effort than it would take to implement in the BE. \nI am arguing that it's impact on overall system performance (though I\nreally didn't do more than just touch on this topic) is\nquestionable...granted, it may greatly enhance specific work loads...at\nthe expense of others. Which shouldn't be too surprising as trade offs\nof some type are pretty common.\n\nAt this point in time, I think we've both pretty well beat this topic\nup. Obviously there are two primary ways of viewing the situation. 
I\ndon't think anyone is saying it's a bad idea...I think everyone is\nsaying that it's easier to address elsewhere and that overall, the net\nreturns may be at the expense of some other work loads. So, unless\nthere are new pearls to be shared and gleaned, I think the topics been\nfairly well addressed. Does more need to said?\n\nGreg",
"msg_date": "18 Mar 2002 10:43:18 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "On Sat, 2002-03-16 at 09:01, mlw wrote:\n> On a web site, a few specific queries get executed, unchanged, repeatedly.\n> Think about an ecommerce site, most of the time it is just a handful of basic\n> queries. These basic queries are usually against pretty large product tables. A\n> caching mechanism would make these queries pretty light weight.\n> \n> The arguments against caching:\n> \n> \"It is an application issue\"\n> This is completely wrong. Caching can not be done against a database without\n> knowledge of the database, i.e. when the data changes.\n\nBut can't this be achieved by using a LISTEN/NOTIFY model, with\nuser-created rules to NOTIFY the appropriate listener when a table\nchanges? With a good notification scheme like this, you don't need to\ncontinually poll the DB for changes. You don't need to teach your cache\na lot of things about the database, since most of that knowledge is\nencapsulated inside the rules, and supporting tables.\n\nMy impression (I could be wrong) is that LISTEN/NOTIFY doesn't get the\npress that it deserves. If this model isn't widely used because of some \ndeficiencies in the LISTEN/NOTIFY implementation, IMHO our time would be\nbetter spent fixing those problems than implementing the proposed\ncaching scheme.\n\nIf we're looking to provide a \"quick and easy\" caching scheme for users\nattracted to MySQL's query cache, why not provide this functionality\nthrough another application? I'm thinking about a generic \"caching\nlayer\" that would sit in between Postgres and the database client. It\ncould speak the FE/BE protocol as necessary; it would use LISTEN/NOTIFY\nto allow it to efficiently be aware of database changes; it would create\nthe necessary rules for the user, providing a simple interface to\nenabling query caching for a table or a set of tables?\n\nWhat does everyone think?\n\n> OK, let me have it, tell me how terrible an idea this is. 
tell me how wrong I\n> am.\n\nI think your goals are laudable (and I also appreciate the effort that\nyou and everyone else puts into Postgres); I just think we could get\nmost of the benefits without needing to implement potentially complex\nchanges to Postgres internals.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "18 Mar 2002 21:35:51 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
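[Editor's note: the rule-plus-LISTEN/NOTIFY scheme Neil sketches above can be made concrete without a live server. The SQL in the comments uses the era's `CREATE RULE ... DO NOTIFY` form; the Python classes below are a hypothetical in-memory stand-in for the backend's notification dispatch, just to show how a caching layer keys invalidation off table-change events. All table, channel, and class names are illustrative, not from the thread.]

```python
# SQL the caching layer would install once per cached table (hypothetical names):
#   CREATE RULE products_ins AS ON INSERT TO products DO NOTIFY products_changed;
#   CREATE RULE products_upd AS ON UPDATE TO products DO NOTIFY products_changed;
#   CREATE RULE products_del AS ON DELETE TO products DO NOTIFY products_changed;

class NotifyBus:
    """In-memory stand-in for the backend's LISTEN/NOTIFY dispatch."""
    def __init__(self):
        self.listeners = {}            # channel -> [callback, ...]

    def listen(self, channel, callback):
        self.listeners.setdefault(channel, []).append(callback)

    def notify(self, channel):
        for cb in self.listeners.get(channel, []):
            cb(channel)

class QueryCache:
    """Client-side cache that drops entries when a table's channel fires."""
    def __init__(self, bus):
        self.bus = bus
        self.by_channel = {}           # channel -> set of cached SQL strings
        self.results = {}              # SQL string -> cached result set

    def put(self, sql, channel, result):
        self.results[sql] = result
        self.by_channel.setdefault(channel, set()).add(sql)
        self.bus.listen(channel, self.invalidate)

    def get(self, sql):
        return self.results.get(sql)   # None means the caller must requery

    def invalidate(self, channel):
        for sql in self.by_channel.pop(channel, set()):
            self.results.pop(sql, None)

bus = NotifyBus()
cache = QueryCache(bus)
cache.put("SELECT * FROM products", "products_changed", [("widget", 10)])
assert cache.get("SELECT * FROM products") == [("widget", 10)]
bus.notify("products_changed")         # a write to products fired the rule
assert cache.get("SELECT * FROM products") is None
```

In a real deployment the `NotifyBus` role is played by a long-lived database connection issuing `LISTEN products_changed`, which is exactly why the later messages in the thread stress connection pooling or a dedicated listener daemon.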
{
"msg_contents": "On Mon, 2002-03-18 at 20:35, Neil Conway wrote:\n[snip]\n\n> My impression (I could be wrong) is that LISTEN/NOTIFY doesn't get the\n> press that it deserves. If this model isn't widely used because of some \n> deficiencies in the LISTEN/NOTIFY implementation, IMHO our time would be\n> better spent fixing those problems than implementing the proposed\n> caching scheme.\n> \n> If we're looking to provide a \"quick and easy\" caching scheme for users\n> attracted to MySQL's query cache, why not provide this functionality\n> through another application? I'm thinking about a generic \"caching\n> layer\" that would sit in between Postgres and the database client. It\n> could speak the FE/BE protocol as necessary; it would use LISTEN/NOTIFY\n> to allow it to efficiently be aware of database changes; it would create\n> the necessary rules for the user, providing a simple interface to\n> enabling query caching for a table or a set of tables?\n> \n> What does everyone think?\n> \n\nYes...I was thinking that a generic library interface with a nice design\npattern might meet this need rather well. Done properly, I think we can\nmake it where all that, more or less, would be needed is application\nhooks which accept the result set to be cached and a mechanism to signal\ninvalidation of the current cache....obviously that's not an exhaustive\nlist... :)\n\nI haven't spent much time on this, but I'm fairly sure some library\nroutines can be put together which would greatly reduce the effort of\napplication coders to support fe-data caches and still be portable for\neven the Win32 port.\n\nGreg",
"msg_date": "18 Mar 2002 21:09:14 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "> Yes...I was thinking that a generic library interface with a nice design\n> pattern might meet this need rather well. Done properly, I think we can\n> make it where all that, more or less, would be needed is application\n> hooks which accept the result set to be cached and a mechanism to signal\n> invalidation of the current cache....obviously that's not an exhaustive\n> list... :)\n\nA library implies that the application is running long enough to actually \nhear the notofication. Web apps start up, read from the database, and before \nany cache is needed they're done and the next one starts up, reading again \nfrom the database. Only currently open connections receive the notification.\n\nI think that you do need an entire layer... but that's not a bad thing \nnecessarily. Have a daemon that stays connected for a long time and when a \nnotification arrives, rewrite the cache (or mark it dirty). Other \napplications can read data from static files or shared memory, or even \nanother communication socket with your daemon.\n\nThere may be some way around running a daemon, so if you have a better \nsolution please let me know.\n\nI think I am in favor of client caching in general, but \"mlw\" (sorry, I can't \nfind your real name in the emails at hand) makes some good points. The most \nimportant one is that we don't want to change application architectures on \neveryone. It's easy if you just have to add \"iscachable\" to a query, it's \nhard if you have to start writing against a different set of routines (to \ngrab from your client cache rather than a database). \n\nHowever, I am perfectly happy writing a client-side cache or using temp \ntables to store a result set. I also don't care that much if someone chooses \nPostgreSQL for their website (unless I'm responsible for it's success in some \nway :) ). 
That's me personally, if you want to attract more users from mysql, \n\"iscachable\" is very likely an attractive feature.\n\nRegards,\n\tJeff\n\n",
"msg_date": "Tue, 19 Mar 2002 05:17:20 -0800",
"msg_from": "Jeff Davis <list-pgsql-hackers@dynworks.com>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "Jeff Davis wrote:\n> \n> > Yes...I was thinking that a generic library interface with a nice design\n> > pattern might meet this need rather well. Done properly, I think we can\n> > make it where all that, more or less, would be needed is application\n> > hooks which accept the result set to be cached and a mechanism to signal\n> > invalidation of the current cache....obviously that's not an exhaustive\n> > list... :)\n> \n> A library implies that the application is running long enough to actually\n> hear the notofication. Web apps start up, read from the database, and before\n> any cache is needed they're done and the next one starts up, reading again\n> from the database. Only currently open connections receive the notification.\n> \n> I think that you do need an entire layer... but that's not a bad thing\n> necessarily. Have a daemon that stays connected for a long time and when a\n> notification arrives, rewrite the cache (or mark it dirty). Other\n> applications can read data from static files or shared memory, or even\n> another communication socket with your daemon.\n> \n> There may be some way around running a daemon, so if you have a better\n> solution please let me know.\n> \n> I think I am in favor of client caching in general, but \"mlw\" (sorry, I can't\n> find your real name in the emails at hand) makes some good points. The most\n> important one is that we don't want to change application architectures on\n> everyone. It's easy if you just have to add \"iscachable\" to a query, it's\n> hard if you have to start writing against a different set of routines (to\n> grab from your client cache rather than a database).\n> \n> However, I am perfectly happy writing a client-side cache or using temp\n> tables to store a result set. I also don't care that much if someone chooses\n> PostgreSQL for their website (unless I'm responsible for it's success in some\n> way :) ). 
That's me personally, if you want to attract more users from mysql,\n> \"iscachable\" is very likely an attractive feature.\n\nI was thinking about this. There seems to be a consensus that caching means no\nACID compliance. And everyone seems to think it needs to be limited. We can\nimplement a non-ACID cache as a contrib function with some work to the function\nmanager.\n\nRight now, the function manager can only return one value, or one set of values\nfor a column. It should be possible, but would require a lot of research, to enable\nthe function manager to return a set of rows. If we could get that working, it\ncould be fairly trivial to implement a cache as a contrib project. It would\nwork something like this:\n\nselect querycache(\"select * from mytable where foo='bar'\") ;\n\nThis does two things that I would like to see: The ability to cache subselects\nindependent of the full query. The ability to control which queries get cached.\n\nIf we can get this row functionality in the function manager for 7.3, we could\nthen implement MANY MANY enterprise level functionalities. Remote queries,\nquery caching, external tables, etc. as contrib projects rather than full blown\nmodifications to PostgreSQL.\n",
"msg_date": "Tue, 19 Mar 2002 08:46:01 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Again, sorry, caching, (Tom What do you think: function manager)"
},
{
"msg_contents": "On a side note, is it possible that we could add the \"iscachable\" which\nfor now, would give cache bias? That is, allow for a mechanism to\nindicate that the pages that are required for this query will be\nfrequently needed. I've not looked at the buffer cache implementation. \nIs it possible to somehow weight the corresponding pages in the cache so\nthat it works something like this? For each query that does not use the\nbiased pages ages the biased pages. Once the age threshold has been\nhit, allow the pages to be flushed per normal page replacement\nstrategy. Every time the biased pages get a hit, renew the bias on the\npages.\n\nI'm not sure this holds water but I'm thinking it would at least help\ninsure that the pages in question are quickly available without having\nto constantly re-read them from disk. \n\nWhat ya think? Cache already work like this? Doable?\n\n\nOn Tue, 2002-03-19 at 07:17, Jeff Davis wrote:\n> > Yes...I was thinking that a generic library interface with a nice design\n> > pattern might meet this need rather well. Done properly, I think we can\n> > make it where all that, more or less, would be needed is application\n> > hooks which accept the result set to be cached and a mechanism to signal\n> > invalidation of the current cache....obviously that's not an exhaustive\n> > list... :)\n> \n> A library implies that the application is running long enough to actually \n> hear the notofication. Web apps start up, read from the database, and before \n> any cache is needed they're done and the next one starts up, reading again \n> from the database. Only currently open connections receive the notification.\n\nI think you misunderstood me. My intension was the creation of a\npattern library, whereby, creation of your \"layer\", without regard for\nthe implementation requirements, can more easily be implemented. 
In\nother words, every time you need to implement this \"layer\" for various\napplications which address various problem domains, the library would\nserve as the heart of it reducing the amount of common code that would\notherwise have to be put in place in one form or another. Thus my\nreference to a design pattern. \n\nShould also be noted that some fast and slick web application\narchitectures often support some form of context shared persistence which\nwould allow for caching to be implemented even in web application\nspace.\n\n> \n> I think that you do need an entire layer... but that's not a bad thing \n> necessarily. Have a daemon that stays connected for a long time and when a \n> notification arrives, rewrite the cache (or mark it dirty). Other \n> applications can read data from static files or shared memory, or even \n> another communication socket with your daemon.\n\nExactly...all of which, I'm thinking, can be encompassed within a design\npattern, greatly reducing the effort required for a new \"layer\"\napplication requirement.\n\n> \n> There may be some way around running a daemon, so if you have a better \n> solution please let me know.\n\nI hadn't spent enough time thinking about it. My initial thought was to\nprovide the support functionality and let the coder determine his own\nroute to achieve his goal. Does he need a new daemon or can it be built\ninto his application? This is why a library is appealing. \n\n> \n> I think I am in favor of client caching in general, but \"mlw\" (sorry, I can't \n> find your real name in the emails at hand) makes some good points. The most \n> important one is that we don't want to change application architectures on \n> everyone. It's easy if you just have to add \"iscachable\" to a query, it's \n> hard if you have to start writing against a different set of routines (to \n> grab from your client cache rather than a database). \n\nYes. 
I completely agree with that.\n\n> \n> However, I am perfectly happy writing a client-side cache or using temp \n> tables to store a result set. I also don't care that much if someone chooses \n> PostgreSQL for their website (unless I'm responsible for it's success in some \n> way :) ). That's me personally, if you want to attract more users from mysql, \n> \"iscachable\" is very likely an attractive feature.\n\nAnd/or provide language bindings to a caching library which greatly\nhelps facilitate this. Granted, \"iscachable\" concept is certainly a\npowerful concept.\n\nGreg",
"msg_date": "19 Mar 2002 08:02:26 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
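[Editor's note: Greg's bias-and-age proposal can be simulated in a few lines. This is a hypothetical sketch of the policy as he states it, not how PostgreSQL's buffer manager actually works: biased pages age one step for each query that does not touch them, a hit renews the bias, and only pages whose bias has drained are candidates for normal replacement.]

```python
BIAS = 3  # queries a biased page survives untouched before becoming evictable

class BufferCache:
    def __init__(self):
        self.pages = {}                # page id -> remaining bias (0 = unbiased)

    def load(self, page, biased=False):
        self.pages[page] = BIAS if biased else 0

    def touch_for_query(self, pages_used):
        """Called once per query: renew hit pages, age the untouched ones."""
        for page, bias in list(self.pages.items()):
            if page in pages_used:
                if bias:
                    self.pages[page] = BIAS       # a hit renews the bias
            elif bias:
                self.pages[page] = bias - 1       # unused biased page ages

    def evictable(self):
        """Pages the normal replacement strategy may flush."""
        return {p for p, bias in self.pages.items() if bias == 0}

cache = BufferCache()
cache.load("hot", biased=True)                    # pages behind an "iscachable" query
cache.load("cold")
assert cache.evictable() == {"cold"}              # bias protects the hot page
for _ in range(BIAS):
    cache.touch_for_query({"cold"})               # queries that never hit "hot"
assert cache.evictable() == {"hot", "cold"}       # the bias has aged away
cache.load("hot", biased=True)
cache.touch_for_query({"hot"})                    # a hit renews the bias
assert cache.evictable() == {"cold"}
```

The open question Greg raises remains: whether this beats the unweighted replacement policy already in the buffer manager depends on how often the biased pages are actually reused.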
{
"msg_contents": "On Tue, 2002-03-19 at 07:46, mlw wrote:\n[snip]\n\n> Right now, the function manager can only return one value, or one set of values\n> for a column. It should be possible, but require a lot of research, to enable\n> the function manager to return a set of rows. If we could get that working, it\n> could be fairly trivial to implement a cache as a contrib project. It would\n> work something like this:\n> \n> select querycache(\"select * from mytable where foo='bar') ;\n\nInteresting concept...but how would you know when the cache has become\ndirty? That would give you a set of rows...but I don't understand what\nwould let you know your result set is invalid?\n\nPerhaps: select querycache( foobar_event, \"select * from my table where\nfoo='bar'\" ) ; would automatically create a listen for you??\n\n> \n> This does two things that I would like to see: The ability to cache subselects\n> independent of the full query. The ability to control which queries get cached.\n> \n> If we can get this row functionality in the function manager for 7.3, we could\n> then implement MANY MANY enterprise level functionalities. Remote queries,\n> query caching, external tables, etc. as contrib projects rather than full blown\n> modifications to PostgreSQL.\n\nCorrect me if I'm wrong, but this concept would also be applicable to\nsome clustering/distributed query (that what you meant by remote\nqueries) needs too?\n\nGreg",
"msg_date": "19 Mar 2002 08:16:41 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching, (Tom What do you think: function"
},
{
"msg_contents": "On Tue, 2002-03-19 at 07:46, mlw wrote:\n> I was thinking about this. There seems to be a consensus that caching means no\n> ACID compliance. And everyone seems to think it needs to be limited. We can\n> implement a non-ACID cache as a contrib function with some work to the function\n> manager.\n\nUntil know, I hadn't really thought about it...I just took it for\ngranted since it was asserted...however, what isn't ACID about the\napproach that I offered?\n\nA - Not effected...it's read only and provided directly from the\ndatabase, thus, it's still a function of the database. Any change\nresulting from atomic changes are notified to the cache, whereby it is\nrepopulated.\nC - Not effected...the database is still responsible for keeping\nconsistency. The cache is still read only. State is ensured as\ninvalidation is notified by the database and data set should be returned\nconsistent by the database or the database is broken.\nI - Again, the database is still performing this task and notifies the\ncache when updates need to take place. Again, Isolation isn't an issue\nbecause the cache is still read-only.\nD - Durability isn't a question either as, again, the database is still\ndoing this. In the event of cache failure...it would be repopulated\nfrom the database...so it would be as durable as is the database.\n\nPlease help me understand.\n\nThanks,\n\tGreg",
"msg_date": "19 Mar 2002 08:33:46 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching, (Tom What do you think: function"
},
{
"msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> My impression (I could be wrong) is that LISTEN/NOTIFY doesn't get\n> the press that it deserves. If this model isn't widely used because\n> of some deficiencies in the LISTEN/NOTIFY implementation, IMHO our\n> time would be better spent fixing those problems than implementing\n> the proposed caching scheme.\n\nI would have to say I think a large part of the problem is lack of\npress---I've been hanging around pgsql-hackers for two or three years\nnow, and until all this discussion, had never heard of LISTEN/NOTIFY.\n\nThat doesn't mean I didn't miss prior mentions, but it certainly\ndoesn't seem to come up often or get a lot of discussion when it does.\n\nNow that I know about it, well, it looks like it would be trivial to\nuse it to implement cache invalidation logic in my HTML::Mason-based\napplication---I need only have a long-lived process running on the web\nserver(s) that uses the perl Pg interface, and sits listening, and\nwhen it sees notifies on given conditions, flush the appropriate local\ncaches.\n\nI'd actually been contemplating cramming logic to do this down into\nthe library I use for implementing the system logic, but had resisted\ndoing it because it would make the library too tied to the web---this\nwould be much easier.\n\nI won't even have to re-grab the results from the DB and reformat and\nall that crap, I can just spew the output from the last time the page\nwas assembled---sounds better to me than what MySQL provides. Of\ncourse, I get a lot of this for free as a result of the tools I'm\nusing, but surely this sort of thing shouldn't be all that hard to\nimplement in other systems.\n\nMike.\n",
"msg_date": "19 Mar 2002 09:48:52 -0500",
"msg_from": "Michael Alan Dorman <mdorman@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "Jeff Davis <list-pgsql-hackers@dynworks.com> writes:\n\n> A library implies that the application is running long enough to actually \n> hear the notofication. Web apps start up, read from the database, and before \n> any cache is needed they're done and the next one starts up, reading again \n> from the database. Only currently open connections receive the notification.\n\nIf your web app works this way than you already don't care about\nperformance. People doing scalable web apps these days use connection\npooling and session data kept in memory, so you already have a\npersistent layer running (whether it's your JVM, Apache process for\nmod_perl or PHP, or whatever). Really big apps definitely have a\nlong-running daemon process that handles caching, session management\n(so you can have multiple webservers) etc etc...\n\n-Doug\n-- \nDoug McNaught Wireboard Industries http://www.wireboard.com/\n\n Custom software development, systems and network consulting.\n Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...\n",
"msg_date": "19 Mar 2002 10:00:19 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "On 19 Mar 2002, Greg Copeland wrote:\n\n> On Tue, 2002-03-19 at 07:46, mlw wrote:\n> [snip]\n> \n> > Right now, the function manager can only return one value, or one set of values\n> > for a column. It should be possible, but require a lot of research, to enable\n> > the function manager to return a set of rows. If we could get that working, it\n> > could be fairly trivial to implement a cache as a contrib project. It would\n> > work something like this:\n> > \n> > select querycache(\"select * from mytable where foo='bar') ;\n> \n> Interesting concept...but how would you know when the cache has become\n> dirty? That would give you a set of rows...but I don't understand what\n> would let you know your result set is invalid?\n> \n> Perhaps: select querycache( foobar_event, \"select * from my table where\n> foo='bar'\" ) ; would automatically create a listen for you??\n\n\nPersonally, I think this method of providing query caching is very\nmessy. Why not just implement this along side the system relation\ncache? This maybe be slightly more time consuming but it will perform\nbetter and will be able to take advantage of Postgres's current MVCC.\n\nThere would be three times when the cache would be interacted with\n\n1) add a new result set\n\nExecRetrieve() would need to be modified to handle a\nprepare-for-cache-update kind of feature. This would involve adding the\ntuple table slot data into a linked list.\n\nAt the end of processing/transaction and if the query was successfuly, the\nprepare-for-cache-update list could be processed by AtCommit_Cache() \n(called from CommitTransaction()) and the shared cache updated.\n\n2) attempt to get result set from cache\n\nBefore planning in postgres.c, test if the query will produce an already\ncached result set. 
If so, send the data off from cache.\n\n3) modification of underlying heap\n\nLike (1), produce a list inside the executor (ExecAppend(), ExecDelete(),\nExecReplace() -> RelationInvalidateHeapTuple() ->\nPrepareForTupleInvalidation()) which gets processed by\nAtEOXactInvalidationMessages(). This results in the affected entries being\npurged.\n\n---\n\nI'm not sure that cached results is a direction postgres need move in. But\nif it does, I think this a better way to do it (given that I may have\noverlooked something) than modifying the function manager (argh!).\n\nGavin\n\n",
"msg_date": "Wed, 20 Mar 2002 02:17:09 +1100 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching, (Tom What do you think: function"
},
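[Editor's note: Gavin's three touch points (queue result sets during execution, publish them at commit, purge on heap modification) amount to a transaction-aware cache. The sketch below is a hypothetical Python model of that control flow only; in his design the real work lives in the executor and invalidation routines he names (ExecRetrieve(), AtCommit_Cache(), PrepareForTupleInvalidation()), and the cache itself would sit in shared memory.]

```python
class BackendResultCache:
    """Model of a shared query-result cache updated at transaction commit."""
    def __init__(self):
        self.shared = {}              # query text -> (tables touched, result set)
        self.pending_adds = []        # filled during execution (his point 1)
        self.pending_dirty = set()    # tables written this transaction (point 3)

    # (2) before planning: try to serve the result straight from cache
    def lookup(self, query):
        hit = self.shared.get(query)
        return hit[1] if hit else None

    # (1) during execution: remember result sets for a successful commit
    def queue_result(self, query, tables, rows):
        self.pending_adds.append((query, tables, rows))

    # (3) heap modification: note which tables will invalidate entries
    def queue_invalidation(self, table):
        self.pending_dirty.add(table)

    # modeled on AtCommit_Cache()/AtEOXactInvalidationMessages()
    def at_commit(self):
        for query, hit in list(self.shared.items()):
            if hit[0] & self.pending_dirty:
                del self.shared[query]            # purge affected entries
        for query, tables, rows in self.pending_adds:
            self.shared[query] = (tables, rows)
        self.pending_adds, self.pending_dirty = [], set()

cache = BackendResultCache()
cache.queue_result("SELECT * FROM foo", {"foo"}, [(1,), (2,)])
cache.at_commit()
assert cache.lookup("SELECT * FROM foo") == [(1,), (2,)]
cache.queue_invalidation("foo")       # e.g. an ExecAppend() touched foo
cache.at_commit()
assert cache.lookup("SELECT * FROM foo") is None
```

Deferring both the adds and the purges to commit time is what keeps this consistent with MVCC: an aborted transaction publishes nothing, and readers never see a result set from an uncommitted write.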
{
"msg_contents": "Gavin Sherry wrote:\n> I'm not sure that cached results is a direction postgres need move in. But\n> if it does, I think this a better way to do it (given that I may have\n> overlooked something) than modifying the function manager (argh!).\n\nI actually had an anterior motive.\n\nYour comment about caching not being a direction in which PostgreSQL needs to\nmove, says it all. The general rank and file seems to agree. I think caching\ncould speed up a number of things, certainly some of the stuff I have been\nworking on. I think it would be more likely to get some sort of caching from a\ncontrib project rather than to sway the core team.\n\nIMHO modifying the function manager to allow the return of a full row, and a\n\"set of\" full rows, answers a LOT of issues I have seen over the years with\nPostgreSQL extensibility.\n\nWith a full row function API we can implement:\n\n(1) Remote Queries\nselect remotequery(hostname, port, 'select * from foo');\n\n(2) External queries\nselect mysqlquery(hostname, port, 'select * from foo');\n\n(3) Cached queries\nselect cachedquery('select * from foo');\n\n(4) Full text search\nselect ftssquery(hostname, port, 'word1 and word2 and word3 not word4');\n\nAgain, with full row functions, we could prototype/implement many advanced\nfeatures in PostgreSQL as contrib projects.\n",
"msg_date": "Tue, 19 Mar 2002 10:34:20 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Again, sorry, caching, (Tom What do you think: function"
},
{
"msg_contents": "On Mon, Mar 18, 2002 at 09:35:51PM -0500, Neil Conway wrote:\n> > \n> > \"It is an application issue\"\n> > This is completely wrong. Caching can not be done against a database without\n> > knowledge of the database, i.e. when the data changes.\n> \n> But can't this be achieved by using a LISTEN/NOTIFY model, with\n> user-created rules to NOTIFY the appropriate listener when a table\n> changes? With a good notification scheme like this, you don't need to\n> continually poll the DB for changes. You don't need to teach your cache\n> a lot of things about the database, since most of that knowledge is\n> encapsulated inside the rules, and supporting tables.\n> \n> My impression (I could be wrong) is that LISTEN/NOTIFY doesn't get the\n> press that it deserves. If this model isn't widely used because of some \n> deficiencies in the LISTEN/NOTIFY implementation, IMHO our time would be\n> better spent fixing those problems than implementing the proposed\n> caching scheme.\n> \n> If we're looking to provide a \"quick and easy\" caching scheme for users\n> attracted to MySQL's query cache, why not provide this functionality\n> through another application? I'm thinking about a generic \"caching\n> layer\" that would sit in between Postgres and the database client. It\n> could speak the FE/BE protocol as necessary; it would use LISTEN/NOTIFY\n> to allow it to efficiently be aware of database changes; it would create\n> the necessary rules for the user, providing a simple interface to\n> enabling query caching for a table or a set of tables?\n> \n> What does everyone think?\n\nNeil, this sounds like exactly the approach to follow up on: the one part\nof caching that _is_ the backends domain is knowing about invalidation\nevents. And LISTEN/NOTIFY has _exactly_ the right behavior for that -\nyou don't get out of transaction NOTIFYs, for example. 
As it stands,\nthe application developer has to have intimate knowledge of the schema\nto set up the correct NOTIFY triggers for any given query. This works\nagainst developing a generic middleware solution, since one would have\nto parse the SQL to guess at the affected tables.\n\nHow about an extension that autocreates INSERT/UPDATE/DELETE triggers\nthat send NOTIFYs, based on all tables accessed by a given SELECT? As\nan example, I could see extending the Zope PsycoPG database adaptor,\n(which already tries some simple timeout based caching) to tack on\nsomething like:\n\nSELECT foo,bar FROM baz CACHENOTIFY <notifyname>\n\nwhenever it creates a cache for a given query, then setting up the correct\nLISTEN to invalidate that cache. Alternatively, the LISTEN could be\nautomatic. The name might be autogenerated, as well, to avoid collision\nproblems. Or perhaps _allow_ collisions to extend the notification\nset? (I could see setting _all_ the queries that generate one web page\nto NOTIFY together, since the entire page needs to be regenerated on cache\ninvalidation)\n\nThen, the existing interface to SQL queries would allow the app developer\nto set logical caching policies for each query, independently. The backend\ndoes only the part that it alone can do: determine all the tables touched\nby a given query. The middleware and app developer are then free to cache\nat the appropriate level (SQL result set, fully formatted web page, etc.)\nThis clearly is only useful in a connection pooling environment,\nso the long lived backends are around to receive the NOTIFYs. Hmm, no,\nI think it would be possible with this to have a separate process do\nthe LISTEN and cache invalidation, while a pool of other backends are\nused for general access, no?\n\nSeems like a win all around. Anyone else have comments? How insane\nwould the auto trigger creation get? 
It seems to me that this would be\nsimilar in spirit to the referential integrity work, but more dynamic,\nsince simple SELECTs would be creating backend triggers. Potential for\nDOS attacks, for ex. but not much worse I suppose than firing off big\nnasty cartesian cross product queries.\n\nRoss\n",
"msg_date": "Tue, 19 Mar 2002 12:12:52 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "mlw wrote:\n> [...]\n>\n> IMHO modifying the function manager to allow the return of a full row, and a\n> \"set of\" full rows, answers a LOT of issues I have seen over the years with\n> PostgreSQL extensibility.\n\n Sure. Actually I think you'll have an easy project with this\n one, because all the work has been done by Tom already.\n\n The function manager isn't the problem any more. It is that\n you cannot have such a \"set of\" function in the rangetable.\n So you have no mechanism to USE the result.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Tue, 19 Mar 2002 13:13:56 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching, (Tom What do you think: function"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> mlw wrote:\n> > [...]\n> >\n> > IMHO modifying the function manager to allow the return of a full row, and a\n> > \"set of\" full rows, answers a LOT of issues I have seen over the years with\n> > PostgreSQL extensibility.\n> \n> Sure. Actually I think you'll have an easy project with this\n> one, because all the work has been done by Tom already.\n> \n> The function manager isn't the problem any more. It is that\n> you cannot have such a \"set of\" function in the rangetable.\n> So you have no mechanism to USE the result.\n\nI'm not sure I follow you. OK, maybe I identified the wrong portion of code. \n\nThe idea is that the first return value could return an array of varlenas, one\nfor each column, then a set of varlenas, one for each column.\n\nIs there a way to return this to PostgreSQL?\n",
"msg_date": "Tue, 19 Mar 2002 13:51:39 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Again, sorry, caching, (Tom What do you think: function"
},
{
"msg_contents": "On Tue, 19 Mar 2002 12:12:52 CST, \"Ross J. Reedstrom\" wrote:\n> On Mon, Mar 18, 2002 at 09:35:51PM -0500, Neil Conway wrote:\n> > > \n> > > \"It is an application issue\"\n> > > This is completely wrong. Caching can not be done against a database without\n> > > knowledge of the database, i.e. when the data changes.\n> > ...\n> > \n> > If we're looking to provide a \"quick and easy\" caching scheme for users\n> > attracted to MySQL's query cache, why not provide this functionality\n> > through another application?\n> > ...\n> > \n> > What does everyone think?\n> \n> Neil, this sounds like exactly the approach to follow up on: \n> ...\n> \n> Seems like a win all around. Anyone else have comments?\n> ...\n\n I'm not certain the full direction of the thinking here, however, it\nseems to me that there are a few considerations that I would like to\nsee/keep in mind:\n\nI feel that the caching should be SQL transparent. If it is\nimplemented reasonably well, the performance gain should be pretty\nmuch universal. Yes, a large number of queries would never be called\nagain, however, the results still need to be fetched into memory and\n\"caching\" them for later reuse should involve little more than a\nskipped free (think filesystem cache). It makes more sense to specify\n\"non-cachable\" in a query for tuning than \"cacheable\". This also\nmeans that just switching databases to PostgreSQL improves my\nperformance.\n\nAlso, it is very important that the caching should be transparent to\nthe application. This means that the application should be able to\nconnect to the database using the \"standard\" application interface\n(i.e., ODBC, PHP, Perl/DBI, etc.) This allows me to port my existing\nOracle/DB2/MySQL/etc. application to pgsql through normal porting. 
If\nI have to implement a non-standard interface, I can likely gain even\nmore performance through custom code and maintain reasonable database\nindependence.\n\nWhile I am a strong believer in PostgreSQL, many of my customers have\nother demands/requirements. I still want to be able to use my\nexisting code and libraries when using their database. Sticking with\nthe \"standards\" allows me to develop best of class applications and\ninterface to best of class databases. It also allows others to easily\nrealize the value of PostgreSQL.\n\nThanks,\nF Harvell\n\n\n\n",
"msg_date": "Tue, 19 Mar 2002 19:20:48 -0500",
"msg_from": "F Harvell <fharvell@fts.net>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching. "
},
{
"msg_contents": "On Tue, 2002-03-19 at 19:20, F Harvell wrote:\n> I feel that the caching should be SQL transparent. If it is\n> implemented reasonably well, the performance gain should be pretty\n> much universal.\n\nWell, the simple query cache scheme that is currently being proposed\nwould use a byte-by-byte comparison of the incoming query. I think the\nconsensus is that for a lot of workloads, this would be a bad idea.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "19 Mar 2002 20:28:19 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "Neil Conway wrote:\n> \n> On Tue, 2002-03-19 at 19:20, F Harvell wrote:\n> > I feel that the caching should be SQL transparent. If it is\n> > implemented reasonably well, the performance gain should be pretty\n> > much universal.\n> \n> Well, the simple query cache scheme that is currently being proposed\n> would use a byte-by-byte comparison of the incoming query. I think the\n> consensus is that for a lot of workloads, this would be a bad idea.\n\nAnd this is what I have been trying to argue. Many SQL deployments execute a\nset of hard coded queries as the majority of the work load. The dynamic\nqueries, obviously, will not be cached, but the vast majority of work will come\nout of the cache.\n",
"msg_date": "Tue, 19 Mar 2002 20:42:59 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "> > Well, the simple query cache scheme that is currently being proposed\n> > would use a byte-by-byte comparison of the incoming query. I think the\n> > consensus is that for a lot of workloads, this would be a bad idea.\n>\n> And this is what I have been trying to argue. Many SQL\n> deployments execute a\n> set of hard coded queries as the majority of the work load. The dynamic\n> queries, obviously, will not be cached, but the vast majority of\n> work will come\n> out of the cache.\n\nI feel like jumping into the fray!\n\nI think first we need to decide on some facts:\n\n1. Implementing a byte-compatible query cache WILL improve the speed of\nrepetitive queries over static data.\n\n2. This can be incredibly useful for some web applications.\n\n3. It is really hard to implement such a cache whilst keeping postgres\nmaintainable and ACID compliant.\n\n4. An application layer cache can be smarter and faster than a database\nlayer cache, and this is currently the standard way of doing things. MySQL\nis bringing db layer caches to the mainstream. In a few years time -\neveryone might be doing it...\n\n5. The main developers, or in fact the people with the ability to implement\nsuch a thing, either won't do it or can't be stuffed doing it...\n\n6. Implementing prepared statements in postgres is a reasonable, valid and\nstandard addition that will improve performance all over the place. This\nmight also lead to \"prepared views\" - another performance improvement.\n\n7. Improving the buffer manager's LRU policy can reduce the problem of seq. scan\nwiping out cache.\n\nSo, given the above it seems to me that:\n\n1. The main developers are more interested in implementing prepared\nstatements - which is cool, as this is a good performance improvement.\n\n2. The main developers can look at replacing LRU to further improve cache\nuse.\n\n3. 
We agree that such a query cache can be useful in some circumstances and\ncould help postgres's performance in certain environments, but the will\ndoesn't exist to implement it at the moment and it would also be difficult\nand messy. Put it on the TODO list maybe.\n\n4. If someone happens to submit a magic patch that implements query caching\nin a perfectly ACID-compliant way, then it should be considered for\ninclusion. Why the heck not?\n\nChris\n\n",
"msg_date": "Wed, 20 Mar 2002 10:40:25 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "On Tue, 2002-03-19 at 21:40, Christopher Kings-Lynne wrote:\n> 1. Implementing a byte-compatible query cache WILL improve the speed of\n> repetitive queries over static data.\n\nFor some specific workloads, I think it has the potential to\nsignificantly improve performance.\n\n> 3. It is really hard to implement such a cache whilst keeping postgres\n> maintainable and ACID compliant.\n\nIf we only consider implementations within Postgres itself, this is\nprobably true. However, I haven't seen anyone comment that there are\nACID-related concerns with the NOTIFY/LISTEN scheme that has been\nsuggested (and therefore, with the middle-tier caching daemon I\nproposed).\n\n> 5. The main developers, or in fact the people with the ability to implement\n> such a thing, either won't do it or can't be stuffed doing it...\n\nI don't think it's a particularly good idea to implement the query cache\nwithin the database itself. As for the middle-tier caching daemon I\nsuggested, I'm working on a design but I haven't begun implementation\nyet.\n\n> 3. We agree that such a query cache can be useful in some circumstances and\n> could help postgres's performance in certain environments, but the will\n> doesn't exist to implement it at the moment and it would also be difficult\n> and messy. Put it on the TODO list maybe.\n\nI agree that a query cache implemented inside Postgres proper would be\nmessy and of dubious value, but I haven't heard of any show-stoppers WRT\nmy proposal (of course, if anyone knows of one, please speak up).\n\nCheers,\n\nNeil\n \n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "19 Mar 2002 23:43:49 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "On Tue, Mar 19, 2002 at 08:28:19PM -0500, Neil Conway wrote:\n> On Tue, 2002-03-19 at 19:20, F Harvell wrote:\n> > I feel that the caching should be SQL transparent. If it is\n> > implemented reasonably well, the performance gain should be pretty\n> > much universal.\n> \n> Well, the simple query cache scheme that is currently being proposed\n> would use a byte-by-byte comparison of the incoming query. I think the\n> consensus is that for a lot of workloads, this would be a bad idea.\n\nApparently, no one actually read _my_ proposal, they just replied to it.\nAll the arguments about if this kind of cache is any good have been\nthrashed out, up-thread. Apparently Mr. Harvell didn't feel the need\nto go back and read them. Going over them again is not productive -\nthe next step is to see if there is anything to actually _code_ here.\n\nCaching is a hard problem, once you start digging into it. Going from\nno cache to some cache is (almost) always a win, but multiple caches in\na datapath can interact in non-intuitive ways. And we _already_ have\nseveral, well tuned, clever caches in place. Anything that messes with\nthem is going to be rejected, for sure. \n\nWhat I proposed is a sort of compromise: it is clear to me that the core\ndevelopers are not very interested in the kind of cache Neil is talking\nabout above, and would rather see query caching done in the app. What I\nproposed is extending the backend's support for client-side caching, to\nmake it easier (or possible) for middleware to automate the task.\n\nThe bare bones are: flag a query in some way so the backend auto generates\nthe appropriate NOTIFY triggers, so the middleware can do proper cache\nmaintenance by LISTENing. \n\nI think I'll go away and write up my compromise proposal a little more\nclearly, and post it under a new subject, later. Perhaps we can then\nhave a productive discussion about _it_, and not rehash old arguments.\n\nRoss\n\nP.S. 
\n\nHACKER sociological opinion below - feel free to skip - \n\nThere are only three reasons to discuss features on HACKERS: to\nsee if a proposed feature would be rejected, so you don't waste time\nimplementing it; to refine a proposed implementation, so it doesn't have\nto be reworked; and to discuss an actual in-hand implementation. Notice\nthat there's no way to skip step one: if the CVS committers don't like\nthe feature, arguing for it on HACKERS won't make it magically better:\nproviding an implementation that doesn't do bad things _might_. And you\ncan always maintain an independent patch, or fork.\n\nSo, we have a number of people who think a query cache would be a\ngood idea. And core developers who are not convinced. I think one\nof the reasons is that, while it might be useful in some situations\n(even fairly common situations) it's neither elegant nor flexible. The\nPostgreSQL project has a long tradition of turning down narrow, 'good\nenough - it works for me' solutions, while looking for a better, more\ninclusive solution. Sometimes this has been a problem with missing\nfeatures, but in the long run, it's been a win.\n",
"msg_date": "Wed, 20 Mar 2002 10:34:00 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "mlw wrote:\n> Jan Wieck wrote:\n> >\n> > mlw wrote:\n> > > [...]\n> > >\n> > > IMHO modifying the function manager to allow the return of a full row, and a\n> > > \"set of\" full rows, answers a LOT of issues I have seen over the years with\n> > > PostgreSQL extensibility.\n> >\n> > Sure. Actually I think you'll have an easy project with this\n> > one, because all the work has been done by Tom already.\n> >\n> > The function manager isn't the problem any more. It is that\n> > you cannot have such a \"set of\" function in the rangetable.\n> > So you have no mechanism to USE the result.\n>\n> I'm not sure I follow you. OK, maybe I identified the wrong portion of code.\n>\n> The idea is that the first return value could return an array of varlenas, one\n> for each column, then a set of varlenas, one for each column.\n>\n> Is there a way to return this to PostgreSQL?\n\n There is a way to return anything. The problem in PostgreSQL\n is to actually USE it.\n\n Our idea originally was to extend the capabilities of a\n rangetable entry. Currently, rangetable entries can only\n hold a relation, which is a table or a view. After rewriting,\n they are down to real tables only.\n\n But basically, a rangetable entry should just be a row-\n source, so that a function returning a row-set could occur\n there too.\n\n In order to avoid multiple calls to the function because of\n nestloops and the like, I think when a set function occurs in\n a RTE, it's result should be dumped into a sort-tape and that\n is used as the row source in the rest of the plan.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Fri, 22 Mar 2002 13:17:52 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching, (Tom What do you think: function"
},
{
"msg_contents": "Jan wrote:\n\n> There is a way to return anything. The problem in PostgreSQL\n> is to actually USE it.\n>\n> Our idea originally was to extend the capabilities of a\n> rangetable entry. Currently, rangetable entries can only\n> hold a relation, which is a table or a view. After rewriting,\n> they are down to real tables only.\n>\n> But basically, a rangetable entry should just be a row-\n> source, so that a function returning a row-set could occur\n> there too.\n>\n> In order to avoid multiple calls to the function because of\n> nestloops and the like, I think when a set function occurs in\n> a RTE, it's result should be dumped into a sort-tape and that\n> is used as the row source in the rest of the plan.\n\nHmmm...now that my SET NOT NULL patch is on the list, I'm thinking about\nwhat to tackle next. This is something that would be incredibly useful to\nme, but sounds pretty difficult (for someone unfamiliar with the code).\n\nSo, some questions:\n\n1. Can someone give me some pointers as to whereabouts I should look in the\nsource code, and what I should be looking for, given that I've never played\nin the rewriter/executor before?\n\n2. Maybe a general plan-of-attack? ie. What things would need to be changed\nand what order should I change them in...\n\n3. Tell me it's worth me spending time on this - that it's not something a\nmain developer could just code up in an evening?\n\n4. What stuff has Tom done that should make it 'easy'?\n\nCheers,\n\nChris\n\n(Sick of returning arrays and comma delimited lists from functions!)\n\n",
"msg_date": "Thu, 28 Mar 2002 14:03:26 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Procedures returning row sources"
},
{
"msg_contents": "Greg Copeland wrote:\n> At this point in time, I think we've both pretty well beat this topic\n> up. Obviously there are two primary ways of viewing the situation. I\n> don't think anyone is saying it's a bad idea...I think everyone is\n> saying that it's easier to address elsewhere and that overall, the net\n> returns may be at the expense of some other work loads. So, unless\n> there are new pearls to be shared and gleaned, I think the topics been\n> fairly well addressed. Does more need to said?\n\nWith a PREPARE/EXECUTE patch now out for approval, can I assume we will\ngo with that first and see how far it gets us, and then revisit the idea\nof cached results. In this case, we are caching the query plan. The\nquery still executes again in the executor, so the data is always fresh.\nIn a sense, the buffer cache and disk are the caches, which don't need\nseparate invalidation if some data changes in the table.\n\nThe plan can get invalid if it calls a non-cachable function or the\nschema changes, or the constants used to generate the plan in the\noptimizer would generate a different plan from the constants used in a\nlater query, or the analyze statistics changed.\n\nThe MVCC ramifications of cached results and invalidation could be quite\ncomplex. The commit of a transaction could change tuple visibility\nrules even if the data modify statement was executed much earlier in the\ntransaction.\n\nAlso, on the NOTIFY/trigger idea, triggers are called on statement end,\nnot transaction end, so if an UPDATE query is in a multi-statement\ntransaction, another backend looking for the NOTIFY will receive it\nbefore the transaction commits, meaning it will not see the update. \nThat seems like a problem. We do have deferrable constraints which will\nonly do checking on transaction end, but I am not sure if that can be\nused for NOTIFY on transaction commit. 
Anyone?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 14 Apr 2002 10:38:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Also, on the NOTIFY/trigger idea, triggers are called on statement end,\n> not transaction end, so if an UPDATE query is in a multi-statement\n> transaction, another backend looking for the NOTIFY will receive it\n> before the transaction commits, meaning it will not see the update. \n\nNo it won't.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 14 Apr 2002 13:08:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching. "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Also, on the NOTIFY/trigger idea, triggers are called on statement end,\n> > not transaction end, so if an UPDATE query is in a multi-statement\n> > transaction, another backend looking for the NOTIFY will receive it\n> > before the transaction commits, meaning it will not see the update. \n> \n> No it won't.\n\nIs this because NOTIFY is held for transaction end or because the\ntriggers are held until transaction commit?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 14 Apr 2002 13:11:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> No it won't.\n\n> Is this because NOTIFY is held for transaction end or because the\n> triggers are held until transaction commit?\n\nThe former.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 14 Apr 2002 13:16:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching. "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> No it won't.\n> \n> > Is this because NOTIFY is held for transaction end or because the\n> > triggers are held until transaction commit?\n> \n> The former.\n\nThanks. I see it in the NOTIFY manual page now. Very nice.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 14 Apr 2002 13:20:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Again, sorry, caching."
}
] |
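The design the thread converges on, a middleware cache keyed on the exact query text with per-table NOTIFY events driving invalidation, can be sketched in a few lines. This is a hedged illustration only: the `QueryCache` class, its table bookkeeping, and the `notify` entry point are hypothetical stand-ins, not any PostgreSQL API; a real middleware layer would feed `notify` from LISTEN events delivered on a pooled connection after commit.

```python
class QueryCache:
    """Byte-exact query cache with table-based invalidation, in the
    spirit of the proposal above: each cached query remembers the
    tables it touched, and an invalidation event for any of those
    tables drops every dependent entry."""

    def __init__(self):
        self._results = {}     # query text -> cached result rows
        self._tables = {}      # query text -> set of table names
        self._dependents = {}  # table name -> set of query texts

    def execute(self, query, tables, run):
        # Byte-by-byte comparison: only an identical query string hits.
        if query in self._results:
            return self._results[query]
        rows = run()           # cache miss: go to the real backend
        self._results[query] = rows
        self._tables[query] = set(tables)
        for t in tables:
            self._dependents.setdefault(t, set()).add(query)
        return rows

    def notify(self, table):
        # One invalidation event for a table, as a LISTEN callback
        # would deliver it on transaction commit.
        for query in self._dependents.pop(table, set()):
            self._results.pop(query, None)
            self._tables.pop(query, None)


cache = QueryCache()
calls = []

def backend():
    calls.append(1)
    return [("widget", 10)]

q = "SELECT name, qty FROM stock WHERE qty < 20"
cache.execute(q, ["stock"], backend)  # miss: runs against the backend
cache.execute(q, ["stock"], backend)  # hit: byte-identical text
assert len(calls) == 1
cache.notify("stock")                 # an UPDATE on stock committed
cache.execute(q, ["stock"], backend)  # miss again after invalidation
assert len(calls) == 2
```

Neil's caveat shows up directly in the lookup: only a byte-identical query string hits, so two semantically equal queries that differ in whitespace or constants are cached as unrelated entries.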
[
{
"msg_contents": "This is probably a language looking for a purpose before adding it to\nthe core. Here's what I use it for; probably abusively too! Could use\nuntrusted perl to spawn system calls, but shell scripts are much nicer\nfor shell work not to mention transactional updates of structure and\ncontrol scripts make for minimal impact upgrade periods.\n\n- On demand PDFs as generated by Docbook for offline reports initiated\nby the database. I.e. Inventory updates to management every N sales\nmade or when stock is running low.\n- Updating static HTML pages with Docbook HTML output when the stored\ndata changes.\n- System provisioning initiation. Rollbacks don't work, but it's not\nreally important that things are undone immediately, just that they're\ninitiated immediately. Using DB for this removes requirement of\nmiddleware.\n\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n\n",
"msg_date": "Sat, 16 Mar 2002 11:56:05 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "plsql as an officially supported language?"
},
{
"msg_contents": "\nWe have this in the TODO:\n\n\to Add plsh server-side shell language (Peter E)\n\nThis is Peter's language that allows shell calls. I think Peter wants\nto add it for 7.3 and I think it is a good idea.\n\n---------------------------------------------------------------------------\n\nRod Taylor wrote:\n> This is probably a language looking for a purpose before adding it to\n> the core. Here's what I use it for; probably abusively too! Could use\n> untrusted perl to spawn system calls, but shell scripts are much nicer\n> for shell work not to mention transactional updates of structure and\n> control scripts make for minimall impact upgrade periods.\n> \n> - On demand PDFs as generated by Docbook for offline reports initiated\n> by the database. Ie. Inventory updates to management every N sales\n> made or when stock is running low.\n> - Updating static HTML pages with Docbook HTML output when the stored\n> data changes.\n> - System provisioning initiation. Rollbacks don't work, but it's not\n> really important that things are undone immediatly, just that they're\n> initiated immediatly. Using DB for this removes requirement of\n> middleware.\n> \n> --\n> Rod Taylor\n> \n> Your eyes are weary from staring at the CRT. You feel sleepy. Notice\n> how restful it is to watch the cursor blink. Close your eyes. The\n> opinions stated above are yours. You cannot imagine why you ever felt\n> otherwise.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 18 Mar 2002 18:19:41 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: plsql as an officially supported language?"
}
] |
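The mechanism under discussion is small: a server-side shell language essentially hands the function body to `sh`, with the function's arguments as positional parameters and stdout as the result. A sketch of that idea outside the server, using plain `subprocess` (this illustrates the mechanism only; it is not plsh's actual handler code or calling convention, and `run_shell_body` is a hypothetical name):

```python
import subprocess

def run_shell_body(body, args):
    """Run a shell-script function body the way a server-side shell
    language conceptually would: arguments become $1, $2, ... and
    stdout becomes the function result."""
    proc = subprocess.run(
        ["sh", "-s", "--", *args],  # -s: read the script from stdin
        input=body,
        capture_output=True,
        text=True,
        check=True,
    )
    return proc.stdout.strip()

# A body in the spirit of Rod's use cases: act on a stock warning.
body = 'echo "stock low: $1"'
print(run_shell_body(body, ["widgets"]))
```

As Rod notes, nothing here is transactional: the script's side effects happen immediately and survive a rollback, so the approach fits fire-and-forget jobs (report generation, provisioning kicks) rather than anything that must commit or abort with the transaction.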
[
{
"msg_contents": "7.2 crashes with the below function:\n\nCREATE OR REPLACE FUNCTION runMaintenance()\nRETURNS BOOL AS '\n VACUUM;\n SELECT TRUE;\n' LANGUAGE sql;\n\nI was going to toss a bunch of system maintenance stuff in a database\nfunction to make administration for those who administer the boxes\n(not me -- I just tell how).\n\nAnyway, any crash is a bad crash I suppose.\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n\n",
"msg_date": "Sat, 16 Mar 2002 19:56:48 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "7.2 crash..."
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> 7.2 crashes with the below function:\n\n> CREATE OR REPLACE FUNCTION runMaintenance()\n> RETURNS BOOL AS '\n> VACUUM;\n> SELECT TRUE;\n> ' LANGUAGE sql;\n\nUgh. The problem is that VACUUM's implicit CommitTransaction calls\nwipe out all the transient memory allocated by the function evaluation.\nI don't see any reasonable way to support VACUUM inside a function\ncall; I think we have to prohibit it.\n\nUnfortunately I don't see any clean way to test for this situation\neither. VACUUM's IsTransactionBlock() test obviously doesn't get the\njob done. Any ideas how to catch this?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Mar 2002 00:25:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 crash... "
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> 7.2 crashes with the below function:\n> CREATE OR REPLACE FUNCTION runMaintenance()\n> RETURNS BOOL AS '\n> VACUUM;\n> SELECT TRUE;\n> ' LANGUAGE sql;\n\nAFAICS there is no way that we can support VACUUM inside a function;\nthe forced transaction commits that VACUUM performs will recycle any\nmemory allocated by the function executor, leading to death and\ndestruction upon return from VACUUM.\n\nAccordingly, what we really need is a way of preventing VACUUM from\nexecuting in the above scenario. The IsTransactionBlock() test it\nalready has isn't sufficient.\n\nI have thought of something that probably would be sufficient:\n\n\tif (!MemoryContextContains(QueryContext, vacstmt))\n\t\telog(ERROR, \"VACUUM cannot be executed from a function\");\n\nThis is truly, horribly ugly ... but it'd get the job done, because only\ninteractive queries will generate parsetrees in QueryContext.\n\nCan someone think of a better way?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Mar 2002 23:22:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 crash... "
},
{
"msg_contents": "Tom Lane wrote:\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > 7.2 crashes with the below function:\n> > CREATE OR REPLACE FUNCTION runMaintenance()\n> > RETURNS BOOL AS '\n> > VACUUM;\n> > SELECT TRUE;\n> > ' LANGUAGE sql;\n> \n> AFAICS there is no way that we can support VACUUM inside a function;\n> the forced transaction commits that VACUUM performs will recycle any\n> memory allocated by the function executor, leading to death and\n> destruction upon return from VACUUM.\n> \n> Accordingly, what we really need is a way of preventing VACUUM from\n> executing in the above scenario. The IsTransactionBlock() test it\n> already has isn't sufficient.\n> \n> I have thought of something that probably would be sufficient:\n> \n> \tif (!MemoryContextContains(QueryContext, vacstmt))\n> \t\telog(ERROR, \"VACUUM cannot be executed from a function\");\n> \n> This is truly, horribly ugly ... but it'd get the job done, because only\n> interactive queries will generate parsetrees in QueryContext.\n> \n> Can someone think of a better way?\n\nWell, this code would be in vacuum.c, right? Seems like a nice\ncentral location for it. I don't see it as terribly ugly only because\nthe issue is that we can't run vacuum inside a memory context that can't\nbe free'ed by vacuum.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Mar 2002 00:04:39 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 crash..."
},
{
"msg_contents": "Here is a patch with a fix outlined by Tom:\n\t\n\ttest=> CREATE OR REPLACE FUNCTION runMaintenance()\n\ttest-> RETURNS BOOL AS '\n\ttest'> VACUUM;\n\ttest'> SELECT TRUE;\n\ttest'> ' LANGUAGE sql;\n\tCREATE\n\ttest=> \n\ttest=> select runMaintenance();\n\tERROR: VACUUM cannot be executed from a function\n\nLooks good. Will commit after typical delay.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > 7.2 crashes with the below function:\n> > CREATE OR REPLACE FUNCTION runMaintenance()\n> > RETURNS BOOL AS '\n> > VACUUM;\n> > SELECT TRUE;\n> > ' LANGUAGE sql;\n> \n> AFAICS there is no way that we can support VACUUM inside a function;\n> the forced transaction commits that VACUUM performs will recycle any\n> memory allocated by the function executor, leading to death and\n> destruction upon return from VACUUM.\n> \n> Accordingly, what we really need is a way of preventing VACUUM from\n> executing in the above scenario. The IsTransactionBlock() test it\n> already has isn't sufficient.\n> \n> I have thought of something that probably would be sufficient:\n> \n> \tif (!MemoryContextContains(QueryContext, vacstmt))\n> \t\telog(ERROR, \"VACUUM cannot be executed from a function\");\n> \n> This is truly, horribly ugly ... but it'd get the job done, because only\n> interactive queries will generate parsetrees in QueryContext.\n> \n> Can someone think of a better way?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/commands/vacuum.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/commands/vacuum.c,v\nretrieving revision 1.223\ndiff -c -r1.223 vacuum.c\n*** src/backend/commands/vacuum.c\t12 Apr 2002 20:38:25 -0000\t1.223\n--- src/backend/commands/vacuum.c\t14 Apr 2002 16:41:37 -0000\n***************\n*** 181,186 ****\n--- 181,189 ----\n \tif (IsTransactionBlock())\n \t\telog(ERROR, \"%s cannot run inside a BEGIN/END block\", stmttype);\n \n+ \tif (!MemoryContextContains(QueryContext, vacstmt))\n+ \t\telog(ERROR, \"VACUUM cannot be executed from a function\");\n+ \n \t/*\n \t * Send info about dead objects to the statistics collector\n \t */",
"msg_date": "Sun, 14 Apr 2002 12:52:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 crash..."
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> *** src/backend/commands/vacuum.c\t12 Apr 2002 20:38:25 -0000\t1.223\n> --- src/backend/commands/vacuum.c\t14 Apr 2002 16:41:37 -0000\n> ***************\n> *** 181,186 ****\n> --- 181,189 ----\n> \tif (IsTransactionBlock())\n> \t\telog(ERROR, \"%s cannot run inside a BEGIN/END block\", stmttype);\n \n> + \tif (!MemoryContextContains(QueryContext, vacstmt))\n> + \t\telog(ERROR, \"VACUUM cannot be executed from a function\");\n> + \n> \t/*\n> \t * Send info about dead objects to the statistics collector\n> \t */\n\n> --ELM1018803173-10746-0_--\n\nCompare to immediately preceding error check. Isn't there something\nmissing here?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 14 Apr 2002 13:15:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 crash... "
},
{
"msg_contents": "Oops, I see now. How is this?\n\nRemember, I am not incredibly capable, just persistent. :-)\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > *** src/backend/commands/vacuum.c\t12 Apr 2002 20:38:25 -0000\t1.223\n> > --- src/backend/commands/vacuum.c\t14 Apr 2002 16:41:37 -0000\n> > ***************\n> > *** 181,186 ****\n> > --- 181,189 ----\n> > \tif (IsTransactionBlock())\n> > \t\telog(ERROR, \"%s cannot run inside a BEGIN/END block\", stmttype);\n> \n> > + \tif (!MemoryContextContains(QueryContext, vacstmt))\n> > + \t\telog(ERROR, \"VACUUM cannot be executed from a function\");\n> > + \n> > \t/*\n> > \t * Send info about dead objects to the statistics collector\n> > \t */\n> \n> > --ELM1018803173-10746-0_--\n> \n> Compare to immediately preceding error check. Isn't there something\n> missing here?\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/commands/vacuum.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/commands/vacuum.c,v\nretrieving revision 1.223\ndiff -c -r1.223 vacuum.c\n*** src/backend/commands/vacuum.c\t12 Apr 2002 20:38:25 -0000\t1.223\n--- src/backend/commands/vacuum.c\t14 Apr 2002 16:41:37 -0000\n***************\n*** 181,186 ****\n--- 181,189 ----\n \tif (IsTransactionBlock())\n \t\telog(ERROR, \"%s cannot run inside a BEGIN/END block\", stmttype);\n \n+ \tif (!MemoryContextContains(QueryContext, vacstmt))\n+ \t\telog(ERROR, \"%s cannot be executed from a function\", stmttype);\n+ \n \t/*\n \t * Send info about dead objects to the statistics collector\n \t */",
"msg_date": "Sun, 14 Apr 2002 13:22:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 crash..."
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Oops, I see now. How is this?\n\nBetter. A comment explaining what the thing is doing would help too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 14 Apr 2002 13:37:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 crash... "
},
{
"msg_contents": "\nI have applied a patch based on Tom's suggestion that will disable\nVACUUM in a function in 7.3.\n\n---------------------------------------------------------------------------\n\nRod Taylor wrote:\n> 7.2 crashes with the below function:\n> \n> CREATE OR REPLACE FUNCTION runMaintenance()\n> RETURNS BOOL AS '\n> VACUUM;\n> SELECT TRUE;\n> ' LANGUAGE sql;\n> \n> I was going to toss a bunch of system maintenance stuff in a database\n> function to make administration for those who administer the boxes\n> (not me -- I just tell how).\n> \n> Anyway, any crash is a bad crash I suppose.\n> --\n> Rod Taylor\n> \n> Your eyes are weary from staring at the CRT. You feel sleepy. Notice\n> how restful it is to watch the cursor blink. Close your eyes. The\n> opinions stated above are yours. You cannot imagine why you ever felt\n> otherwise.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 15 Apr 2002 20:54:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 crash..."
}
] |
[
{
"msg_contents": "http://freshmeat.net/articles/view/426/\n\nThis article is quite poorly written. I dare say that I expected more from\npeople who run a site associated with the categorisation of software (how\ncan one discuss MySQL, Oracle, Postgres and Access in the same article?).\n\nBy point of reference, however, I think Postgres is chugging along\nnicely. I would much prefer the author make his point in diff -c format.\n\n\nGavin\n\n\n",
"msg_date": "Mon, 18 Mar 2002 00:50:07 +1100 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": true,
"msg_subject": "Another misinformed article"
}
] |
[
{
"msg_contents": "Attached is a pacth against 7.2 which adds locale awareness to the\ncharacter classes of the regular expression engine. Please consider\nincluding this feature to postgreSQL.\n\nRegards,\nManuel.",
"msg_date": "17 Mar 2002 17:14:18 -0600",
"msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>",
"msg_from_op": true,
"msg_subject": "regexp character class locale awareness patch"
},
{
"msg_contents": "Can someone who is multbyte-aware comment on this patch? Thanks.\n\n---------------------------------------------------------------------------\n\nManuel Sugawara wrote:\n> Attached is a pacth against 7.2 which adds locale awareness to\n> the character classes of the regular expression engine. Please\n> consider including this feature to postgreSQL.\n> \n> Regards, Manuel.\n\nContent-Description: regexp character class locale awareness patch\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n*** src/backend/regex/regcomp.c.org\tSun Mar 17 16:39:13 2002\n--- src/backend/regex/regcomp.c\tSun Mar 17 16:53:43 2002\n***************\n*** 47,53 ****\n--- 47,64 ----\n #include \"regex/regex.h\"\n #include \"regex/utils.h\"\n #include \"regex/regex2.h\"\n+ #ifdef USE_LOCALE\n+ struct cclass\n+ {\n+ char *name;\n+ char *chars;\n+ char *multis;\n+ };\n+ static struct cclass* cclasses = NULL;\n+ static struct cclass* cclass_init(void);\n+ #else\n #include \"regex/cclass.h\"\n+ #endif /* USE_LOCALE */\n #include \"regex/cname.h\"\n \n /*\n***************\n*** 174,179 ****\n--- 185,195 ----\n \tpg_wchar *wcp;\n #endif\n \n+ #ifdef USE_LOCALE\n+ if ( cclasses == NULL )\n+ cclasses = cclass_init();\n+ #endif /* USE_LOCALE */\n+ \n #ifdef REDEBUG\n #define GOODFLAGS(f)\t (f)\n #else\n***************\n*** 884,890 ****\n \tstruct cclass *cp;\n \tsize_t\t\tlen;\n \tchar\t *u;\n! \tchar\t\tc;\n \n \twhile (MORE() && pg_isalpha(PEEK()))\n \t\tNEXT();\n--- 900,906 ----\n \tstruct cclass *cp;\n \tsize_t\t\tlen;\n \tchar\t *u;\n! 
\tunsigned char\t\tc;\n \n \twhile (MORE() && pg_isalpha(PEEK()))\n \t\tNEXT();\n***************\n*** 905,911 ****\n \n \tu = cp->chars;\n \twhile ((c = *u++) != '\\0')\n! \t\tCHadd(cs, c);\n \tfor (u = cp->multis; *u != '\\0'; u += strlen(u) + 1)\n \t\tMCadd(p, cs, u);\n }\n--- 921,927 ----\n \n \tu = cp->chars;\n \twhile ((c = *u++) != '\\0')\n! \t\tCHadd(cs, c); \n \tfor (u = cp->multis; *u != '\\0'; u += strlen(u) + 1)\n \t\tMCadd(p, cs, u);\n }\n***************\n*** 1716,1718 ****\n--- 1732,1796 ----\n \treturn (islower((unsigned char) c));\n #endif\n }\n+ \n+ #ifdef USE_LOCALE\n+ static struct cclass *\n+ cclass_init(void)\n+ {\n+ struct cclass *cp = NULL;\n+ struct cclass *classes = NULL;\n+ struct cclass_factory\n+ {\n+ char *name;\n+ int (*func)(int);\n+ char *chars;\n+ } cclass_factories [] =\n+ {\n+ { \"alnum\", isalnum, NULL },\n+ { \"alpha\", isalpha, NULL },\n+ { \"blank\", NULL, \" \\t\" },\n+ { \"cntrl\", iscntrl, NULL },\n+ { \"digit\", NULL, \"0123456789\" },\n+ { \"graph\", isgraph, NULL },\n+ { \"lower\", islower, NULL },\n+ { \"print\", isprint, NULL },\n+ { \"punct\", ispunct, NULL },\n+ { \"space\", NULL, \"\\t\\n\\v\\f\\r \" },\n+ { \"upper\", isupper, NULL },\n+ { \"xdigit\", isxdigit, NULL },\n+ { NULL, NULL, NULL }\n+ };\n+ struct cclass_factory *cf = NULL;\n+ \n+ classes = malloc(sizeof(struct cclass) * (sizeof(cclass_factories) / sizeof(struct cclass_factory)));\n+ if (classes == NULL)\n+ elog(ERROR,\"cclass_init: out of memory\");\n+ \n+ cp = classes;\n+ for(cf = cclass_factories; cf->name != NULL; cf++)\n+ {\n+ cp->name = strdup(cf->name);\n+ if ( cf->chars )\n+ cp->chars = strdup(cf->chars);\n+ else\n+ {\n+ int x = 0, y = 0;\n+ cp->chars = malloc(sizeof(char) * 256);\n+ if (cp->chars == NULL)\n+ elog(ERROR,\"cclass_init: out of memory\");\n+ for (x = 0; x < 256; x++)\n+ {\n+ if((cf->func)(x))\n+ *(cp->chars + y++) = x; \n+ }\n+ *(cp->chars + y) = '\\0';\n+ }\n+ cp->multis = \"\";\n+ cp++;\n+ }\n+ cp->name = cp->chars = NULL;\n+ 
cp->multis = \"\";\n+ \n+ return classes;\n+ }\n+ #endif /* USE_LOCALE */",
"msg_date": "Sun, 14 Apr 2002 12:55:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "> Can someone who is multbyte-aware comment on this patch? Thanks.\n\nI thought the patch is not relevant to multibyte support?\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 15 Apr 2002 11:28:04 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> > Can someone who is multbyte-aware comment on this patch? Thanks.\n> \n> I thought the patch is not relevant to multibyte support?\n\nSorry, yes, it is for locale.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 14 Apr 2002 22:36:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "Whatever you do with this patch, remember that the USE_LOCALE symbol is\ngone.\n\nBruce Momjian writes:\n\n>\n> Can someone who is multbyte-aware comment on this patch? Thanks.\n>\n> ---------------------------------------------------------------------------\n>\n> Manuel Sugawara wrote:\n> > Attached is a pacth against 7.2 which adds locale awareness to\n> > the character classes of the regular expression engine. Please\n> > consider including this feature to postgreSQL.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 15 Apr 2002 00:23:38 -0400 (EDT)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "> Whatever you do with this patch, remember that the USE_LOCALE symbol is\n> gone.\n\nI thought we have some way to tern off locale support at the configure\ntime.\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 15 Apr 2002 15:07:43 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "Tatsuo Ishii writes:\n\n> > Whatever you do with this patch, remember that the USE_LOCALE symbol is\n> > gone.\n>\n> I thought we have some way to tern off locale support at the configure\n> time.\n\nYou do it at initdb time now.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 15 Apr 2002 11:37:20 -0400 (EDT)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "> Whatever you do with this patch, remember that the USE_LOCALE symbol is\n> gone.\n\nThen the patches should be modified.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 16 Apr 2002 10:09:07 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> > Whatever you do with this patch, remember that the USE_LOCALE symbol is\n> > gone.\n> \n> Then the patches should be modified.\n\nYes, I am not quite sure how to do that. I will research it unless\nsomeone else lends a hand.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 15 Apr 2002 21:13:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Tatsuo Ishii wrote:\n> > > Whatever you do with this patch, remember that the USE_LOCALE symbol is\n> > > gone.\n> >\n> > Then the patches should be modified.\n>\n> Yes, I am not quite sure how to do that. I will research it unless\n> someone else lends a hand.\n\nBasically, you manually preprocess the patch to include the USE_LOCALE\nbranch and remove the not USE_LOCALE branch. However, if the no-locale\nbranches have significant performance benefits then it might be worth\npondering setting up some optimizations.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 15 Apr 2002 21:51:56 -0400 (EDT)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "> According to POSIX -regex (7)-, standard character class are:\n> \n> alnum digit punct\n> alpha graph space\n> blank lower upper\n> cntrl print xdigi\n> \n> Many of that classes are different in different locales, and currently\n> all work as if the localization were C. Many of those tests have\n> multibyte issues, however with the patch postgres will work for\n> one-byte encondings, which is better than nothing. If someone\n> (Tatsuo?) gives some advice I will work in the multibyte version.\n\nI don't think character classes are applicable for most mutibyte\nencodings. Maybe only the exeception is Unicode?\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> >\n> > Basically, you manually preprocess the patch to include the\n> > USE_LOCALE branch and remove the not USE_LOCALE branch.\n> \n> Yeah, that should work. You may also remove include/regex/cclass.h\n> since it will not be used any more.\n\nBut I don't like cclass_init() routine runs every time when reg_comp\ncalled. In my understanding the result of cclass_init() is always\nsame. What about running cclass_init() in postmaster, not postgres? Or\neven better in initdb time?\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 16 Apr 2002 11:42:47 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "According to POSIX -regex (7)-, standard character class are:\n\n alnum digit punct\n alpha graph space\n blank lower upper\n cntrl print xdigi\n\nMany of that classes are different in different locales, and currently\nall work as if the localization were C. Many of those tests have\nmultibyte issues, however with the patch postgres will work for\none-byte encondings, which is better than nothing. If someone\n(Tatsuo?) gives some advice I will work in the multibyte version.\n\nPeter Eisentraut <peter_e@gmx.net> writes:\n>\n> Basically, you manually preprocess the patch to include the\n> USE_LOCALE branch and remove the not USE_LOCALE branch.\n\nYeah, that should work. You may also remove include/regex/cclass.h\nsince it will not be used any more.\n\n> However, if the no-locale branches have significant performance\n> benefits then it might be worth pondering setting up some\n> optimizations.\n\nThis is not the case.\n\nRegards,\nManuel.\n",
"msg_date": "15 Apr 2002 21:32:30 -0600",
"msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>",
"msg_from_op": true,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n\n> I don't think character classes are applicable for most mutibyte\n> encodings. Maybe only the exeception is Unicode?\n\nMaybe, and is the only one I need ;-)\n\n> \n> > Peter Eisentraut <peter_e@gmx.net> writes:\n> > >\n> > > Basically, you manually preprocess the patch to include the\n> > > USE_LOCALE branch and remove the not USE_LOCALE branch.\n> > \n> > Yeah, that should work. You may also remove include/regex/cclass.h\n> > since it will not be used any more.\n> \n> But I don't like cclass_init() routine runs every time when reg_comp\n> called.\n\nActually it is called once per backend and only if it uses the regular\nexpression engine.\n\n> In my understanding the result of cclass_init() is always\n> same. \n\nYes, if localization does not change. Karel once talked about the\npossibility of being able to have different locales in the same\nDB.\n\n> What about running cclass_init() in postmaster, not postgres? Or\n> even better in initdb time?\n\nIt might be, but ... I think that it would be nice if we leave the\ndoor open to the possibility of having mixed locale configurations,\nacross data bases or even across columns of the same table.\n\nRegards,\nManuel.\n",
"msg_date": "15 Apr 2002 23:11:50 -0600",
"msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>",
"msg_from_op": true,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "Manuel Sugawara wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> >\n> > Basically, you manually preprocess the patch to include the\n> > USE_LOCALE branch and remove the not USE_LOCALE branch.\n> \n> Yeah, that should work. You may also remove include/regex/cclass.h\n> since it will not be used any more.\n> \n> > However, if the no-locale branches have significant performance\n> > benefits then it might be worth pondering setting up some\n> > optimizations.\n> \n> This is not the case.\n\nHere is a patch based on this discussion.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/regex/regcomp.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/regex/regcomp.c,v\nretrieving revision 1.28\ndiff -c -r1.28 regcomp.c\n*** src/backend/regex/regcomp.c\t28 Oct 2001 06:25:49 -0000\t1.28\n--- src/backend/regex/regcomp.c\t16 Apr 2002 23:12:38 -0000\n***************\n*** 47,53 ****\n #include \"regex/regex.h\"\n #include \"regex/utils.h\"\n #include \"regex/regex2.h\"\n! #include \"regex/cclass.h\"\n #include \"regex/cname.h\"\n \n /*\n--- 47,60 ----\n #include \"regex/regex.h\"\n #include \"regex/utils.h\"\n #include \"regex/regex2.h\"\n! struct cclass\n! {\n! char *name;\n! char *chars;\n! char *multis;\n! };\n! static struct cclass* cclasses = NULL;\n! static struct cclass* cclass_init(void);\n #include \"regex/cname.h\"\n \n /*\n***************\n*** 174,179 ****\n--- 181,189 ----\n \tpg_wchar *wcp;\n #endif\n \n+ if ( cclasses == NULL )\n+ cclasses = cclass_init();\n+ \n #ifdef REDEBUG\n #define GOODFLAGS(f)\t (f)\n #else\n***************\n*** 884,890 ****\n \tstruct cclass *cp;\n \tsize_t\t\tlen;\n \tchar\t *u;\n! 
\tchar\t\tc;\n \n \twhile (MORE() && pg_isalpha(PEEK()))\n \t\tNEXT();\n--- 894,900 ----\n \tstruct cclass *cp;\n \tsize_t\t\tlen;\n \tchar\t *u;\n! \tunsigned char\t\tc;\n \n \twhile (MORE() && pg_isalpha(PEEK()))\n \t\tNEXT();\n***************\n*** 905,911 ****\n \n \tu = cp->chars;\n \twhile ((c = *u++) != '\\0')\n! \t\tCHadd(cs, c);\n \tfor (u = cp->multis; *u != '\\0'; u += strlen(u) + 1)\n \t\tMCadd(p, cs, u);\n }\n--- 915,921 ----\n \n \tu = cp->chars;\n \twhile ((c = *u++) != '\\0')\n! \t\tCHadd(cs, c); \n \tfor (u = cp->multis; *u != '\\0'; u += strlen(u) + 1)\n \t\tMCadd(p, cs, u);\n }\n***************\n*** 1715,1718 ****\n--- 1725,1788 ----\n #else\n \treturn (islower((unsigned char) c));\n #endif\n+ }\n+ \n+ static struct cclass *\n+ cclass_init(void)\n+ {\n+ struct cclass *cp = NULL;\n+ struct cclass *classes = NULL;\n+ struct cclass_factory\n+ {\n+ char *name;\n+ int (*func)(int);\n+ char *chars;\n+ } cclass_factories [] =\n+ {\n+ { \"alnum\", isalnum, NULL },\n+ { \"alpha\", isalpha, NULL },\n+ { \"blank\", NULL, \" \\t\" },\n+ { \"cntrl\", iscntrl, NULL },\n+ { \"digit\", NULL, \"0123456789\" },\n+ { \"graph\", isgraph, NULL },\n+ { \"lower\", islower, NULL },\n+ { \"print\", isprint, NULL },\n+ { \"punct\", ispunct, NULL },\n+ { \"space\", NULL, \"\\t\\n\\v\\f\\r \" },\n+ { \"upper\", isupper, NULL },\n+ { \"xdigit\", isxdigit, NULL },\n+ { NULL, NULL, NULL }\n+ };\n+ struct cclass_factory *cf = NULL;\n+ \n+ classes = malloc(sizeof(struct cclass) * (sizeof(cclass_factories) / sizeof(struct cclass_factory)));\n+ if (classes == NULL)\n+ elog(ERROR,\"cclass_init: out of memory\");\n+ \n+ cp = classes;\n+ for(cf = cclass_factories; cf->name != NULL; cf++)\n+ {\n+ cp->name = strdup(cf->name);\n+ if ( cf->chars )\n+ cp->chars = strdup(cf->chars);\n+ else\n+ {\n+ int x = 0, y = 0;\n+ cp->chars = malloc(sizeof(char) * 256);\n+ if (cp->chars == NULL)\n+ elog(ERROR,\"cclass_init: out of memory\");\n+ for (x = 0; x < 256; x++)\n+ {\n+ if((cf->func)(x))\n+ 
*(cp->chars + y++) = x; \n+ }\n+ *(cp->chars + y) = '\\0';\n+ }\n+ cp->multis = \"\";\n+ cp++;\n+ }\n+ cp->name = cp->chars = NULL;\n+ cp->multis = \"\";\n+ \n+ return classes;\n }",
"msg_date": "Tue, 16 Apr 2002 19:21:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "En Tue, 16 Apr 2002 19:21:50 -0400 (EDT)\nBruce Momjian <pgman@candle.pha.pa.us> escribi�:\n\n> Here is a patch based on this discussion.\n\nI still think the xdigit class could be handled the same way the digit\nclass is (by enumeration rather than using the isxdigit function). That\nsaves you a cicle, and I don't think there's any loss.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"The ability to monopolize a planet is insignificant\nnext to the power of the source\"\n",
"msg_date": "Tue, 16 Apr 2002 20:03:41 -0400",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nBruce Momjian wrote:\n> Manuel Sugawara wrote:\n> > Peter Eisentraut <peter_e@gmx.net> writes:\n> > >\n> > > Basically, you manually preprocess the patch to include the\n> > > USE_LOCALE branch and remove the not USE_LOCALE branch.\n> > \n> > Yeah, that should work. You may also remove include/regex/cclass.h\n> > since it will not be used any more.\n> > \n> > > However, if the no-locale branches have significant performance\n> > > benefits then it might be worth pondering setting up some\n> > > optimizations.\n> > \n> > This is not the case.\n> \n> Here is a patch based on this discussion.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n> Index: src/backend/regex/regcomp.c\n> ===================================================================\n> RCS file: /cvsroot/pgsql/src/backend/regex/regcomp.c,v\n> retrieving revision 1.28\n> diff -c -r1.28 regcomp.c\n> *** src/backend/regex/regcomp.c\t28 Oct 2001 06:25:49 -0000\t1.28\n> --- src/backend/regex/regcomp.c\t16 Apr 2002 23:12:38 -0000\n> ***************\n> *** 47,53 ****\n> #include \"regex/regex.h\"\n> #include \"regex/utils.h\"\n> #include \"regex/regex2.h\"\n> ! #include \"regex/cclass.h\"\n> #include \"regex/cname.h\"\n> \n> /*\n> --- 47,60 ----\n> #include \"regex/regex.h\"\n> #include \"regex/utils.h\"\n> #include \"regex/regex2.h\"\n> ! struct cclass\n> ! {\n> ! char *name;\n> ! char *chars;\n> ! char *multis;\n> ! };\n> ! static struct cclass* cclasses = NULL;\n> ! 
static struct cclass* cclass_init(void);\n> #include \"regex/cname.h\"\n> \n> /*\n> ***************\n> *** 174,179 ****\n> --- 181,189 ----\n> \tpg_wchar *wcp;\n> #endif\n> \n> + if ( cclasses == NULL )\n> + cclasses = cclass_init();\n> + \n> #ifdef REDEBUG\n> #define GOODFLAGS(f)\t (f)\n> #else\n> ***************\n> *** 884,890 ****\n> \tstruct cclass *cp;\n> \tsize_t\t\tlen;\n> \tchar\t *u;\n> ! \tchar\t\tc;\n> \n> \twhile (MORE() && pg_isalpha(PEEK()))\n> \t\tNEXT();\n> --- 894,900 ----\n> \tstruct cclass *cp;\n> \tsize_t\t\tlen;\n> \tchar\t *u;\n> ! \tunsigned char\t\tc;\n> \n> \twhile (MORE() && pg_isalpha(PEEK()))\n> \t\tNEXT();\n> ***************\n> *** 905,911 ****\n> \n> \tu = cp->chars;\n> \twhile ((c = *u++) != '\\0')\n> ! \t\tCHadd(cs, c);\n> \tfor (u = cp->multis; *u != '\\0'; u += strlen(u) + 1)\n> \t\tMCadd(p, cs, u);\n> }\n> --- 915,921 ----\n> \n> \tu = cp->chars;\n> \twhile ((c = *u++) != '\\0')\n> ! \t\tCHadd(cs, c); \n> \tfor (u = cp->multis; *u != '\\0'; u += strlen(u) + 1)\n> \t\tMCadd(p, cs, u);\n> }\n> ***************\n> *** 1715,1718 ****\n> --- 1725,1788 ----\n> #else\n> \treturn (islower((unsigned char) c));\n> #endif\n> + }\n> + \n> + static struct cclass *\n> + cclass_init(void)\n> + {\n> + struct cclass *cp = NULL;\n> + struct cclass *classes = NULL;\n> + struct cclass_factory\n> + {\n> + char *name;\n> + int (*func)(int);\n> + char *chars;\n> + } cclass_factories [] =\n> + {\n> + { \"alnum\", isalnum, NULL },\n> + { \"alpha\", isalpha, NULL },\n> + { \"blank\", NULL, \" \\t\" },\n> + { \"cntrl\", iscntrl, NULL },\n> + { \"digit\", NULL, \"0123456789\" },\n> + { \"graph\", isgraph, NULL },\n> + { \"lower\", islower, NULL },\n> + { \"print\", isprint, NULL },\n> + { \"punct\", ispunct, NULL },\n> + { \"space\", NULL, \"\\t\\n\\v\\f\\r \" },\n> + { \"upper\", isupper, NULL },\n> + { \"xdigit\", isxdigit, NULL },\n> + { NULL, NULL, NULL }\n> + };\n> + struct cclass_factory *cf = NULL;\n> + \n> + classes = malloc(sizeof(struct cclass) * 
(sizeof(cclass_factories) / sizeof(struct cclass_factory)));\n> + if (classes == NULL)\n> + elog(ERROR,\"cclass_init: out of memory\");\n> + \n> + cp = classes;\n> + for(cf = cclass_factories; cf->name != NULL; cf++)\n> + {\n> + cp->name = strdup(cf->name);\n> + if ( cf->chars )\n> + cp->chars = strdup(cf->chars);\n> + else\n> + {\n> + int x = 0, y = 0;\n> + cp->chars = malloc(sizeof(char) * 256);\n> + if (cp->chars == NULL)\n> + elog(ERROR,\"cclass_init: out of memory\");\n> + for (x = 0; x < 256; x++)\n> + {\n> + if((cf->func)(x))\n> + *(cp->chars + y++) = x; \n> + }\n> + *(cp->chars + y) = '\\0';\n> + }\n> + cp->multis = \"\";\n> + cp++;\n> + }\n> + cp->name = cp->chars = NULL;\n> + cp->multis = \"\";\n> + \n> + return classes;\n> }\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 17 Apr 2002 17:55:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "\nOK, once I apply the original patch, please submit a patch for this and\npeople can comment on it. Thanks.\n\n\n---------------------------------------------------------------------------\n\nAlvaro Herrera wrote:\n> En Tue, 16 Apr 2002 19:21:50 -0400 (EDT)\n> Bruce Momjian <pgman@candle.pha.pa.us> escribi?:\n> \n> > Here is a patch based on this discussion.\n> \n> I still think the xdigit class could be handled the same way the digit\n> class is (by enumeration rather than using the isxdigit function). That\n> saves you a cicle, and I don't think there's any loss.\n> \n> -- \n> Alvaro Herrera (<alvherre[a]atentus.com>)\n> \"The ability to monopolize a planet is insignificant\n> next to the power of the source\"\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 17 Apr 2002 17:56:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "Alvaro Herrera wrote:\n> En Tue, 16 Apr 2002 19:21:50 -0400 (EDT)\n> Bruce Momjian <pgman@candle.pha.pa.us> escribió:\n> \n> > Here is a patch based on this discussion.\n> \n> I still think the xdigit class could be handled the same way the digit\n> class is (by enumeration rather than using the isxdigit function). That\n> saves you a cicle, and I don't think there's any loss.\n\nIn fact, I will email you when I apply the original patch.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 17 Apr 2002 17:57:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> Alvaro Herrera wrote:\n> > En Tue, 16 Apr 2002 19:21:50 -0400 (EDT)\n> > Bruce Momjian <pgman@candle.pha.pa.us> escribi?:\n> > \n> > > Here is a patch based on this discussion.\n> > \n> > I still think the xdigit class could be handled the same way the digit\n> > class is (by enumeration rather than using the isxdigit function). That\n> > saves you a cicle, and I don't think there's any loss.\n> \n> In fact, I will email you when I apply the original patch.\n\nI miss that case :-(. Here is the pached patch.\n\nRegards,\nManuel.\n\nIndex: src/backend/regex/regcomp.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/regex/regcomp.c,v\nretrieving revision 1.28\ndiff -c -r1.28 regcomp.c\n*** src/backend/regex/regcomp.c\t28 Oct 2001 06:25:49 -0000\t1.28\n--- src/backend/regex/regcomp.c\t16 Apr 2002 23:12:38 -0000\n***************\n*** 47,53 ****\n #include \"regex/regex.h\"\n #include \"regex/utils.h\"\n #include \"regex/regex2.h\"\n! #include \"regex/cclass.h\"\n #include \"regex/cname.h\"\n \n /*\n--- 47,60 ----\n #include \"regex/regex.h\"\n #include \"regex/utils.h\"\n #include \"regex/regex2.h\"\n! struct cclass\n! {\n! char *name;\n! char *chars;\n! char *multis;\n! };\n! static struct cclass* cclasses = NULL;\n! static struct cclass* cclass_init(void);\n #include \"regex/cname.h\"\n \n /*\n***************\n*** 174,179 ****\n--- 181,189 ----\n \tpg_wchar *wcp;\n #endif\n \n+ if ( cclasses == NULL )\n+ cclasses = cclass_init();\n+ \n #ifdef REDEBUG\n #define GOODFLAGS(f)\t (f)\n #else\n***************\n*** 884,890 ****\n \tstruct cclass *cp;\n \tsize_t\t\tlen;\n \tchar\t *u;\n! \tchar\t\tc;\n \n \twhile (MORE() && pg_isalpha(PEEK()))\n \t\tNEXT();\n--- 894,900 ----\n \tstruct cclass *cp;\n \tsize_t\t\tlen;\n \tchar\t *u;\n! 
\tunsigned char\t\tc;\n \n \twhile (MORE() && pg_isalpha(PEEK()))\n \t\tNEXT();\n***************\n*** 905,911 ****\n \n \tu = cp->chars;\n \twhile ((c = *u++) != '\\0')\n! \t\tCHadd(cs, c);\n \tfor (u = cp->multis; *u != '\\0'; u += strlen(u) + 1)\n \t\tMCadd(p, cs, u);\n }\n--- 915,921 ----\n \n \tu = cp->chars;\n \twhile ((c = *u++) != '\\0')\n! \t\tCHadd(cs, c); \n \tfor (u = cp->multis; *u != '\\0'; u += strlen(u) + 1)\n \t\tMCadd(p, cs, u);\n }\n***************\n*** 1715,1718 ****\n--- 1725,1788 ----\n #else\n \treturn (islower((unsigned char) c));\n #endif\n+ }\n+ \n+ static struct cclass *\n+ cclass_init(void)\n+ {\n+ struct cclass *cp = NULL;\n+ struct cclass *classes = NULL;\n+ struct cclass_factory\n+ {\n+ char *name;\n+ int (*func)(int);\n+ char *chars;\n+ } cclass_factories [] =\n+ {\n+ { \"alnum\", isalnum, NULL },\n+ { \"alpha\", isalpha, NULL },\n+ { \"blank\", NULL, \" \\t\" },\n+ { \"cntrl\", iscntrl, NULL },\n+ { \"digit\", NULL, \"0123456789\" },\n+ { \"graph\", isgraph, NULL },\n+ { \"lower\", islower, NULL },\n+ { \"print\", isprint, NULL },\n+ { \"punct\", ispunct, NULL },\n+ { \"space\", NULL, \"\\t\\n\\v\\f\\r \" },\n+ { \"upper\", isupper, NULL },\n+ { \"xdigit\",NULL, \"abcdefABCDEF0123456789\" },\n+ { NULL, NULL, NULL }\n+ };\n+ struct cclass_factory *cf = NULL;\n+ \n+ classes = malloc(sizeof(struct cclass) * (sizeof(cclass_factories) / sizeof(struct cclass_factory)));\n+ if (classes == NULL)\n+ elog(ERROR,\"cclass_init: out of memory\");\n+ \n+ cp = classes;\n+ for(cf = cclass_factories; cf->name != NULL; cf++)\n+ {\n+ cp->name = strdup(cf->name);\n+ if ( cf->chars )\n+ cp->chars = strdup(cf->chars);\n+ else\n+ {\n+ int x = 0, y = 0;\n+ cp->chars = malloc(sizeof(char) * 256);\n+ if (cp->chars == NULL)\n+ elog(ERROR,\"cclass_init: out of memory\");\n+ for (x = 0; x < 256; x++)\n+ {\n+ if((cf->func)(x))\n+ *(cp->chars + y++) = x; \n+ }\n+ *(cp->chars + y) = '\\0';\n+ }\n+ cp->multis = \"\";\n+ cp++;\n+ }\n+ cp->name = cp->chars = NULL;\n+ 
cp->multis = \"\";\n+ \n+ return classes;\n }\n",
"msg_date": "17 Apr 2002 17:15:56 -0600",
"msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>",
"msg_from_op": true,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "\nOK, previous patch removed.\n\nThis patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n\n---------------------------------------------------------------------------\n\nManuel Sugawara wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > Alvaro Herrera wrote:\n> > > En Tue, 16 Apr 2002 19:21:50 -0400 (EDT)\n> > > Bruce Momjian <pgman@candle.pha.pa.us> escribi?:\n> > > \n> > > > Here is a patch based on this discussion.\n> > > \n> > > I still think the xdigit class could be handled the same way the digit\n> > > class is (by enumeration rather than using the isxdigit function). That\n> > > saves you a cicle, and I don't think there's any loss.\n> > \n> > In fact, I will email you when I apply the original patch.\n> \n> I miss that case :-(. Here is the pached patch.\n> \n> Regards,\n> Manuel.\n> \n> Index: src/backend/regex/regcomp.c\n> ===================================================================\n> RCS file: /cvsroot/pgsql/src/backend/regex/regcomp.c,v\n> retrieving revision 1.28\n> diff -c -r1.28 regcomp.c\n> *** src/backend/regex/regcomp.c\t28 Oct 2001 06:25:49 -0000\t1.28\n> --- src/backend/regex/regcomp.c\t16 Apr 2002 23:12:38 -0000\n> ***************\n> *** 47,53 ****\n> #include \"regex/regex.h\"\n> #include \"regex/utils.h\"\n> #include \"regex/regex2.h\"\n> ! #include \"regex/cclass.h\"\n> #include \"regex/cname.h\"\n> \n> /*\n> --- 47,60 ----\n> #include \"regex/regex.h\"\n> #include \"regex/utils.h\"\n> #include \"regex/regex2.h\"\n> ! struct cclass\n> ! {\n> ! char *name;\n> ! char *chars;\n> ! char *multis;\n> ! };\n> ! static struct cclass* cclasses = NULL;\n> ! 
static struct cclass* cclass_init(void);\n> #include \"regex/cname.h\"\n> \n> /*\n> ***************\n> *** 174,179 ****\n> --- 181,189 ----\n> \tpg_wchar *wcp;\n> #endif\n> \n> + if ( cclasses == NULL )\n> + cclasses = cclass_init();\n> + \n> #ifdef REDEBUG\n> #define GOODFLAGS(f)\t (f)\n> #else\n> ***************\n> *** 884,890 ****\n> \tstruct cclass *cp;\n> \tsize_t\t\tlen;\n> \tchar\t *u;\n> ! \tchar\t\tc;\n> \n> \twhile (MORE() && pg_isalpha(PEEK()))\n> \t\tNEXT();\n> --- 894,900 ----\n> \tstruct cclass *cp;\n> \tsize_t\t\tlen;\n> \tchar\t *u;\n> ! \tunsigned char\t\tc;\n> \n> \twhile (MORE() && pg_isalpha(PEEK()))\n> \t\tNEXT();\n> ***************\n> *** 905,911 ****\n> \n> \tu = cp->chars;\n> \twhile ((c = *u++) != '\\0')\n> ! \t\tCHadd(cs, c);\n> \tfor (u = cp->multis; *u != '\\0'; u += strlen(u) + 1)\n> \t\tMCadd(p, cs, u);\n> }\n> --- 915,921 ----\n> \n> \tu = cp->chars;\n> \twhile ((c = *u++) != '\\0')\n> ! \t\tCHadd(cs, c); \n> \tfor (u = cp->multis; *u != '\\0'; u += strlen(u) + 1)\n> \t\tMCadd(p, cs, u);\n> }\n> ***************\n> *** 1715,1718 ****\n> --- 1725,1788 ----\n> #else\n> \treturn (islower((unsigned char) c));\n> #endif\n> + }\n> + \n> + static struct cclass *\n> + cclass_init(void)\n> + {\n> + struct cclass *cp = NULL;\n> + struct cclass *classes = NULL;\n> + struct cclass_factory\n> + {\n> + char *name;\n> + int (*func)(int);\n> + char *chars;\n> + } cclass_factories [] =\n> + {\n> + { \"alnum\", isalnum, NULL },\n> + { \"alpha\", isalpha, NULL },\n> + { \"blank\", NULL, \" \\t\" },\n> + { \"cntrl\", iscntrl, NULL },\n> + { \"digit\", NULL, \"0123456789\" },\n> + { \"graph\", isgraph, NULL },\n> + { \"lower\", islower, NULL },\n> + { \"print\", isprint, NULL },\n> + { \"punct\", ispunct, NULL },\n> + { \"space\", NULL, \"\\t\\n\\v\\f\\r \" },\n> + { \"upper\", isupper, NULL },\n> + { \"xdigit\",NULL, \"abcdefABCDEF0123456789\" },\n> + { NULL, NULL, NULL }\n> + };\n> + struct cclass_factory *cf = NULL;\n> + \n> + classes = 
malloc(sizeof(struct cclass) * (sizeof(cclass_factories) / sizeof(struct cclass_factory)));\n> + if (classes == NULL)\n> + elog(ERROR,\"cclass_init: out of memory\");\n> + \n> + cp = classes;\n> + for(cf = cclass_factories; cf->name != NULL; cf++)\n> + {\n> + cp->name = strdup(cf->name);\n> + if ( cf->chars )\n> + cp->chars = strdup(cf->chars);\n> + else\n> + {\n> + int x = 0, y = 0;\n> + cp->chars = malloc(sizeof(char) * 256);\n> + if (cp->chars == NULL)\n> + elog(ERROR,\"cclass_init: out of memory\");\n> + for (x = 0; x < 256; x++)\n> + {\n> + if((cf->func)(x))\n> + *(cp->chars + y++) = x; \n> + }\n> + *(cp->chars + y) = '\\0';\n> + }\n> + cp->multis = \"\";\n> + cp++;\n> + }\n> + cp->name = cp->chars = NULL;\n> + cp->multis = \"\";\n> + \n> + return classes;\n> }\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 17 Apr 2002 19:41:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "> I miss that case :-(. Here is the pached patch.\n> \n> Regards,\n> Manuel.\n\nI also suggest that cclass_init() is called only if the locale is not\n\"C\".\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 18 Apr 2002 09:55:12 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "Tatsuo Ishii wrote:\n> > I miss that case :-(. Here is the pached patch.\n> > \n> > Regards,\n> > Manuel.\n> \n> I also suggest that cclass_init() is called only if the locale is not\n> \"C\".\n\nOK, patch on hold while this is addressed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 17 Apr 2002 21:01:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> Tatsuo Ishii wrote:\n> > > I miss that case :-(. Here is the pached patch.\n> > > \n> > > Regards,\n> > > Manuel.\n> > \n> > I also suggest that cclass_init() is called only if the locale is not\n> > \"C\".\n> \n> OK, patch on hold while this is addressed.\n\nHere is a patch which addresses Tatsuo's concerns (it does return a\nstatic struct instead of constructing it).\n\nRegards,\nManuel.",
"msg_date": "17 Apr 2002 22:53:32 -0600",
"msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>",
"msg_from_op": true,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nManuel Sugawara wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > Tatsuo Ishii wrote:\n> > > > I miss that case :-(. Here is the pached patch.\n> > > > \n> > > > Regards,\n> > > > Manuel.\n> > > \n> > > I also suggest that cclass_init() is called only if the locale is not\n> > > \"C\".\n> > \n> > OK, patch on hold while this is addressed.\n> \n> Here is a patch which addresses Tatsuo's concerns (it does return a\n> static struct instead of constructing it).\n> \n> Regards,\n> Manuel.\n> \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 18 Apr 2002 01:03:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "En 17 Apr 2002 22:53:32 -0600\nManuel Sugawara <masm@fciencias.unam.mx> escribió:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > Tatsuo Ishii wrote:\n> > > > I miss that case :-(. Here is the pached patch.\n> > > > \n> > > > Regards,\n> > > > Manuel.\n> > > \n> > > I also suggest that cclass_init() is called only if the locale is not\n> > > \"C\".\n> > \n> > OK, patch on hold while this is addressed.\n> \n> Here is a patch which addresses Tatsuo's concerns (it does return a\n> static struct instead of constructing it).\n\nIs there a reason to use \"\" instead of NULL in the \"multis\" member of\nthat static struct?\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"La virtud es el justo medio entre dos defectos\" (Aristoteles)\n",
"msg_date": "Thu, 18 Apr 2002 12:05:35 -0400",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: regexp character class locale awareness patch"
},
{
"msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n\n> En 17 Apr 2002 22:53:32 -0600\n> Manuel Sugawara <masm@fciencias.unam.mx> escribió:\n> \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > \n> > > Tatsuo Ishii wrote:\n> > > > > I miss that case :-(. Here is the pached patch.\n> > > > > \n> > > > > Regards,\n> > > > > Manuel.\n> > > > \n> > > > I also suggest that cclass_init() is called only if the locale is not\n> > > > \"C\".\n> > > \n> > > OK, patch on hold while this is addressed.\n> > \n> > Here is a patch which addresses Tatsuo's concerns (it does return a\n> > static struct instead of constructing it).\n> \n> Is there a reason to use \"\" instead of NULL in the \"multis\" member of\n> that static struct?\n\nYes, read the code.\n\nRegards,\nManuel.\n",
"msg_date": "18 Apr 2002 11:11:20 -0600",
"msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>",
"msg_from_op": true,
"msg_subject": "Re: regexp character class locale awareness patch"
}
] |
[
{
"msg_contents": "I need to do some timezone manipulation, and I was wondering about this\ndifference:\n\naustralia=# select version();\n version\n--------------------------------------------------------------\n PostgreSQL 7.1.3 on i386--freebsd4.4, compiled by GCC 2.95.3\n(1 row)\naustralia=# select '2002-03-18 00:00:00' at time zone 'Australia/Sydney';\nERROR: Time zone 'australia/sydney' not recognized\naustralia=# set time zone 'Australia/Sydney';\nSET VARIABLE\naustralia=# select '2002-03-18 00:00:00';\n ?column?\n---------------------\n 2002-03-18 00:00:00\n(1 row)\n\n\nWhy can't I use 'australia/sydney' as a time zone in 'at time zone'\nnotation? Has it been fixed in 7.2?\n\nNow, say I do this:\n\nselect '2002-03-18 00:00:00' at time zone 'AEST';\n\nThat will give me aussie eastern time quite happily, but what if I don't\nknow when summer time starts? I don't want to have to manually choose\nbetween 'AEST' and 'AESST'??? To me, the way to do this would be to use\n'Australia/Sydney' as the time zone, but this doesn't work.\n\n7.2 seems to have the same behaviour...\n\nChris\n\n",
"msg_date": "Mon, 18 Mar 2002 13:51:19 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Time zone questions"
},
{
"msg_contents": "> australia=# select '2002-03-18 00:00:00' at time zone 'Australia/Sydney';\n> ERROR: Time zone 'australia/sydney' not recognized\n> australia=# set time zone 'Australia/Sydney';\n> SET VARIABLE\n> australia=# select '2002-03-18 00:00:00';\n> ?column?\n> ---------------------\n> 2002-03-18 00:00:00\n> Why can't I use 'australia/sydney' as a time zone in 'at time zone'\n> notation? Has it been fixed in 7.2?\n\nNot fixed, because not broken ;)\n\nPostgreSQL recognizes specific time zones such as GMT, PST, or, in your\ncase, EST (is that right? My zinc database on my Linux box seems to\nidentify both daylight and standard times as \"EST\").\n\nBut for input it only uses the zoneinfo database (or equivalent) if no\ntime zone is specified. Then it uses the system to obtain the local time\nzone.\n\n> select '2002-03-18 00:00:00' at time zone 'AEST';\n> That will give me aussie eastern time quite happily, but what if I don't\n> know when summer time starts? I don't want to have to manually choose\n> between 'AEST' and 'AESST'??? To me, the way to do this would be to use\n> 'Australia/Sydney' as the time zone, but this doesn't work.\n\nRight. To do what you suggest is probably *very* expensive, but I\nactually haven't tried it to confirm. It could require changing the\ndefault time zone every time a timestamp is evaluated, which would\nrequire file opens/closes, environment variable setting, etc etc.\n\nafaik there is no direct API to access time zone info; if there was we\ncould more easily think about supporting this.\n\nPresumably you are interested in this for an application where you want\nto support multiple time zones. But why is a combination of\n\nSET TIME ZONE 'Australia/Sydney';\n\nand\n\nSELECT '2002-03-18 00:00:00' not adequate for this kind of thing? btw,\nSQL9x only specifies numeric time zones, which of course have no concept\nof time zone rules at all :(\n\n - Tom\n",
"msg_date": "Mon, 18 Mar 2002 19:49:29 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Time zone questions"
}
] |
[
{
"msg_contents": "hi all\ni'm working with pg7.2 on irix6.5 platform and i've realized that postgres is using semop instead of tas, pg_config_os.h has define HAVE_TEST_AND_SET, and i don't kwow where could be the mistake.\nany suggestion?\nthanks and regards",
"msg_date": "Mon, 18 Mar 2002 11:51:33 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "postgres is not using tas"
},
{
"msg_contents": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es> writes:\n> i'm working with pg7.2 on irix6.5 platform and i've realized that postgres is using semop instead of tas, pg_config_os.h has define HAVE_TEST_AND_SET, and i don't kwow where could be the mistake.\n\ns_lock.h seems to think that __sgi is predefined on IRIX. Perhaps that\nis not true in your setup? What compiler are you using?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Mar 2002 10:29:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgres is not using tas "
},
{
"msg_contents": "hi tom\nIt is compiled with the MIPSpro compilers.\nI've tried to remove the #if defined in s_lock.h, but it's still using semop; is\nthere any other place it could be defined?\nthanks and regards.\n\n\n",
"msg_date": "Mon, 18 Mar 2002 17:31:31 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "Re: postgres is not using tas "
},
{
"msg_contents": "As i know, it's only using semop, even with TAS_AND_SET defined, this is an\nextract from postmaster's process registry\n 2515.934mS(+ 5914uS)[ 4] postgres(38089): read(25, <00 00 00 00 68\na9 6e 10 00 00 00 22 00 a8 00 c8>..., 8192) = 8192\n 2520.497mS(+ 4562uS)[ 4] postgres(38089): read(25, <00 00 00 00 68\na9 9a 18 00 00 00 22 00 a8 00 c8>..., 8192) = 8192\n 2526.496mS(+ 5998uS)[ 4] postgres(38089): read(25, <00 00 00 00 68\na9 c6 38 00 00 00 22 00 a8 00 c8>..., 8192) = 8192\n 2527.115mS(+ 619uS)[ 4] postgres(38089): semop(1568, 0x7fff1c70,\n1) OK\n 2527.314mS(+ 198uS)[ 4] postgres(38089): semop(1568, 0x7fff1c70,\n1) OK\n 2527.390mS(+ 76uS)[ 4] postgres(38089): semop(1568, 0x7fff1c70,\n1) OK\n 2532.199mS(+ 4809uS)[ 4] postgres(38089): read(25, <00 00 00 00 68\na9 f2 40 00 00 00 22 00 a8 00 c8>..., 8192) = 8192\n 2537.896mS(+ 5696uS)[ 4] postgres(38089): read(25, <00 00 00 00 68\naa 1e 48 00 00 00 22 00 a8 00 c8>..., 8192) = 8192\n 2543.147mS(+ 5251uS)[ 4] postgres(38089): read(25, <00 00 00 00 68\naa 4a 68 00 00 00 22 00 a8 00 c8>..., 8192) = 8192\nThanks and regards\n\n",
"msg_date": "Mon, 18 Mar 2002 17:49:28 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "Re: postgres is not using tas "
},
{
"msg_contents": "here is the system call summary for the execution of one backend:\nSystem call summary:\n Average Total\nName #Calls Time(ms) Time(ms)\n-----------------------------------------\nsemop 39305 0.06 2497.53\nselect 7 19.86 139.01\nunlink 1 22.96 22.96\nclose 49 0.04 2.06\nrecv 1 0.72 0.72\nsend 1 0.11 0.11\nfsync 1 0.07 0.07\nprctl 1 0.01 0.01\nexit 1 0.00 0.00\n\nAs you can see it's amazing\nThanks and regards\n\n",
"msg_date": "Mon, 18 Mar 2002 18:37:07 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "Re: postgres is not using tas "
},
{
"msg_contents": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es> writes:\n> As i know, it's only using semop, even with TAS_AND_SET defined, this is an\n> extract from postmaster's process registry\n\nThe fact that there are some semops in the strace doesn't prove\nanything. We do use semaphores when we have to block the current\nprocess.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Mar 2002 13:19:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgres is not using tas "
},
{
"msg_contents": "hi tom\ncould you please tell me where to find info on when and why semop is used?\nthis thread began because i had excessive sem usage, as you can see.\nthanks and regards\n\n\n",
"msg_date": "Mon, 18 Mar 2002 19:26:22 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "Re: postgres is not using tas "
},
{
"msg_contents": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es> writes:\n> could you please tell me where to find info on when and why is semop used,\n\nIt's used when we need to block the current process (or to unblock\nanother process that had been waiting). Look for calls to\nIpcSemaphoreLock and IpcSemaphoreUnlock.\n\nA large number of semops may mean that you have excessive contention on\nsome lockable resource, but I don't have enough info to guess what resource.\nHave you tried doing profiling of the backend?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Mar 2002 13:31:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgres is not using tas "
},
{
"msg_contents": "The original problem was the low cpu usage due to semaphores; most of the orange\nzone is due to sems.\nthanks and regards",
"msg_date": "Mon, 18 Mar 2002 19:32:59 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "Re: postgres is not using tas "
},
{
"msg_contents": "hi tom\nIf i track a single backend during a parallel execution of 8 read-only queries,\nthese are the results:\nSystem call summary:\n Average Total\nName #Calls Time(ms) Time(ms)\n-----------------------------------------\nsemop 3803 0.20 774.03\nselect 4 19.58 78.33\nrecv 1 2.41 2.41\nbrk 6 0.08 0.48\nclose 1 0.14 0.14\nsend 1 0.14 0.14\nsemctl 1 0.05 0.05\nprctl 1 0.01 0.01\nexit 1 0.00 0.00\n\nI think it's a bit excessive for an 8-way SMP\nwhat do you think?\nthanks and regards\n\n",
"msg_date": "Mon, 18 Mar 2002 19:58:45 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "Re: postgres is not using tas "
},
{
"msg_contents": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es> writes:\n> If i track a single backend during an 8 read-only queries parallel execution\n> these are the results(\n> System call summary:\n> Average Total\n> Name #Calls Time(ms) Time(ms)\n> -----------------------------------------\n> semop 3803 0.20 774.03\n> select 4 19.58 78.33\n> recv 1 2.41 2.41\n> brk 6 0.08 0.48\n> close 1 0.14 0.14\n> send 1 0.14 0.14\n> semctl 1 0.05 0.05\n> prctl 1 0.01 0.01\n> exit 1 0.00 0.00\n\nConsidering that there are no read() or write() calls listed, and that\n8 client queries would surely require at least one send() and one recv()\napiece, I don't think I believe a word of those stats. Well, maybe the\n1 exit() is correct ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Mar 2002 14:13:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgres is not using tas "
},
{
"msg_contents": "hi tom\nhow may we have believable statistics?\nwhat do you think about the graph i've sent to you? they are retrieved using\nhardware counters, i believe they are exact.\nAny idea?\nThanks and regards\n\n",
"msg_date": "Mon, 18 Mar 2002 20:23:05 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "Re: postgres is not using tas "
},
{
"msg_contents": "postgres is compiled with the MIPSpro compiler; how may i prepare it for\nprofiling?\nThanks and regards\n\n",
"msg_date": "Wed, 20 Mar 2002 10:17:45 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "Re: postgres is not using tas "
}
] |
[
{
"msg_contents": "\n----- Original Message -----\nFrom: \"Robert E. Bruccoleri\" <bruc@stone.congenomics.com>\nTo: \"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>\nSent: Monday, March 18, 2002 4:08 PM\nSubject: Re: [HACKERS] bad performance on irix\n\n\n> Dear Luis,\n> >\n> > Dear Bob:\n> > I've removed ifdefs from s_lock.h trying if semop using was define\nproblem,\n> > but it's still using semop\n> > any suggest?\n>\n> No, I see the same compilation as you do with 7.2. It's using the\nspinlocks\n> for some locks, but semaphores for others. I don't know what to\n> do next. Alas... --Bob\n>\n> +-----------------------------+------------------------------------+\n> | Robert E. Bruccoleri, Ph.D. | email: bruc@acm.org |\n> | P.O. Box 314 | URL: http://www.congen.com/~bruc |\n> | Pennington, NJ 08534 | |\n> +-----------------------------+------------------------------------+\n>\n\n",
"msg_date": "Mon, 18 Mar 2002 17:32:34 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "Fw: bad performance on irix"
},
{
"msg_contents": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es> forwards:\n>> It's using the spinlocks\n>> for some locks, but semaphores for others.\n\nThat doesn't make any sense to me. For one thing, if HAS_TEST_AND_SET\nis defined in the config header, the executable will just plain fail to\nbuild if there's no tas implementation, because lmgr/spin.c won't be\ncompiled. And I sure don't see how some of the locks might be\nimplemented one way and some the other.\n\nWhich ones do you think are being implemented as semaphores, and what's\nyour evidence?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Mar 2002 11:36:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fw: bad performance on irix "
}
] |
[
{
"msg_contents": "After reading way too many threads on this (probably too many on pgsql-*\nin general) I'll just go over how I feel about the caching \"issue\".\n\nIt seems that MySQL has implemented a system that allows the database to\ncache queries that are frequently used and reduce latency for them. This,\nto me, seems like a very nice low-hanging fruit optimization, especially\nfor web systems.\n\n=== Examples\n\nSearch Pages:\n\tI implented a bug database. The main entry point was a \"Define your\nsearch\" page which presented quite a few options. Many of them were\ndrop-down lists. This page did five or six queries to do things like find\nthe list of engineers in the company, categories for bugs, and versions of\nsoftware. The results of those queries probably changed once per month,\nbut were done several times/day. While they are simple and may not have\ncost much, I can see how a simple cache would make them cost less.\n\nHome Pages:\n\tFrequently, in the 'blog case (such as my home page), a lookup is done\nevery time the page is hit. I update that table every couple of days, but\nit is accessed much more often. Once again, this is a fairly common\nusage pattern in the web environment that /may/ be a good candiate for this\nsort of caching.\n\n\tThese are two frequently-used design patterns which I think would\nbenefit from this optimization. MySQL, and some of their customers seem\nto think so, too.\n\n=== Common Arguments\n\n\t\"This shouldn't be in the database!\"\n\n\tArguably, yes. This is something that might be better handled by the\napplication server. 
The app server may or may not have a unified connection\npool to the database and can better organize the queries and caching.\n\tOn the other hand, for the case of a database that is not on the same machine\nas the webserver, this is a good chance to reduce bandwidth.\n\n\n\t\"This is going to make things ugly/hard to implement/etc\"...\n\t\n\tPersonally, I feel that too many of PostgreSQL's potential features get\nrejected out-of-hand on the grounds that implementation will be difficult or\nthat it will \"make things gross\" (as though parts of PostgreSQL aren't gross\nalready). While I've not looked /too/ closely, it seems that if one were\nto create a way for the system to maintain the results of a query, keyed by\nthe text of the query itself, it would be easy for something in the query\nsequence to check and see if the query has already been done, and access it.\n\n\tWe already hold resultsets between queries in order to handle cursors,\nso most of the framework must already be in there. Just keep each 'cacheable'\nquery.\n\tNOTE: This probably implies that in the simple case, the cache cannot\nbe used between different connections.\n\n\tThe other issue is the expiration of the cache entries. Once again, for\nthe \"Home Pages\" case above, I would be perfectly satisified if the cache\nwas entirely blown away every time any UPDATE query was executed. This would\nhandle most cases, except for triggers on non-UPDATE queries. Otherwise, we\nwould need to less simple-case the issue by tracking when tables are actually\nupdated, and for even more bonus points, track which tables affect which\ncache entries.\n\n===\n\nEditorial:\n\n\tPostgreSQL seems to spend a lot of time stressing ACID, and I believe\nthis is a very good thing. I simply don't trust MySQL any more then I trust\nany other SQL interface to a flat datafile. Also, PostgreSQL has some very\nhandy features involving datatypes, triggers, and stored procedures. 
But\nyou all know that.\n\n\tMySQL is doing some things right. They are providing useful documentation.\nThey are adding features that target the web market (they may be adding them\nincorrectly, however). If we expect PostgreSQL to beat MySQL in anything\nbut \"My database is transactionally secure\" and \"We have a GEQO optimizer\"\npissing wars, we'll need to start becoming a little more competitive in the\nraw speed arena. I feel that this optimization, while it may not be trivial,\nis fairly low-hanging fruit that can help. I may even try to implement it,\nbut I make no guarantees.\n\n\n-- \nAdam Haberlach | Who buys an eight-processor machine and then\nadam@newsnipple.com | watches 30 movies on it all at the same time?\nhttp://newsnipple.com | Beats me. They told us they could sell it, so\n | we made it. -- George Hoffman, Be Engineer\n",
"msg_date": "Mon, 18 Mar 2002 09:58:07 -0800",
"msg_from": "Adam Haberlach <adam@newsnipple.com>",
"msg_from_op": true,
"msg_subject": "My only post with regard to query caching"
},
{
"msg_contents": "Adam Haberlach <adam@newsnipple.com> writes:\n\n> MySQL is doing some things right. They are providing useful\n> documentation. They are adding features that target the web market\n> (they may be adding them incorrectly, however). If we expect\n> PostgreSQL to beat MySQL in anything but \"My database is\n> transactionally secure\" and \"We have a GEQO optimizer\" pissing wars,\n> we'll need to start becoming a little more competitive in the raw\n> speed arena. I feel that this optimization, while it may not be\n> trivial, is fairly low-hanging fruit that can help. I may even try\n> to implement it, but I make no guarantees.\n\nLooks like the onus is on you and mlw to come up with a design for the\nquery cache mechanism, based on knowledge of PG internals, that\nintelligently addresses ACID and MVCC issues, and propose it. I think\nthe core developers would certainly be willing to look at such a\ndesign proposal. Then, if they like it, you get to implement it. ;)\n\nIn other words, and I say this in the nicest possible way, talk is\ncheap.\n\n-Doug\n-- \nDoug McNaught Wireboard Industries http://www.wireboard.com/\n\n Custom software development, systems and network consulting.\n Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...\n",
"msg_date": "18 Mar 2002 13:17:27 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: My only post with regard to query caching"
}
] |
[
{
"msg_contents": "\n>> \n>> I'd want it to error out on \"INSERT foo (bar.col)\", though ;-)\n>> \n>\n> And on \"INSERT foo (bar.foo.col)\" as well.\n\nWhy accept the above at all? It seems much too error prone; I would either\naccept the table with schema or without schema. Mixing both cases seems\nunnecessarily confusing and error prone to me.\n\nIf at all, I would allow:\nINSERT bar.foo (bar.foo.col)\nINSERT foo (foo.col)\n\nWould that be enough for the initial problem case?\n\nAndreas\n",
"msg_date": "Mon, 18 Mar 2002 19:49:45 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: insert statements"
}
] |
[
{
"msg_contents": "Dear Tom,\n\n\tThe evidence is from the Process Activity Recorder, an Irix utility\nsimilar to strace that reports syscall usage. A number of semop's are performed\nin the operation of the backend. Luis can send you specifics. --Bob\n\nLuis Alberto Amigo Navarro writes:\n> \n> \n> ----- Original Message -----\n> From: \"Tom Lane\" <tgl@sss.pgh.pa.us>\n> To: \"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>\n> Cc: <pgsql-hackers@postgresql.org>; \"Robert E. Bruccoleri\"\n> <bruc@stone.congenomics.com>\n> Sent: Monday, March 18, 2002 5:36 PM\n> Subject: Re: Fw: [HACKERS] bad performance on irix\n> \n> \n> > \"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es> forwards:\n> > >> It's using the spinlocks\n> > >> for some locks, but semaphores for others.\n> >\n> > That doesn't make any sense to me. For one thing, if HAS_TEST_AND_SET\n> > is defined in the config header, the executable will just plain fail to\n> > build if there's no tas implementation, because lmgr/spin.c won't be\n> > compiled. And I sure don't see how some of the locks might be\n> > implemented one way and some the other.\n> >\n> > Which ones do you think are being implemented as semaphores, and what's\n> > your evidence?\n> >\n> > regards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n> \n> \n\n+-----------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | email: bruc@acm.org |\n| P.O. Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+-----------------------------+------------------------------------+\n",
"msg_date": "Mon, 18 Mar 2002 16:04:02 -0500 (EST)",
"msg_from": "\"Robert E. Bruccoleri\" <bruc@stone.congenomics.com>",
"msg_from_op": true,
"msg_subject": "Re: Fw: Fw: bad performance on irix"
},
{
"msg_contents": "Hi all\nThere is no doubt, in fact it uses test_and_set, but it is still doing a\nlot of semops. I send you an extract from another execution; it is 6 streams\nof read-only queries + a stream of inserts and deletes (with 5 sec between each\nstream) + a stream of vacuum on modified tables every 10 secs.\nThanks and regards",
"msg_date": "Wed, 20 Mar 2002 12:53:01 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Fw: bad performance on irix"
},
{
"msg_contents": "Hi all:\nAgain on performance, here is an extract from a run of 8 read-only query streams; notice\nthat the total time is 179s and about 80 secs of it are spent in semaphores alone.\nIsn't there any other way to improve IPC locks?\nthanks and regards.\n\n",
"msg_date": "Wed, 20 Mar 2002 17:48:09 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Fw: bad performance on irix"
},
{
"msg_contents": "Makes me wonder... perhaps now someone will be convinced to take a look\nat the POSIX IPC patch. On some platforms (not on Linux I am afraid)\nPOSIX mutexes might be quite a bit faster than SYSV semaphores.\n\nLuis Alberto Amigo Navarro wrote:\n> \n> Hi all:\n> again on performance, here is an extract from an 8 read-only queries, notice\n> that total time is 179s and it is expending about 80secs only in semaphores\n> Isn't there any other way to improve ipc-locks?\n> thanks and regards.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n",
"msg_date": "Wed, 20 Mar 2002 14:33:52 -0600",
"msg_from": "Igor Kovalenko <Igor.Kovalenko@motorola.com>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Fw: bad performance on irix"
},
{
"msg_contents": "Dear Igor,\n\nIgor Kovalenko writes:\n\n> Makes me wonder... perhaps now someone will be convinced to take a look\n> at the POSIX IPC patch. On some platforms (not on Linux I am afraid)\n> POSIX mutexes might be quite a bit faster than SYSV semaphores.\n\nYes, but on the SGI platform, the MIPS test_and_set instructions are\nreally fast and should be used.\n\n+-----------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | email: bruc@acm.org |\n| P.O. Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+-----------------------------+------------------------------------+\n",
"msg_date": "Wed, 20 Mar 2002 16:52:57 -0500 (EST)",
"msg_from": "\"Robert E. Bruccoleri\" <bruc@stone.congenomics.com>",
"msg_from_op": true,
"msg_subject": "Re: Fw: Fw: bad performance on irix"
},
{
"msg_contents": "I am confused to hell. I always thought MIPS does NOT have TAS\ninstruction ;)\n\n\"Robert E. Bruccoleri\" wrote:\n> \n> Dear Igor,\n> \n> Igor Kovalenko writes:\n> \n> > Makes me wonder... perhaps now someone will be convinced to take a look\n> > at the POSIX IPC patch. On some platforms (not on Linux I am afraid)\n> > POSIX mutexes might be quite a bit faster than SYSV semaphores.\n> \n> Yes, but on the SGI platform, the MIPS test_and_set instructions are\n> really fast and should be used.\n> \n> +-----------------------------+------------------------------------+\n> | Robert E. Bruccoleri, Ph.D. | email: bruc@acm.org |\n> | P.O. Box 314 | URL: http://www.congen.com/~bruc |\n> | Pennington, NJ 08534 | |\n> +-----------------------------+------------------------------------+\n",
"msg_date": "Wed, 20 Mar 2002 16:18:57 -0600",
"msg_from": "Igor Kovalenko <Igor.Kovalenko@motorola.com>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Fw: bad performance on irix"
},
{
"msg_contents": "Dear Igor,\n\n> I am confused to hell. I always thought MIPS does NOT have TAS\n> instruction ;)\n\nOn the SGI platform, there are very high speed implementations of test\nand set which allow a large number of processes to safely and quickly\naccess shared memory. SGI has a hardware team that specifies MIPS\nprocessor variants that are used in their servers so the machines can\nscale.\n\nI've tried to get SGI interested in putting some internal engineering\neffort to improve PostgreSQL performance on operations which could\nbenefit from its shared memory parallel architecture (like index\ncreation and sorting), but without success.\n\n+-----------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | email: bruc@acm.org |\n| P.O. Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+-----------------------------+------------------------------------+\n",
"msg_date": "Wed, 20 Mar 2002 17:30:28 -0500 (EST)",
"msg_from": "\"Robert E. Bruccoleri\" <bruc@stone.congenomics.com>",
"msg_from_op": true,
"msg_subject": "Re: Fw: Fw: bad performance on irix"
},
{
"msg_contents": "Okay. Anyway, the semaphores are apparently used for purposes other than\nTAS. That can be made faster too, on platforms which support POSIX\nmutexes (shared between processes).\n\n\"Robert E. Bruccoleri\" wrote:\n> \n> Dear Igor,\n> \n> > I am confused to hell. I always thought MIPS does NOT have TAS\n> > instruction ;)\n> \n> On the SGI platform, there are very high speed implementations of test\n> and set which allow large number of processes to safely and quickly\n> access shared memory. SGI has a hardware team that specifies MIPS\n> processor variants that are used in their servers so the machines can\n> scale.\n> \n> I've tried to get SGI interested in putting some internal engineering\n> effort to improve PostgreSQL performance on operations which could\n> benefit from its shared memory parallel architecture (like index\n> creation and sorting), but without success.\n> \n> +-----------------------------+------------------------------------+\n> | Robert E. Bruccoleri, Ph.D. | email: bruc@acm.org |\n> | P.O. Box 314 | URL: http://www.congen.com/~bruc |\n> | Pennington, NJ 08534 | |\n> +-----------------------------+------------------------------------+\n",
"msg_date": "Wed, 20 Mar 2002 16:32:38 -0600",
"msg_from": "Igor Kovalenko <Igor.Kovalenko@motorola.com>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Fw: bad performance on irix"
},
{
"msg_contents": "\n\n> Makes me wonder... perhaps now someone will be convinced to take a look\n> at the POSIX IPC patch. On some platforms (not on Linux I am afraid)\n> POSIX mutexes might be quite a bit faster than SYSV semaphores.\n> \nIs there any current patch?\nRegards\n\n",
"msg_date": "Thu, 21 Mar 2002 09:35:02 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Fw: bad performance on irix"
},
{
"msg_contents": "I've done some measurements with timex; it uses sar (system activity reporter)\nto take workloads. It's not very reliable, but it allows us to see how it has\nbeen doing. It was taken during an execution of a TPC-H-like\nbenchmark; it performs inserts, deletes (about 5% of the time of the\nexecution) and a set of 8 continuous streams of 22 read-only queries. Notice\nthat it only gives idle time (not the cause of idle), and notice that semaphores/sec\nis up to 2700!!!!!!!\nRegards\n12:27:08 %usr %sys %intr %wio %idle %sbrk %wfs %wswp %wphy %wgsw %wfif\n12:55:39 32 3 0 9 56 0 100 0\n0 0 0 9% waiting for I/O which is\n100% file system\n\n12:27:08 device %busy avque r+w/s blks/s w/s wblks/s avwait\navserv\n12:55:39 dks0d5 0 0.0 0.0 0 0.0 0 0.0\n0.0\n dks1d1 1 3.1 0.7 19 0.7 16 27.8\n15.0\n dks1d2 0 1.0 0.0 0 0.0 0 0.0\n13.3\n dks1d3 0 0.0 0.0 0 0.0 0 0.0\n0.0\n dks1d4 23 15.3 9.1 1705 7.8 1553 519.7\n24.8\n\n12:27:08 bread/s lread/s %rcach bwrit/s lwrit/s wcncl/s %wcach pread/s\npwrit/s\n12:55:39 158 2372 93 1549 9072 1 83 0\n0 93% of read cache hits and 83% of write cache hits\n\n12:27:08 scall/s sread/s swrit/s fork/s exec/s rchar/s wchar/s\n12:55:39 4618 181 126 0.18 0.06 648854 580354\nsyscalls averages\n\n12:27:08 msg/s sema/s\n12:55:39 0.00 2704.28\n\n12:27:08 vflt/s dfill/s cache/s pgswp/s pgfil/s pflt/s cpyw/s\nsteal/s rclm/s notice that there aren't page swaps, so idle is not\nwaiting for paging\n12:55:39 862.58 58.31 804.24 0.00 0.04 5.70 3.11 60.90\n0.00\n\n12:27:08 CPU %usr %sys %intr %wio %idle %sbrk %wfs %wswp %wphy %wgsw\n%wfif\n12:55:39 0 25 3 0 8 63 0 100 0 0\n0 0 per CPU usage\n 1 25 3 0 9 62 0 100 0\n0 0 0\n 2 24 3 0 9 64 0 100 0 0\n0 0\n 3 30 3 0 8 59 0 100 0 0\n0 0\n 4 30 3 0 8 59 0 100 0 0\n0 0\n 5 39 3 0 8 50 0 100 0 0\n0 0\n 6 54 3 0 8 34 0 100 0 0\n0 0\n 7 33 3 0 8 55 0 100 0 0\n0 0\n\n\n",
"msg_date": "Thu, 21 Mar 2002 09:48:11 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Fw: bad performance on irix"
},
{
"msg_contents": "No, I've been told it is not gonna be considered for 7.2x and I shall\nwait till 7.3.\n\nLuis Alberto Amigo Navarro wrote:\n> \n> > Makes me wonder... perhaps now someone will be convinced to take a look\n> > at the POSIX IPC patch. On some platforms (not on Linux I am afraid)\n> > POSIX mutexes might be quite a bit faster than SYSV semaphores.\n> >\n> Is there any current patch?\n> Regards\n",
"msg_date": "Thu, 21 Mar 2002 11:31:12 -0600",
"msg_from": "Igor Kovalenko <Igor.Kovalenko@motorola.com>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Fw: bad performance on irix"
},
{
"msg_contents": "Igor Kovalenko wrote:\n\n> No, I've been told it is not gonna be considered for 7.2x and I shall\n> wait till 7.3.\n>\n> Luis Alberto Amigo Navarro wrote:\n> >\n> > > Makes me wonder... perhaps now someone will be convinced to take a look\n> > > at the POSIX IPC patch. On some platforms (not on Linux I am afraid)\n> > > POSIX mutexes might be quite a bit faster than SYSV semaphores.\n> > >\n> > Is there any current patch?\n> > Regards\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\nI've been thinking, and I think it may be possible that tuning kernel\nparameters could help; I'll keep you informed.\nThanks and regards",
"msg_date": "Thu, 21 Mar 2002 19:30:31 +0100",
"msg_from": "Luis Amigo <lamigo@atc.unican.es>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Fw: bad performance on irix"
},
{
"msg_contents": "Just remember that patches for 7.3 are being accepted at this very moment...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org \n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Igor Kovalenko\n> Sent: Friday, 22 March 2002 1:31 AM\n> To: Luis Alberto Amigo Navarro\n> Cc: bruc@acm.org; tgl@sss.pgh.pa.us; pgsql-hackers@postgresql.org\n> Subject: Re: Fw: Fw: [HACKERS] bad performance on irix\n> \n> \n> No, I've been told it is not gonna be considered for 7.2x and I shall\n> wait till 7.3.\n> \n> Luis Alberto Amigo Navarro wrote:\n> > \n> > > Makes me wonder... perhaps now someone will be convinced to \n> take a look\n> > > at the POSIX IPC patch. On some platforms (not on Linux I am afraid)\n> > > POSIX mutexes might be quite a bit faster than SYSV semaphores.\n> > >\n> > Is there any current patch?\n> > Regards\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n",
"msg_date": "Fri, 22 Mar 2002 09:59:57 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Fw: bad performance on irix"
},
{
"msg_contents": "On a side note, I thought I would mention that the Next Generation POSIX\nThreading (NGPT) Project (IBM --\nhttp://www-124.ibm.com/developerworks/projects/pthreads) patches have\njust been accepted to the 2.5.x Linux kernel. A 2.4.x patch is also\navailable. So, it may be possible that POSIX mutexes may be a\nperformance reality for Linux sometime in the near future...\n\nGreg\n\n\nOn Thu, 2002-03-21 at 19:59, Christopher Kings-Lynne wrote:\n> Just remember that patches for 7.3 are being accepted at this very moment...\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org \n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Igor Kovalenko\n> > Sent: Friday, 22 March 2002 1:31 AM\n> > To: Luis Alberto Amigo Navarro\n> > Cc: bruc@acm.org; tgl@sss.pgh.pa.us; pgsql-hackers@postgresql.org\n> > Subject: Re: Fw: Fw: [HACKERS] bad performance on irix\n> > \n> > \n> > No, I've been told it is not gonna be considered for 7.2x and I shall\n> > wait till 7.3.\n> > \n> > Luis Alberto Amigo Navarro wrote:\n> > > \n> > > > Makes me wonder... perhaps now someone will be convinced to \n> > take a look\n> > > > at the POSIX IPC patch. On some platforms (not on Linux I am afraid)\n> > > > POSIX mutexes might be quite a bit faster than SYSV semaphores.\n> > > >\n> > > Is there any current patch?\n> > > Regards\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> > \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org",
"msg_date": "22 Mar 2002 10:51:32 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Fw: bad performance on irix"
},
{
"msg_contents": "Does that mean I should redo patch for 7.3 as is, or you guys want it to\ngo farther this time? The last version had compromises intended to make\nchanges minimal...\n\nAlso, does anyone from Darwin or BeOS camp care? You guys should not be\nworking through emulation of SysV ugliness. If someone is listening, we\ncould come up with a version suitable for you too...\n\n-- igor\n\nChristopher Kings-Lynne wrote:\n> \n> Just remember that patches for 7.3 are being accepted at this very moment...\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Igor Kovalenko\n> > Sent: Friday, 22 March 2002 1:31 AM\n> > To: Luis Alberto Amigo Navarro\n> > Cc: bruc@acm.org; tgl@sss.pgh.pa.us; pgsql-hackers@postgresql.org\n> > Subject: Re: Fw: Fw: [HACKERS] bad performance on irix\n> >\n> >\n> > No, I've been told it is not gonna be considered for 7.2x and I shall\n> > wait till 7.3.\n> >\n> > Luis Alberto Amigo Navarro wrote:\n> > >\n> > > > Makes me wonder... perhaps now someone will be convinced to\n> > take a look\n> > > > at the POSIX IPC patch. On some platforms (not on Linux I am afraid)\n> > > > POSIX mutexes might be quite a bit faster than SYSV semaphores.\n> > > >\n> > > Is there any current patch?\n> > > Regards\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n",
"msg_date": "Fri, 22 Mar 2002 11:46:03 -0600",
"msg_from": "Igor Kovalenko <Igor.Kovalenko@motorola.com>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Fw: bad performance on irix"
},
{
"msg_contents": "Igor Kovalenko wrote:\n> Does that mean I should redo patch for 7.3 as is, or you guys want it to\n> go farther this time? The last version had compromises intended to make\n> changes minimal...\n> \n> Also, does anyone from Darwin or BeOS camp care? You guys should not be\n> working through emulation of SysV ugliness. If someone is listening, we\n> could come up with a version suitable for you too...\n\nYes, we should get started. I think the idea is to have two patches,\none for QNX6 and another to support Posix capabilities. The changes\ndon't have to be minimal anymore. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Mar 2002 13:32:32 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Fw: bad performance on irix"
}
] |
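The distinction this thread keeps circling — user-space test-and-set spinlocks versus SysV semaphores that show up as semop() calls in a syscall trace — can be illustrated with a minimal TAS lock. The sketch below uses GCC's __sync builtins in place of the per-platform tas() assembly; it is an illustration of the idea, not PostgreSQL's actual s_lock implementation:

```c
#include <assert.h>

/* A minimal test-and-set spinlock of the kind contrasted with SysV
 * semaphores in the thread above.  GCC's __sync builtins stand in for
 * the platform-specific tas() assembly; names are chosen to echo, not
 * reproduce, PostgreSQL's s_lock interface. */

typedef volatile int slock_t;

static void
s_lock(slock_t *lock)
{
    /* Spin in user space until the atomic swap returns 0 (lock was free).
     * No syscall is made, so no semop() ever appears in a syscall trace. */
    while (__sync_lock_test_and_set(lock, 1) != 0)
        ;                       /* busy-wait */
}

static void
s_unlock(slock_t *lock)
{
    __sync_lock_release(lock);  /* store 0 with release semantics */
}
```

The point of the sar numbers quoted above is that the semaphore path is being taken very often; a lock like this never enters the kernel on the uncontended path, which is why platforms with a fast test-and-set want to use it rather than semaphores.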
[
{
"msg_contents": "Hello Group,\n I need your help in putting together a list of comparisons, and good solid technical reasons, for why to use PostgreSQL over Microsoft SQL Server. Right now, we are using PostgreSQL as a back-end for some of our web stuff. A couple of our developers, who are Microsoft VB developers, are complaining about not being able to use proprietary MS stuff with PostgreSQL. I have told them to use standard SQL92-compliant programming techniques, and all will work just fine. They just don't seem to understand why a person wouldn't use SQL Server. If I could put together a list of good solid technical arguments (performance, support, reliability, etc.) as to why PostgreSQL is better, I think I can make a good case for keeping PostgreSQL. I just don't have any SQL Server experience to compare with. If any of you who have SQL Server experience could send me good technical comparisons of SQL Server vs PostgreSQL, I would greatly appreciate it.\n\nThanks in advance,\nDale Anderson.\n\n\n",
"msg_date": "Mon, 18 Mar 2002 15:53:42 -0600",
"msg_from": "\"Dale Anderson\" <danderso@crystalsugar.com>",
"msg_from_op": true,
"msg_subject": "Platform comparison ..."
},
{
"msg_contents": "\"Dale Anderson\" <danderso@crystalsugar.com> writes:\n\n> Hello Group,\n\n[snip: why would PG be \"better\" than MSSQL?]\n\n\"Better\" isn't meaningful except in the context of the problem you're\ntrying to solve. There will be some problems where PG is right, some\nwhere MSSQL works better, and some where neither is the \"best\" choice.\n\nReasons you might prefer PG:\n\n* No licensing costs, period\n* Runs on free operating systems \n* Runs on Unix, if you prefer that as a server environment\n* Object-relational technology\n* Extensibility (not only functions, but datatypes, index types, etc)\n* Open Source (no vendor lockin)\n\nReasons you might prefer MSSQL:\n\n* Need for MS extensions\n* Easier setup (perhaps) for non-DBA/sysadmin types\n* Management's desire for \"single-source\"\n* Performance advantages for some workloads\n* Windows server environment (PG runs on Windows, but only through a\n Unix emulation layer--I personally wouldn't run it in production,\n but then again I wouldn't run Windows in production:)\n\nBoth offer commercial support, ACID compliance, stored\nprocedures/functions, and the other stuff that people expect from a\n\"real\" database. \n\nHope this helps...\n\n-Doug\n-- \nDoug McNaught Wireboard Industries http://www.wireboard.com/\n\n Custom software development, systems and network consulting.\n Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...\n",
"msg_date": "18 Mar 2002 17:16:17 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Platform comparison ..."
},
{
"msg_contents": "On Monday 18 March 2002 22:53, Dale Anderson wrote:\n> A couple of our developers, which are Microsoft VB developers, are\n> complaining about not being able to use proprietary MS stuff with\n> PostgreSQL.\n\nDear Dale,\n\nMaybe you could consider using pgAdmin2 (http://pgadmin.postgresql.org), \nwhich displays all database objects (tables, views, functions, triggers, \nrules, etc...) in a nice Window$ interface.\n\nAn MS SQL Server migration wizard is also included. Historically, several \npgAdmin2 developers come from a Microsoft/Oracle background and wanted to \nget out of the matrix.\n\nThe most visible difference between MS SQL Server and PostgreSQL is that MS \nSQL Server can be programmed in VB, whereas PostgreSQL supports several \nserver-side languages: PLpgSQL, PLperl, PLpython, even C...\n\nPeople usually underestimate the power of server-side scripting. Oracle does \nnot, and sells each server-side programming \"cartridge\" separately. PostgreSQL \nprovides them for free.\n \nFurthermore, pgAdmin2 is provided with an abstraction layer called pgSchema, \nwhich gives access to most database objects through an OCX technology. \npgSchema can be used in any VB project very easily. \n\nTherefore, in my humble opinion, PostgreSQL provides a very reliable solution \nfor both client-side (VB) and server-side (PostgreSQL) programming needs. The \npower of PostgreSQL is to be able to do things smartly because we offer a \ncomplete development environment.\n\nThe only thing your developers need is to install pgAdmin2 and start \nlearning a server-side language (like PLpgSQL, which is very easy). There is \nprobably a lot of client code in your applications to migrate server-side.\n\nPostgreSQL is also a great community of developers. For help, the best places \nare the pgsql-admin, pgsql-general and pgadmin-hackers mailing lists.\n\nBest regards,\nJean-Michel POURE\n",
"msg_date": "Tue, 19 Mar 2002 10:00:39 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: Platform comparison ..."
},
{
"msg_contents": "Dale Anderson wrote:\n> \n> Hello Group,\n> I need your help, in putting together a list of comparisons, and good solid technical reasons, to why to use PostgreSQL over using Microsoft SQL Server. Right now, we are using PostgreSQL for a back-end for some of our web stuff. A couple of our developers, which are Microsoft VB developers, are complaining about not being able to use proprietary MS stuff with PostgreSQL. I have told them to use standard SQL92 compliant programming techniques, and all will work just fine. They just don't seem to understand why a person wouldn't use SQL Server. If I could put together a list of good solid technical arguments, (Performance, Support, Reliability, ETC.), as to why PostgreSQL is better, I think I can make a good case in keeping PostreSQL. I just don't have any SQL Server experience to compare with. If any of you, who have SQL Server experience could send me good technical comparisons of SQL Server vs PostgreSQL, I would greatly appreciate it.\n\nI have worked with MSSQL, Oracle, Sybase, MySQL, and PostgreSQL, I totally\nunderstand what you are going through.\n\nMSSQL has a huge advantage in the Windows environment in that the whole\nenvironment is controlled by the vendor that sells one of the SQL technologies.\nThis is not to be underestimated. Microsoft has a way of making it difficult\nfor non-Microsoft technologies. That being said, I can come up with a few\nreasons to use PostgreSQL over MSSQL.\n\nCost:\nI worked on a DICOM system, which used a web server and a database, a number of\nyears ago. The cost of Windows NT and SQL licenses was about $8000. By rewriting\nthe project using PostgreSQL and Apache, we were able to sell the system for\nslightly less, but make more money.\n\nPreservation of development work:\nMy biggest concern with using ANY Microsoft product is the routine changes that\noccur in the core APIs. Once you are on the Microsoft treadmill, it is very\ndifficult to get off. 
Every release there is some subtle change that will break\nsomething. \n\nStability:\nSay what you will, and believe what you want. MS Windows NT/2K/XP are not\nproduction-ready operating systems. There are serious issues with uptime and\nperformance. PostgreSQL running on Linux or FreeBSD will be more reliable than\nanything running on any version of Windows.\n",
"msg_date": "Tue, 19 Mar 2002 09:52:29 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Platform comparison ..."
},
{
"msg_contents": "On Tue, 2002-03-19 at 15:52, mlw wrote:\n> Dale Anderson wrote:\n> > \n> > Hello Group,\n> > I need your help, in putting together a list of comparisons, and good solid technical reasons, to why to use PostgreSQL over using Microsoft SQL Server. Right now, we are using PostgreSQL for a back-end for some of our web stuff. A couple of our developers, which are Microsoft VB developers, are complaining about not being able to use proprietary MS stuff with PostgreSQL. I have told them to use standard SQL92 compliant programming techniques, and all will work just fine. They just don't seem to understand why a person wouldn't use SQL Server. If I could put together a list of good solid technical arguments, (Performance, Support, Reliability, ETC.), as to why PostgreSQL is better, I think I can make a good case in keeping PostreSQL. I just don't have any SQL Server experience to compare with. If any of you, who have SQL Server experience could send me good technical comparisons of SQL Server vs PostgreSQL, I would greatly appreciate it.\n> \n> I have worked with MSSQL, Oracle, Sybase, MySQL, and PostgreSQL, I totally\n> understand what you are going through.\n> \n> MSSQL has\n\nHi!\n\nI'm sure I'm not the only one interested in seeing your opinion\nregarding Oracle vs. pg and Sybase vs. pg. (But please not another mysql\nflamewar here :-)\n\nTx and greets\n-- vbi\n\n",
"msg_date": "19 Mar 2002 16:22:33 +0100",
"msg_from": "Adrian 'Dagurashibanipal' von Bidder <avbidder@fortytwo.ch>",
"msg_from_op": false,
"msg_subject": "Re: Platform comparison ..."
},
{
"msg_contents": "Adrian 'Dagurashibanipal' von Bidder wrote:\n> \n> On Tue, 2002-03-19 at 15:52, mlw wrote:\n> > Dale Anderson wrote:\n> > >\n> > > Hello Group,\n> > > I need your help, in putting together a list of comparisons, and good solid technical reasons, to why to use PostgreSQL over using Microsoft SQL Server. Right now, we are using PostgreSQL for a back-end for some of our web stuff. A couple of our developers, which are Microsoft VB developers, are complaining about not being able to use proprietary MS stuff with PostgreSQL. I have told them to use standard SQL92 compliant programming techniques, and all will work just fine. They just don't seem to understand why a person wouldn't use SQL Server. If I could put together a list of good solid technical arguments, (Performance, Support, Reliability, ETC.), as to why PostgreSQL is better, I think I can make a good case in keeping PostreSQL. I just don't have any SQL Server experience to compare with. If any of you, who have SQL Server experience could send me good technical comparisons of SQL Server vs PostgreSQL, I would greatly appreciate it.\n> >\n> > I have worked with MSSQL, Oracle, Sybase, MySQL, and PostgreSQL, I totally\n> > understand what you are going through.\n> >\n> > MSSQL has\n> \n> Hi!\n> \n> I'm sure I'm not the only one interested in seeing your opinion\n> regarding Oracle vs. pg and Sybase vs. pg. (But please not another mysql\n> flamewar here :-)\n\nI have not compared MySQL at all.\n",
"msg_date": "Tue, 19 Mar 2002 10:39:53 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Platform comparison ..."
}
] |
[
{
"msg_contents": "The OWNER production rules added to DROP DATABASE:\n\nDropdbStmt: DROP DATABASE database_name\n {\n DropdbStmt *n =\nmakeNode(DropdbStmt);\n n->dbname = $3;\n $$ = (Node *)n;\n }\n | OWNER opt_equal name\n {\n $$ = lconsi(4, makeList1($3));\n }\n | OWNER opt_equal DEFAULT\n {\n $$ = lconsi(4, makeList1(NULL));\n }\n ;\n\n\nCause compiler warnings and are clearly returning the wrong type\n(a List, instead of a Node).\n\n(...)/pgsql/src/backend/parser/gram.y: In function\n`yyparse':/home/fnasser/DEVO/pgsql/pgsql/src/backend/parser/gram.y:3205:\nwarning: assignment from incompatible pointer type\n(...)/pgsql/src/backend/parser/gram.y:3209: warning: assignment from\nincompatible pointer type\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Tue, 19 Mar 2002 02:49:55 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": true,
"msg_subject": "Broken code in gram.y"
},
{
"msg_contents": "Well,\n\nSomeone just dropped the DROP DATABASE statement rules right in the\nmiddle of the CREATE DATABASE production rules!!!\n\nFernando\n\n\n\nFernando Nasser wrote:\n> \n> The OWNER production rules added to DROP DATABASE:\n> \n> DropdbStmt: DROP DATABASE database_name\n> {\n> DropdbStmt *n =\n> makeNode(DropdbStmt);\n> n->dbname = $3;\n> $$ = (Node *)n;\n> }\n> | OWNER opt_equal name\n> {\n> $$ = lconsi(4, makeList1($3));\n> }\n> | OWNER opt_equal DEFAULT\n> {\n> $$ = lconsi(4, makeList1(NULL));\n> }\n> ;\n> \n> Cause compiler warnings and are clearly returning the wrong type\n> (a List, instead of a Node).\n> \n> (...)/pgsql/src/backend/parser/gram.y: In function\n> `yyparse':/home/fnasser/DEVO/pgsql/pgsql/src/backend/parser/gram.y:3205:\n> warning: assignment from incompatible pointer type\n> (...)/pgsql/src/backend/parser/gram.y:3209: warning: assignment from\n> incompatible pointer type\n> \n> --\n> Fernando Nasser\n> Red Hat Canada Ltd. E-Mail: fnasser@redhat.com\n> 2323 Yonge Street, Suite #300\n> Toronto, Ontario M4P 2C9\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Tue, 19 Mar 2002 03:16:19 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": true,
"msg_subject": "Re: Broken code in gram.y"
},
{
"msg_contents": "\nThanks. Fixed.\n\n---------------------------------------------------------------------------\n\nFernando Nasser wrote:\n> Well,\n> \n> Someone just dropped the DROP DATABASE statement rules right in the\n> middle of the CREATE DATABASE production rules!!!\n> \n> Fernando\n> \n> \n> \n> Fernando Nasser wrote:\n> > \n> > The OWNER production rules added to DROP DATABASE:\n> > \n> > DropdbStmt: DROP DATABASE database_name\n> > {\n> > DropdbStmt *n =\n> > makeNode(DropdbStmt);\n> > n->dbname = $3;\n> > $$ = (Node *)n;\n> > }\n> > | OWNER opt_equal name\n> > {\n> > $$ = lconsi(4, makeList1($3));\n> > }\n> > | OWNER opt_equal DEFAULT\n> > {\n> > $$ = lconsi(4, makeList1(NULL));\n> > }\n> > ;\n> > \n> > Cause compiler warnings and are clearly returning the wrong type\n> > (a List, instead of a Node).\n> > \n> > (...)/pgsql/src/backend/parser/gram.y: In function\n> > `yyparse':/home/fnasser/DEVO/pgsql/pgsql/src/backend/parser/gram.y:3205:\n> > warning: assignment from incompatible pointer type\n> > (...)/pgsql/src/backend/parser/gram.y:3209: warning: assignment from\n> > incompatible pointer type\n> > \n> > --\n> > Fernando Nasser\n> > Red Hat Canada Ltd. E-Mail: fnasser@redhat.com\n> > 2323 Yonge Street, Suite #300\n> > Toronto, Ontario M4P 2C9\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> -- \n> Fernando Nasser\n> Red Hat Canada Ltd. 
E-Mail: fnasser@redhat.com\n> 2323 Yonge Street, Suite #300\n> Toronto, Ontario M4P 2C9\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 19 Mar 2002 07:53:21 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Broken code in gram.y"
}
] |
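The misplaced rules Fernando found are easier to see side by side. Below is a bison-style sketch of where the OWNER alternatives belong — simplified and illustrative only (`createdb_opt_item` is an assumed rule name, not necessarily the one in gram.y): the OWNER options are CREATE DATABASE options and must yield the List type shared by that rule's other alternatives, while every DropdbStmt alternative must yield a Node.

```yacc
/* Illustrative sketch only -- not the actual PostgreSQL grammar. */

/* CREATE DATABASE options: every alternative returns a List,
 * so the OWNER alternatives belong here. */
createdb_opt_item:
		  OWNER opt_equal name		{ $$ = lconsi(4, makeList1($3)); }
		| OWNER opt_equal DEFAULT	{ $$ = lconsi(4, makeList1(NULL)); }
		;

/* DROP DATABASE: every alternative must return a Node.  Pasting the
 * OWNER alternatives here (as the bad patch did) makes $$ a List in a
 * Node-typed rule, hence the "assignment from incompatible pointer
 * type" warnings. */
DropdbStmt: DROP DATABASE database_name
		{
			DropdbStmt *n = makeNode(DropdbStmt);
			n->dbname = $3;
			$$ = (Node *) n;
		}
		;
```

In other words, the warnings were not cosmetic: bison's semantic-value typing caught a List being assigned where a Node pointer was expected, which is exactly what moving the rules back fixes.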
[
{
"msg_contents": "In latest CVS:\n\ntemplate1=# create table test (a int4 not null);\nCREATE DOMAIN\ntemplate1=#\n\nChris\n\n",
"msg_date": "Tue, 19 Mar 2002 16:46:36 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "bug in domain support"
},
{
"msg_contents": "It looks like diff / patch got confused and applied the changes in the\nwrong places.\n\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>\nTo: \"Hackers\" <pgsql-hackers@postgresql.org>; <rbt@zort.ca>\nSent: Tuesday, March 19, 2002 3:46 AM\nSubject: bug in domain support\n\n\n> In latest CVS:\n>\n> template1=# create table test (a int4 not null);\n> CREATE DOMAIN\n> template1=#\n>\n> Chris\n>\n>\n\n",
"msg_date": "Tue, 19 Mar 2002 07:24:50 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: bug in domain support"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> In latest CVS:\n> \n> template1=# create table test (a int4 not null);\n> CREATE DOMAIN\n> template1=#\n\nFixed.\n\n\ttest=> create table texst(x int);\n\tCREATE\n\ttest=> \\q\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 19 Mar 2002 07:52:56 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: bug in domain support"
}
] |
[
{
"msg_contents": "Hello,\n\nI've been wondering how pgsql goes about guaranteeing data \nintegrity in the face of soft failures. In particular \nwhether it uses an alternative to the double root block \ntechnique - which is writing, as a final indication of the \nvalidity of new log records, to alternate disk blocks at \nfixed disk locations some meta information including the \nlocation of the last log record written.\nThis is the only technique I know of - does pgsql use \nsomething analogous?\n\nAlso, I note from the developer docs the comment on caching \ndisk drives: can anyone supply a reference on this subject \n(I have been on the lookout for a long time without success) \nand perhaps more generally on the subject of what exactly \ncan go wrong with a disk write when struck by power failure.\n\nLastly, is there any form of integrity checking on disk \nblock level data? I have vague recollections of seeing \nmention of crc/xor in relation to Oracle or DB2.\nWhether or not pgsql uses any such scheme I am curious to \nknow a rationale for its use - it makes me wonder about \nwhat, if anything, can be relied on 100%!\n\nThanks,\nChris Quinn\n\n",
"msg_date": "Tue, 19 Mar 2002 09:49:24 +0000",
"msg_from": "Christopher Quinn <cq@htec.demon.co.uk>",
"msg_from_op": true,
"msg_subject": "fault tolerance..."
},
{
"msg_contents": "Christopher Quinn <cq@htec.demon.co.uk> writes:\n> I've been wondering how pgsql goes about guaranteeing data \n> integrity in the face of soft failures. In particular \n> whether it uses an alternative to the double root block \n> technique - which is writing, as a final indication of the \n> validity of new log records, to alternate disk blocks at \n> fixed disk locations some meta information including the \n> location of the last log record written.\n> This is the only technique I know of - does pgsql use \n> something analogous?\n\nThe WAL log uses per-record CRCs plus sequence numbers (both per-record\nand per-page) as a way of determining where valid information stops.\nI don't see any need for relying on a \"root block\" in the sense you\ndescribe.\n\n> Lastly, is there any form of integrity checking on disk \n> block level data? I have vague recollections of seeing \n> mention of crc/xor in relation to Oracle or DB2.\n\nAt present we rely on the disk drive to not drop data once it's been\nsuccessfully fsync'd (at least not without detecting a read error later).\nThere was some discussion of adding per-page CRCs as a second-layer\ncheck, but no one seems very excited about it. The performance costs\nwould be nontrivial and we have not seen all that many reports of field\nfailures in which a CRC would have improved matters.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 19 Mar 2002 10:39:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fault tolerance... "
},
{
"msg_contents": "Tom Lane wrote:\n> Christopher Quinn <cq@htec.demon.co.uk> writes:\n> \n> \n> The WAL log uses per-record CRCs plus sequence numbers (both per-record\n> and per-page) as a way of determining where valid information stops.\n> I don't see any need for relying on a \"root block\" in the sense you\n> describe.\n> \n\nYes I see.\nI imagine if a device were used for the log (non-file so no \nEOF to denote end of log/valid-data) there is the \npossibility that old record space after the last/valid \nrecord might contain bytes which appear to form another \nvalid record ... if it weren't for the security of a crc.\n\n\n> check, but no one seems very excited about it. The performance costs\n> would be nontrivial and we have not seen all that many reports of field\n> failures in which a CRC would have improved matters.\n> \n\nAccess to hard data on such corruption or its theoretical \nlikelihood would be nice!\nHave you referenced any material yourself in deciding what \nmeasures to implement to achieve the level of data security \npgsql currently offers?\n\nThanks,\nChris\n\n\n",
"msg_date": "Tue, 19 Mar 2002 19:30:21 +0000",
"msg_from": "Christopher Quinn <cq@htec.demon.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: fault tolerance..."
}
] |
[
{
"msg_contents": "I didn't have any messages from lists for ~ 2 months !\nWhat's the problem ?\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 19 Mar 2002 13:21:26 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "hackers list problem"
}
] |
[
{
"msg_contents": "Marc,\n\nI see no postings to hackers come to fts.postgresql.org for more than a\nmonth. Seems there is a problem, because I also didn't get *any* messages\nfrom psql mailing lists. I was subscribed to lists since 1995 and\nwant to stay in there. Could you please check the problem.\n\nI was patient because I thought developers get timeout after\n7.2 release.\n\n\tRegards,\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n",
"msg_date": "Tue, 19 Mar 2002 13:40:49 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Problems with mailing list"
},
{
"msg_contents": "\nMust have gotten unsubscribed from the list at some point ... just\nre-added it now ...\n\nOn Tue, 19 Mar 2002, Oleg Bartunov wrote:\n\n> Marc,\n>\n> I see no postings to hackers come to fts.postgresql.org for more than a\n> month. Seems there is a problem, because I also didn't get *any* messages\n> from psql mailing lists. I was subscribed to lists since 1995 and\n> want to stay in there. Could you please check the problem.\n>\n> I was patient because I thought developers get timeout after\n> 7.2 release.\n>\n> \tRegards,\n> \t\tOleg\n>\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n>\n\n",
"msg_date": "Tue, 19 Mar 2002 09:15:13 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Problems with mailing list"
},
{
"msg_contents": "Thanks.\nI'm getting messages now.\n\n\tOleg\nOn Tue, 19 Mar 2002, Marc G. Fournier wrote:\n\n>\n> Must have gotten unsubscribed from the list at some point ... just\n> re-added it now ...\n>\n> On Tue, 19 Mar 2002, Oleg Bartunov wrote:\n>\n> > Marc,\n> >\n> > I see no postings to hackers come to fts.postgresql.org for more than a\n> > month. Seems there is a problem, because I also didn't get *any* messages\n> > from psql mailing lists. I was subscribed to lists since 1995 and\n> > want to stay in there. Could you please check the problem.\n> >\n> > I was patient because I thought developers get timeout after\n> > 7.2 release.\n> >\n> > \tRegards,\n> > \t\tOleg\n> >\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> >\n> >\n> >\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 19 Mar 2002 16:18:08 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: Problems with mailing list"
},
{
"msg_contents": "\ndo you know if any of the other lists are missing? or is it just -hackers\nthat got lost?\n\nOn Tue, 19 Mar 2002, Oleg Bartunov wrote:\n\n> Thanks.\n> I'm getting messages now.\n>\n> \tOleg\n> On Tue, 19 Mar 2002, Marc G. Fournier wrote:\n>\n> >\n> > Must have gotten unsubscribed from the list at some point ... just\n> > re-added it now ...\n> >\n> > On Tue, 19 Mar 2002, Oleg Bartunov wrote:\n> >\n> > > Marc,\n> > >\n> > > I see no postings to hackers come to fts.postgresql.org for more than a\n> > > month. Seems there is a problem, because I also didn't get *any* messages\n> > > from psql mailing lists. I was subscribed to lists since 1995 and\n> > > want to stay in there. Could you please check the problem.\n> > >\n> > > I was patient because I thought developers get timeout after\n> > > 7.2 release.\n> > >\n> > > \tRegards,\n> > > \t\tOleg\n> > >\n> > > _____________________________________________________________\n> > > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > > Sternberg Astronomical Institute, Moscow University (Russia)\n> > > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > > phone: +007(095)939-16-83, +007(095)939-23-83\n> > >\n> > >\n> > >\n> >\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n\n",
"msg_date": "Tue, 19 Mar 2002 10:42:18 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Problems with mailing list"
},
{
"msg_contents": "On Tue, 19 Mar 2002, Marc G. Fournier wrote:\n\n>\n> do you know if any of the other lists are missing? or is it just -hackers\n> that got lost?\n\nPersonally, I have missed all lists. fts.postgresql.org doesn't seem to be\nreceiving -hackers lists.\n\n>\n> On Tue, 19 Mar 2002, Oleg Bartunov wrote:\n>\n> > Thanks.\n> > I'm getting messages now.\n> >\n> > \tOleg\n> > On Tue, 19 Mar 2002, Marc G. Fournier wrote:\n> >\n> > >\n> > > Must have gotten unsubscribed from the list at some point ... just\n> > > re-added it now ...\n> > >\n> > > On Tue, 19 Mar 2002, Oleg Bartunov wrote:\n> > >\n> > > > Marc,\n> > > >\n> > > > I see no postings to hackers come to fts.postgresql.org for more than a\n> > > > month. Seems there is a problem, because I also didn't get *any* messages\n> > > > from psql mailing lists. I was subscribed to lists since 1995 and\n> > > > want to stay in there. Could you please check the problem.\n> > > >\n> > > > I was patient because I thought developers get timeout after\n> > > > 7.2 release.\n> > > >\n> > > > \tRegards,\n> > > > \t\tOleg\n> > > >\n> > > > _____________________________________________________________\n> > > > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > > > Sternberg Astronomical Institute, Moscow University (Russia)\n> > > > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > > > phone: +007(095)939-16-83, +007(095)939-23-83\n> > > >\n> > > >\n> > > >\n> > >\n> >\n> > \tRegards,\n> > \t\tOleg\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> >\n> >\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow 
University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 19 Mar 2002 18:06:54 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: Problems with mailing list"
}
] |
[
{
"msg_contents": "I truly don't know what I did to create the nasty patch (source file\ncertainly didn't look like what it resulted in) -- then again there\nhave been a lot of changes since the domain patch was created.\n\nThis should fix gram.y\n\nIf anyone knows a better way of creating patches other than diff -rc,\nplease speak up.\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.",
"msg_date": "Tue, 19 Mar 2002 07:35:20 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Fixes gram.y"
},
{
"msg_contents": "Rod Taylor wrote:\n> I truely don't know what I did to create the nasty patch (source file\n> certainly didn't look like what it resulted in) -- then again there\n> have been alot of changes since the domain patch was created.\n> \n> This should fix gram.y\n> \n> If anyone knows a better way of creating patches other than diff -rc ,\n> please speak up.\n\nI have applied the following new patch. It moves DROP DATABASE as you\nsuggested, and fixes the CREATE TABLE tag to show just CREATE and not\nCREATE DOMAIN. Actually, CREATE DOMAIN should output just DOMAIN too,\nunless someone can tell my why that is not consistent. Patch applied to\nCVS.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/parser/gram.y\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.292\ndiff -c -r2.292 gram.y\n*** src/backend/parser/gram.y\t19 Mar 2002 02:18:18 -0000\t2.292\n--- src/backend/parser/gram.y\t19 Mar 2002 12:46:30 -0000\n***************\n*** 3184,3189 ****\n--- 3184,3197 ----\n \t\t\t\t{\n \t\t\t\t\t$$ = lconsi(3, makeListi1(-1));\n \t\t\t\t}\n+ \t\t| OWNER opt_equal name \n+ \t\t\t\t{\n+ \t\t\t\t\t$$ = lconsi(4, makeList1($3));\n+ \t\t\t\t}\n+ \t\t| OWNER opt_equal DEFAULT\n+ \t\t\t\t{\n+ \t\t\t\t\t$$ = lconsi(4, makeList1(NULL));\n+ \t\t\t\t}\n \t\t;\n \n \n***************\n*** 3199,3212 ****\n \t\t\t\t\tDropdbStmt *n = makeNode(DropdbStmt);\n \t\t\t\t\tn->dbname = $3;\n \t\t\t\t\t$$ = (Node *)n;\n- \t\t\t\t}\n- \t\t| OWNER opt_equal name \n- \t\t\t\t{\n- \t\t\t\t\t$$ = lconsi(4, makeList1($3));\n- \t\t\t\t}\n- \t\t| OWNER opt_equal DEFAULT\n- \t\t\t\t{\n- \t\t\t\t\t$$ = lconsi(4, makeList1(NULL));\n \t\t\t\t}\n \t\t;\n \n--- 3207,3212 ----\nIndex: 
src/backend/tcop/postgres.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/tcop/postgres.c,v\nretrieving revision 1.255\ndiff -c -r1.255 postgres.c\n*** src/backend/tcop/postgres.c\t19 Mar 2002 02:18:20 -0000\t1.255\n--- src/backend/tcop/postgres.c\t19 Mar 2002 12:46:33 -0000\n***************\n*** 2213,2220 ****\n \t\t\tbreak;\n \n \t\tcase T_CreateDomainStmt:\n \t\tcase T_CreateStmt:\n! \t\t\ttag = \"CREATE DOMAIN\";\n \t\t\tbreak;\n \n \t\tcase T_DropStmt:\n--- 2213,2223 ----\n \t\t\tbreak;\n \n \t\tcase T_CreateDomainStmt:\n+ \t\t\ttag = \"CREATE\";\t\t\t/* CREATE DOMAIN */\n+ \t\t\tbreak;\n+ \n \t\tcase T_CreateStmt:\n! \t\t\ttag = \"CREATE\";\n \t\t\tbreak;\n \n \t\tcase T_DropStmt:",
"msg_date": "Tue, 19 Mar 2002 07:51:47 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixes gram.y"
},
{
"msg_contents": "Rod Taylor wrote:\n> I truely don't know what I did to create the nasty patch (source file\n> certainly didn't look like what it resulted in) -- then again there\n> have been alot of changes since the domain patch was created.\n\nYes, that patch has been around for a while and I am sure went through\nseveral merges.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 19 Mar 2002 07:52:29 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixes gram.y"
},
{
"msg_contents": "...\n> I have applied the following new patch. It moves DROP DATABASE as you\n> suggested, and fixes the CREATE TABLE tag to show just CREATE and not\n> CREATE DOMAIN. Actually, CREATE DOMAIN should output just DOMAIN too,\n> unless someone can tell my why that is not consistent.\n\nConsistent or not, I'm not sure how only \"DOMAIN\" emitted as a result of\n\"CREATE DOMAIN\" could extend to the other operations such as \"DROP\nDOMAIN\". What would you return for that one? istm that \"CREATE\" needs to\nshow up as the first word in the response, and that if necessary we\nshould extend the other CREATE operations to qualify their return string\nalso.\n\n - Thomas\n",
"msg_date": "Tue, 19 Mar 2002 07:46:07 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixes gram.y"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> I have applied the following new patch. It moves DROP DATABASE as you\n> suggested, and fixes the CREATE TABLE tag to show just CREATE and not\n> CREATE DOMAIN. Actually, CREATE DOMAIN should output just DOMAIN too,\n> unless someone can tell my why that is not consistent. Patch applied to\n> CVS.\n\nThere is a standard for this. CREATE DOMAIN shows CREATE DOMAIN.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 19 Mar 2002 10:56:46 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixes gram.y"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I have applied the following new patch. It moves DROP DATABASE as you\n> > suggested, and fixes the CREATE TABLE tag to show just CREATE and not\n> > CREATE DOMAIN. Actually, CREATE DOMAIN should output just DOMAIN too,\n> > unless someone can tell my why that is not consistent. Patch applied to\n> > CVS.\n> \n> There is a standard for this. CREATE DOMAIN shows CREATE DOMAIN.\n\nOK, CVS changed to emit CREATE DOMAIN.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 19 Mar 2002 11:11:35 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixes gram.y"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> ...\n> > I have applied the following new patch. It moves DROP DATABASE as you\n> > suggested, and fixes the CREATE TABLE tag to show just CREATE and not\n> > CREATE DOMAIN. Actually, CREATE DOMAIN should output just DOMAIN too,\n\n ^^^^^^\n\nShould have been CREATE here. Sorry.\n\n\n> > unless someone can tell my why that is not consistent.\n> \n> Consistant or not, I'm not sure how only \"DOMAIN\" emitted as a result of\n> \"CREATE DOMAIN\" could extend to the other operations such as \"DROP\n> DOMAIN\". What would you return for that one? istm that \"CREATE\" needs to\n> show up as the first word in the response, and that if necessary we\n> should extend the other CREATE operations to qualify their return string\n> also.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 19 Mar 2002 11:12:54 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixes gram.y"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Peter Eisentraut wrote:\n>> There is a standard for this. CREATE DOMAIN shows CREATE DOMAIN.\n\n> OK, CVS changed to emit CREATE DOMAIN.\n\nWhat's standard about it? I count 9 existing statements that use\n\"CREATE\", vs 4 that use \"CREATE xxx\". (And of those four, CREATE\nVERSION is dead code...) The closest existing statement, CREATE\nTYPE, emits \"CREATE\".\n\nPlain \"CREATE\" seems like the conforming choice, unless we'd like\nto do a wholesale revision of existing command tags. Which is\nnot necessarily an unreasonable thing to do. But just making CREATE\nDOMAIN emit \"CREATE DOMAIN\" isn't improving consistency at all.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 19 Mar 2002 11:54:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixes gram.y "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Peter Eisentraut wrote:\n> >> There is a standard for this. CREATE DOMAIN shows CREATE DOMAIN.\n> \n> > OK, CVS changed to emit CREATE DOMAIN.\n> \n> What's standard about it? I count 9 existing statements that use\n> \"CREATE\", vs 4 that use \"CREATE xxx\". (And of those four, CREATE\n> VERSION is dead code...) The closest existing statement, CREATE\n> TYPE, emits \"CREATE\".\n> \n> Plain \"CREATE\" seems like the conforming choice, unless we'd like\n> to do a wholesale revision of existing command tags. Which is\n> not necessarily an unreasonable thing to do. But just making CREATE\n> DOMAIN emit \"CREATE DOMAIN\" isn't improving consistency at all.\n\nI assumed Peter meant some kind of ANSI SQL standard, but I am kind of\nlost how they define that level of detail in the standard. I agree a\nwholesale cleanup there would be a good idea.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 19 Mar 2002 11:56:23 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixes gram.y"
},
{
"msg_contents": "Tom Lane writes:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Peter Eisentraut wrote:\n> >> There is a standard for this. CREATE DOMAIN shows CREATE DOMAIN.\n>\n> > OK, CVS changed to emit CREATE DOMAIN.\n>\n> What's standard about it?\n\nISO/IEC 9075-2:1999 clause 19.1 general rule 1 c) to be exact. ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 19 Mar 2002 12:21:26 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixes gram.y "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> What's standard about it?\n\n> ISO/IEC 9075-2:1999 clause 19.1 general rule 1 c) to be exact. ;-)\n\nHmm. Looks like we need a wholesale revision of command tags, indeed.\nAt least if we want to consider command tags to be the data that\nsatisfies this spec requirement.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Mar 2002 00:27:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixes gram.y "
},
{
"msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> >> What's standard about it?\n>\n> > ISO/IEC 9075-2:1999 clause 19.1 general rule 1 c) to be exact. ;-)\n>\n> Hmm. Looks like we need a wholesale revision of command tags, indeed.\n> At least if we want to consider command tags to be the data that\n> satisfies this spec requirement.\n\nWe would need to do:\n\nALTER -> ALTER <type of object>\nDROP -> DROP <type of object>\nCREATE -> CREATE <type of object>\n\nThose look reasonable, and we already do that in some cases.\n\nCLOSE -> CLOSE CURSOR\nDECLARE -> DECLARE CURSOR\n\nNo opinion here.\n\nCOMMIT -> COMMIT WORK\nROLLBACK -> ROLLBACK WORK\n\nDoesn't matter to me.\n\nDELETE -> DELETE WHERE\nUPDATE -> UPDATE WHERE\n\nI'd prefer not to do those.\n\nSET CONSTRAINTS -> SET CONSTRAINT [sic]\nSET VARIABLE -> SET TIME ZONE\nSET VARIABLE -> SET TRANSACTION\nSET VARIABLE -> SET SESSION AUTHORIZATION\n\nThe first one looks like a mistake. The other ones we could work on.\n\nIt also seems to me that CREATE TABLE AS should not print \"SELECT\". I\nthought Fernando Nasser had fixed that. Maybe I'm not completely up to\ndate in my sources.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 20 Mar 2002 12:15:19 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixes gram.y "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> Hmm. Looks like we need a wholesale revision of command tags, indeed.\n\n> We would need to do:\n\n> ALTER -> ALTER <type of object>\n> DROP -> DROP <type of object>\n> CREATE -> CREATE <type of object>\n> Those look reasonable, and we already do that in some cases.\n\nThese seem okay to me.\n\n> CLOSE -> CLOSE CURSOR\n> DECLARE -> DECLARE CURSOR\n> No opinion here.\n\nNo strong feeling here either.\n\n> COMMIT -> COMMIT WORK\n> ROLLBACK -> ROLLBACK WORK\n> Doesn't matter to me.\n\nI'd vote against changing these.\n\n> DELETE -> DELETE WHERE\n> UPDATE -> UPDATE WHERE\n> I'd prefer not to do those.\n\nIf we change these we will break existing client code that expects a\nparticular format for these tags (so it can pull out the row count).\nDefinitely a \"no change\" vote here.\n\n> SET CONSTRAINTS -> SET CONSTRAINT [sic]\n> SET VARIABLE -> SET TIME ZONE\n> SET VARIABLE -> SET TRANSACTION\n> SET VARIABLE -> SET SESSION AUTHORIZATION\n> The first one looks like a mistake. The other ones we could work on.\n\nI'd say leave them all as \"SET VARIABLE\". There's no real information\ngain here, and I'm a tad worried about overflowing limited command-tag\nbuffers in clients.\n\n> It also seems to me that CREATE TABLE AS should not print \"SELECT\". I\n> thought Fernando Nasser had fixed that.\n\nNo, I think it's still on his to-do list (we didn't like his first\nproposed patch for it).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Mar 2002 12:45:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixes gram.y "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> > It also seems to me that CREATE TABLE AS should not print \"SELECT\". I\n> > thought Fernando Nasser had fixed that.\n> \n> No, I think it's still on his to-do list (we didn't like his first\n> proposed patch for it).\n> \n\nYes, I am supposed to see if I can fix this and get rid of the \"into\"\nfield\nin SelectStmt at the same time. Right Tom?\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Wed, 20 Mar 2002 13:14:29 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixes gram.y"
},
{
"msg_contents": "Fernando Nasser <fnasser@redhat.com> writes:\n> Yes, I am supposed to see if I can fix this and get rid of the \"into\"\n> field in SelectStmt at the same time. Right Tom?\n\nYeah, we had talked about that ... but I'm not sure it's worth the\ntrouble. I don't see any clean way for the SELECT grammar rule to\nreturn info about an INTO clause, other than by including it in\nSelectStmt.\n\nProbably the easiest answer is for CreateCommandTag to just deal with\ndrilling down into the parsetree to see if INTO appears.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Mar 2002 14:47:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixes gram.y "
},
{
"msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Tom Lane writes:\n> >> Hmm. Looks like we need a wholesale revision of command tags, indeed.\n> \n> > We would need to do:\n> \n> > ALTER -> ALTER <type of object>\n> > DROP -> DROP <type of object>\n> > CREATE -> CREATE <type of object>\n> > Those look reasonable, and we already do that in some cases.\n> \n> These seem okay to me.\n\nYep, makes sense.\n\n> > CLOSE -> CLOSE CURSOR\n> > DECLARE -> DECLARE CURSOR\n> > No opinion here.\n> \n> No strong feeling here either.\n\nSeems like extra noise. Not sure either.\n\n> \n> > COMMIT -> COMMIT WORK\n> > ROLLBACK -> ROLLBACK WORK\n> > Doesn't matter to me.\n> \n> I'd vote against changing these.\n\nOK.\n\n> > DELETE -> DELETE WHERE\n> > UPDATE -> UPDATE WHERE\n> > I'd prefer not to do those.\n> \n> If we change these we will break existing client code that expects a\n> particular format for these tags (so it can pull out the row count).\n> Definitely a \"no change\" vote here.\n> \n\nHard to imagine what logic you would use to add the word WHERE. What if\nthey do a DELETE without a WHERE?\n\n\n> > SET CONSTRAINTS -> SET CONSTRAINT [sic]\n> > SET VARIABLE -> SET TIME ZONE\n> > SET VARIABLE -> SET TRANSACTION\n> > SET VARIABLE -> SET SESSION AUTHORIZATION\n> > The first one looks like a mistake. The other ones we could work on.\n> \n> I'd say leave them all as \"SET VARIABLE\". There's no real information\n> gain here, and I'm a tad worried about overflowing limited command-tag\n> buffers in clients.\n\nYes, the problem here is that we have so many SET variables that aren't\nstandard, do we print the standard tags for the standard ones and just\nSET VARIABLE for the others? Doesn't seem worth it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 20 Mar 2002 15:25:13 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Fixes gram.y"
}
] |
[
{
"msg_contents": "Oleg Bartunov wrote:\n> On Tue, 19 Mar 2002, Bruce Momjian wrote:\n> \n> > Oleg Bartunov wrote:\n> > > Bruce,\n> > >\n> > > we have something to add. It's quite important for users of our tsearch module.\n> > > Too late ?\n> >\n> > For 7.2.1, I don't think it is too late but I don't think we can wait\n> > days.\n> \n> Don't wait. It's below:\n> \n> Users of contrib/tsearch needs after upgrading of module (compiling, installing)\n> to perform sql command:\n> update pg_amop set amopreqcheck = true where amopclaid =\n> (select oid from pg_opclass where opcname = 'gist_txtidx_ops');\n\nOleg, sorry, I don't understand where this should appear. In the README\nfile, and if so, where? Is this something only for people upgrading\nfrom 7.2?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 19 Mar 2002 08:20:54 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Time for 7.2.1?"
},
{
"msg_contents": "On Tue, 19 Mar 2002, Bruce Momjian wrote:\n\n> Oleg Bartunov wrote:\n> > On Tue, 19 Mar 2002, Bruce Momjian wrote:\n> >\n> > > Oleg Bartunov wrote:\n> > > > Bruce,\n> > > >\n> > > > we have something to add. It's quite important for users of our tsearch module.\n> > > > Too late ?\n> > >\n> > > For 7.2.1, I don't think it is too late but I don't think we can wait\n> > > days.\n> >\n> > Don't wait. It's below:\n> >\n> > Users of contrib/tsearch needs after upgrading of module (compiling, installing)\n> > to perform sql command:\n> > update pg_amop set amopreqcheck = true where amopclaid =\n> > (select oid from pg_opclass where opcname = 'gist_txtidx_ops');\n>\n> Oleg, sorry, I don't understand where this should appear. In the README\n> file, and if so, where? Is this something only for people upgrading\n> from 7.2?\n\nSorry Bruce, I was unclear. I have attached patch to Readme.tsearch\nAlso, It'd be worth to mention in Changes to point users of tsearch\nabout importang upgrade notices.\n\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83",
"msg_date": "Tue, 19 Mar 2002 16:40:34 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: Time for 7.2.1?"
},
{
"msg_contents": "\nOK, patch applied to 7.2.1 only --- no need to have that mentioned in\n7.3 README.tsearch. HISTORY/release.sgml updated in both branches:\n\n contrib/tsearch dictionary improvements, see README.tsearch for\n an additional installation step (Thomas T. Thai, Teodor Sigaev)\n\n---------------------------------------------------------------------------\n\nOleg Bartunov wrote:\n> On Tue, 19 Mar 2002, Bruce Momjian wrote:\n> \n> > Oleg Bartunov wrote:\n> > > On Tue, 19 Mar 2002, Bruce Momjian wrote:\n> > >\n> > > > Oleg Bartunov wrote:\n> > > > > Bruce,\n> > > > >\n> > > > > we have something to add. It's quite important for users of our tsearch module.\n> > > > > Too late ?\n> > > >\n> > > > For 7.2.1, I don't think it is too late but I don't think we can wait\n> > > > days.\n> > >\n> > > Don't wait. It's below:\n> > >\n> > > Users of contrib/tsearch needs after upgrading of module (compiling, installing)\n> > > to perform sql command:\n> > > update pg_amop set amopreqcheck = true where amopclaid =\n> > > (select oid from pg_opclass where opcname = 'gist_txtidx_ops');\n> >\n> > Oleg, sorry, I don't understand where this should appear. In the README\n> > file, and if so, where? Is this something only for people upgrading\n> > from 7.2?\n> \n> Sorry Bruce, I was unclear. I have attached patch to Readme.tsearch\n> Also, It'd be worth to mention in Changes to point users of tsearch\n> about importang upgrade notices.\n> \n> >\n> >\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... 
]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 19 Mar 2002 09:18:37 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Time for 7.2.1?"
}
] |
[
{
"msg_contents": "I am curious, why does notify not support a string argument of some kind, to \npass to the other connections? It seems it would be a little more useful.\n\nMy application does not exactly require this feature, but it seems more \nintuitive. After all, the current implementation requires a separate \"LISTEN\" \nfor each possible event.\n\nIs this due to oracle compatibility issues? Is it too difficult for it's \nusefulness?\n\nThanks,\n\tJeff\n",
"msg_date": "Tue, 19 Mar 2002 05:22:48 -0800",
"msg_from": "Jeff Davis <list-pgsql-general@dynworks.com>",
"msg_from_op": true,
"msg_subject": "Notify argument?"
},
{
"msg_contents": "On Tue, 2002-03-19 at 08:22, Jeff Davis wrote:\n> I am curious, why does notify not support a string argument of some kind, to \n> pass to the other connections? It seems it would be a little more useful.\n\nYou can pass data around using temp tables.\n\n> Is this due to oracle compatibility issues?\n\nActually, I think that Oracle's implementation of this feature actually\nallows for a user-specified string argument.\n\n> Is it too difficult for it's usefulness?\n\nAFAICT it shouldn't be too difficult. However, there is a note in the\nTODO list referring to breaking backwards compatability with the\n\"pgNotify API\". Exactly how backwards compatible do we need to be?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "20 Mar 2002 00:48:52 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: Notify argument?"
},
{
"msg_contents": "Neil Conway wrote:\n> \n> > I am curious, why does notify not support a string argument of some kind, to\n> > pass to the other connections? It seems it would be a little more useful.\n> Actually, I think that Oracle's implementation of this feature actually\n> allows for a user-specified string argument.\n\nCommercial Ingres allowed one to specify a string also. I'm guessing\nthat the feature was not implemented in PostgreSQL *not* because there\nis some good database design reason to leave it out, but rather because\nsomeone did not bother to put it in.\n\n> AFAICT it shouldn't be too difficult. However, there is a note in the\n> TODO list referring to breaking backwards compatability with the\n> \"pgNotify API\". Exactly how backwards compatible do we need to be?\n\nimho not much in this case (though of course we may find a way to be\nvery compatible when someone actually implements it). I had found the\nequivalent feature very useful when building a large data handling\nsystem a few years ago, and I'd think that it would be useful in\nPostgreSQL also. Comments?\n\n - Thomas\n",
"msg_date": "Wed, 20 Mar 2002 07:24:17 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Notify argument?"
},
{
"msg_contents": "On Wed, 2002-03-20 at 10:24, Thomas Lockhart wrote:\n> > AFAICT it shouldn't be too difficult. However, there is a note in the\n> > TODO list referring to breaking backwards compatability with the\n> > \"pgNotify API\". Exactly how backwards compatible do we need to be?\n> \n> imho not much in this case (though of course we may find a way to be\n> very compatible when someone actually implements it). I had found the\n> equivalent feature very useful when building a large data handling\n> system a few years ago, and I'd think that it would be useful in\n> PostgreSQL also.\n\nOkay, I'll implement this.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "20 Mar 2002 12:09:16 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: Notify argument?"
},
{
"msg_contents": "Neil Conway wrote:\n> > Is it too difficult for it's usefulness?\n> \n> AFAICT it shouldn't be too difficult. However, there is a note in the\n> TODO list referring to breaking backwards compatability with the\n> \"pgNotify API\". Exactly how backwards compatible do we need to be?\n\nThe breakage will come when we lengthen NAMEDATALEN, which I plan to\ntackle for 7.3. We will need to re-order the NOTIFY structure and put\nthe NAMEDATALEN string at the end of the struct so differing namedatalen\nbackend/clients will work. If you want to break it, 7.3 would probably\nbe the time to do it. :-) Users will need a recompile pre-7.3 to use\nnotify for 7.3 and later anyway.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 20 Mar 2002 15:27:56 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Notify argument?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> The breakage will come when we lengthen NAMEDATALEN, which I plan to\n> tackle for 7.3. We will need to re-order the NOTIFY structure and put\n> the NAMEDATALEN string at the end of the struct so differing namedatalen\n> backend/clients will work. If you want to break it, 7.3 would probably\n> be the time to do it. :-) Users will need a recompile pre-7.3 to use\n> notify for 7.3 and later anyway.\n\nIf we're going to change the structure anyway, let's fix it to be\nindependent of NAMEDATALEN. Instead of\n\n char relname[NAMEDATALEN];\n int be_pid;\n\nlet's do\n\n char *relname;\n int be_pid;\n\nThis should require no source-level changes in calling C code, thanks\nto C's equivalence between pointers and arrays. We can preserve the\nfact that freeing a PQnotifies result takes only one free() with a\nlittle hacking to make the string be allocated in the same malloc call:\n\n newNotify = (PGnotify *) malloc(sizeof(PGnotify) + strlen(str) + 1);\n newNotify->relname = (char *) newNotify + sizeof(PGnotify);\n strcpy(newNotify->relname, str);\n\nThus, with one line of extra ugliness inside the library, we solve the\nproblem permanently.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Mar 2002 16:10:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Notify argument? "
},
{
"msg_contents": "Bruce Momjian wrote:\n> Neil Conway wrote:\n> > > Is it too difficult for it's usefulness?\n> >\n> > AFAICT it shouldn't be too difficult. However, there is a note in the\n> > TODO list referring to breaking backwards compatability with the\n> > \"pgNotify API\". Exactly how backwards compatible do we need to be?\n>\n> The breakage will come when we lengthen NAMEDATALEN, which I plan to\n> tackle for 7.3. We will need to re-order the NOTIFY structure and put\n> the NAMEDATALEN string at the end of the struct so differing namedatalen\n> backend/clients will work. If you want to break it, 7.3 would probably\n> be the time to do it. :-) Users will need a recompile pre-7.3 to use\n> notify for 7.3 and later anyway.\n\nHmmm,\n\n seems I have to get a little more familiar with the FE/BE\n stuff again. Have been pretty good at that years ago.\n\n IIRC, the FE/BE protocol itself does not limit any length or\n depends on definitions like that. So that should be an\n arbitrary (read bogus) usage in libpq. The TODO entry\n therefore should read\n\n Fix Notify API's usage of NAMEDATALEN.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 20 Mar 2002 18:23:01 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Notify argument?"
},
{
"msg_contents": "On Wed, Mar 20, 2002 at 04:10:14PM -0500, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The breakage will come when we lengthen NAMEDATALEN, which I plan to\n> > tackle for 7.3. We will need to re-order the NOTIFY structure and put\n> > the NAMEDATALEN string at the end of the struct so differing namedatalen\n> > backend/clients will work. If you want to break it, 7.3 would probably\n> > be the time to do it. :-) Users will need a recompile pre-7.3 to use\n> > notify for 7.3 and later anyway.\n> \n> If we're going to change the structure anyway, let's fix it to be\n> independent of NAMEDATALEN.\n\nSounds good. If we're making other backwards-incompatible changes to\npgNotify, one thing that bugs me about the API is the use of \"relname\"\nto refer to name of the NOTIFY/LISTEN condition. This is incorrect\nbecause the name of a condition is _not_ the name of a relation -- there\nis no necessary correspondence between these. The names of NOTIFY\nconditions are arbitrary, and chosen by the application developer.\n\nThe same terminology is used in the backend NOTIFY/LISTEN code (e.g.\npg_listener), but the major source of incompatibility will be the change\nto libpq. Would it be a good idea to rename \"relname\" to something more\nsensible?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Wed, 20 Mar 2002 23:55:42 -0500",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Notify argument?"
},
{
"msg_contents": "Neil Conway wrote:\n> On Wed, Mar 20, 2002 at 04:10:14PM -0500, Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > The breakage will come when we lengthen NAMEDATALEN, which I plan to\n> > > tackle for 7.3. We will need to re-order the NOTIFY structure and put\n> > > the NAMEDATALEN string at the end of the struct so differing namedatalen\n> > > backend/clients will work. If you want to break it, 7.3 would probably\n> > > be the time to do it. :-) Users will need a recompile pre-7.3 to use\n> > > notify for 7.3 and later anyway.\n> > \n> > If we're going to change the structure anyway, let's fix it to be\n> > independent of NAMEDATALEN.\n> \n> Sounds good. If we're making other backwards-incompatible changes to\n> pgNotify, one thing that bugs me about the API is the use of \"relname\"\n> to refer to name of the NOTIFY/LISTEN condition. This is incorrect\n> because the name of a condition is _not_ the name of a relation -- there\n> is no necessary correspondence between these. The names of NOTIFY\n> conditions are arbitrary, and chosen by the application developer.\n> \n> The same terminology is used in the backend NOTIFY/LISTEN code (e.g.\n> pg_listener), but the major source of incompatibility will be the change\n> to libpq. Would it be a good idea to rename \"relname\" to something more\n> sensible?\n\nRenaming the column would make an API change. I was talking only about\nrequiring a recompile.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 21 Mar 2002 00:14:53 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Notify argument?"
},
{
"msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n>> If we're going to change the structure anyway, let's fix it to be\n>> independent of NAMEDATALEN.\n\n> Sounds good. If we're making other backwards-incompatible changes to\n> pgNotify, one thing that bugs me about the API is the use of \"relname\"\n> to refer to name of the NOTIFY/LISTEN condition.\n\nI hear you ... but my proposal only requires a recompile, *not* any\nsource code changes. Renaming the field would break client source code.\nI doubt it's worth that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Mar 2002 00:16:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Notify argument? "
},
{
"msg_contents": "On Thu, 2002-03-21 at 00:16, Tom Lane wrote:\n> nconway@klamath.dyndns.org (Neil Conway) writes:\n> >> If we're going to change the structure anyway, let's fix it to be\n> >> independent of NAMEDATALEN.\n> \n> > Sounds good. If we're making other backwards-incompatible changes to\n> > pgNotify, one thing that bugs me about the API is the use of \"relname\"\n> > to refer to name of the NOTIFY/LISTEN condition.\n> \n> I hear you ... but my proposal only requires a recompile, *not* any\n> source code changes. Renaming the field would break client source code.\n> I doubt it's worth that.\n\nOkay, that's fair -- I'll leave it as it is.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "21 Mar 2002 00:19:47 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Notify argument?"
},
{
"msg_contents": "Here is a patch that implements Tom's suggestion of mallocing the\nrelation name string as part of PQnotify and not depending on\nNAMEDATALEN. Nice trick.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The breakage will come when we lengthen NAMEDATALEN, which I plan to\n> > tackle for 7.3. We will need to re-order the NOTIFY structure and put\n> > the NAMEDATALEN string at the end of the struct so differing namedatalen\n> > backend/clients will work. If you want to break it, 7.3 would probably\n> > be the time to do it. :-) Users will need a recompile pre-7.3 to use\n> > notify for 7.3 and later anyway.\n> \n> If we're going to change the structure anyway, let's fix it to be\n> independent of NAMEDATALEN. Instead of\n> \n> char relname[NAMEDATALEN];\n> int be_pid;\n> \n> let's do\n> \n> char *relname;\n> int be_pid;\n> \n> This should require no source-level changes in calling C code, thanks\n> to C's equivalence between pointers and arrays. We can preserve the\n> fact that freeing a PQnotifies result takes only one free() with a\n> little hacking to make the string be allocated in the same malloc call:\n> \n> newNotify = (PGnotify *) malloc(sizeof(PGnotify) + strlen(str) + 1);\n> newNotify->relname = (char *) newNotify + sizeof(PGnotify);\n> strcpy(newNotify->relname, str);\n> \n> Thus, with one line of extra ugliness inside the library, we solve the\n> problem permanently.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: src/interfaces/libpq/fe-exec.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/interfaces/libpq/fe-exec.c,v\nretrieving revision 1.118\ndiff -c -r1.118 fe-exec.c\n*** src/interfaces/libpq/fe-exec.c\t8 Apr 2002 03:48:10 -0000\t1.118\n--- src/interfaces/libpq/fe-exec.c\t15 Apr 2002 00:15:29 -0000\n***************\n*** 1510,1517 ****\n \t\treturn EOF;\n \tif (pqGets(&conn->workBuffer, conn))\n \t\treturn EOF;\n! \tnewNotify = (PGnotify *) malloc(sizeof(PGnotify));\n! \tstrncpy(newNotify->relname, conn->workBuffer.data, NAMEDATALEN);\n \tnewNotify->be_pid = be_pid;\n \tDLAddTail(conn->notifyList, DLNewElem(newNotify));\n \treturn 0;\n--- 1510,1525 ----\n \t\treturn EOF;\n \tif (pqGets(&conn->workBuffer, conn))\n \t\treturn EOF;\n! \n! \t/*\n! \t * Store the relation name right after the PQnotify structure so it can\n! \t * all be freed at once. We don't use NAMEDATALEN because we don't\n! \t * want to tie this interface to a specific server name length.\n! \t */\n! \tnewNotify = (PGnotify *) malloc(sizeof(PGnotify) +\n! \t\t\t\tstrlen(conn->workBuffer.data) + 1);\n! \tnewNotify->relname = (char *)newNotify + sizeof(PGnotify);\n! \tstrcpy(newNotify->relname, conn->workBuffer.data);\n \tnewNotify->be_pid = be_pid;\n \tDLAddTail(conn->notifyList, DLNewElem(newNotify));\n \treturn 0;\nIndex: src/interfaces/libpq/libpq-fe.h\n===================================================================\nRCS file: /cvsroot/pgsql/src/interfaces/libpq/libpq-fe.h,v\nretrieving revision 1.83\ndiff -c -r1.83 libpq-fe.h\n*** src/interfaces/libpq/libpq-fe.h\t5 Mar 2002 06:07:26 -0000\t1.83\n--- src/interfaces/libpq/libpq-fe.h\t15 Apr 2002 00:15:40 -0000\n***************\n*** 105,112 ****\n */\n typedef struct pgNotify\n {\n! \tchar\t\trelname[NAMEDATALEN];\t/* name of relation containing\n! 
\t\t\t\t\t\t\t\t\t\t * data */\n \tint\t\t\tbe_pid;\t\t\t/* process id of backend */\n } PGnotify;\n \n--- 105,111 ----\n */\n typedef struct pgNotify\n {\n! \tchar\t\t*relname;\t\t/* name of relation containing data */\n \tint\t\t\tbe_pid;\t\t\t/* process id of backend */\n } PGnotify;",
"msg_date": "Sun, 14 Apr 2002 20:24:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Notify argument?"
},
{
"msg_contents": "\nFix applied.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The breakage will come when we lengthen NAMEDATALEN, which I plan to\n> > tackle for 7.3. We will need to re-order the NOTIFY structure and put\n> > the NAMEDATALEN string at the end of the struct so differing namedatalen\n> > backend/clients will work. If you want to break it, 7.3 would probably\n> > be the time to do it. :-) Users will need a recompile pre-7.3 to use\n> > notify for 7.3 and later anyway.\n> \n> If we're going to change the structure anyway, let's fix it to be\n> independent of NAMEDATALEN. Instead of\n> \n> char relname[NAMEDATALEN];\n> int be_pid;\n> \n> let's do\n> \n> char *relname;\n> int be_pid;\n> \n> This should require no source-level changes in calling C code, thanks\n> to C's equivalence between pointers and arrays. We can preserve the\n> fact that freeing a PQnotifies result takes only one free() with a\n> little hacking to make the string be allocated in the same malloc call:\n> \n> newNotify = (PGnotify *) malloc(sizeof(PGnotify) + strlen(str) + 1);\n> newNotify->relname = (char *) newNotify + sizeof(PGnotify);\n> strcpy(newNotify->relname, str);\n> \n> Thus, with one line of extra ugliness inside the library, we solve the\n> problem permanently.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 15 Apr 2002 19:36:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Notify argument?"
}
] |
[
{
"msg_contents": "In language bindings which wrap around the libpq C interface, should the\nfe_getauthname() function be used?\n\nIt's not declared in libpq-fe.h, which AFAIK is the only header file\nthat libpq applications should be using.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "19 Mar 2002 15:37:49 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": true,
"msg_subject": "libpq: fe_getauthname()"
},
{
"msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> In language bindings which wrap around the libpq C interface, should the\n> fe_getauthname() function be used?\n\nSeems like an internal routine to me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 19 Mar 2002 19:19:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpq: fe_getauthname() "
}
] |
[
{
"msg_contents": "The DOMAIN patch is completely broken when it comes to type coercion\nbehavior. For one thing, it doesn't know that any operators or\nfunctions on a domain's base type can be used with a domain:\n\ndomain=# create domain zip as char(2);\nCREATE\ndomain=# create table foo (f1 zip);\nCREATE\ndomain=# select f1 || 'z' from foo;\nERROR: Unable to identify an operator '||' for types 'zip' and 'unknown'\n You will have to retype this query using an explicit cast\n\nand casting does not help:\n\ndomain=# select f1::char(2) || 'z' from foo;\nERROR: Cannot cast type 'zip' to 'character'\n\nThere are more subtle problems too. Among other things, it will\ngenerate expressions that are supposed to be labeled with the domain\ntype but are actually labeled with the domain's base type, leading to\nall sorts of confusion. (The reason we had to introduce RelabelType\nexpression nodes a couple years ago was to avoid just this scenario.)\n\nI am thinking that a non-broken approach would involve (1) treating\na domain as binary-compatible with its base type, and therefore with\nall other domains on the same base type, and (2) allowing a coercion\nfunction that produces the base type to be used to produce the domain\ntype. (The patch tries to do (2), but does it in the wrong places,\nleading to the mislabeled-expression problem.)\n\nAn implication of this is that one could not define functions and\noperators that implement any interesting domain-type-specific behavior.\nThis strikes me as okay --- it seems like domains are a shortcut to save\nhaving to invent a real type, and so people wouldn't care about defining\ndomain-specific functions. If we don't accept binary equivalence of\ndomains to base types, then creating a useful domain will be nearly as\nnontrivial as creating a new base type.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 19 Mar 2002 16:14:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Domains and type coercion"
},
{
"msg_contents": "> I am thinking that a non-broken approach would involve (1) treating\n> a domain as binary-compatible with its base type, and therefore with\n> all other domains on the same base type, and (2) allowing a coercion\n> function that produces the base type to be used to produce the domain\n> type. (The patch tries to do (2), but does it in the wrong places,\n> leading to the mislabeled-expression problem.)\n\n2 was the goal, and it worked enough for any default expression I\ncould come up with -- so I thought it did pretty good. Guess not. It\nshould be binary equivalent to the base type it's made out of.\n\n\n",
"msg_date": "Tue, 19 Mar 2002 17:09:17 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Domains and type coercion"
},
{
"msg_contents": "I wrote:\n> I am thinking that a non-broken approach would involve (1) treating\n> a domain as binary-compatible with its base type, and therefore with\n> all other domains on the same base type, and (2) allowing a coercion\n> function that produces the base type to be used to produce the domain\n> type.\n\nI've committed code that does this, and it seems to handle the basic\ncases okay. However, there are still some corner cases that are\nunfriendly:\n\nregression=# create domain mydom as numeric(7,2);\nCREATE DOMAIN\nregression=# create table foo (f1 numeric(7,2), f2 mydom);\nCREATE\nregression=# insert into foo values(111,222);\nINSERT 139780 1\nregression=# select f1 + 42 from foo;\n ?column?\n----------\n 153.00\n(1 row)\n\nregression=# select f2 + 42 from foo;\nERROR: Unable to identify an operator '+' for types 'mydom' and 'integer'\n You will have to retype this query using an explicit cast\n\n\nThe problem seems to be that when parse_func looks for \"exact match\"\noperators, it doesn't consider numeric to be an exact match for mydom.\nSo that heuristic fails and we're left with no unique best choice for\nthe operator.\n\nI'm not sure if there's anything much that can be done about this.\nWe could treat exact and binary-compatible matches alike (doesn't seem\ngood), or put a special case into the operator selection rules to reduce\ndomains to their basetypes before making the \"exact match\" test.\nNeither of these seem real appealing, but if we don't do something\nI think that domains are going to be a big pain in the neck to use.\n\nAny thoughts?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Mar 2002 15:24:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Domains and type coercion "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> (...) or put a special case into the operator selection rules to reduce\n> domains to their basetypes before making the \"exact match\" test.\n\nBy definition, \n\nwhich I believe should be read as \n\n\"A domain is a set of permissible values (of a data type)\".\n\nWhat I am trying to say is that the domain is still the same data type\nw.r.t. operators and functions, so reducing it to the base type for\nsuch searches is the right thing to do.\n\n\n> Neither of these seem real appealing, but if we don't do something\n> I think that domains are going to be a big pain in the neck to use.\n> \n\nAgreed.\n\n\n\n-- \nFernando Nasser\nRed Hat - Toronto E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Wed, 20 Mar 2002 16:33:57 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Domains and type coercion"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Any thoughts?\n> \n\nAs we are talking about CAST,\n\nif one CASTs to a domain, SQL99 says we have to check the constraints\nand issue a \"integrity constraint violation\" if appropriate (6.22, GR 21).\n\n\n-- \nFernando Nasser\nRed Hat - Toronto E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Wed, 20 Mar 2002 16:45:22 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Domains and type coercion"
},
{
"msg_contents": "...\n> The problem seems to be that when parse_func looks for \"exact match\"\n> operators, it doesn't consider numeric to be an exact match for mydom.\n> So that heuristic fails and we're left with no unique best choice for\n> the operator.\n\nSure. At the moment there is no reason for parse_func to think that\nmydom is anything, right?\n\n> I'm not sure if there's anything much that can be done about this.\n\nSomething has to be done ;)\n\n> We could treat exact and binary-compatible matches alike (doesn't seem\n> good), or put a special case into the operator selection rules to reduce\n> domains to their basetypes before making the \"exact match\" test.\n> Neither of these seem real appealing, but if we don't do something\n> I think that domains are going to be a big pain in the neck to use.\n\nThere could also be an explicit heuristic *after* the exact match\ngathering to look for an exact match for domains reduced to their base\ntypes. Is there any reason to look for domains before that?\n\n - Thomas\n",
"msg_date": "Thu, 21 Mar 2002 05:51:50 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Domains and type coercion"
},
{
"msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n>> We could treat exact and binary-compatible matches alike (doesn't seem\n>> good), or put a special case into the operator selection rules to reduce\n>> domains to their basetypes before making the \"exact match\" test.\n\n> There could also be an explicit heuristic *after* the exact match\n> gathering to look for an exact match for domains reduced to their base\n> types. Is there any reason to look for domains before that?\n\nThe problem in the case I gave was that the \"exact match\" heuristic\nwas throwing away the operator we really wanted to use. I had\n\"domain + int4\" where domain is really numeric. In the base case,\n\"numeric + int4\", we'll keep both \"numeric + numeric\" and \"int4 + int4\"\nsince each has one exact match, and later decide that \"numeric + numeric\"\nis preferred. In the domain case we will keep only \"int4 + int4\"\n... oops. Testing later will not help.\n\nIf we take the hard SQL99 line that domains *are* the base type plus\nconstraints, then we could reduce domains to base types before we start\nthe entire matching process, and this issue would go away. This would\nprevent declaring any specialized operators or functions for a domain.\n(In fact, I'd be inclined to set things up so that it's impossible to\nstore domain type OIDs in pg_proc or pg_operator, thus saving the time\nof doing getBaseType on one side of the match.) Thoughts?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Mar 2002 12:26:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Domains and type coercion "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> If we take the hard SQL99 line that domains *are* the base type plus\n> constraints, then we could reduce domains to base types before we start\n> the entire matching process, and this issue would go away. This would\n> prevent declaring any specialized operators or functions for a domain.\n> (In fact, I'd be inclined to set things up so that it's impossible to\n> store domain type OIDs in pg_proc or pg_operator, thus saving the time\n> of doing getBaseType on one side of the match.) Thoughts?\n> \n\nIMHO this is the right thing to do.\n\n-- \nFernando Nasser\nRed Hat - Toronto E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Thu, 21 Mar 2002 13:47:03 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Domains and type coercion"
},
{
"msg_contents": "> If we take the hard SQL99 line that domains *are* the base type plus\n> constraints, then we could reduce domains to base types before we start\n> the entire matching process, and this issue would go away. This would\n> prevent declaring any specialized operators or functions for a domain.\n> (In fact, I'd be inclined to set things up so that it's impossible to\n> store domain type OIDs in pg_proc or pg_operator, thus saving the time\n> of doing getBaseType on one side of the match.) Thoughts?\n\nIt would be fairly straightforward to simply copy the domain base\ntype into the atttypid, then create an atttypdomain (normally 0,\nexcept in the case of a domain). Everything would use the atttypid,\nexcept for \\d and pg_dump which could use the domain if it exists.\n\nIs this something I should do?\n\n\n",
"msg_date": "Thu, 21 Mar 2002 14:01:33 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Domains and type coercion "
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> It would be fairly straight forward to simply copy the domain base\n> type into the atttypid, then create an atttypdomain (normally 0,\n> except in the case of a domain). Everything would use the attypid,\n> except for \\d and pg_dump which could use the domain if it exists.\n\n> Is this something I should do?\n\nNo, because it's quite irrelevant to the problem of type coercion,\nwhich works with expressions; attributes are only one part of the\nexpression world.\n\nActually, considering Fernando's point that a CAST ought to apply the\nconstraints associated with a domain type, your attribute-based\nimplementation is wrong anyway. Rather than merging the domain\nconstraints into the table definition (which will be a nightmare for\npg_dump to sort out, anyway) keep 'em separate. The constraints could\nbe checked during casting from a base type to a domain type --- take a\nlook at the existing mechanism for enforcing typmod (length limits),\nwhich after all is a simplistic kind of domain constraint.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Mar 2002 17:45:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Domains and type coercion "
}
] |
[
{
"msg_contents": "Hi folks,\n\nI'm dead in the water with pg_clog errors:\n\nMar 19 18:25:05 nexus postgres[28736]: [6] FATAL 2: open of /data00/pgdata/pg_clog/007D failed: No such file or directory\nMar 19 18:25:06 nexus postgres[22250]: [1] DEBUG: server process (pid 28736) exited with exit code 2\n\nMar 19 22:06:53 nexus postgres[29389]: [9] FATAL 2: open of /data00/pgdata/pg_clog/0414 failed: No such file or directory\nMar 19 22:06:53 nexus postgres[22250]: [4] DEBUG: server process (pid 29389) exited with exit code 2\n\nMar 19 22:11:04 nexus postgres[29491]: [12] FATAL 2: open of /data00/pgdata/pg_clog/0353 failed: No such file or directory\nMar 19 22:11:04 nexus postgres[22250]: [7] DEBUG: server process (pid 29491) exited with exit code 2\n\nMar 19 22:13:34 nexus postgres[29716]: [6] FATAL 2: open of /data00/pgdata/pg_clog/0353 failed: No such file or directory\nMar 19 22:13:34 nexus postgres[29700]: [1] DEBUG: server process (pid 29716) exited with exit code 2\n\nThere aren't any other error messages, just DEBUG messages about how\nPostgreSQL is cleaning up after itself.\n\nThese occur running a complex-ish query, but no more complex than others,\nand less complex than many. The query runs for about 20 seconds, then\nthe error occurs.\n\nBetween attempts, I shut down postgresql and ran 'ipcclean'. No joy\nthere.\n\nHere's what's in my pg_clog directory:\n\n[postgres@nexus postgres]$ l /data00/pgdata/pg_clog/\ntotal 8\n-rw------- 1 postgres postgres 8192 Mar 19 22:05 0000\n\nI'm running version 7.2, with Tom's vacuum analyze patch.\n\nI searched the postgresql.org website for 'pg_clog', came up dry. Found\nsome stuff in the newsgroup archives that looked similar but different\n(more errors than I'm getting), and that I didn't really understand.\n\nAny help appreciated, as always...\n\nGordon.\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt\n",
"msg_date": "Tue, 19 Mar 2002 22:40:18 -0500",
"msg_from": "Gordon Runkle <gar@integrated-dynamics.com>",
"msg_from_op": true,
"msg_subject": "pg_clog troubles"
}
] |
[
{
"msg_contents": "Hi folks,\n\nI'm dead in the water with pg_clog errors:\n\nMar 19 18:25:05 nexus postgres[28736]: [6] FATAL 2: open of\n/data00/pgdata/pg_clog/007D failed: No such file or directory\nMar 19 18:25:06 nexus postgres[22250]: [1] DEBUG: server process (pid\n28736) exited with exit code 2\n\nMar 19 22:06:53 nexus postgres[29389]: [9] FATAL 2: open of\n/data00/pgdata/pg_clog/0414 failed: No such file or directory\nMar 19 22:06:53 nexus postgres[22250]: [4] DEBUG: server process (pid\n29389) exited with exit code 2\n\nMar 19 22:11:04 nexus postgres[29491]: [12] FATAL 2: open of\n/data00/pgdata/pg_clog/0353 failed: No such file or directory\nMar 19 22:11:04 nexus postgres[22250]: [7] DEBUG: server process (pid\n29491) exited with exit code 2\n\nMar 19 22:13:34 nexus postgres[29716]: [6] FATAL 2: open of\n/data00/pgdata/pg_clog/0353 failed: No such file or directory\nMar 19 22:13:34 nexus postgres[29700]: [1] DEBUG: server process (pid\n29716) exited with exit code 2\n\nThere aren't any other error messages, just DEBUG messages about how\nPostgreSQL is cleaning up after itself.\n\nThese occur running a complex-ish query, but no more complex than\nothers,\nand less complex than many. The query runs for about 20 seconds, then\nthe error occurs.\n\nBetween attempts, I shut down postgresql and ran 'ipcclean'. No joy\nthere.\n\nHere's what's in my pg_clog directory:\n\n[postgres@nexus postgres]$ l /data00/pgdata/pg_clog/\ntotal 8\n-rw------- 1 postgres postgres 8192 Mar 19 22:05 0000\n\nI'm running version 7.2, with Tom's vacuum analyze patch.\n\nI searched the postgresql.org website for 'pg_clog', came up dry. Found\nsome stuff in the newsgroup archives that looked similar but different\n(more errors than I'm getting), and that I didn't really understand.\n\nAny help appreciated, as always...\n\nGordon.\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt\n\n\n",
"msg_date": "19 Mar 2002 23:25:46 -0500",
"msg_from": "Gordon Runkle <gar@integrated-dynamics.com>",
"msg_from_op": true,
"msg_subject": "pg_clog troubles"
},
{
"msg_contents": "Gordon Runkle <gar@integrated-dynamics.com> writes:\n> I'm dead in the water with pg_clog errors:\n\n> Mar 19 18:25:05 nexus postgres[28736]: [6] FATAL 2: open of\n> /data00/pgdata/pg_clog/007D failed: No such file or directory\n\nAm I right in guessing that the range of file names you have present\nin pgdata/clog/ is nowhere near 007D, let alone the others mentioned?\n\nWe have seen a couple of reports like this now, all seemingly indicating\nthat the clog code is being asked about the commit status of a totally-\noff-the-wall transaction number. I would really like to dig into an\nexample with a debugger ... but the prior reporters have been unable\nto reproduce the problem on demand. If you can provide a chance to\nget this behavior under a gdb microscope, let's talk.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Mar 2002 00:05:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_clog troubles "
}
] |