[
{
"msg_contents": "Hi.\n\nHas anybody gop plpython to work on linux/pg7.1.2 ?\n\nI built it as instructed in the README, but calling the first \nplpython function (stupid() from the test suite) causes immediate \nclosedown.\n\nthere is nothing in the logs either ...\n\nI tried it first with python2.1 and then python 1.5 with exactly \nthe same results.\n\nAny advice on debugging what might go wrong ?\n\n-----------------\nHannu\n",
"msg_date": "Wed, 25 Jul 2001 14:51:37 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Has anybody gop PL/Python to work on linux/pg7.1.2 ?"
}
]
[
{
"msg_contents": "Is there a way to debug a PL/pgSQL function? It's behaving very irradic!\n\nI have two function, one that works and one that doesn't. The part that\ndon't work in func2 is 'SELECT INTO ... ...' and I can't figgure out\nwhy it doesnt't work!\n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@tripnet.se\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Stockholm/Sweden\n\ncolonel fissionable assassination Cuba World Trade Center strategic\nsmuggle congress Rule Psix cryptographic SDI critical Kennedy Semtex\nFBI\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n",
"msg_date": "25 Jul 2001 15:46:27 +0200",
"msg_from": "Turbo Fredriksson <turbo@bayour.com>",
"msg_from_op": true,
"msg_subject": "plpgsql: Debug function?"
},
{
"msg_contents": "On 25 Jul 2001, Turbo Fredriksson wrote:\n\n> Is there a way to debug a PL/pgSQL function? It's behaving very irradic!\n\nIt's crude, but you can output debugging statements w/ RAISE NOTICE\nor catch flawed assumptions by RAISE EXCEPTION.\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Wed, 25 Jul 2001 11:13:34 -0400 (EDT)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql: Debug function?"
},
{
"msg_contents": ">>>>> \"Joel\" == Joel Burton <jburton@scw.org> writes:\n\n Joel> On 25 Jul 2001, Turbo Fredriksson wrote:\n >> Is there a way to debug a PL/pgSQL function? It's behaving very\n >> irradic!\n\n Joel> It's crude, but you can output debugging statements w/ RAISE\n Joel> NOTICE or catch flawed assumptions by RAISE EXCEPTION.\n\nThat's what I've been doing...\n\nThe problem is that a 'SELECT INTO ...' in the function don't work, but\nthe actual SELECT in psql works fine! \n\nThe variable I'm SELECT'ing into don't get initialized...\n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@tripnet.se\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Stockholm/Sweden\n\nradar Marxist counter-intelligence 747 Kennedy Serbian CIA NSA SEAL\nTeam 6 FSF [Hello to all my fans in domestic surveillance] Clinton FBI\nSoviet class struggle\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n",
"msg_date": "25 Jul 2001 17:18:31 +0200",
"msg_from": "Turbo Fredriksson <turbo@bayour.com>",
"msg_from_op": true,
"msg_subject": "Re: plpgsql: Debug function?"
},
{
"msg_contents": "On 25 Jul 2001, Turbo Fredriksson wrote:\n\n> >>>>> \"Joel\" == Joel Burton <jburton@scw.org> writes:\n> \n> Joel> On 25 Jul 2001, Turbo Fredriksson wrote:\n> >> Is there a way to debug a PL/pgSQL function? It's behaving very\n> >> irradic!\n> \n> Joel> It's crude, but you can output debugging statements w/ RAISE\n> Joel> NOTICE or catch flawed assumptions by RAISE EXCEPTION.\n> \n> That's what I've been doing...\n> \n> The problem is that a 'SELECT INTO ...' in the function don't work, but\n> the actual SELECT in psql works fine! \n> \n> The variable I'm SELECT'ing into don't get initialized...\n\nCan you post a simple, reproducible example?\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Wed, 25 Jul 2001 11:25:56 -0400 (EDT)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql: Debug function?"
},
{
"msg_contents": "> Is there a way to debug a PL/pgSQL function? It's behaving very\n> irradic!\n\n> Joel> It's crude, but you can output debugging statements w/ RAISE\n> Joel> NOTICE or catch flawed assumptions by RAISE EXCEPTION.\n\nAlso try turning on query logging, so you can see in the postmaster\nlog the queries plpgsql is sending to the SQL engine. This is especially\nhelpful for catching unexpected substitutions or lack of substitutions\nof plpgsql variables, as in Morgan Curley's recent problem over in\npgsql-sql.\n\nA volunteer to improve plpgsql's debugging/tracing facilities would\nbe favorably received...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Jul 2001 11:46:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: plpgsql: Debug function? "
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n >> Is there a way to debug a PL/pgSQL function? It's behaving very\n >> irradic!\n\n Joel> It's crude, but you can output debugging statements w/ RAISE\n Joel> NOTICE or catch flawed assumptions by RAISE EXCEPTION.\n\n Tom> Also try turning on query logging, so you can see in the\n Tom> postmaster log the queries plpgsql is sending to the SQL\n Tom> engine.\n\nWould that be the 'debug_print_query = true' in posgresql.conf?\n\nIt IS true, but still nothing in the syslog...\n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@tripnet.se\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Stockholm/Sweden\n\nPLO Legion of Doom domestic disruption Clinton spy Rule Psix Nazi\nquiche radar fissionable BATF SDI bomb security NSA\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n",
"msg_date": "26 Jul 2001 12:16:52 +0200",
"msg_from": "Turbo Fredriksson <turbo@bayour.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: plpgsql: Debug function?"
},
{
"msg_contents": "On 26 Jul 2001, Turbo Fredriksson wrote:\n\n> >>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n>\n> Would that be the 'debug_print_query = true' in posgresql.conf?\n>\n> It IS true, but still nothing in the syslog...\n\nI'm not sure. But I expect the logging would go out the postmaster's logs,\nnot necessarily syslog.\n\nTake care,\n\nBill\n\n",
"msg_date": "Fri, 27 Jul 2001 17:09:50 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@zembu.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: plpgsql: Debug function?"
}
]
[
{
"msg_contents": "Hello\n\nConsider two simple tables: AA, BB.\n\nproba=# \\d aa\n Table \"aa\"\n Attribute | Type | Modifier \n-----------+---------+----------\n id | bigint | \n val | integer | \n\nproba=# select * from aa;\n id | val \n----+-----\n 1 | 1\n 2 | 2\n 2 | 2\n 3 | 3\n 3 | 3\n 3 | 3\n(6 rows)\n\nproba=# \\d bb\n Table \"bb\"\n Attribute | Type | Modifier \n-----------+---------+----------\n id | integer | \n val | bigint | \n occured | bigint | \n\nproba=# insert into bb select id,val,count(val) from aa group by id,val;\npqReadData() -- backend closed the channel unexpectedly.\n\tThis probably means the backend terminated abnormally\n\tbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!# \n\nAfter that postmaster sould be rerun.\n\nWhat is wrong ?\nWould you mind to tell me how can I fix the situation.\nUsing PostgreSQL 7.0, Slackware 7.0.\n\nI can give you the similar example on the same subject.\nAgain, two tables: CLICK, OTCHET_BROWS.\nproba=# \\d click\n Table \"click\"\n Attribute | Type | Modifier \n------------+-----------+---------------------------------------------------------------\n id_visitor | bigint | \n ip | inet | \n browser | integer | \n referer | integer | \n date | timestamp | not null default 'Fri Jul 20 21:36:05 2001 EEST'::\"timestamp\"\n\nproba=# select id_visitor, browser, count(browser) from click group by id_visitor,browser;\n id_visitor | browser | count \n------------+---------+-------\n 1 | 1 | 20\n 1 | 2 | 17\n 1 | 3 | 2\n 3 | 2 | 1\n(4 rows)\n\nproba=# insert into otchet_brows select id_visitor, browser, count(browser) from click group by id_visitor,browser;\nINSERT 0 6\nproba=# select * from otchet_brows;\n id_visitor | browser | value | date \n------------+---------+-------+-------------------------------\n 1 | 1 | 2 | Mon Jul 23 17:04:53 2001 EEST\n 1 | 2 | 4 | Mon Jul 23 17:04:53 2001 EEST\n 1 | 1 | 18 | Mon Jul 23 17:04:53 2001 EEST\n 1 | 2 | 13 | Mon Jul 23 
17:04:53 2001 EEST\n 1 | 3 | 2 | Mon Jul 23 17:04:53 2001 EEST\n 3 | 2 | 1 | Mon Jul 23 17:04:53 2001 EEST\n(6 rows)\n\nproba=# \n\nWhat is the reason ?\n\nThanks in advance.\n\n-- \nVladimir Zolotych gsmith@eurocom.od.ua\n",
"msg_date": "Wed, 25 Jul 2001 17:18:34 +0300",
"msg_from": "\"Vladimir V. Zolotych\" <gsmith@eurocom.od.ua>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 7.0 problem (may be bug?)"
},
{
"msg_contents": "\"Vladimir V. Zolotych\" <gsmith@eurocom.od.ua> writes:\n> proba=# insert into bb select id,val,count(val) from aa group by id,val;\n> pqReadData() -- backend closed the channel unexpectedly.\n\nTry 7.1. Prior releases tend to have problems with implicit datatype\nconversions in INSERT ... SELECT.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Jul 2001 11:38:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 7.0 problem (may be bug?) "
}
]
[
{
"msg_contents": "I can't seem to find any posts or docs on the subject\n\n\n",
"msg_date": "Wed, 25 Jul 2001 16:11:55 GMT",
"msg_from": "\"Howard Williams\" <howieshouse@home.com>",
"msg_from_op": true,
"msg_subject": "Can Postgres handle 2-phase commits ?"
},
{
"msg_contents": "Howard Williams wrote:\n> \n> I can't seem to find any posts or docs on the subject\n> \nThis was discussed quite a few months ago, and the answer was \"no\".\n",
"msg_date": "Thu, 26 Jul 2001 11:01:22 -0500",
"msg_from": "\"Keith G. Murphy\" <keithmur@mindspring.com>",
"msg_from_op": false,
"msg_subject": "Re: Can Postgres handle 2-phase commits ?"
}
]
[
{
"msg_contents": "Here is the revised patch to allow the syntax:\n\nLOCK a,b,c;\n\nIt uses the method I described as the \"new\" method in a previous\nmessage.\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n\nIndex: src/backend/commands/command.c\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/backend/commands/command.c,v\nretrieving revision 1.136\ndiff -c -p -r1.136 command.c\n*** src/backend/commands/command.c\t2001/07/16 05:06:57\t1.136\n--- src/backend/commands/command.c\t2001/07/26 00:02:24\n*************** needs_toast_table(Relation rel)\n*** 1994,2019 ****\n void\n LockTableCommand(LockStmt *lockstmt)\n {\n! \tRelation\trel;\n! \tint\t\t\taclresult;\n \n! \trel = heap_openr(lockstmt->relname, NoLock);\n \n! \tif (rel->rd_rel->relkind != RELKIND_RELATION)\n! \t\telog(ERROR, \"LOCK TABLE: %s is not a table\", lockstmt->relname);\n \n! \tif (lockstmt->mode == AccessShareLock)\n! \t\taclresult = pg_aclcheck(lockstmt->relname, GetUserId(), ACL_SELECT);\n! \telse\n! \t\taclresult = pg_aclcheck(lockstmt->relname, GetUserId(),\n! \t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n \n! \tif (aclresult != ACLCHECK_OK)\n! \t\telog(ERROR, \"LOCK TABLE: permission denied\");\n \n! \tLockRelation(rel, lockstmt->mode);\n \n! \theap_close(rel, NoLock);\t/* close rel, keep lock */\n }\n \n \n--- 1994,2170 ----\n void\n LockTableCommand(LockStmt *lockstmt)\n {\n! \tint relCnt;\n \n! \trelCnt = length(lockstmt -> rellist);\n \n! \t/* Handle a single relation lock specially to avoid overhead on\nlikely the\n! \t most common case */\n \n! \tif(relCnt == 1)\n! \t{\n \n! \t\t/* Locking a single table */\n \n! \t\tRelation\trel;\n! \t\tint\t\t\taclresult;\n! \t\tchar *relname;\n \n! \t\trelname = strVal(lfirst(lockstmt->rellist));\n! \n! \t\tfreeList(lockstmt->rellist);\n! \n! \t\trel = heap_openr(relname, NoLock);\n! \n! 
\t\tif (rel->rd_rel->relkind != RELKIND_RELATION)\n! \t\t\telog(ERROR, \"LOCK TABLE: %s is not a table\", relname);\n! \n! \t\tif (lockstmt->mode == AccessShareLock)\n! \t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n! \t\t\t\t\t\t\t\t\tACL_SELECT);\n! \t\telse\n! \t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n! \t\t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n! \n! \t\tif (aclresult != ACLCHECK_OK)\n! \t\t\telog(ERROR, \"LOCK TABLE: permission denied\");\n! \n! \t\tLockRelation(rel, lockstmt->mode);\n! \n! \t\tpfree(relname);\n! \n! \t\theap_close(rel, NoLock);\t/* close rel, keep lock */\n! \t} \n! \telse \n! \t{\n! \t\tList *p;\n! \t\tRelation *relationArray;\n! \t\tRelation *pRel;\n! \t\tRelation *blockingLockTarget;\n! \t\tbool allLocked = false;\n! \n! \t\t/* Locking multiple tables */\n! \n! \t\t/* Create an array of relations */\n! \n! \t\trelationArray = palloc(relCnt * sizeof(Relation));\n! \t\tblockingLockTarget = relationArray;\n! \n! \t\tpRel = relationArray;\n! \n! \t\t/* Iterate over the list and populate the relation array */\n! \n! \t\tforeach(p, lockstmt->rellist)\n! \t\t{\n! \t\t\tchar *relname = strVal(lfirst(p));\n! \t\t\tint\t\t\taclresult;\n! \n! \t\t\t*pRel = heap_openr(relname, NoLock);\n! \n! \t\t\tif ((*pRel)->rd_rel->relkind != RELKIND_RELATION)\n! \t\t\t\telog(ERROR, \"LOCK TABLE: %s is not a table\", \n! \t\t\t\t\t relname);\n! \n! \t\t\tif (lockstmt->mode == AccessShareLock)\n! \t\t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n! \t\t\t\t\t\t\t\t\t\tACL_SELECT);\n! \t\t\telse\n! \t\t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n! \t\t\t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n! \n! \t\t\tif (aclresult != ACLCHECK_OK)\n! \t\t\t\telog(ERROR, \"LOCK TABLE: permission denied\");\n! \n! \t\t\tpRel++;\n! \t\t\tpfree(relname);\n! \t\t}\n! \n! \t\t/* Now acquire locks on all the relations */\n! \n! \t\twhile(!allLocked) \n! \t\t{\n! \n! \t\t\tallLocked = true;\n! \n! 
\t\t\t/* Lock the blocking lock target (initially the first lock\n! \t\t\t in the user's list */\n! \n! \t\t\tLockRelation(*blockingLockTarget, lockstmt->mode);\n! \n! \t\t\t/* Lock is now obtained on the lock target, now grab locks in a\n! \t\t\t non-blocking way on the rest of the list */\n! \t\t\t\n! \t\t\tfor(pRel = blockingLockTarget + 1;\n! \t\t\t\tpRel < relationArray + relCnt;\n! \t\t\t\tpRel++)\n! \t\t\t{\n! \t\t\t\tif(!ConditionalLockRelation(*pRel, lockstmt->mode)) \n! \t\t\t\t{\n! \t\t\t\t\tRelation *pRelUnlock;\n! \n! \t\t\t\t\t/* Flag that all relations were not successfully \n! \t\t\t\t\t locked */\n! \t\t\t\t\t\n! \t\t\t\t\tallLocked = false;\n! \n! \t\t\t\t\t/* A lock wasn't obtained, so unlock all others */\n! \n! \t\t\t\t\tfor(pRelUnlock = blockingLockTarget;\n! \t\t\t\t\t\tpRelUnlock < pRel;\n! \t\t\t\t\t\tpRelUnlock++) \n! \t\t\t\t\t\tUnlockRelation(*pRelUnlock, lockstmt->mode);\n! \t\t\t\t\t\n! \t\t\t\t\t/* Next time, do our blocking lock on the contended lock */\n! \n! \t\t\t\t\tblockingLockTarget = pRel;\n! \n! \t\t\t\t\t/* Now break out and try again */\n! \n! \t\t\t\t\tbreak;\n! \t\t\t\t}\n! \t\t\t}\n! \n! \t\t\t/* Now, lock anything before the blocking lock target in the lock\n! \t\t\t target array, if we were successful in getting locks on\n! \t\t\t everything after and including the blocking target */\n! \n! \t\t\tif(allLocked)\n! \t\t\t{\n! \t\t\t\tfor(pRel = relationArray;\n! \t\t\t\t\tpRel < blockingLockTarget;\n! \t\t\t\t\tpRel++)\n! \t\t\t\t{\n! \t\t\t\t\tif(!ConditionalLockRelation(*pRel, lockstmt->mode)) \n! \t\t\t\t\t{\n! \t\t\t\t\t\tRelation *pRelUnlock;\n! \t\t\t\t\t\t\n! \t\t\t\t\t\t/* Flag that all relations were not successfully \n! \t\t\t\t\t\t locked */\n! \n! \t\t\t\t\t\tallLocked = false;\n! \n! \t\t\t\t\t\t/* Lock wasn't obtained, so unlock all others */\n! \n! \t\t\t\t\t\tfor(pRelUnlock = relationArray;\n! \t\t\t\t\t\t\tpRelUnlock < pRel;\n! \t\t\t\t\t\t\tpRelUnlock++) \n! 
\t\t\t\t\t\t\tUnlockRelation(*pRelUnlock, lockstmt->mode);\n! \t\t\t\t\t\t\n! \t\t\t\t\t\t/* Next time, do our blocking lock on the contended \n! \t\t\t\t\t\t lock */\n! \n! \t\t\t\t\t\tblockingLockTarget = pRel;\n! \t\t\t\t\t\n! \t\t\t\t\t\t/* Now break out and try again */\n! \n! \t\t\t\t\t\tbreak;\n! \t\t\t\t\t}\n! \t\t\t\t}\n! \t\t\t}\n! \t\t}\n! \n! \t\tpfree(relationArray);\n! \t}\n }\n \n \nIndex: src/backend/nodes/copyfuncs.c\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/backend/nodes/copyfuncs.c,v\nretrieving revision 1.148\ndiff -c -p -r1.148 copyfuncs.c\n*** src/backend/nodes/copyfuncs.c\t2001/07/16 19:07:37\t1.148\n--- src/backend/nodes/copyfuncs.c\t2001/07/26 00:02:24\n*************** _copyLockStmt(LockStmt *from)\n*** 2425,2432 ****\n {\n \tLockStmt *newnode = makeNode(LockStmt);\n \n! \tif (from->relname)\n! \t\tnewnode->relname = pstrdup(from->relname);\n \tnewnode->mode = from->mode;\n \n \treturn newnode;\n--- 2425,2432 ----\n {\n \tLockStmt *newnode = makeNode(LockStmt);\n \n! \tNode_Copy(from, newnode, rellist);\n! \t\n \tnewnode->mode = from->mode;\n \n \treturn newnode;\nIndex: src/backend/nodes/equalfuncs.c\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/backend/nodes/equalfuncs.c,v\nretrieving revision 1.96\ndiff -c -p -r1.96 equalfuncs.c\n*** src/backend/nodes/equalfuncs.c\t2001/07/16 19:07:38\t1.96\n--- src/backend/nodes/equalfuncs.c\t2001/07/26 00:02:24\n*************** _equalDropUserStmt(DropUserStmt *a, Drop\n*** 1283,1289 ****\n static bool\n _equalLockStmt(LockStmt *a, LockStmt *b)\n {\n! \tif (!equalstr(a->relname, b->relname))\n \t\treturn false;\n \tif (a->mode != b->mode)\n \t\treturn false;\n--- 1283,1289 ----\n static bool\n _equalLockStmt(LockStmt *a, LockStmt *b)\n {\n! 
\tif (!equal(a->rellist, b->rellist))\n \t\treturn false;\n \tif (a->mode != b->mode)\n \t\treturn false;\nIndex: src/backend/parser/gram.y\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.238\ndiff -c -p -r2.238 gram.y\n*** src/backend/parser/gram.y\t2001/07/16 19:07:40\t2.238\n--- src/backend/parser/gram.y\t2001/07/26 00:02:25\n*************** DeleteStmt: DELETE FROM relation_expr w\n*** 3280,3290 ****\n \t\t\t\t}\n \t\t;\n \n! LockStmt:\tLOCK_P opt_table relation_name opt_lock\n \t\t\t\t{\n \t\t\t\t\tLockStmt *n = makeNode(LockStmt);\n \n! \t\t\t\t\tn->relname = $3;\n \t\t\t\t\tn->mode = $4;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\n--- 3280,3290 ----\n \t\t\t\t}\n \t\t;\n \n! LockStmt:\tLOCK_P opt_table relation_name_list opt_lock\n \t\t\t\t{\n \t\t\t\t\tLockStmt *n = makeNode(LockStmt);\n \n! \t\t\t\t\tn->rellist = $3;\n \t\t\t\t\tn->mode = $4;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\nIndex: src/include/nodes/parsenodes.h\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\nretrieving revision 1.136\ndiff -c -p -r1.136 parsenodes.h\n*** src/include/nodes/parsenodes.h\t2001/07/16 19:07:40\t1.136\n--- src/include/nodes/parsenodes.h\t2001/07/26 00:02:26\n*************** typedef struct VariableResetStmt\n*** 760,766 ****\n typedef struct LockStmt\n {\n \tNodeTag\t\ttype;\n! \tchar\t *relname;\t\t/* relation to lock */\n \tint\t\t\tmode;\t\t\t/* lock mode */\n } LockStmt;\n \n--- 760,766 ----\n typedef struct LockStmt\n {\n \tNodeTag\t\ttype;\n! 
\tList\t *rellist;\t\t/* relation to lock */\n \tint\t\t\tmode;\t\t\t/* lock mode */\n } LockStmt;\n \nIndex: src/interfaces/ecpg/preproc/preproc.y\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/interfaces/ecpg/preproc/preproc.y,v\nretrieving revision 1.146\ndiff -c -p -r1.146 preproc.y\n*** src/interfaces/ecpg/preproc/preproc.y\t2001/07/16 05:07:00\t1.146\n--- src/interfaces/ecpg/preproc/preproc.y\t2001/07/26 00:02:27\n*************** DeleteStmt: DELETE FROM relation_expr w\n*** 2421,2427 ****\n \t\t\t\t}\n \t\t;\n \n! LockStmt: LOCK_P opt_table relation_name opt_lock\n \t\t\t\t{\n \t\t\t\t\t$$ = cat_str(4, make_str(\"lock\"), $2, $3, $4);\n \t\t\t\t}\n--- 2421,2427 ----\n \t\t\t\t}\n \t\t;\n \n! LockStmt: LOCK_P opt_table relation_name_list opt_lock\n \t\t\t\t{\n \t\t\t\t\t$$ = cat_str(4, make_str(\"lock\"), $2, $3, $4);\n \t\t\t\t}\n",
"msg_date": "Wed, 25 Jul 2001 20:27:51 -0400",
"msg_from": "Neil Padgett <npadgett@redhat.com>",
"msg_from_op": true,
"msg_subject": "Revised Patch to allow multiple table locks in \"Unison\" "
},
{
"msg_contents": "\nThe problem with this patch is that it doesn't always lock the tables in\nthe order supplied by the user, leading to possible deadlock. My guess\nis that you need to try locking A, B, C and if B hangs, I think you need\nto sleep on B, and when you get it, release the lock on B and try A, B,\nC again. I know this is a pain and could fail multiple times, but I\nthink we have to do it this ay.\n\n\n> Here is the revised patch to allow the syntax:\n> \n> LOCK a,b,c;\n> \n> It uses the method I described as the \"new\" method in a previous\n> message.\n> \n> Neil\n> \n> -- \n> Neil Padgett\n> Red Hat Canada Ltd. E-Mail: npadgett@redhat.com\n> 2323 Yonge Street, Suite #300, \n> Toronto, ON M4P 2C9\n> \n> Index: src/backend/commands/command.c\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/backend/commands/command.c,v\n> retrieving revision 1.136\n> diff -c -p -r1.136 command.c\n> *** src/backend/commands/command.c\t2001/07/16 05:06:57\t1.136\n> --- src/backend/commands/command.c\t2001/07/26 00:02:24\n> *************** needs_toast_table(Relation rel)\n> *** 1994,2019 ****\n> void\n> LockTableCommand(LockStmt *lockstmt)\n> {\n> ! \tRelation\trel;\n> ! \tint\t\t\taclresult;\n> \n> ! \trel = heap_openr(lockstmt->relname, NoLock);\n> \n> ! \tif (rel->rd_rel->relkind != RELKIND_RELATION)\n> ! \t\telog(ERROR, \"LOCK TABLE: %s is not a table\", lockstmt->relname);\n> \n> ! \tif (lockstmt->mode == AccessShareLock)\n> ! \t\taclresult = pg_aclcheck(lockstmt->relname, GetUserId(), ACL_SELECT);\n> ! \telse\n> ! \t\taclresult = pg_aclcheck(lockstmt->relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n> \n> ! \tif (aclresult != ACLCHECK_OK)\n> ! \t\telog(ERROR, \"LOCK TABLE: permission denied\");\n> \n> ! \tLockRelation(rel, lockstmt->mode);\n> \n> ! 
\theap_close(rel, NoLock);\t/* close rel, keep lock */\n> }\n> \n> \n> --- 1994,2170 ----\n> void\n> LockTableCommand(LockStmt *lockstmt)\n> {\n> ! \tint relCnt;\n> \n> ! \trelCnt = length(lockstmt -> rellist);\n> \n> ! \t/* Handle a single relation lock specially to avoid overhead on\n> likely the\n> ! \t most common case */\n> \n> ! \tif(relCnt == 1)\n> ! \t{\n> \n> ! \t\t/* Locking a single table */\n> \n> ! \t\tRelation\trel;\n> ! \t\tint\t\t\taclresult;\n> ! \t\tchar *relname;\n> \n> ! \t\trelname = strVal(lfirst(lockstmt->rellist));\n> ! \n> ! \t\tfreeList(lockstmt->rellist);\n> ! \n> ! \t\trel = heap_openr(relname, NoLock);\n> ! \n> ! \t\tif (rel->rd_rel->relkind != RELKIND_RELATION)\n> ! \t\t\telog(ERROR, \"LOCK TABLE: %s is not a table\", relname);\n> ! \n> ! \t\tif (lockstmt->mode == AccessShareLock)\n> ! \t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\t\tACL_SELECT);\n> ! \t\telse\n> ! \t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n> ! \n> ! \t\tif (aclresult != ACLCHECK_OK)\n> ! \t\t\telog(ERROR, \"LOCK TABLE: permission denied\");\n> ! \n> ! \t\tLockRelation(rel, lockstmt->mode);\n> ! \n> ! \t\tpfree(relname);\n> ! \n> ! \t\theap_close(rel, NoLock);\t/* close rel, keep lock */\n> ! \t} \n> ! \telse \n> ! \t{\n> ! \t\tList *p;\n> ! \t\tRelation *relationArray;\n> ! \t\tRelation *pRel;\n> ! \t\tRelation *blockingLockTarget;\n> ! \t\tbool allLocked = false;\n> ! \n> ! \t\t/* Locking multiple tables */\n> ! \n> ! \t\t/* Create an array of relations */\n> ! \n> ! \t\trelationArray = palloc(relCnt * sizeof(Relation));\n> ! \t\tblockingLockTarget = relationArray;\n> ! \n> ! \t\tpRel = relationArray;\n> ! \n> ! \t\t/* Iterate over the list and populate the relation array */\n> ! \n> ! \t\tforeach(p, lockstmt->rellist)\n> ! \t\t{\n> ! \t\t\tchar *relname = strVal(lfirst(p));\n> ! \t\t\tint\t\t\taclresult;\n> ! \n> ! \t\t\t*pRel = heap_openr(relname, NoLock);\n> ! \n> ! 
\t\t\tif ((*pRel)->rd_rel->relkind != RELKIND_RELATION)\n> ! \t\t\t\telog(ERROR, \"LOCK TABLE: %s is not a table\", \n> ! \t\t\t\t\t relname);\n> ! \n> ! \t\t\tif (lockstmt->mode == AccessShareLock)\n> ! \t\t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\t\t\tACL_SELECT);\n> ! \t\t\telse\n> ! \t\t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n> ! \n> ! \t\t\tif (aclresult != ACLCHECK_OK)\n> ! \t\t\t\telog(ERROR, \"LOCK TABLE: permission denied\");\n> ! \n> ! \t\t\tpRel++;\n> ! \t\t\tpfree(relname);\n> ! \t\t}\n> ! \n> ! \t\t/* Now acquire locks on all the relations */\n> ! \n> ! \t\twhile(!allLocked) \n> ! \t\t{\n> ! \n> ! \t\t\tallLocked = true;\n> ! \n> ! \t\t\t/* Lock the blocking lock target (initially the first lock\n> ! \t\t\t in the user's list */\n> ! \n> ! \t\t\tLockRelation(*blockingLockTarget, lockstmt->mode);\n> ! \n> ! \t\t\t/* Lock is now obtained on the lock target, now grab locks in a\n> ! \t\t\t non-blocking way on the rest of the list */\n> ! \t\t\t\n> ! \t\t\tfor(pRel = blockingLockTarget + 1;\n> ! \t\t\t\tpRel < relationArray + relCnt;\n> ! \t\t\t\tpRel++)\n> ! \t\t\t{\n> ! \t\t\t\tif(!ConditionalLockRelation(*pRel, lockstmt->mode)) \n> ! \t\t\t\t{\n> ! \t\t\t\t\tRelation *pRelUnlock;\n> ! \n> ! \t\t\t\t\t/* Flag that all relations were not successfully \n> ! \t\t\t\t\t locked */\n> ! \t\t\t\t\t\n> ! \t\t\t\t\tallLocked = false;\n> ! \n> ! \t\t\t\t\t/* A lock wasn't obtained, so unlock all others */\n> ! \n> ! \t\t\t\t\tfor(pRelUnlock = blockingLockTarget;\n> ! \t\t\t\t\t\tpRelUnlock < pRel;\n> ! \t\t\t\t\t\tpRelUnlock++) \n> ! \t\t\t\t\t\tUnlockRelation(*pRelUnlock, lockstmt->mode);\n> ! \t\t\t\t\t\n> ! \t\t\t\t\t/* Next time, do our blocking lock on the contended lock */\n> ! \n> ! \t\t\t\t\tblockingLockTarget = pRel;\n> ! \n> ! \t\t\t\t\t/* Now break out and try again */\n> ! \n> ! \t\t\t\t\tbreak;\n> ! \t\t\t\t}\n> ! \t\t\t}\n> ! \n> ! 
\t\t\t/* Now, lock anything before the blocking lock target in the lock\n> ! \t\t\t target array, if we were successful in getting locks on\n> ! \t\t\t everything after and including the blocking target */\n> ! \n> ! \t\t\tif(allLocked)\n> ! \t\t\t{\n> ! \t\t\t\tfor(pRel = relationArray;\n> ! \t\t\t\t\tpRel < blockingLockTarget;\n> ! \t\t\t\t\tpRel++)\n> ! \t\t\t\t{\n> ! \t\t\t\t\tif(!ConditionalLockRelation(*pRel, lockstmt->mode)) \n> ! \t\t\t\t\t{\n> ! \t\t\t\t\t\tRelation *pRelUnlock;\n> ! \t\t\t\t\t\t\n> ! \t\t\t\t\t\t/* Flag that all relations were not successfully \n> ! \t\t\t\t\t\t locked */\n> ! \n> ! \t\t\t\t\t\tallLocked = false;\n> ! \n> ! \t\t\t\t\t\t/* Lock wasn't obtained, so unlock all others */\n> ! \n> ! \t\t\t\t\t\tfor(pRelUnlock = relationArray;\n> ! \t\t\t\t\t\t\tpRelUnlock < pRel;\n> ! \t\t\t\t\t\t\tpRelUnlock++) \n> ! \t\t\t\t\t\t\tUnlockRelation(*pRelUnlock, lockstmt->mode);\n> ! \t\t\t\t\t\t\n> ! \t\t\t\t\t\t/* Next time, do our blocking lock on the contended \n> ! \t\t\t\t\t\t lock */\n> ! \n> ! \t\t\t\t\t\tblockingLockTarget = pRel;\n> ! \t\t\t\t\t\n> ! \t\t\t\t\t\t/* Now break out and try again */\n> ! \n> ! \t\t\t\t\t\tbreak;\n> ! \t\t\t\t\t}\n> ! \t\t\t\t}\n> ! \t\t\t}\n> ! \t\t}\n> ! \n> ! \t\tpfree(relationArray);\n> ! \t}\n> }\n> \n> \n> Index: src/backend/nodes/copyfuncs.c\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/backend/nodes/copyfuncs.c,v\n> retrieving revision 1.148\n> diff -c -p -r1.148 copyfuncs.c\n> *** src/backend/nodes/copyfuncs.c\t2001/07/16 19:07:37\t1.148\n> --- src/backend/nodes/copyfuncs.c\t2001/07/26 00:02:24\n> *************** _copyLockStmt(LockStmt *from)\n> *** 2425,2432 ****\n> {\n> \tLockStmt *newnode = makeNode(LockStmt);\n> \n> ! \tif (from->relname)\n> ! 
\t\tnewnode->relname = pstrdup(from->relname);\n> \tnewnode->mode = from->mode;\n> \n> \treturn newnode;\n> --- 2425,2432 ----\n> {\n> \tLockStmt *newnode = makeNode(LockStmt);\n> \n> ! \tNode_Copy(from, newnode, rellist);\n> ! \t\n> \tnewnode->mode = from->mode;\n> \n> \treturn newnode;\n> Index: src/backend/nodes/equalfuncs.c\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/backend/nodes/equalfuncs.c,v\n> retrieving revision 1.96\n> diff -c -p -r1.96 equalfuncs.c\n> *** src/backend/nodes/equalfuncs.c\t2001/07/16 19:07:38\t1.96\n> --- src/backend/nodes/equalfuncs.c\t2001/07/26 00:02:24\n> *************** _equalDropUserStmt(DropUserStmt *a, Drop\n> *** 1283,1289 ****\n> static bool\n> _equalLockStmt(LockStmt *a, LockStmt *b)\n> {\n> ! \tif (!equalstr(a->relname, b->relname))\n> \t\treturn false;\n> \tif (a->mode != b->mode)\n> \t\treturn false;\n> --- 1283,1289 ----\n> static bool\n> _equalLockStmt(LockStmt *a, LockStmt *b)\n> {\n> ! \tif (!equal(a->rellist, b->rellist))\n> \t\treturn false;\n> \tif (a->mode != b->mode)\n> \t\treturn false;\n> Index: src/backend/parser/gram.y\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/gram.y,v\n> retrieving revision 2.238\n> diff -c -p -r2.238 gram.y\n> *** src/backend/parser/gram.y\t2001/07/16 19:07:40\t2.238\n> --- src/backend/parser/gram.y\t2001/07/26 00:02:25\n> *************** DeleteStmt: DELETE FROM relation_expr w\n> *** 3280,3290 ****\n> \t\t\t\t}\n> \t\t;\n> \n> ! LockStmt:\tLOCK_P opt_table relation_name opt_lock\n> \t\t\t\t{\n> \t\t\t\t\tLockStmt *n = makeNode(LockStmt);\n> \n> ! \t\t\t\t\tn->relname = $3;\n> \t\t\t\t\tn->mode = $4;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> --- 3280,3290 ----\n> \t\t\t\t}\n> \t\t;\n> \n> ! LockStmt:\tLOCK_P opt_table relation_name_list opt_lock\n> \t\t\t\t{\n> \t\t\t\t\tLockStmt *n = makeNode(LockStmt);\n> \n> ! 
\t\t\t\t\tn->rellist = $3;\n> \t\t\t\t\tn->mode = $4;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> Index: src/include/nodes/parsenodes.h\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\n> retrieving revision 1.136\n> diff -c -p -r1.136 parsenodes.h\n> *** src/include/nodes/parsenodes.h\t2001/07/16 19:07:40\t1.136\n> --- src/include/nodes/parsenodes.h\t2001/07/26 00:02:26\n> *************** typedef struct VariableResetStmt\n> *** 760,766 ****\n> typedef struct LockStmt\n> {\n> \tNodeTag\t\ttype;\n> ! \tchar\t *relname;\t\t/* relation to lock */\n> \tint\t\t\tmode;\t\t\t/* lock mode */\n> } LockStmt;\n> \n> --- 760,766 ----\n> typedef struct LockStmt\n> {\n> \tNodeTag\t\ttype;\n> ! \tList\t *rellist;\t\t/* relation to lock */\n> \tint\t\t\tmode;\t\t\t/* lock mode */\n> } LockStmt;\n> \n> Index: src/interfaces/ecpg/preproc/preproc.y\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/interfaces/ecpg/preproc/preproc.y,v\n> retrieving revision 1.146\n> diff -c -p -r1.146 preproc.y\n> *** src/interfaces/ecpg/preproc/preproc.y\t2001/07/16 05:07:00\t1.146\n> --- src/interfaces/ecpg/preproc/preproc.y\t2001/07/26 00:02:27\n> *************** DeleteStmt: DELETE FROM relation_expr w\n> *** 2421,2427 ****\n> \t\t\t\t}\n> \t\t;\n> \n> ! LockStmt: LOCK_P opt_table relation_name opt_lock\n> \t\t\t\t{\n> \t\t\t\t\t$$ = cat_str(4, make_str(\"lock\"), $2, $3, $4);\n> \t\t\t\t}\n> --- 2421,2427 ----\n> \t\t\t\t}\n> \t\t;\n> \n> ! 
LockStmt: LOCK_P opt_table relation_name_list opt_lock\n> \t\t\t\t{\n> \t\t\t\t\t$$ = cat_str(4, make_str(\"lock\"), $2, $3, $4);\n> \t\t\t\t}\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 26 Jul 2001 17:48:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
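As a concrete illustration of what the diff above changes (table names here are hypothetical): the `LockStmt` production now takes a `relation_name_list` instead of a single `relation_name`, so one statement can name several tables, with the optional lock mode applying to all of them:

```sql
BEGIN;
-- Old form, still valid: one relation per LOCK statement.
LOCK TABLE a;
-- New form enabled by the relation_name_list grammar: all listed
-- relations are locked "in unison", in a single statement.
LOCK TABLE a, b, c IN ACCESS EXCLUSIVE MODE;
COMMIT;
```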
{
"msg_contents": "Bruce Momjian wrote:\n> \n> The problem with this patch is that it doesn't always lock the tables in\n> the order supplied by the user, leading to possible deadlock. My guess\n> is that you need to try locking A, B, C and if B hangs, I think you need\n> to sleep on B, and when you get it, release the lock on B and try A, B,\n> C again. I know this is a pain and could fail multiple times, but I\n> think we have to do it this ay.\n> \n\nDeadlocks are not possible with this patch. The four conditions needed\nfor deadlock are (according to Operating Systems: Internals and Design\nPrinciples, 4th Ed. by W. Stallings):\n\n1. Mutual exclusion: Only one process may use a resource at a time.\n2. Hold and wait: A process may hold allocated resources while awaiting\nassignment of others.\n3. No preemption: No resources can be forcibly removed from a process\nholding it.\n4. Circular wait: A closed chain of processes exists, such that each\nprocess holds at least one resource needed by the next process in the\nchain.\n\nFor deadlock prevention one needs only to prevent the existence of\nat least one of the four conditions. \n\n\nThe patch code never holds any of requested locks, while waiting for a \nlocked relation to become free. If a lock on all tables in the lock list\ncannot be acquired at once, it backs off and releases all locks.\n\nStallings writes about preventing condition 3: \"This condition can be\nprevented in several ways. [. . .] [One way is to require that,] if a\nprocess holding certain resources is denied a further request, that\nprocess must release its original resources and, if necessary, request\nthem again together with the additional resources.\" \n\nThis is exactly what the patch does. Observe that if one lock is not\navailable, the patch releases all locks so far acquired, and then\nacquires\nthe locks again. Hence, condition 3 is prevented, and so deadlock is\nprevented.\n\nNeil\n\np.s. 
Is this mailing list always this slow?\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n",
"msg_date": "Fri, 27 Jul 2001 14:36:25 -0400",
"msg_from": "Neil Padgett <npadgett@redhat.com>",
"msg_from_op": true,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > The problem with this patch is that it doesn't always lock the tables in\n> > the order supplied by the user, leading to possible deadlock. My guess\n> > is that you need to try locking A, B, C and if B hangs, I think you need\n> > to sleep on B, and when you get it, release the lock on B and try A, B,\n> > C again. I know this is a pain and could fail multiple times, but I\n> > think we have to do it this ay.\n> > \n> \n> Deadlocks are not possible with this patch. The four conditions needed\n> for deadlock are (according to Operating Systems: Internals and Design\n> Principles, 4th Ed. by W. Stallings):\n> \n...\n> \n> The patch code never holds any of requested locks, while waiting for a \n> locked relation to become free. If a lock on all tables in the lock list\n> cannot be acquired at once, it backs off and releases all locks.\n> \n> Stallings writes about preventing condition 3: \"This condition can be\n> prevented in several ways. [. . .] [One way is to require that,] if a\n> process holding certain resources is denied a further request, that\n> process must release its original resources and, if necessary, request\n> them again together with the additional resources.\" \n> \n> This is exactly what the patch does. Observe that if one lock is not\n> available, the patch releases all locks so far acquired, and then\n> acquires\n> the locks again. Hence, condition 3 is prevented, and so deadlock is\n> prevented.\n\nExcellent point. I had not considered the fact that you don't hang\nwaiting for the other locks; you just release them all and try again.\n\nLooks like a great patch, and it seems better than the OID patch in many\nways.\n\n> p.s. Is this mailing list always this slow?\n\nNot sure. I have gotten patches stuck in the patches queue recently. \nNot sure on the cause. 
Marc may know.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 29 Jul 2001 01:08:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "Your patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Bruce Momjian wrote:\n> > \n> > The problem with this patch is that it doesn't always lock the tables in\n> > the order supplied by the user, leading to possible deadlock. My guess\n> > is that you need to try locking A, B, C and if B hangs, I think you need\n> > to sleep on B, and when you get it, release the lock on B and try A, B,\n> > C again. I know this is a pain and could fail multiple times, but I\n> > think we have to do it this ay.\n> > \n> \n> Deadlocks are not possible with this patch. The four conditions needed\n> for deadlock are (according to Operating Systems: Internals and Design\n> Principles, 4th Ed. by W. Stallings):\n> \n> 1. Mutual exclusion: Only one process may use a resource at a time.\n> 2. Hold and wait: A process may hold allocated resources while awaiting\n> assignment of others.\n> 3. No preemption: No resources can be forcibly removed from a process\n> holding it.\n> 4. Circular wait: A closed chain of processes exists, such that each\n> process holds at least one resource needed by the next process in the\n> chain.\n> \n> For deadlock prevention one needs only to prevent the existence of\n> at least one of the four conditions. \n> \n> \n> The patch code never holds any of requested locks, while waiting for a \n> locked relation to become free. If a lock on all tables in the lock list\n> cannot be acquired at once, it backs off and releases all locks.\n> \n> Stallings writes about preventing condition 3: \"This condition can be\n> prevented in several ways. [. . .] [One way is to require that,] if a\n> process holding certain resources is denied a further request, that\n> process must release its original resources and, if necessary, request\n> them again together with the additional resources.\" \n> \n> This is exactly what the patch does. 
Observe that if one lock is not\n> available, the patch releases all locks so far acquired, and then\n> acquires\n> the locks again. Hence, condition 3 is prevented, and so deadlock is\n> prevented.\n> \n> Neil\n> \n> p.s. Is this mailing list always this slow?\n> \n> -- \n> Neil Padgett\n> Red Hat Canada Ltd. E-Mail: npadgett@redhat.com\n> 2323 Yonge Street, Suite #300, \n> Toronto, ON M4P 2C9\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 29 Jul 2001 01:08:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian wrote:\n> > >\n\n[snip]\n\n> >\n> > Deadlocks are not possible with this patch. The four conditions needed\n> > for deadlock are (according to Operating Systems: Internals and Design\n> > Principles, 4th Ed. by W. Stallings):\n> >\n> ...\n> >\n> > The patch code never holds any of requested locks, while waiting for a\n> > locked relation to become free. If a lock on all tables in the lock list\n> > cannot be acquired at once, it backs off and releases all locks.\n> >\n> > Stallings writes about preventing condition 3: \"This condition can be\n> > prevented in several ways. [. . .] [One way is to require that,] if a\n> > process holding certain resources is denied a further request, that\n> > process must release its original resources and, if necessary, request\n> > them again together with the additional resources.\"\n> >\n> > This is exactly what the patch does. Observe that if one lock is not\n> > available, the patch releases all locks so far acquired, and then\n> > acquires\n> > the locks again. Hence, condition 3 is prevented, and so deadlock is\n> > prevented.\n> \n> Excellent point. I had not considered the fact that you don't hang\n> waiting for the other locks; you just release them all and try again.\n> \n\nI have a question.\nWhat will happen when the second table is locked for a long time \nthough the first table isn't locked ?\n\nregards,\nHiroshi Ioue\n",
"msg_date": "Mon, 30 Jul 2001 09:11:06 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "On Mon, 30 Jul 2001, Hiroshi Inoue wrote:\n\n> I have a question.\n> What will happen when the second table is locked for a long time\n> though the first table isn't locked ?\n\nConsider the case:\n\nLOCK a,b;\n\nAssume a is free (i.e. not locked), but b is busy (i.e. locked).\n\nFirst the system will do a blocking lock attempt on a, which will return\nimmediately, since a was free. Table a is now locked. Now, the system will\ntry a non-blocking lock on b. But, b is busy so the lock attempt will fail\nimmediately (since the lock attempt was non-blocking). So, the system will\nback off, and the lock on a is released.\n\nNext, a blocking lock attempt will be made on b. (Since it was busy last\ntime, we want to wait for it to become free.) The lock call will block\nuntil b becomes free. At that time, the lock attempt will return, and b\nwill be locked. Then, a non-blocking lock attempt will be made on table a.\n(Recall that we don't have a lock on it, since we released it during\nback-off earlier.) Assuming a is still free, it will be locked and the\nLOCK command will complete. Otherwise, if a is busy, the lock attempt will\nthen restart with a blocking lock attempt on a. The procedure will\ncontinue until all tables are free to lock.\n\nIn summary, no locks are held while waiting for tables to become free --\nin essence, the tables are locked all at once, once all tables in the\nLOCK statement are free.\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300,\nToronto, ON M4P 2C9\n\n\n",
"msg_date": "Sun, 29 Jul 2001 20:30:57 -0400 (EDT)",
"msg_from": "Neil Padgett <npadgett@redhat.com>",
"msg_from_op": true,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
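Neil's walkthrough can be modelled with ordinary mutexes. The sketch below (Python threading, purely an illustrative stand-alone model, not the backend's lock-manager code) implements the same rule: take a blocking lock on the table that was busy last time, try the rest non-blocking, and on any failure release everything and retry starting from the lock that was busy:

```python
import threading

def lock_all(locks):
    """Acquire every lock in `locks` while never holding any lock
    during a blocking wait (a model of the LOCK a,b,... back-off)."""
    n = len(locks)
    start = 0                              # index of the lock to wait on next
    while True:
        order = locks[start:] + locks[:start]
        order[0].acquire()                 # blocking wait, holding nothing
        acquired = [order[0]]
        failed_at = None
        for i, lk in enumerate(order[1:], 1):
            if lk.acquire(blocking=False):     # non-blocking attempt
                acquired.append(lk)
            else:
                failed_at = i
                break
        if failed_at is None:
            return                         # all locks held simultaneously
        for held in acquired:              # back off: drop everything held
            held.release()
        start = (start + failed_at) % n    # next round waits on the busy one

a, b = threading.Lock(), threading.Lock()
done = []

def worker(order, name):
    # Opposite lock orders would deadlock a naive hold-and-wait scheme.
    lock_all(order)
    done.append(name)
    for lk in order:
        lk.release()

t1 = threading.Thread(target=worker, args=([a, b], "p1"))
t2 = threading.Thread(target=worker, args=([b, a], "p2"))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(done))        # ['p1', 'p2']: both workers completed
```

Because no lock is ever held while blocking, the hold-and-wait condition never arises, which is the condition-3 argument from earlier in the thread.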
{
"msg_contents": "> On Mon, 30 Jul 2001, Hiroshi Inoue wrote:\n> \n> > I have a question.\n> > What will happen when the second table is locked for a long time\n> > though the first table isn't locked ?\n> \n> Consider the case:\n> \n> LOCK a,b;\n> \n> Assume a is free (i.e. not locked), but b is busy (i.e. locked).\n> \n> First the system will do a blocking lock attempt on a, which will return\n> immediately, since a was free. Table a is now locked. Now, the system will\n> try a non-blocking lock on b. But, b is busy so the lock attempt will fail\n> immediately (since the lock attempt was non-blocking). So, the system will\n> back off, and the lock on a is released.\n> \n> Next, a blocking lock attempt will be made on b. (Since it was busy last\n> time, we want to wait for it to become free.) The lock call will block\n> until b becomes free. At that time, the lock attempt will return, and b\n> will be locked. Then, a non-blocking lock attempt will be made on table a.\n> (Recall that we don't have a lock on it, since we released it during\n> back-off earlier.) Assuming a is still free, it will be locked and the\n> LOCK command will complete. Otherwise, if a is busy, the lock attempt will\n> then restart with a blocking lock attempt on a. The procedure will\n> continue until all tables are free to lock.\n> \n> In summary, no locks are held while waiting for tables to become free --\n> in essence, the tables are locked all at once, once all tables in the\n> LOCK statement are free.\n\nThe more I think about it the more I like it. I never would have\nthought of the idea myself.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 29 Jul 2001 22:49:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "> Stallings writes about preventing condition 3: \"This condition can be\n> prevented in several ways. [. . .] [One way is to require that,] if a\n> process holding certain resources is denied a further request, that\n> process must release its original resources and, if necessary, request\n> them again together with the additional resources.\"\n> \n> This is exactly what the patch does.\n\nNo, that is not what it does. Note that Stallings specifies that the\nfailed requestor must release ALL previously held resources. That is\nnot feasible in Postgres; some of the locks may be held due to previous\ncommands in the same transaction. Consider this scenario:\n\n\tProcess 1\t\t\tProcess 2\n\n\tLOCK a;\t\t\t\tLOCK b;\n\t...\t\t\t\t...\n\tLOCK b,c;\t\t\tLOCK a,c;\n\nThe second LOCK statements cannot release the locks already held,\ntherefore this is a deadlock. While that's no worse than we had\nbefore, I believe that your patch introduces a possibility of\nundetected deadlock. Consider this:\n\n\tProcess 1\t\t\tProcess 2\n\n\tLOCK a,b;\t\t\tLOCK b,a;\n\nA possible interleaving of execution is: 1 acquires lock a, 2 acquires\nb, 1 tries to acquire b and fails, 2 tries to acquire a and fails,\n1 releases a, 2 releases b, 1 acquires b, 2 acquires a, 1 tries to\nacquire a and fails, etc etc. It's implausible that this condition\nwould persist in perfect lockstep for very long on a uniprocessor\nmachine ... but not so implausible if you have dual CPUs, each running\none of the two processes at exactly the same speed.\n\nI haven't quite managed to work out a full scenario, but I think it is\npossible that the combination of these two effects could result in an\nindefinite, never-detected deadlock --- without implausible assumptions\nabout process speed. 
It'd probably take three or more processes\ncontending for several locks, but it seems possible to construct a\nfailure scenario.\n\nI think that the only safe way to do something like this is to teach\nthe lock manager itself about multiple simultaneous requests.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 29 Jul 2001 23:25:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\" "
},
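Tom's interleaving can in fact be replayed deterministically. In the sketch below (a hypothetical model, not backend code), two simulated processes run the release-and-retry algorithm in strict alternation, one primitive lock operation per turn. Neither ever holds both locks, and neither is ever blocked, so a deadlock detector would have nothing to see:

```python
owner = {"a": None, "b": None}     # which process holds each table lock

def process(name, want):
    """Generator simulating one back-off locker; one yield per lock op."""
    start = 0
    while True:
        order = want[start:] + want[:start]
        while owner[order[0]] is not None:     # "blocking" acquire: spin
            yield "wait " + order[0]
        owner[order[0]] = name
        yield "got " + order[0]
        failed = None
        for i, t in enumerate(order[1:], 1):   # non-blocking attempts
            if owner[t] is None:
                owner[t] = name
                yield "got " + t
            else:
                failed = i
                yield "miss " + t
                break
        if failed is None:
            yield "DONE"
            return
        for t in order[:failed]:               # back off: release holdings
            owner[t] = None
        yield "released"
        start = (start + failed) % len(want)

p1 = process("p1", ["a", "b"])
p2 = process("p2", ["b", "a"])
trace = []
for _ in range(200):               # 100 perfectly lockstepped rounds
    trace.append(next(p1))
    trace.append(next(p2))

print("DONE" in trace)             # False: neither ever gets both locks
```

Under perfect alternation the pattern (got, got, miss, miss, released, released) repeats forever, which is exactly the livelock Tom describes; any scheduling perturbation breaks the symmetry and lets one process win.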
{
"msg_contents": "Neil Padgett wrote:\n> \n> On Mon, 30 Jul 2001, Hiroshi Inoue wrote:\n> \n> > I have a question.\n> > What will happen when the second table is locked for a long time\n> > though the first table isn't locked ?\n> \n> Consider the case:\n> \n> LOCK a,b;\n> \n> Assume a is free (i.e. not locked), but b is busy (i.e. locked).\n> \n> First the system will do a blocking lock attempt on a, which will return\n> immediately, since a was free. Table a is now locked. Now, the system will\n> try a non-blocking lock on b. But, b is busy so the lock attempt will fail\n> immediately (since the lock attempt was non-blocking). So, the system will\n> back off, and the lock on a is released.\n> \n> Next, a blocking lock attempt will be made on b. (Since it was busy last\n> time, we want to wait for it to become free.) The lock call will block\n> until b becomes free. At that time, the lock attempt will return, and b\n> will be locked. Then, a non-blocking lock attempt will be made on table a.\n\nIs it paranoid to worry about the following?\n\n1) Concurrent 'lock table a, b;' and 'lock table b, a;'\n could last forever in theory?\n2) 'Lock table a,b' could hardly acquire the lock when\n both tables a and b are very frequently accessed.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Mon, 30 Jul 2001 12:45:33 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "> > First the system will do a blocking lock attempt on a, which will return\n> > immediately, since a was free. Table a is now locked. Now, the system will\n> > try a non-blocking lock on b. But, b is busy so the lock attempt will fail\n> > immediately (since the lock attempt was non-blocking). So, the system will\n> > back off, and the lock on a is released.\n> > \n> > Next, a blocking lock attempt will be made on b. (Since it was busy last\n> > time, we want to wait for it to become free.) The lock call will block\n> > until b becomes free. At that time, the lock attempt will return, and b\n> > will be locked. Then, a non-blocking lock attempt will be made on table a.\n> \n> Is it paranoid to worry about the followings ?\n> \n> 1) Concurrent 'lock table a, b;' and 'lock table b, a;'\n> could last forever in theory ?\n> 2) 'Lock table a,b' could hardly acquire the lock when\n> both the table a and b are very frequently accessed.\n\nWell, we do tell people to lock things in the same order. If they did\nthis with two lock statements, they would cause a deadlock. In this\ncase, they could grab their first lock at the same time, fail on the\nsecond, and wait on the second, get it, fail on the first, and go all\nover again. However, they would have to stay synchronized to do this. \nIf one got out of step it would stop.\n\nActually, with this new code, we could go back to locking in oid order,\nwhich would eliminate the problem. However, I prefer to do things in\nthe order specified, at least on the first lock try.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 29 Jul 2001 23:49:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Actually, with this new code, we could go back to locking in oid order,\n> which would eliminate the problem.\n\nNo it wouldn't. If anything, locking in a *randomized* order would be\nthe best bet. But I have no confidence in this approach, anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Jul 2001 00:24:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\" "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> > Stallings writes about preventing condition 3: \"This condition can be\n> > prevented in several ways. [. . .] [One way is to require that,] if a\n> > process holding certain resources is denied a further request, that\n> > process must release its original resources and, if necessary, request\n> > them again together with the additional resources.\"\n> >\n> > This is exactly what the patch does.\n> \n> No, that is not what it does. Note that Stallings specifies that the\n> failed requestor must release ALL previously held resources. That is\n> not feasible in Postgres; some of the locks may be held due to previous\n> commands in the same transaction. Consider this scenario:\n> \n> Process 1 Process 2\n> \n> LOCK a; LOCK b;\n> ... ...\n> LOCK b,c; LOCK a,c;\n> \n> The second LOCK statements cannot release the locks already held,\n> therefore this is a deadlock.\n\nBut that is a programmer's error. He/she is acquiring the locks in\nreversed order. The fact that the second lock request is in a multiple\nlock is not much different from doing it with a single lock like in:\n\n Process 1 Process 2\n\n LOCK a; LOCK b;\n ... ...\n LOCK b; LOCK a;\n\nI guess you've mentioned this scenario because of the undetected\ndeadlock\nconcerns, right?\n\n\n\n> While that's no worse than we had\n> before, I believe that your patch introduces a possibility of\n> undetected deadlock. Consider this:\n> \n> Process 1 Process 2\n> \n> LOCK a,b; LOCK b,a;\n> \n> A possible interleaving of execution is: 1 acquires lock a, 2 acquires\n> b, 1 tries to acquire b and fails, 2 tries to acquire a and fails,\n> 1 releases a, 2 releases b, 1 acquires b, 2 acquires a, 1 tries to\n> acquire a and fails, etc etc. It's implausible that this condition\n> would persist in perfect lockstep for very long on a uniprocessor\n> machine ... 
but not so implausible if you have dual CPUs, each running\n> one of the two processes at exactly the same speed.\n> \n> I haven't quite managed to work out a full scenario, but I think it is\n> possible that the combination of these two effects could result in an\n> indefinite, never-detected deadlock --- without implausible assumptions\n> about process speed. \n\n\nI believe the undetected deadlock is not possible. To get the multiple\nlock statement involved in the deadlock scenario, at least one of the\nlocks desired must have been acquired (in a separate statement) by the\nother process.\n\nThe key property of the algorithm that prevents undetected deadlocks is\nthat it waits on the last failed-to-acquire lock.\n\nAs the algorithm waits on the last non-obtained lock and restarts from\nthere (in a circular fashion), it will eventually reach the lock that\nthe other process holds and then stop for good (thus allowing the\ndeadlock detection to see it).\n\nEven if the algorithm always started from the first specified lock and\nthen got into the lockstep mode you've described, the (potential)\ndeadlock would not be detected because it had not happened yet. It will\nonly happen when the 2nd situation you've described ceases to exist and\nthe crossed locks are attempted. But then the processes are really\nstopped and the deadlock can be detected.\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Mon, 30 Jul 2001 11:49:59 -0400",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> Is it paranoid to worry about the followings ?\n> \n> 1) Concurrent 'lock table a, b;' and 'lock table b, a;'\n> could last forever in theory ?\n\nYou would need a very evil timeslice duration on a single processor, but\nit could happen on a dual processor. However, the two processes would\nhave to be synchronized in a very narrow window of instructions, the\nschedulers on both CPUs would have to be precisely synchronized, and\nabsolutely no interruption (that is not common to both) could ever\noccur. Even a keyboard press will break the enchantment.\n\nI guess it is what we call \"unstable equilibrium\", possible in theory\nbut never happening in practice except for an infinitesimal amount of\ntime. It is like trying to make an egg stand on one end or something\nlike that (without breaking the egg, of course :-) ).\n\n\n\n> 2) 'Lock table a,b' could hardly acquire the lock when\n> both the table a and b are very frequently accessed.\n> \n\nYes, multiple locks with the back-off are less aggressive than obtaining\nand holding the locks (with individual lock commands).\n\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Mon, 30 Jul 2001 12:16:22 -0400",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Actually, with this new code, we could go back to locking in oid order,\n> > which would eliminate the problem.\n> \n> No it wouldn't. If anything, locking in a *randomized* order would be\n> the best bet. But I have no confidence in this approach, anyway.\n\nI am looking for a way to get this in there without munging the lock\ncode, which is already quite complex. What about doing some sort of\nsmall sleep after we reset back to the beginning of the table list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 30 Jul 2001 12:20:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "> > The second LOCK statements cannot release the locks already held,\n> > therefore this is a deadlock.\n> \n> But that is a programmer's error. He/she is acquiring the locks in\n> reversed order. The fact that the second lock request is in a multiple\n> lock is not much different from doing it with a single lock like in:\n> \n> Process 1 Process 2\n> \n> LOCK a; LOCK b;\n> ... ...\n> LOCK b; LOCK a;\n> \n> I guess you've mentioned this scenario because of the undetected\n> deadlock\n> concerns, right?\n> \n\nI guess so. The above would issue a deadlock error message, while the\nLOCK a,b would just spin around.\n\n> > While that's no worse than we had\n> > before, I believe that your patch introduces a possibility of\n> > undetected deadlock. Consider this:\n> > \n> > Process 1 Process 2\n> > \n> > LOCK a,b; LOCK b,a;\n> > \n> > A possible interleaving of execution is: 1 acquires lock a, 2 acquires\n> > b, 1 tries to acquire b and fails, 2 tries to acquire a and fails,\n> > 1 releases a, 2 releases b, 1 acquires b, 2 acquires a, 1 tries to\n> > acquire a and fails, etc etc. It's implausible that this condition\n> > would persist in perfect lockstep for very long on a uniprocessor\n> > machine ... but not so implausible if you have dual CPUs, each running\n> > one of the two processes at exactly the same speed.\n> > \n> > I haven't quite managed to work out a full scenario, but I think it is\n> > possible that the combination of these two effects could result in an\n> > indefinite, never-detected deadlock --- without implausible assumptions\n> > about process speed. \n> \n> \n> I believe the undetected deadlock is not possible. To get the multiple\n> lock statement involved in the deadlock scenario, at least one of the\n> locks\n> desired must have been acquired (in a separate statement) by the other\n> process.\n> \n> The key property of the algorithm that prevents the undetected deadlocks\n> is\n> that it waits on the last failed-to-acquire lock. 
\n> \n> As the algorithm waits on the last non-obtained lock and restarts\n> from there (in a circular fashion), it will eventually reach the lock\n> that\n> the other process has and then stop for good (thus allowing the deadlock\n> detection to see it).\n> \n> Even if the algorithm started always from the first specified lock and\n> then\n> got in the lockstep mode you've described, the (potential) deadlock \n> would\n> not be detected because it had not happened yet. It will only happen\n> when\n> the 2nd situation you've described ceases to exist and the crossed locks\n> are\n> attempted. But them the processes are really stopped and the deadlock\n> can be detected. \n\nThe unusual case here is that deadlock is not checked on request, but\nonly after waiting on the lock for a while. This is because deadlock\ndetection is an expensive operation. In fact, you don't want deadlock\ndetection in this case because LOCK a,b could be evaluated as a,b or b,a\nand you don't want it to fail randomly with deadlock messages.\n\nI certainly would like to find a solution to this that makes everyone\ncomfortable.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 30 Jul 2001 12:55:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
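The detection Bruce refers to, run only after a lock wait has lasted a while because it is expensive, amounts to searching for a cycle in the waits-for graph. A toy version (illustrative only; the backend's real detector is considerably more elaborate):

```python
def deadlock_reachable(waits_for, start):
    """Return True if following waits-for edges from `start` ever
    revisits a process, i.e. a cycle (deadlock) is reachable.
    `waits_for` maps each process to the single process whose lock it
    is sleeping on, or None if it is runnable.  A real system runs
    this only after a timeout, since walking the shared lock table
    under its spinlock is expensive."""
    seen = set()
    p = start
    while p is not None:
        if p in seen:
            return True
        seen.add(p)
        p = waits_for.get(p)
    return False

# Crossed single-table locks: p1 sleeps on a lock held by p2, which
# sleeps on a lock held by p1 -- a genuine deadlock.
assert deadlock_reachable({"p1": "p2", "p2": "p1"}, "p1")
# A plain queue of waiters is not a deadlock.
assert not deadlock_reachable({"p1": "p2", "p2": None}, "p1")
```

This also shows why the back-off LOCK a,b algorithm stays invisible to the detector while it is retrying: a retrying process has no outgoing waits-for edge except during its one blocking wait, where it holds nothing.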
{
"msg_contents": "On Mon, 30 Jul 2001, Bruce Momjian wrote:\n\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Actually, with this new code, we could go back to locking in oid order,\n> > > which would eliminate the problem.\n> >\n> > No it wouldn't. If anything, locking in a *randomized* order would be\n> > the best bet. But I have no confidence in this approach, anyway.\n>\n> I am looking for a way to get this in there without munging the lock\n> code, which is already quite complex. What about doing some sort of\n> small sleep after we reset back to the beginning of the table list.\n\nIt seems to me that we already have a small sleep in place. After all, in\norder to acquire a lock, the shared memory area has to be accessed. So,\nthe contenders for a lock both have to go through a spin lock. So, if we\nhave the two \"stuck\" processes as in Tom's example, one will win at\nacquiring the spin lock and the other will have to wait. So, they become\ndesynchronized, regardless of how many CPUs or what memory architecture\nyou have.\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300,\nToronto, ON M4P 2C9\n\n\n",
"msg_date": "Mon, 30 Jul 2001 12:57:25 -0400 (EDT)",
"msg_from": "Neil Padgett <npadgett@redhat.com>",
"msg_from_op": true,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "> On Mon, 30 Jul 2001, Bruce Momjian wrote:\n> \n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > Actually, with this new code, we could go back to locking in oid order,\n> > > > which would eliminate the problem.\n> > >\n> > > No it wouldn't. If anything, locking in a *randomized* order would be\n> > > the best bet. But I have no confidence in this approach, anyway.\n> >\n> > I am looking for a way to get this in there without munging the lock\n> > code, which is already quite complex. What about doing some sort of\n> > small sleep after we reset back to the beginning of the table list.\n> \n> It seems to me that we already have a small sleep in place. After all, in\n> order to acquite a lock, the shared memory area has to be accessed. So,\n> the contenders for a lock both have to go through a spin lock. So, if we\n> have the two \"stuck\" processes as in Tom's example, one will win at\n> acquiring the spin lock and the other will have to wait. So, they become\n> desynchronized, regardless of how many CPUs or what memory architecture\n> you have.\n\nI see your point now, that they can't synchronize because they have to\ngo through the same semaphore and therefore get out of sync. Do they\nget out of sync enough for one to get the lock while the other is not\nholding it, or do the locks actually keep them in sync? I don't know\nthe answer.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 30 Jul 2001 13:04:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> The unusual case here is that deadlock is not checked on request, but\n> only after waiting on the lock for a while. This is because deadlock\n> detection is an expensive operation. \n\nBut that is what happens. If one of the locks is not obtained, the\nalgorithm does wait on that lock (after releasing the other locks).\nIn the case of a deadlock (Tom's scenario #1) it would wait forever,\nbut the deadlock detection will find it in there and break it.\n\n\n> In fact, you don't want deadlock\n> detection in this case because LOCK a,b could be evaluated as a,b or b,a\n> and you don't want it to fail randomly with deadlock messages.\n> \n\nIt does not change the deadlock scenario at all. It is still determined\nby the resources in a previous (independent) LOCK statement and the ones\nin this LOCK statement (be it multiple or not) being crossed. \n\nAnd deadlock failures will be intermittent anyway. A potential deadlock\ncondition may or may not happen each time depending on the interleaving\nof\nexecution of the two processes.\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Mon, 30 Jul 2001 13:24:26 -0400",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > The unusual case here is that deadlock is not checked on request, but\n> > only after waiting on the lock for a while. This is because deadlock\n> > detection is an expensive operation. \n> \n> But that is what happens. If one of the locks is not obtained, the\n> algorithm does wait on that lock (after releasing the other locks).\n> In the case of a deadlock (tom's scenario #1) it would wait forever,\n> but the deadlock detection will find it in there and break it.\n\nOK, I thought you were talking about the LOCK a,b case, not the other\ncase where we had a previous LOCK statement. Sorry.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 30 Jul 2001 13:30:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > It seems to me that we already have a small sleep in place. After all, in\n> > order to acquite a lock, the shared memory area has to be accessed. So,\n> > the contenders for a lock both have to go through a spin lock. So, if we\n> > have the two \"stuck\" processes as in Tom's example, one will win at\n> > acquiring the spin lock and the other will have to wait. So, they become\n> > desynchronized, regardless of how many CPUs or what memory architecture\n> > you have.\n> \n> I see your point now, that they can't synchronize because they have to\n> go through the same semaphore and therefore get out of sync. Do they\n> get out of sync enough for one to get the lock while the other is not\n> holding it, or do the locks actually keep them in sync? I don't know\n> the answer.\n> \n\nThat is a good point. The current random sleeps help break the\nlockstep of the two processes, but when it is changed to a queue the\nrandom\nsleeps won't be there anymore.\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Mon, 30 Jul 2001 14:09:41 -0400",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > > It seems to me that we already have a small sleep in place. After all, in\n> > > order to acquite a lock, the shared memory area has to be accessed. So,\n> > > the contenders for a lock both have to go through a spin lock. So, if we\n> > > have the two \"stuck\" processes as in Tom's example, one will win at\n> > > acquiring the spin lock and the other will have to wait. So, they become\n> > > desynchronized, regardless of how many CPUs or what memory architecture\n> > > you have.\n> > \n> > I see your point now, that they can't synchronize because they have to\n> > go through the same semaphore and therefore get out of sync. Do they\n> > get out of sync enough for one to get the lock while the other is not\n> > holding it, or do the locks actually keep them in sync? I don't know\n> > the answer.\n> > \n> \n> That is a good point. With the current random sleeps it helps breaking\n> the\n> lockstep of the two processes, but when it is changed to a queue the\n> random\n> sleeps won't be there anymore.\n\nAlso most systems can't sleep less than one clock tick, 10ms, meaning\nthe sleeps aren't very random.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 30 Jul 2001 14:14:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "Fernando Nasser <fnasser@redhat.com> writes:\n> But that is what happens. If one of the locks is not obtained, the\n> algorithm does wait on that lock (after releasing the other locks).\n> In the case of a deadlock (tom's scenario #1) it would wait forever,\n> but the deadlock detection will find it in there and break it.\n\nI'm entirely unconvinced of that. My example #2 shows that it's\npossible for two multi-lock statements to bounce back and forth between\nfailed conditional-lock attempts, never doing a hard wait that would\nallow the lock manager to detect deadlock. That simple example requires\nsomewhat-implausible assumptions about relative process speed, but I\nthink that more complex cases might exhibit similar behavior that is\nless dependent on precise timing. (The fact that I have not come up with\none after a few minutes' thought can hardly be taken as proof that there\nare no such cases.)\n\nBasically, my objection here is that the proposed implementation tries\nto avoid telling the lock manager what is going on. Since the lock\nmanager has sole responsibility for detecting deadlock, it can't\npossibly be a good idea not to give it complete information.\n\n> And deadlock failures will be intermittent anyway. A potential deadlock\n> condition may or may not happen each time depending on the interleaving\n> of execution of the two processes.\n\nOf course. The point is that we currently offer a guarantee that when a\ndeadlock does occur, it will be detected, reported, and recovered from\n(by rolling back one or more of the conflicting transactions). I am\nafraid that the proposed multi-LOCK implementation will break that\nguarantee. I do not think the proposed feature is sufficiently valuable\nto justify taking any such risk.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Jul 2001 14:22:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\" "
},
{
"msg_contents": "Neil Padgett <npadgett@redhat.com> writes:\n> the contenders for a lock both have to go through a spin lock. So, if we\n> have the two \"stuck\" processes as in Tom's example, one will win at\n> acquiring the spin lock and the other will have to wait. So, they become\n> desynchronized, regardless of how many CPUs or what memory architecture\n> you have.\n\nNo, actually the existence of the lockmanager mutex (spinlock) makes the\nscenario I described *more* plausible. Note that the scenario involves\na strict alternation of lockmanager operations by the two processes:\n\n>> A possible interleaving of execution is: 1 acquires lock a, 2 acquires\n>> b, 1 tries to acquire b and fails, 2 tries to acquire a and fails,\n>> 1 releases a, 2 releases b, 1 acquires b, 2 acquires a, 1 tries to\n>> acquire a and fails, etc etc. It's implausible that this condition\n>> would persist in perfect lockstep for very long on a uniprocessor\n>> machine ... but not so implausible if you have dual CPUs, each running\n>> one of the two processes at exactly the same speed.\n\nEach process will be acquiring the lockmanager spinlock, doing a little\ncomputation, releasing the spinlock, doing a little more computation\n(in the LOCK statement code, not in the lockmanager), and then trying to\nacquire the spinlock again. When process 1 has the spinlock, process 2\nwill be waiting to acquire it. As soon as 1 releases the spinlock, 2\nwill successfully acquire it --- so, quite plausibly, 1 will complete\nits outside-the-lock-manager operations and be back waiting for the\nspinlock by the time 2 releases it. Even if 1 is a little slower than\nthat, it will probably manage to come along and retake the spinlock\nbefore 2 does. So the existence of the spinlock actually smooths out\nany irregularities in elapsed time and helps to ensure the two processes\nstay in sync.\n\nThe pattern of alternating between hard and conditional locks won't help\nany either. 
If, say, 1 gets a little ahead and arrives at the first\npart of its cycle (\"acquire lock a\") before 2 has released lock a, guess\nwhat --- it blocks until it can get a. I think that could help\nstabilize the loop too.\n\nLong-term persistence of this pattern is somewhat less plausible on a\nuniprocessor, but given the way our spinlocks work I don't think it's\ncompletely out of the question either.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Jul 2001 14:57:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\" "
},
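The conditional-lock protocol being analyzed in the two messages above — take the first lock unconditionally, try the rest conditionally, and on any failure release everything and restart while hard-waiting on the lock that was missed — can be sketched loosely in Python threading terms. This is a hypothetical illustration, not PostgreSQL's lock-manager code; all names are invented:

```python
import threading

def multi_lock(locks):
    """Sketch of the back-off protocol under discussion: block on one
    lock, conditionally try the rest, and on any failure release all
    held locks and restart, this time blocking on the lock that failed.
    Two processes running this loop against each other is exactly the
    ping-pong risk raised in the thread."""
    wait_on = locks[0]
    while True:
        wait_on.acquire()                    # unconditional (blocking) acquire
        held = [wait_on]
        failed = None
        for lk in locks:
            if lk is wait_on:
                continue
            if lk.acquire(blocking=False):   # conditional acquire
                held.append(lk)
            else:
                failed = lk
                break
        if failed is None:
            return held                      # caller releases when done
        for lk in held:                      # back off: drop everything...
            lk.release()
        wait_on = failed                     # ...and block on the missed lock

a, b, c = threading.Lock(), threading.Lock(), threading.Lock()
held = multi_lock([a, b, c])
print(len(held))     # all three acquired when uncontended
for lk in held:
    lk.release()
```

Note that every blocking `acquire()` here corresponds to a wait the deadlock detector can see, but every failed conditional attempt does not — which is the crux of Tom's objection.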
{
"msg_contents": "Okay, I've developed a case where the proposed multi-LOCK implementation\nwill wait forever without detecting deadlock --- indeed, there is no\ntrue deadlock, but it won't figure out how to get out of it.\n\nThe problem is that we have multiple types of locks, some of which\nconflict and some of which don't. It's sufficient to think about\n\"shared\" (reader) and \"exclusive\" (writer) locks for this example.\nConsider this scenario:\n\nProcess 1\tProcess 2\tProcess 3\n\nSH LOCK A\n\n\t\tSH LOCK B\n\n\t\t\t\tEX LOCK B\t3 now waits for 2\n\n\t\tEX LOCK A\t\t\t2 now waits for 1\n\nSH LOCK B\n\nSince process 3 is waiting for ex lock on B, our normal behavior is to\nqueue up 1 behind 3 in the queue for B (if we let 1 go first, 3 could be\nstarved indefinitely by a succession of transactions that want to read\nB). In this situation, however, that would be a deadlock. The deadlock\ndetection code recognizes this, and also recognizes that it can escape\nfrom the deadlock by allowing 1 to go ahead of 3 --- so SH LOCK B is\ngranted to 1, and we can proceed. Note that since the deadlock can only\nbe recognized by looking at locks and procs that are unrelated to the\nqueue for table B, it's extremely expensive to detect this case, and so\nwe don't try to do so during LockAcquire. Only when the timeout expires\nand we run the full deadlock-detection algorithm do we realize that we\nhave a problem and find a way out of it.\n\nNow, let's extend this to a multi-LOCK scenario:\n\nProcess 1\tProcess 2\tProcess 3\tProcess 4\tProcess 5\n\nSH LOCK A\n\n\t\tSH LOCK B\t\t\tSH LOCK C\n\n\t\t\t\tEX LOCK B\t\t\tEX LOCK C\n\n\t\tEX LOCK A\t\t\tEX LOCK A\n\n(at this point, 3 waits for 2, 5 waits for 4, 2 and 4 wait for 1)\n\nMULTI SH LOCK B,C\n\nNow, what will happen? The multi lock code will do an unconditional\nlock on B. This will initially queue up behind 3. 
After a deadlock\ndetection interval expires, the deadlock code will grant the requested\nsh lock on B to 1 to escape otherwise-certain deadlock. Now the multi-\nlock code will do a conditional lock on C, which will fail (since the\nlock manager will not run the deadlock detection code before failing the\nconditional request). The multi lock code then releases its shlock on B\nand does an unconditional shlock on C. One detection interval later,\nit will have that, but its conditional lock on B will fail. And away we\ngo.\n\nThe bottom line here is you cannot make this work correctly unless you\nteach the lock manager about it. Yes, it's not trivial.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Jul 2001 16:46:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\" "
},
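Tom's scenarios come down to cycles in the waits-for graph, which is what the deadlock detector examines when the timeout fires. A toy version of that cycle check (an illustrative sketch only, not the backend's actual algorithm or data structures):

```python
def has_deadlock(waits_for):
    """Detect a cycle in a waits-for graph given as {proc: [procs it waits on]}."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {p: WHITE for p in waits_for}
    def visit(p):
        color[p] = GREY
        for q in waits_for.get(p, []):
            if color.get(q, WHITE) == GREY:
                return True            # back edge: a cycle, i.e. deadlock
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False
    return any(color[p] == WHITE and visit(p) for p in waits_for)

# First scenario above: 3 waits for 2, 2 waits for 1; if 1 also queues
# up behind 3 on table B, the graph closes into a cycle.
print(has_deadlock({1: [3], 2: [1], 3: [2]}))   # True
# Letting 1 jump ahead of 3 in B's queue removes 1's wait edge:
print(has_deadlock({1: [],  2: [1], 3: [2]}))   # False
```

The five-process example is the same graph with a second branch (1 waiting on both 3 and 5); the point of the message above is that the multi-lock code keeps bouncing before the detector ever gets a stable graph like this to inspect.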
{
"msg_contents": "Well, it seems to me that there are a number of places this can be taken\nthen:\n\n1. Don't worry about these cases and apply the patch anyways.\n\tThe patch will work correctly in most real world situations.\n\n2. Add a counter / timer to detect livelock / alternating deadlock\nconditions. If the counter / timer elapses, abort the offending\ntransaction (i.e. the one doing the multiple locking) and roll back.\n\tThe patch then will always work correctly, AFAICT.\n\t\n3. Go with the original patch I posted.\n\tAside from not locking in user order (which is a debatable issue), this\npatch seems correct. \n\n4. Go with the original patch but sans OID sort. In this case, LOCK a,b;\ndegrades into a short form for LOCK a; LOCK b; This is still useful for\nOracle compatibility.\n\tSatisfies the TODO list item.\n\n5. Go with a major Lock manager overhaul. (which would be added to the\nTODO list.) Defer this functionality until then.\n\tA lock manager overhaul will likely take a while so probably it won't\nbe done for some time. This means the multiple lock syntax will continue\nto be missing from PostgreSQL, possibly for several years.\n\nPersonally, I think 1 is acceptable, and 2 is a better idea. 3/4 are\nalso\ndoable, but lose some of the advantages of the command. 5 is reasonable,\nbut disappointing, especially from a user standpoint.\n\nThoughts?\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n",
"msg_date": "Tue, 31 Jul 2001 14:56:51 -0400",
"msg_from": "Neil Padgett <npadgett@redhat.com>",
"msg_from_op": true,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
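The OID sort in option 3 is the textbook deadlock-avoidance trick: if every transaction acquires its locks in one global order, no waits-for cycle can form among the multi-lock statements themselves. A rough Python sketch, with threads standing in for backends and sorted integer keys standing in for OIDs (all names invented):

```python
import threading

# Three 'tables', keyed by a made-up OID.
tables = {101: threading.Lock(), 102: threading.Lock(), 103: threading.Lock()}

def lock_in_oid_order(oids, body):
    """Acquire the requested tables in ascending 'OID' order regardless
    of the order the user wrote them, run `body`, then release."""
    ordered = sorted(oids)
    for oid in ordered:
        tables[oid].acquire()
    try:
        body()
    finally:
        for oid in reversed(ordered):
            tables[oid].release()

done = []
# Two 'backends' requesting the same tables in opposite user order; the
# sort makes their acquisition orders identical, so neither can end up
# holding a lock the other must take first.
t1 = threading.Thread(target=lock_in_oid_order,
                      args=([101, 103, 102], lambda: done.append("t1")))
t2 = threading.Thread(target=lock_in_oid_order,
                      args=([103, 102, 101], lambda: done.append("t2")))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(done))
```

As Tom notes upthread, this only helps among the sorted statements: locks already taken earlier in a transaction sit outside the sort, which is why the OID ordering was not considered a complete answer.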
{
"msg_contents": "Neil Padgett <npadgett@redhat.com> writes:\n> 4. Go with the original patch but sans OID sort. In this case, LOCK a,b;\n> degrades into a short form for LOCK a; LOCk b; This is still useful for\n> Oracle compatibility.\n> \tSatisfies the TODO list item.\n\nI like this one. It's not clear to me that this TODO item is worth\neven as much effort as we've put into the discussion already ;-),\nlet alone a major lockmanager overhaul. And I definitely don't want a\nversion that \"usually works\" ... simple and reliable is beautiful.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 Jul 2001 15:56:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\" "
},
{
"msg_contents": "> Well, it seems to me that there are a number of places this can be taken\n> then:\n> \n> 1. Don't worry about these cases and apply the patch anyways.\n> \tThe patch will work correctly in most real world situations.\n> \n> 2. Add a counter / timer to detect livelock / alternating deadlock\n> conditions. If the counter / timer elapses, abort the offending\n> transaction (i.e. the one doing the multiple locking) and roll back.\n> \tThe patch then will always work correctly, AFAICT.\n> \t\n> 3. Go with the original patch I posted.\n> \tAside from not locking in user order (which is a debatable issue), this\n> patch seems correct. \n> \n> 4. Go with the original patch but sans OID sort. In this case, LOCK a,b;\n> degrades into a short form for LOCK a; LOCk b; This is still useful for\n> Oracle compatibility.\n> \tSatisfies the TODO list item.\n> \n> 5. Go with a major Lock manager overhaul. (which would be added to the\n> TODO list.) Defer this functionality until then.\n> \tA lock manager overhaul will likely take a while so probably it won't\n> be done for some time. This means the multiple lock syntax will continue\n> to be missing from PostgreSQL, possibly for several years.\n> \n> Personally, I think 1 is acceptable, and 2 is a better idea. 3/4 are\n> also\n> doable, but lose some of the advantages of the command. 5 is reasonable,\n> but disappointing, especially from a user standpoint.\n\nI have been thinking about this too. I have two ideas.\n\nOne, we could have you sleep on the lock, and when you get it, release\nit and then start acquiring the locks in the order specified again. 
You\ncould lose the lock by the time you get back to the table you had a lock\non, but I think this reduces the chances of getting in a loop with\nothers.\n\nAnother idea is to change the lock code so instead of returning a lock\nfailure on first try, it goes to sleep for DEADLOCK seconds, and once it\nwakes up, and determines there is no deadlock, returns a lock failure. \nThat way, it can release all the locks and do a non-timeout lock on the\ntable that failed. We would then need to release the lock and do the\nsteps outlined above.\n\nOne advantage of this last idea is that it would make multi-table lock\nbetter in cases where there is more than one table that is high-use\nbecause we are waiting a little to see if we get the lock before\nfailing. The downside is that we hold the previously acquired locks\nlonger. I think I can help you modify the lock manager to do the delay.\n\nI know it is frustrating to develop a patch and then have to contort it\nto meet everyone's ideas, but in the long run, it makes for better code\nand a more reliable database.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 31 Jul 2001 17:51:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Well, it seems to me that there are a number of places this can be taken\n> > then:\n> >\n\nHonestly, I don't understand what \"Unison\" means.\nWhat spec is preferable for the command ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 01 Aug 2001 12:51:26 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> I have been thinking about this too. I have two ideas.\n> \n> One, we could have you sleep on the lock, and when you get it, release\n> it and then start acquiring the locks in the order specified again. You\n> could lose the lock by the time you get back to the table you had a lock\n> on, but I think this reduces the chances of getting in a loop with\n> others.\n> \n\nI think this could work. But I worry it makes starvation even more\nlikely. This might be acceptable if a \"passive\" lock grabber is what we\nwant to have here, though.\n\nOne idea I've been floating is to allow some syntax whereby multiple\nlocks can be grabbed in either this manner, that is a passive manner, or\ninstead in an aggressive manner (i.e. grab locks in order, and hold them\n-- in other words LOCK a,b; -> LOCK a; LOCK b;). This could be done by\nmeans of some additional keywords, perhaps \"WITH AGGRESSIVE\" or \"WITH\nPASSIVE\", or something to this effect. This shifts some of the burden to\nthe database programmer, with regards to trading off throughput for\nfairness.\n\n> Another idea is to change the lock code so instead of returning a lock\n> failure on first try, it goes to sleep for DEADLOCK seconds, and once it\n> wakes up, and determines there is no deadlock, returns a lock failure.\n> That way, it can release all the locks and do a non-timeout lock on the\n> table that failed. We would then need to release the lock and do the\n> steps outlined above.\n\nThis is interesting. I'd like to hear what other people think about\nthis.\n\n> \n> One advantage of this last idea is that it would make multi-table lock\n> better in cases where there is more than one table that is high-use\n> because we are waiting a little to see if we get the lock before\n> failing. The downside is that we hold the previously aquired locks\n> longer. 
I think I can help you modify the lock manager to do the delay.\n\nIn other words it sounds like you are making a tradeoff for greater\nthroughput in exchange for possibly reduced concurrency. This can be a\ndesign decision on our part, and might be a reasonable thing to do. How\nhard do you think it will be to tune the DEADLOCK timer to a reasonable\nvalue? Would it have to vary based on load? This could be as simple as\nhaving the timeout elapse early, if a certain number of lock\nattempts are registered by the lock manager while the backend is\nsleeping.\n\n> I know it is frustrating to develop a patch and then have to contort it\n> to meet everyones ideas, but in the long run, it makes for better code\n> and a more reliable database.\n\nI think I can agree that a reliable database with good code is what\neveryone wants!\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n",
"msg_date": "Wed, 01 Aug 2001 15:59:12 -0400",
"msg_from": "Neil Padgett <npadgett@redhat.com>",
"msg_from_op": true,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "Neil Padgett <npadgett@redhat.com> writes:\n> Bruce Momjian wrote:\n>> One, we could have you sleep on the lock, and when you get it, release\n>> it and then start acquiring the locks in the order specified again.\n\n> I think this could work. But I worry it makes starvation even more\n> likely.\n\nI agree, it looks like a recipe for starvation.\n\n>> Another idea is to change the lock code so instead of returning a lock\n>> failure on first try, it goes to sleep for DEADLOCK seconds, and once it\n>> wakes up, and determines there is no deadlock, returns a lock failure.\n>> That way, it can release all the locks and do a non-timeout lock on the\n>> table that failed. We would then need to release the lock and do the\n>> steps outlined above.\n\n> This is interesting. I'd like to hear what other people think about\n> this.\n\nI doubt that this works --- it still has the same fundamental problem:\nthat you're not telling the lock manager what it needs to know to\nfulfill its responsibility of detecting deadlock. I believe I can still\nproduce a ping-pong undetected deadlock example with the above behavior;\nit'd just take a few more processes. The issue is that you are\nexpecting the lock manager to detect or not detect deadlock, when you\nstill have some lock requests up your sleeve that it's not seen yet.\nAs long as you can block before presenting them all, it can never work.\n\n> In other words it sounds like you are making a tradeoff for greater\n> throughput in exchange for possibly reduced concurrency. This can be a\n> design decision on our part, and might be a reasonable thing to do. How\n> hard do you think it will be to tune the DEADLOCK timer to a reasonable\n> value? Would it have to vary based on load?\n\nRight now the deadlock timeout is extremely noncritical; I doubt many\npeople actually bother to tune it. I would rather not see us change\nthings in a way that makes that number become significant.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Aug 2001 16:23:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\" "
},
{
"msg_contents": "> > This is interesting. I'd like to hear what other people think about\n> > this.\n> \n> I doubt that this works --- it still has the same fundamental problem:\n> that you're not telling the lock manager what it needs to know to\n> fulfill its responsibility of detecting deadlock. I believe I can still\n> produce a ping-pong undetected deadlock example with the above behavior;\n> it'd just take a few more processes. The issue is that you are\n> expecting the lock manager to detect or not detect deadlock, when you\n> still have some lock requests up your sleeve that it's not seen yet.\n> As long as you can block before presenting them all, it can never work.\n\nI know there has been talk about having this done in the lock manager,\nand I know it isn't worth the effort, but I am wondering how you would\ndo it even if you were doing in the lock manager with more information\navailable.\n\nYou could query all the locks at once but we really can already do that\nnow. We could detect deadlock better. Would that be the only\nadvantage?\n\nSeems this whole idea maybe wasn't a good one.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Aug 2001 21:45:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> it'd just take a few more processes. The issue is that you are\n>> expecting the lock manager to detect or not detect deadlock, when you\n>> still have some lock requests up your sleeve that it's not seen yet.\n>> As long as you can block before presenting them all, it can never work.\n\n> I know there has been talk about having this done in the lock manager,\n> and I know it isn't worth the effort, but I am wondering how you would\n> do it even if you were doing in the lock manager with more information\n> available.\n\nI'd have to go back and study my 1980's-vintage operating system theory\ntextbooks before answering that ;-). But acquisition of multiple locks\nis a solved problem, AFAIR.\n\nLikely we'd have to throw out the existing lockmanager datastructures\nand start fresh, however --- they assume that a proc waits for only one\nlock at a time. It'd be a nontrivial bit of work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 00:18:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\" "
},
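The "solved problem" Tom refers to — granting a whole lock set atomically — can be caricatured with a single condition variable: wait, under one mutex, until every requested resource is simultaneously free, then take them all in one step. A deliberately naive sketch (invented API; the nontrivial part Tom describes is fitting this into the existing per-lock queues and avoiding the starvation discussed earlier):

```python
import threading

class MultiLockManager:
    """Grant a whole set of resource names at once, or block until possible."""
    def __init__(self):
        self._cv = threading.Condition()
        self._held = set()

    def acquire_all(self, names):
        names = set(names)
        with self._cv:
            # Block until none of the requested resources is held, then
            # take them all in one step. No partial acquisition is ever
            # visible, so no cross-request deadlock can arise -- though a
            # request for many resources can starve behind small ones.
            self._cv.wait_for(lambda: not (names & self._held))
            self._held |= names

    def release_all(self, names):
        with self._cv:
            self._held -= set(names)
            self._cv.notify_all()

mgr = MultiLockManager()
mgr.acquire_all(["a", "b"])
mgr.release_all(["a", "b"])
mgr.acquire_all(["b", "c"])   # would have blocked while "b" was still held
mgr.release_all(["b", "c"])
```

The real lock manager would also need lock modes, per-proc bookkeeping, and fair queueing, which is why this is a rewrite rather than a patch.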
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> it'd just take a few more processes. The issue is that you are\n> >> expecting the lock manager to detect or not detect deadlock, when you\n> >> still have some lock requests up your sleeve that it's not seen yet.\n> >> As long as you can block before presenting them all, it can never work.\n> \n> > I know there has been talk about having this done in the lock manager,\n> > and I know it isn't worth the effort, but I am wondering how you would\n> > do it even if you were doing in the lock manager with more information\n> > available.\n> \n> I'd have to go back and study my 1980's-vintage operating system theory\n> textbooks before answering that ;-). But acquisition of multiple locks\n> is a solved problem, AFAIR.\n> \n> Likely we'd have to throw out the existing lockmanager datastructures\n> and start fresh, however --- they assume that a proc waits for only one\n> lock at a time. It'd be a nontrivial bit of work.\n\nOh, OK. Just checking. It seems the starvation problem kept hitting us\nas soon as we fixed the deadlock, cycling problem, and I was wondering\nif there even was a solution. My guess is that you would have to put\nthe multi-lock request in several lock queues and make sure they all got\ndone at some point. A mess.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 2 Aug 2001 06:15:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Oh, OK. Just checking. It seems the starvation problem kept hitting us\n> as soon as we fixed the deadlock, cycling problem, and I was wondering\n> if there even was a solution. My guess is that you would have to put\n> the multi-lock request in several lock queues and make sure they all got\n> done at some point. A mess.\n> \n\nWhat about having the syntax\n\nLOCK a,b,c;\n\nnow just as a shorthand for \n\nLOCK a;\nLOCK b;\nLOCK c;\n\nThis would save typing and allow for Oracle compatibility.\n\n\nIf one day we get a multiple-lock facility inside the \nlock manager, we add the optional keyword \"SIMULTANEOUSLY\"\nso that other mode is used instead.\n\n\nOne more reason for adding the \"simple\" version of the multiple\nlock is that we may need it already.\n\nI wonder how we handle\n\nLOCK v;\n\nwhere \"v\" is a view. We should be locking all the base tables.\n\nSuppose that the base tables for \"v\" are \"a\", \"b\" and \"c\".\nIn this case\n\nLOCK v;\n\nshould be rewritten as\n\nLOCK a,b,c;\n\n\n\n\n\n-- \nFernando Nasser\nRed Hat - Toronto E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Thu, 02 Aug 2001 15:24:01 -0400",
"msg_from": "Fernando Nasser <fnasser@cygnus.com>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "Fernando Nasser <fnasser@cygnus.com> writes:\n> What about having the syntax\n> LOCK a,b,c;\n> now just as a shorthand for \n> LOCK a;\n> LOCK b;\n> LOCK c;\n> This would save typing and allow for Oracle compatibility.\n\nThis seems fine to me (and in fact I thought we'd already agreed to it).\nMaybe some day we will get ambitious enough to make it do\nparallel-locking, but for now we can get 80% of what we want with 0.8%\nof the effort ;-)\n\n> I wonder how we handle\n> LOCK v;\n> where \"v\" is a view.\n\nregression=# create view v as select * from int4_tbl;\nCREATE\nregression=# lock table v;\nERROR: LOCK TABLE: v is not a table\n\n> We should be locking all the base tables.\n\nI consider that debatable. It hard-wires a rather constricted idea\nof what the semantics of a view are.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 16:05:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\" "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Fernando Nasser <fnasser@cygnus.com> writes:\n> > What about having the syntax\n> > LOCK a,b,c;\n> > now just as a shorthand for\n> > LOCK a;\n> > LOCK b;\n> > LOCK c;\n> > This would save typing and allow for Oracle compatibility.\n> \n> This seems fine to me (and in fact I thought we'd already agreed to it).\n\nHere's the patch then. =)\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n\nIndex: src/backend/commands/command.c\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/backend/commands/command.c,v\nretrieving revision 1.136\ndiff -c -p -r1.136 command.c\n*** src/backend/commands/command.c\t2001/07/16 05:06:57\t1.136\n--- src/backend/commands/command.c\t2001/07/31 22:04:05\n*************** needs_toast_table(Relation rel)\n*** 1984,1991 ****\n \t\tMAXALIGN(data_length);\n \treturn (tuple_length > TOAST_TUPLE_THRESHOLD);\n }\n! \n! \n /*\n *\n * LOCK TABLE\n--- 1984,1990 ----\n \t\tMAXALIGN(data_length);\n \treturn (tuple_length > TOAST_TUPLE_THRESHOLD);\n }\n! \t\n /*\n *\n * LOCK TABLE\n*************** needs_toast_table(Relation rel)\n*** 1994,2019 ****\n void\n LockTableCommand(LockStmt *lockstmt)\n {\n! \tRelation\trel;\n! \tint\t\t\taclresult;\n \n! \trel = heap_openr(lockstmt->relname, NoLock);\n \n! \tif (rel->rd_rel->relkind != RELKIND_RELATION)\n! \t\telog(ERROR, \"LOCK TABLE: %s is not a table\", lockstmt->relname);\n \n! \tif (lockstmt->mode == AccessShareLock)\n! \t\taclresult = pg_aclcheck(lockstmt->relname, GetUserId(), ACL_SELECT);\n! \telse\n! \t\taclresult = pg_aclcheck(lockstmt->relname, GetUserId(),\n! \t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n \n! \tif (aclresult != ACLCHECK_OK)\n! \t\telog(ERROR, \"LOCK TABLE: permission denied\");\n \n! \tLockRelation(rel, lockstmt->mode);\n \n! 
\theap_close(rel, NoLock);\t/* close rel, keep lock */\n }\n \n \n--- 1993,2096 ----\n void\n LockTableCommand(LockStmt *lockstmt)\n {\n! \tint relCnt;\n! \n! \trelCnt = length(lockstmt -> rellist);\n! \n! \t/* Handle a single relation lock specially to avoid overhead on\nlikely the\n! \t most common case */\n! \n! \tif(relCnt == 1)\n! \t{\n! \n! \t\t/* Locking a single table */\n! \n! \t\tRelation\trel;\n! \t\tint\t\t\taclresult;\n! \t\tchar *relname;\n! \n! \t\trelname = strVal(lfirst(lockstmt->rellist));\n! \n! \t\tfreeList(lockstmt->rellist);\n! \n! \t\trel = heap_openr(relname, NoLock);\n! \n! \t\tif (rel->rd_rel->relkind != RELKIND_RELATION)\n! \t\t\telog(ERROR, \"LOCK TABLE: %s is not a table\", relname);\n! \n! \t\tif (lockstmt->mode == AccessShareLock)\n! \t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n! \t\t\t\t\t\t\t\t\tACL_SELECT);\n! \t\telse\n! \t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n! \t\t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n! \n! \t\tif (aclresult != ACLCHECK_OK)\n! \t\t\telog(ERROR, \"LOCK TABLE: permission denied\");\n! \n! \t\tLockRelation(rel, lockstmt->mode);\n! \n! \t\tpfree(relname);\n! \n! \t\theap_close(rel, NoLock);\t/* close rel, keep lock */\n! \t} \n! \telse \n! \t{\n! \t\tList *p;\n! \t\tRelation *RelationArray;\n! \t\tRelation *pRel;\n! \n! \t\t/* Locking multiple tables */\n! \n! \t\t/* Create an array of relations */\n! \n! \t\tRelationArray = palloc(relCnt * sizeof(Relation));\n! \t\tpRel = RelationArray;\n! \n! \t\t/* Iterate over the list and populate the relation array */\n! \n! \t\tforeach(p, lockstmt->rellist)\n! \t\t{\n! \t\t\tchar* relname = strVal(lfirst(p));\n! \t\t\tint\t\t\taclresult;\n! \n! \t\t\t*pRel = heap_openr(relname, NoLock);\n! \n! \t\t\tif ((*pRel)->rd_rel->relkind != RELKIND_RELATION)\n! \t\t\t\telog(ERROR, \"LOCK TABLE: %s is not a table\", \n! \t\t\t\t\t relname);\n! \n! \t\t\tif (lockstmt->mode == AccessShareLock)\n! \t\t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n! 
\t\t\t\t\t\t\t\t\t\tACL_SELECT);\n! \t\t\telse\n! \t\t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n! \t\t\t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n! \n! \t\t\tif (aclresult != ACLCHECK_OK)\n! \t\t\t\telog(ERROR, \"LOCK TABLE: permission denied\");\n \n! \t\t\tpRel++;\n! \t\t\tpfree(relname);\n! \t\t}\n \n! \t\t/* Now, lock all the relations, closing each after it is locked\n! \t\t (Keeping the locks)\n! \t\t */\n \n! \t\tfor(pRel = RelationArray;\n! \t\t\tpRel < RelationArray + relCnt;\n! \t\t\tpRel++)\n! \t\t\t{\n! \t\t\t\tLockRelation(*pRel, lockstmt->mode);\n \n! \t\t\t\theap_close(*pRel, NoLock);\n! \t\t\t}\n \n! \t\t/* Free the relation array */\n \n! \t\tpfree(RelationArray);\n! \t}\n }\n \n \nIndex: src/backend/nodes/copyfuncs.c\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/backend/nodes/copyfuncs.c,v\nretrieving revision 1.148\ndiff -c -p -r1.148 copyfuncs.c\n*** src/backend/nodes/copyfuncs.c\t2001/07/16 19:07:37\t1.148\n--- src/backend/nodes/copyfuncs.c\t2001/07/31 22:04:06\n*************** _copyLockStmt(LockStmt *from)\n*** 2425,2432 ****\n {\n \tLockStmt *newnode = makeNode(LockStmt);\n \n! \tif (from->relname)\n! \t\tnewnode->relname = pstrdup(from->relname);\n \tnewnode->mode = from->mode;\n \n \treturn newnode;\n--- 2425,2432 ----\n {\n \tLockStmt *newnode = makeNode(LockStmt);\n \n! \tNode_Copy(from, newnode, rellist);\n! 
\t\n \tnewnode->mode = from->mode;\n \n \treturn newnode;\nIndex: src/backend/nodes/equalfuncs.c\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/backend/nodes/equalfuncs.c,v\nretrieving revision 1.96\ndiff -c -p -r1.96 equalfuncs.c\n*** src/backend/nodes/equalfuncs.c\t2001/07/16 19:07:38\t1.96\n--- src/backend/nodes/equalfuncs.c\t2001/07/31 22:04:06\n*************** _equalDropUserStmt(DropUserStmt *a, Drop\n*** 1283,1289 ****\n static bool\n _equalLockStmt(LockStmt *a, LockStmt *b)\n {\n! \tif (!equalstr(a->relname, b->relname))\n \t\treturn false;\n \tif (a->mode != b->mode)\n \t\treturn false;\n--- 1283,1289 ----\n static bool\n _equalLockStmt(LockStmt *a, LockStmt *b)\n {\n! \tif (!equal(a->rellist, b->rellist))\n \t\treturn false;\n \tif (a->mode != b->mode)\n \t\treturn false;\nIndex: src/backend/parser/gram.y\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.238\ndiff -c -p -r2.238 gram.y\n*** src/backend/parser/gram.y\t2001/07/16 19:07:40\t2.238\n--- src/backend/parser/gram.y\t2001/07/31 22:04:10\n*************** DeleteStmt: DELETE FROM relation_expr w\n*** 3280,3290 ****\n \t\t\t\t}\n \t\t;\n \n! LockStmt:\tLOCK_P opt_table relation_name opt_lock\n \t\t\t\t{\n \t\t\t\t\tLockStmt *n = makeNode(LockStmt);\n \n! \t\t\t\t\tn->relname = $3;\n \t\t\t\t\tn->mode = $4;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\n--- 3280,3290 ----\n \t\t\t\t}\n \t\t;\n \n! LockStmt:\tLOCK_P opt_table relation_name_list opt_lock\n \t\t\t\t{\n \t\t\t\t\tLockStmt *n = makeNode(LockStmt);\n \n! 
\t\t\t\t\tn->rellist = $3;\n \t\t\t\t\tn->mode = $4;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\nIndex: src/include/nodes/parsenodes.h\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\nretrieving revision 1.136\ndiff -c -p -r1.136 parsenodes.h\n*** src/include/nodes/parsenodes.h\t2001/07/16 19:07:40\t1.136\n--- src/include/nodes/parsenodes.h\t2001/07/31 22:04:11\n*************** typedef struct VariableResetStmt\n*** 760,766 ****\n typedef struct LockStmt\n {\n \tNodeTag\t\ttype;\n! \tchar\t *relname;\t\t/* relation to lock */\n \tint\t\t\tmode;\t\t\t/* lock mode */\n } LockStmt;\n \n--- 760,766 ----\n typedef struct LockStmt\n {\n \tNodeTag\t\ttype;\n! \tList\t *rellist;\t\t/* relations to lock */\n \tint\t\t\tmode;\t\t\t/* lock mode */\n } LockStmt;\n \nIndex: src/interfaces/ecpg/preproc/preproc.y\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/interfaces/ecpg/preproc/preproc.y,v\nretrieving revision 1.146\ndiff -c -p -r1.146 preproc.y\n*** src/interfaces/ecpg/preproc/preproc.y\t2001/07/16 05:07:00\t1.146\n--- src/interfaces/ecpg/preproc/preproc.y\t2001/07/31 22:04:14\n*************** DeleteStmt: DELETE FROM relation_expr w\n*** 2421,2427 ****\n \t\t\t\t}\n \t\t;\n \n! LockStmt: LOCK_P opt_table relation_name opt_lock\n \t\t\t\t{\n \t\t\t\t\t$$ = cat_str(4, make_str(\"lock\"), $2, $3, $4);\n \t\t\t\t}\n--- 2421,2427 ----\n \t\t\t\t}\n \t\t;\n \n! LockStmt: LOCK_P opt_table relation_name_list opt_lock\n \t\t\t\t{\n \t\t\t\t\t$$ = cat_str(4, make_str(\"lock\"), $2, $3, $4);\n \t\t\t\t}\n",
"msg_date": "Thu, 02 Aug 2001 16:11:51 -0400",
"msg_from": "Neil Padgett <npadgett@redhat.com>",
"msg_from_op": true,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Fernando Nasser <fnasser@cygnus.com> writes:\n> > What about having the syntax\n> > LOCK a,b,c;\n> > now just as a shorthand for\n> > LOCK a;\n> > LOCK b;\n> > LOCK c;\n> > This would save typing and allow for Oracle compatibility.\n> \n> This seems fine to me (and in fact I thought we'd already agreed to it).\n> Maybe some day we will get ambitious enough to make it do\n> parallel-locking, but for now we can get 80% of what we want with 0.8%\n> of the effort ;-)\n> \n\nAgreed.\n\n> > I wonder how we handle\n> > LOCK v;\n> > where \"v\" is a view.\n> \n> regression=# create view v as select * from int4_tbl;\n> CREATE\n> regression=# lock table v;\n> ERROR: LOCK TABLE: v is not a table\n> \n> > We should be locking all the base tables.\n> \n> I consider that debatable. It hard-wires a rather constricted idea\n> of what the semantics of a view are.\n> \n\nI've only mentioned it because it is what Oracle does. It says explicitly\n(in their documentation) that if \"table\" in \"LOCK VIEW table\" is actually\na view, all base tables necessary to compute that view are locked.\n\nI guess the principle (for Oracle folks) was that, for the user, there should\nbe no distinction between a real table and a view. Thus, it should not matter\nfor the user if the thing that is being locked is a real table or if it\nis actually being implemented as a view. Consider that it may have been \na table one day, but the DBA changed it into a view. So that SQL will\nnot work anymore and give the \"ERROR: LOCK TABLE: v is not a table\" message.\nThis violates the Data Independence notion.\n\n\n\n-- \nFernando Nasser\nRed Hat - Toronto E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Thu, 02 Aug 2001 16:19:36 -0400",
"msg_from": "Fernando Nasser <fnasser@cygnus.com>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "Fernando Nasser <fnasser@cygnus.com> writes:\n> I guess the principle (for Oracle folks) was that, for the user, there should\n> be no distinction between a real table and a view. Thus, it should not matter\n> for the user if the thing that is being locked is a real table or if it\n> is actually being implemented as a view. Consider that it may have been \n> a table one day, but the DBA changed it into a view. So that SQL will\n> not work anymore and give the \"ERROR: LOCK TABLE: v is not a table\" message.\n> This violates the Data Independence notion.\n\nI don't really buy this, because it makes life difficult for DBAs who\nwant to do creative things with views. Update rules don't necessarily\ntouch exactly the same set of tables that are mentioned in the select\nrule. But that's the only set that a LOCK implementation might possibly\nknow about.\n\nConsider: for the view as view (ie, select) there's no real need to do\nlocking at all. The implicit read locks that will be grabbed as the\nview is expanded will do fine. For updates, the behavior can and should\nbe defined by the rewrite rules that the DBA supplies. (Hmm, I'm not\nsure that LOCK is one of the allowed query types in a rule --- if not,\nit probably should be, so that the rule author can ensure the right\nkinds of locks are grabbed in the right sequence.)\n\nAnother serious issue, which gets back to your original point, is that\nwe have no good idea what order to lock the base tables in. If we had\na concurrent-lock implementation it wouldn't matter, but in the absence\nof one I am not sure it's a good idea to put in a LOCK that is going to\nlock base tables in some arbitrary order.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 16:58:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\" "
},
{
"msg_contents": "Neil Padgett <npadgett@redhat.com> writes:\n>> This seems fine to me (and in fact I thought we'd already agreed to it).\n\n> Here's the patch then. =)\n\n[ quick mental checklist... ] You forgot the documentation updates.\nOtherwise it looks good.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 17:01:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\" "
},
{
"msg_contents": "> Fernando Nasser <fnasser@cygnus.com> writes:\n> > What about having the syntax\n> > LOCK a,b,c;\n> > now just as a shorthand for \n> > LOCK a;\n> > LOCK b;\n> > LOCK c;\n> > This would save typing and allow for Oracle compatibility.\n> \n> This seems fine to me (and in fact I thought we'd already agreed to it).\n> Maybe some day we will get ambitious enough to make it do\n> parallel-locking, but for now we can get 80% of what we want with 0.8%\n> of the effort ;-)\n\nI think that was my point, that even in the lock manager, we would have\nstarvation problems and things would get very complicated. In\nhindsight, the idea of locking multiple tables in unison was just not\nreasonable in PostgreSQL at this time.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 2 Aug 2001 17:32:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Neil Padgett <npadgett@redhat.com> writes:\n> >> This seems fine to me (and in fact I thought we'd already agreed to it).\n> \n> > Here's the patch then. =)\n> \n> [ quick mental checklist... ] You forgot the documentation updates.\n> Otherwise it looks good.\n\nOk -- I made a go at the documentation. I'm no SGML wizard, so if\nsomeone could check my changes that would be helpful.\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n\nIndex: doc/src/sgml/ref/lock.sgml\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/lock.sgml,v\nretrieving revision 1.24\ndiff -c -p -r1.24 lock.sgml\n*** doc/src/sgml/ref/lock.sgml\t2001/07/09 22:18:33\t1.24\n--- doc/src/sgml/ref/lock.sgml\t2001/08/02 21:39:42\n*************** Postgres documentation\n*** 15,21 ****\n LOCK\n </refname>\n <refpurpose>\n! Explicitly lock a table inside a transaction\n </refpurpose>\n </refnamediv>\n <refsynopsisdiv>\n--- 15,21 ----\n LOCK\n </refname>\n <refpurpose>\n! Explicitly lock a table / tables inside a transaction\n </refpurpose>\n </refnamediv>\n <refsynopsisdiv>\n*************** Postgres documentation\n*** 23,30 ****\n <date>2001-07-09</date>\n </refsynopsisdivinfo>\n <synopsis>\n! LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable>\n! LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable> IN\n<replaceable class=\"PARAMETER\">lockmode</replaceable> MODE\n \n where <replaceable class=\"PARAMETER\">lockmode</replaceable> is one of:\n \n--- 23,30 ----\n <date>2001-07-09</date>\n </refsynopsisdivinfo>\n <synopsis>\n! LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable> [,\n...]\n! LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable> [,\n...] 
IN <replaceable class=\"PARAMETER\">lockmode</replaceable> MODE\n \n where <replaceable class=\"PARAMETER\">lockmode</replaceable> is one of:\n \n*************** ERROR <replaceable class=\"PARAMETER\">nam\n*** 373,378 ****\n--- 373,379 ----\n An example for this rule was given previously when discussing the \n use of SHARE ROW EXCLUSIVE mode rather than SHARE mode.\n </para>\n+ \n </listitem>\n </itemizedlist>\n \n*************** ERROR <replaceable class=\"PARAMETER\">nam\n*** 383,388 ****\n--- 384,395 ----\n </para>\n </note>\n \n+ <para>\n+ When locking multiple tables, the command LOCK a, b; is equivalent\nto LOCK\n+ a; LOCK b;. The tables are locked one-by-one in the order specified\nin the\n+ <command>LOCK</command> command.\n+ </para>\n+ \n <refsect2 id=\"R2-SQL-LOCK-3\">\n <refsect2info>\n <date>1999-06-08</date>\n*************** ERROR <replaceable class=\"PARAMETER\">nam\n*** 406,411 ****\n--- 413,419 ----\n <para>\n <command>LOCK</command> works only inside transactions.\n </para>\n+ \n </refsect2>\n </refsect1>\n \nIndex: src/backend/commands/command.c\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/backend/commands/command.c,v\nretrieving revision 1.136\ndiff -c -p -r1.136 command.c\n*** src/backend/commands/command.c\t2001/07/16 05:06:57\t1.136\n--- src/backend/commands/command.c\t2001/08/02 21:39:43\n*************** needs_toast_table(Relation rel)\n*** 1984,1991 ****\n \t\tMAXALIGN(data_length);\n \treturn (tuple_length > TOAST_TUPLE_THRESHOLD);\n }\n! \n! \n /*\n *\n * LOCK TABLE\n--- 1984,1990 ----\n \t\tMAXALIGN(data_length);\n \treturn (tuple_length > TOAST_TUPLE_THRESHOLD);\n }\n! \t\n /*\n *\n * LOCK TABLE\n*************** needs_toast_table(Relation rel)\n*** 1994,2019 ****\n void\n LockTableCommand(LockStmt *lockstmt)\n {\n! \tRelation\trel;\n! \tint\t\t\taclresult;\n \n! \trel = heap_openr(lockstmt->relname, NoLock);\n \n! \tif (rel->rd_rel->relkind != RELKIND_RELATION)\n! 
\t\telog(ERROR, \"LOCK TABLE: %s is not a table\", lockstmt->relname);\n \n! \tif (lockstmt->mode == AccessShareLock)\n! \t\taclresult = pg_aclcheck(lockstmt->relname, GetUserId(), ACL_SELECT);\n! \telse\n! \t\taclresult = pg_aclcheck(lockstmt->relname, GetUserId(),\n! \t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n \n! \tif (aclresult != ACLCHECK_OK)\n! \t\telog(ERROR, \"LOCK TABLE: permission denied\");\n \n! \tLockRelation(rel, lockstmt->mode);\n \n! \theap_close(rel, NoLock);\t/* close rel, keep lock */\n }\n \n \n--- 1993,2096 ----\n void\n LockTableCommand(LockStmt *lockstmt)\n {\n! \tint relCnt;\n! \n! \trelCnt = length(lockstmt -> rellist);\n! \n! \t/* Handle a single relation lock specially to avoid overhead on\nlikely the\n! \t most common case */\n! \n! \tif(relCnt == 1)\n! \t{\n! \n! \t\t/* Locking a single table */\n! \n! \t\tRelation\trel;\n! \t\tint\t\t\taclresult;\n! \t\tchar *relname;\n! \n! \t\trelname = strVal(lfirst(lockstmt->rellist));\n! \n! \t\tfreeList(lockstmt->rellist);\n! \n! \t\trel = heap_openr(relname, NoLock);\n! \n! \t\tif (rel->rd_rel->relkind != RELKIND_RELATION)\n! \t\t\telog(ERROR, \"LOCK TABLE: %s is not a table\", relname);\n! \n! \t\tif (lockstmt->mode == AccessShareLock)\n! \t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n! \t\t\t\t\t\t\t\t\tACL_SELECT);\n! \t\telse\n! \t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n! \t\t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n! \n! \t\tif (aclresult != ACLCHECK_OK)\n! \t\t\telog(ERROR, \"LOCK TABLE: permission denied\");\n! \n! \t\tLockRelation(rel, lockstmt->mode);\n! \n! \t\tpfree(relname);\n! \n! \t\theap_close(rel, NoLock);\t/* close rel, keep lock */\n! \t} \n! \telse \n! \t{\n! \t\tList *p;\n! \t\tRelation *RelationArray;\n! \t\tRelation *pRel;\n! \n! \t\t/* Locking multiple tables */\n! \n! \t\t/* Create an array of relations */\n! \n! \t\tRelationArray = palloc(relCnt * sizeof(Relation));\n! \t\tpRel = RelationArray;\n! \n! 
\t\t/* Iterate over the list and populate the relation array */\n! \n! \t\tforeach(p, lockstmt->rellist)\n! \t\t{\n! \t\t\tchar* relname = strVal(lfirst(p));\n! \t\t\tint\t\t\taclresult;\n! \n! \t\t\t*pRel = heap_openr(relname, NoLock);\n! \n! \t\t\tif ((*pRel)->rd_rel->relkind != RELKIND_RELATION)\n! \t\t\t\telog(ERROR, \"LOCK TABLE: %s is not a table\", \n! \t\t\t\t\t relname);\n! \n! \t\t\tif (lockstmt->mode == AccessShareLock)\n! \t\t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n! \t\t\t\t\t\t\t\t\t\tACL_SELECT);\n! \t\t\telse\n! \t\t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n! \t\t\t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n! \n! \t\t\tif (aclresult != ACLCHECK_OK)\n! \t\t\t\telog(ERROR, \"LOCK TABLE: permission denied\");\n \n! \t\t\tpRel++;\n! \t\t\tpfree(relname);\n! \t\t}\n \n! \t\t/* Now, lock all the relations, closing each after it is locked\n! \t\t (Keeping the locks)\n! \t\t */\n \n! \t\tfor(pRel = RelationArray;\n! \t\t\tpRel < RelationArray + relCnt;\n! \t\t\tpRel++)\n! \t\t\t{\n! \t\t\t\tLockRelation(*pRel, lockstmt->mode);\n \n! \t\t\t\theap_close(*pRel, NoLock);\n! \t\t\t}\n \n! \t\t/* Free the relation array */\n \n! \t\tpfree(RelationArray);\n! \t}\n }\n \n \nIndex: src/backend/nodes/copyfuncs.c\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/backend/nodes/copyfuncs.c,v\nretrieving revision 1.148\ndiff -c -p -r1.148 copyfuncs.c\n*** src/backend/nodes/copyfuncs.c\t2001/07/16 19:07:37\t1.148\n--- src/backend/nodes/copyfuncs.c\t2001/08/02 21:39:44\n*************** _copyLockStmt(LockStmt *from)\n*** 2425,2432 ****\n {\n \tLockStmt *newnode = makeNode(LockStmt);\n \n! \tif (from->relname)\n! \t\tnewnode->relname = pstrdup(from->relname);\n \tnewnode->mode = from->mode;\n \n \treturn newnode;\n--- 2425,2432 ----\n {\n \tLockStmt *newnode = makeNode(LockStmt);\n \n! \tNode_Copy(from, newnode, rellist);\n! 
\t\n \tnewnode->mode = from->mode;\n \n \treturn newnode;\nIndex: src/backend/nodes/equalfuncs.c\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/backend/nodes/equalfuncs.c,v\nretrieving revision 1.96\ndiff -c -p -r1.96 equalfuncs.c\n*** src/backend/nodes/equalfuncs.c\t2001/07/16 19:07:38\t1.96\n--- src/backend/nodes/equalfuncs.c\t2001/08/02 21:39:44\n*************** _equalDropUserStmt(DropUserStmt *a, Drop\n*** 1283,1289 ****\n static bool\n _equalLockStmt(LockStmt *a, LockStmt *b)\n {\n! \tif (!equalstr(a->relname, b->relname))\n \t\treturn false;\n \tif (a->mode != b->mode)\n \t\treturn false;\n--- 1283,1289 ----\n static bool\n _equalLockStmt(LockStmt *a, LockStmt *b)\n {\n! \tif (!equal(a->rellist, b->rellist))\n \t\treturn false;\n \tif (a->mode != b->mode)\n \t\treturn false;\nIndex: src/backend/parser/gram.y\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.238\ndiff -c -p -r2.238 gram.y\n*** src/backend/parser/gram.y\t2001/07/16 19:07:40\t2.238\n--- src/backend/parser/gram.y\t2001/08/02 21:39:46\n*************** DeleteStmt: DELETE FROM relation_expr w\n*** 3280,3290 ****\n \t\t\t\t}\n \t\t;\n \n! LockStmt:\tLOCK_P opt_table relation_name opt_lock\n \t\t\t\t{\n \t\t\t\t\tLockStmt *n = makeNode(LockStmt);\n \n! \t\t\t\t\tn->relname = $3;\n \t\t\t\t\tn->mode = $4;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\n--- 3280,3290 ----\n \t\t\t\t}\n \t\t;\n \n! LockStmt:\tLOCK_P opt_table relation_name_list opt_lock\n \t\t\t\t{\n \t\t\t\t\tLockStmt *n = makeNode(LockStmt);\n \n! 
\t\t\t\t\tn->rellist = $3;\n \t\t\t\t\tn->mode = $4;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\nIndex: src/include/nodes/parsenodes.h\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\nretrieving revision 1.136\ndiff -c -p -r1.136 parsenodes.h\n*** src/include/nodes/parsenodes.h\t2001/07/16 19:07:40\t1.136\n--- src/include/nodes/parsenodes.h\t2001/08/02 21:39:46\n*************** typedef struct VariableResetStmt\n*** 760,766 ****\n typedef struct LockStmt\n {\n \tNodeTag\t\ttype;\n! \tchar\t *relname;\t\t/* relation to lock */\n \tint\t\t\tmode;\t\t\t/* lock mode */\n } LockStmt;\n \n--- 760,766 ----\n typedef struct LockStmt\n {\n \tNodeTag\t\ttype;\n! \tList\t *rellist;\t\t/* relations to lock */\n \tint\t\t\tmode;\t\t\t/* lock mode */\n } LockStmt;\n \nIndex: src/interfaces/ecpg/preproc/preproc.y\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/interfaces/ecpg/preproc/preproc.y,v\nretrieving revision 1.146\ndiff -c -p -r1.146 preproc.y\n*** src/interfaces/ecpg/preproc/preproc.y\t2001/07/16 05:07:00\t1.146\n--- src/interfaces/ecpg/preproc/preproc.y\t2001/08/02 21:39:49\n*************** DeleteStmt: DELETE FROM relation_expr w\n*** 2421,2427 ****\n \t\t\t\t}\n \t\t;\n \n! LockStmt: LOCK_P opt_table relation_name opt_lock\n \t\t\t\t{\n \t\t\t\t\t$$ = cat_str(4, make_str(\"lock\"), $2, $3, $4);\n \t\t\t\t}\n--- 2421,2427 ----\n \t\t\t\t}\n \t\t;\n \n! LockStmt: LOCK_P opt_table relation_name_list opt_lock\n \t\t\t\t{\n \t\t\t\t\t$$ = cat_str(4, make_str(\"lock\"), $2, $3, $4);\n \t\t\t\t}\n",
"msg_date": "Thu, 02 Aug 2001 18:20:54 -0400",
"msg_from": "Neil Padgett <npadgett@redhat.com>",
"msg_from_op": true,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Tom Lane wrote:\n> > \n> > Fernando Nasser <fnasser@cygnus.com> writes:\n> > > What about having the syntax\n> > > LOCK a,b,c;\n> > > now just as a shorthand for\n> > > LOCK a;\n> > > LOCK b;\n> > > LOCK c;\n> > > This would save typing and allow for Oracle compatibility.\n> > \n> > This seems fine to me (and in fact I thought we'd already agreed to it).\n> \n> Here's the patch then. =)\n> \n> Neil\n> \n> -- \n> Neil Padgett\n> Red Hat Canada Ltd. E-Mail: npadgett@redhat.com\n> 2323 Yonge Street, Suite #300, \n> Toronto, ON M4P 2C9\n> \n> Index: src/backend/commands/command.c\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/backend/commands/command.c,v\n> retrieving revision 1.136\n> diff -c -p -r1.136 command.c\n> *** src/backend/commands/command.c\t2001/07/16 05:06:57\t1.136\n> --- src/backend/commands/command.c\t2001/07/31 22:04:05\n> *************** needs_toast_table(Relation rel)\n> *** 1984,1991 ****\n> \t\tMAXALIGN(data_length);\n> \treturn (tuple_length > TOAST_TUPLE_THRESHOLD);\n> }\n> ! \n> ! \n> /*\n> *\n> * LOCK TABLE\n> --- 1984,1990 ----\n> \t\tMAXALIGN(data_length);\n> \treturn (tuple_length > TOAST_TUPLE_THRESHOLD);\n> }\n> ! \t\n> /*\n> *\n> * LOCK TABLE\n> *************** needs_toast_table(Relation rel)\n> *** 1994,2019 ****\n> void\n> LockTableCommand(LockStmt *lockstmt)\n> {\n> ! \tRelation\trel;\n> ! \tint\t\t\taclresult;\n> \n> ! \trel = heap_openr(lockstmt->relname, NoLock);\n> \n> ! \tif (rel->rd_rel->relkind != RELKIND_RELATION)\n> ! \t\telog(ERROR, \"LOCK TABLE: %s is not a table\", lockstmt->relname);\n> \n> ! \tif (lockstmt->mode == AccessShareLock)\n> ! \t\taclresult = pg_aclcheck(lockstmt->relname, GetUserId(), ACL_SELECT);\n> ! \telse\n> ! 
\t\taclresult = pg_aclcheck(lockstmt->relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n> \n> ! \tif (aclresult != ACLCHECK_OK)\n> ! \t\telog(ERROR, \"LOCK TABLE: permission denied\");\n> \n> ! \tLockRelation(rel, lockstmt->mode);\n> \n> ! \theap_close(rel, NoLock);\t/* close rel, keep lock */\n> }\n> \n> \n> --- 1993,2096 ----\n> void\n> LockTableCommand(LockStmt *lockstmt)\n> {\n> ! \tint relCnt;\n> ! \n> ! \trelCnt = length(lockstmt -> rellist);\n> ! \n> ! \t/* Handle a single relation lock specially to avoid overhead on\n> likely the\n> ! \t most common case */\n> ! \n> ! \tif(relCnt == 1)\n> ! \t{\n> ! \n> ! \t\t/* Locking a single table */\n> ! \n> ! \t\tRelation\trel;\n> ! \t\tint\t\t\taclresult;\n> ! \t\tchar *relname;\n> ! \n> ! \t\trelname = strVal(lfirst(lockstmt->rellist));\n> ! \n> ! \t\tfreeList(lockstmt->rellist);\n> ! \n> ! \t\trel = heap_openr(relname, NoLock);\n> ! \n> ! \t\tif (rel->rd_rel->relkind != RELKIND_RELATION)\n> ! \t\t\telog(ERROR, \"LOCK TABLE: %s is not a table\", relname);\n> ! \n> ! \t\tif (lockstmt->mode == AccessShareLock)\n> ! \t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\t\tACL_SELECT);\n> ! \t\telse\n> ! \t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n> ! \n> ! \t\tif (aclresult != ACLCHECK_OK)\n> ! \t\t\telog(ERROR, \"LOCK TABLE: permission denied\");\n> ! \n> ! \t\tLockRelation(rel, lockstmt->mode);\n> ! \n> ! \t\tpfree(relname);\n> ! \n> ! \t\theap_close(rel, NoLock);\t/* close rel, keep lock */\n> ! \t} \n> ! \telse \n> ! \t{\n> ! \t\tList *p;\n> ! \t\tRelation *RelationArray;\n> ! \t\tRelation *pRel;\n> ! \n> ! \t\t/* Locking multiple tables */\n> ! \n> ! \t\t/* Create an array of relations */\n> ! \n> ! \t\tRelationArray = palloc(relCnt * sizeof(Relation));\n> ! \t\tpRel = RelationArray;\n> ! \n> ! \t\t/* Iterate over the list and populate the relation array */\n> ! \n> ! \t\tforeach(p, lockstmt->rellist)\n> ! \t\t{\n> ! 
\t\t\tchar* relname = strVal(lfirst(p));\n> ! \t\t\tint\t\t\taclresult;\n> ! \n> ! \t\t\t*pRel = heap_openr(relname, NoLock);\n> ! \n> ! \t\t\tif ((*pRel)->rd_rel->relkind != RELKIND_RELATION)\n> ! \t\t\t\telog(ERROR, \"LOCK TABLE: %s is not a table\", \n> ! \t\t\t\t\t relname);\n> ! \n> ! \t\t\tif (lockstmt->mode == AccessShareLock)\n> ! \t\t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\t\t\tACL_SELECT);\n> ! \t\t\telse\n> ! \t\t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n> ! \n> ! \t\t\tif (aclresult != ACLCHECK_OK)\n> ! \t\t\t\telog(ERROR, \"LOCK TABLE: permission denied\");\n> \n> ! \t\t\tpRel++;\n> ! \t\t\tpfree(relname);\n> ! \t\t}\n> \n> ! \t\t/* Now, lock all the relations, closing each after it is locked\n> ! \t\t (Keeping the locks)\n> ! \t\t */\n> \n> ! \t\tfor(pRel = RelationArray;\n> ! \t\t\tpRel < RelationArray + relCnt;\n> ! \t\t\tpRel++)\n> ! \t\t\t{\n> ! \t\t\t\tLockRelation(*pRel, lockstmt->mode);\n> \n> ! \t\t\t\theap_close(*pRel, NoLock);\n> ! \t\t\t}\n> \n> ! \t\t/* Free the relation array */\n> \n> ! \t\tpfree(RelationArray);\n> ! \t}\n> }\n> \n> \n> Index: src/backend/nodes/copyfuncs.c\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/backend/nodes/copyfuncs.c,v\n> retrieving revision 1.148\n> diff -c -p -r1.148 copyfuncs.c\n> *** src/backend/nodes/copyfuncs.c\t2001/07/16 19:07:37\t1.148\n> --- src/backend/nodes/copyfuncs.c\t2001/07/31 22:04:06\n> *************** _copyLockStmt(LockStmt *from)\n> *** 2425,2432 ****\n> {\n> \tLockStmt *newnode = makeNode(LockStmt);\n> \n> ! \tif (from->relname)\n> ! \t\tnewnode->relname = pstrdup(from->relname);\n> \tnewnode->mode = from->mode;\n> \n> \treturn newnode;\n> --- 2425,2432 ----\n> {\n> \tLockStmt *newnode = makeNode(LockStmt);\n> \n> ! \tNode_Copy(from, newnode, rellist);\n> ! 
\t\n> \tnewnode->mode = from->mode;\n> \n> \treturn newnode;\n> Index: src/backend/nodes/equalfuncs.c\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/backend/nodes/equalfuncs.c,v\n> retrieving revision 1.96\n> diff -c -p -r1.96 equalfuncs.c\n> *** src/backend/nodes/equalfuncs.c\t2001/07/16 19:07:38\t1.96\n> --- src/backend/nodes/equalfuncs.c\t2001/07/31 22:04:06\n> *************** _equalDropUserStmt(DropUserStmt *a, Drop\n> *** 1283,1289 ****\n> static bool\n> _equalLockStmt(LockStmt *a, LockStmt *b)\n> {\n> ! \tif (!equalstr(a->relname, b->relname))\n> \t\treturn false;\n> \tif (a->mode != b->mode)\n> \t\treturn false;\n> --- 1283,1289 ----\n> static bool\n> _equalLockStmt(LockStmt *a, LockStmt *b)\n> {\n> ! \tif (!equal(a->rellist, b->rellist))\n> \t\treturn false;\n> \tif (a->mode != b->mode)\n> \t\treturn false;\n> Index: src/backend/parser/gram.y\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/gram.y,v\n> retrieving revision 2.238\n> diff -c -p -r2.238 gram.y\n> *** src/backend/parser/gram.y\t2001/07/16 19:07:40\t2.238\n> --- src/backend/parser/gram.y\t2001/07/31 22:04:10\n> *************** DeleteStmt: DELETE FROM relation_expr w\n> *** 3280,3290 ****\n> \t\t\t\t}\n> \t\t;\n> \n> ! LockStmt:\tLOCK_P opt_table relation_name opt_lock\n> \t\t\t\t{\n> \t\t\t\t\tLockStmt *n = makeNode(LockStmt);\n> \n> ! \t\t\t\t\tn->relname = $3;\n> \t\t\t\t\tn->mode = $4;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> --- 3280,3290 ----\n> \t\t\t\t}\n> \t\t;\n> \n> ! LockStmt:\tLOCK_P opt_table relation_name_list opt_lock\n> \t\t\t\t{\n> \t\t\t\t\tLockStmt *n = makeNode(LockStmt);\n> \n> ! 
\t\t\t\t\tn->rellist = $3;\n> \t\t\t\t\tn->mode = $4;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> Index: src/include/nodes/parsenodes.h\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\n> retrieving revision 1.136\n> diff -c -p -r1.136 parsenodes.h\n> *** src/include/nodes/parsenodes.h\t2001/07/16 19:07:40\t1.136\n> --- src/include/nodes/parsenodes.h\t2001/07/31 22:04:11\n> *************** typedef struct VariableResetStmt\n> *** 760,766 ****\n> typedef struct LockStmt\n> {\n> \tNodeTag\t\ttype;\n> ! \tchar\t *relname;\t\t/* relation to lock */\n> \tint\t\t\tmode;\t\t\t/* lock mode */\n> } LockStmt;\n> \n> --- 760,766 ----\n> typedef struct LockStmt\n> {\n> \tNodeTag\t\ttype;\n> ! \tList\t *rellist;\t\t/* relations to lock */\n> \tint\t\t\tmode;\t\t\t/* lock mode */\n> } LockStmt;\n> \n> Index: src/interfaces/ecpg/preproc/preproc.y\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/interfaces/ecpg/preproc/preproc.y,v\n> retrieving revision 1.146\n> diff -c -p -r1.146 preproc.y\n> *** src/interfaces/ecpg/preproc/preproc.y\t2001/07/16 05:07:00\t1.146\n> --- src/interfaces/ecpg/preproc/preproc.y\t2001/07/31 22:04:14\n> *************** DeleteStmt: DELETE FROM relation_expr w\n> *** 2421,2427 ****\n> \t\t\t\t}\n> \t\t;\n> \n> ! LockStmt: LOCK_P opt_table relation_name opt_lock\n> \t\t\t\t{\n> \t\t\t\t\t$$ = cat_str(4, make_str(\"lock\"), $2, $3, $4);\n> \t\t\t\t}\n> --- 2421,2427 ----\n> \t\t\t\t}\n> \t\t;\n> \n> ! 
LockStmt: LOCK_P opt_table relation_name_list opt_lock\n> \t\t\t\t{\n> \t\t\t\t\t$$ = cat_str(4, make_str(\"lock\"), $2, $3, $4);\n> \t\t\t\t}\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 2 Aug 2001 18:23:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Fernando Nasser <fnasser@cygnus.com> writes:\n> > I guess the principle (for Oracle folks) was that, for the user, there should\n> > be no distinction between a real table and a view. Thus, it should not matter\n> > for the user if the thing that is being locked is a real table or if it\n> > is actually being implemented as a view. Consider that it may have been\n> > a table one day, but the DBA changed it into a view. So that SQL will\n> > not work anymore and give the \"ERROR: LOCK TABLE: v is not a table\" message.\n> > This violates the Data Independence notion.\n> \n> I don't really buy this, because it makes life difficult for DBAs who\n> want to do creative things with views. Update rules don't necessarily\n> touch exactly the same set of tables that are mentioned in the select\n> rule. But that's the only set that a LOCK implementation might possibly\n> know about.\n> \n> Consider: for the view as view (ie, select) there's no real need to do\n> locking at all. The implicit read locks that will be grabbed as the\n> view is expanded will do fine. For updates, the behavior can and should\n> be defined by the rewrite rules that the DBA supplies. (Hmm, I'm not\n> sure that LOCK is one of the allowed query types in a rule --- if not,\n> it probably should be, so that the rule author can ensure the right\n> kinds of locks are grabbed in the right sequence.)\n> \n\nThese are good points. I suppose Oracle needs this because they\nhave DBMS-implemented updatable views (not with rules as we do).\n\nBTW, it seems we have a SQL non-conformance issue here: views that are\nonly projections+selections of a single base table are SQL-updatable. 
\nWe should allow updates to those by rewriting them to refer to the base table.\nAnd instead of just ignoring updates (unless we have rules in place) for\nnon-updatable views we should print some error like \n \"ERROR: attempt to modify non-updatable view\".\n\n\n> Another serious issue, which gets back to your original point, is that\n> we have no good idea what order to lock the base tables in. If we had\n> a concurrent-lock implementation it wouldn't matter, but in the absence\n> of one I am not sure it's a good idea to put in a LOCK that is going to\n> lock base tables in some arbitrary order.\n> \n\nThis is true. It should not be allowed (as it is not useful, as you've\npointed out) for non-updatable views.\n\n\n\n-- \nFernando Nasser\nRed Hat - Toronto E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Thu, 02 Aug 2001 19:06:20 -0400",
"msg_from": "Fernando Nasser <fnasser@cygnus.com>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "Fernando Nasser <fnasser@cygnus.com> writes:\n> BTW, it seems we have a SQL non-conformance issue here: views that are\n> only projections+selections of a single base table are SQL-updatable.\n\nIndeed. In Postgres terms I think this means that if a CREATE VIEW\ndescribes a view that meets the spec's constraints to be \"updatable\",\nwe should automatically create a default set of insert/update/delete\nrules for it. This is (or should be) on the TODO list.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 19:26:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\" "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Fernando Nasser <fnasser@cygnus.com> writes:\n> > BTW, it seems we have a SQL non-conformance issue here: views that are\n> > only projections+selections of a single base table are SQL-updatable.\n> \n> Indeed. In Postgres terms I think this means that if a CREATE VIEW\n> describes a view that meets the spec's constraints to be \"updatable\",\n> we should automatically create a default set of insert/update/delete\n> rules for it. This is (or should be) on the TODO list.\n> \n\nAgreed. \n\nWe should also emit an error if someone tries to update a non-updatable view\n(i.e., it is a view and there is no user defined rules for that update operation).\nSilently ignoring the update scares me and I bet it is not what the standard\nwould tell us to do. Any suggestion on how can we do this? I thought of\nadding default rules for those cases so they generate the error. But we would\nneed an error() function or something to invoke from there. \n\n-- \nFernando Nasser\nRed Hat - Toronto E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Thu, 02 Aug 2001 19:58:25 -0400",
"msg_from": "Fernando Nasser <fnasser@cygnus.com>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "[ it's past time to move this thread over to pghackers ]\n\nFernando Nasser <fnasser@cygnus.com> writes:\n> Tom Lane wrote:\n>> Fernando Nasser <fnasser@cygnus.com> writes:\n>>> BTW, it seems we have a SQL non-conformance issue here: views that are\n>>> only projections+selections of a single base table are SQL-updatable.\n>> \n>> Indeed. In Postgres terms I think this means that if a CREATE VIEW\n>> describes a view that meets the spec's constraints to be \"updatable\",\n>> we should automatically create a default set of insert/update/delete\n>> rules for it. This is (or should be) on the TODO list.\n\n> Agreed. \n\n> We should also emit an error if someone tries to update a\n> non-updatable view (i.e., it is a view and there is no user defined\n> rules for that update operation). Silently ignoring the update scares\n> me and I bet it is not what the standard would tell us to do. Any\n> suggestion on how can we do this?\n\nIt's already there as of 7.1:\n\nregression=# create view v as select * from a;\nCREATE\nregression=# insert into v default values;\nERROR: Cannot insert into a view without an appropriate rule\nregression=#\n\nThe parts of the behavior that actually need some debate are what the\ninteraction should be between default rules and explicitly created rules\n--- in particular, how not to break existing pg_dump scripts. Here's\na very off-the-cuff suggestion that might or might not survive scrutiny:\n\n1. Add an \"is_default\" boolean column to pg_rewrite. This will always\nbe FALSE for entries made by explicit CREATE RULE commands, but will be\nTRUE for entries created automatically when a CREATE VIEW is done for an\nupdatable view.\n\n2. When a CREATE RULE is done, look to see if there is an is_default\nrule for the same ev_class and ev_type (ie, same target table/view\nand same action type). If so, delete it. This allows CREATE RULE\nfollowing CREATE VIEW to override the default rules. 
A variant is to\ndelete *all* default rules for the target object regardless of action\ntype --- this might be safer, on the theory that if you have a\nnondefault ON INSERT rule you likely don't want a default ON DELETE.\n\n3. pg_dump would ignore (ie, not dump) is_default rules, knowing that\nthey'd get remade by CREATE VIEW. This prevents default rules from\nbecoming \"real\" rules after a dump/reload cycle.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 20:11:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Rules for updatable views (was Re: [PATCHES] Revised Patch to allow\n\tmultiple table locks in \"Unison\")"
},
{
"msg_contents": "\nPatch applied. Thanks. I had to edit it because the patch contents wrapped in\nthe email message.\n\n> Tom Lane wrote:\n> > \n> > Neil Padgett <npadgett@redhat.com> writes:\n> > >> This seems fine to me (and in fact I thought we'd already agreed to it).\n> > \n> > > Here's the patch then. =)\n> > \n> > [ quick mental checklist... ] You forgot the documentation updates.\n> > Otherwise it looks good.\n> \n> Ok -- I made a go at the documentation. I'm no SGML wizard, so if\n> someone could check my changes that would be helpful.\n> \n> Neil\n> \n> -- \n> Neil Padgett\n> Red Hat Canada Ltd. E-Mail: npadgett@redhat.com\n> 2323 Yonge Street, Suite #300, \n> Toronto, ON M4P 2C9\n> \n> Index: doc/src/sgml/ref/lock.sgml\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/lock.sgml,v\n> retrieving revision 1.24\n> diff -c -p -r1.24 lock.sgml\n> *** doc/src/sgml/ref/lock.sgml\t2001/07/09 22:18:33\t1.24\n> --- doc/src/sgml/ref/lock.sgml\t2001/08/02 21:39:42\n> *************** Postgres documentation\n> *** 15,21 ****\n> LOCK\n> </refname>\n> <refpurpose>\n> ! Explicitly lock a table inside a transaction\n> </refpurpose>\n> </refnamediv>\n> <refsynopsisdiv>\n> --- 15,21 ----\n> LOCK\n> </refname>\n> <refpurpose>\n> ! Explicitly lock a table / tables inside a transaction\n> </refpurpose>\n> </refnamediv>\n> <refsynopsisdiv>\n> *************** Postgres documentation\n> *** 23,30 ****\n> <date>2001-07-09</date>\n> </refsynopsisdivinfo>\n> <synopsis>\n> ! LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable>\n> ! LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable> IN\n> <replaceable class=\"PARAMETER\">lockmode</replaceable> MODE\n> \n> where <replaceable class=\"PARAMETER\">lockmode</replaceable> is one of:\n> \n> --- 23,30 ----\n> <date>2001-07-09</date>\n> </refsynopsisdivinfo>\n> <synopsis>\n> ! 
LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable> [,\n> ...]\n> ! LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable> [,\n> ...] IN <replaceable class=\"PARAMETER\">lockmode</replaceable> MODE\n> \n> where <replaceable class=\"PARAMETER\">lockmode</replaceable> is one of:\n> \n> *************** ERROR <replaceable class=\"PARAMETER\">nam\n> *** 373,378 ****\n> --- 373,379 ----\n> An example for this rule was given previously when discussing the \n> use of SHARE ROW EXCLUSIVE mode rather than SHARE mode.\n> </para>\n> + \n> </listitem>\n> </itemizedlist>\n> \n> *************** ERROR <replaceable class=\"PARAMETER\">nam\n> *** 383,388 ****\n> --- 384,395 ----\n> </para>\n> </note>\n> \n> + <para>\n> + When locking multiple tables, the command LOCK a, b; is equivalent\n> to LOCK\n> + a; LOCK b;. The tables are locked one-by-one in the order specified\n> in the\n> + <command>LOCK</command> command.\n> + </para>\n> + \n> <refsect2 id=\"R2-SQL-LOCK-3\">\n> <refsect2info>\n> <date>1999-06-08</date>\n> *************** ERROR <replaceable class=\"PARAMETER\">nam\n> *** 406,411 ****\n> --- 413,419 ----\n> <para>\n> <command>LOCK</command> works only inside transactions.\n> </para>\n> + \n> </refsect2>\n> </refsect1>\n> \n> Index: src/backend/commands/command.c\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/backend/commands/command.c,v\n> retrieving revision 1.136\n> diff -c -p -r1.136 command.c\n> *** src/backend/commands/command.c\t2001/07/16 05:06:57\t1.136\n> --- src/backend/commands/command.c\t2001/08/02 21:39:43\n> *************** needs_toast_table(Relation rel)\n> *** 1984,1991 ****\n> \t\tMAXALIGN(data_length);\n> \treturn (tuple_length > TOAST_TUPLE_THRESHOLD);\n> }\n> ! \n> ! \n> /*\n> *\n> * LOCK TABLE\n> --- 1984,1990 ----\n> \t\tMAXALIGN(data_length);\n> \treturn (tuple_length > TOAST_TUPLE_THRESHOLD);\n> }\n> ! 
\t\n> /*\n> *\n> * LOCK TABLE\n> *************** needs_toast_table(Relation rel)\n> *** 1994,2019 ****\n> void\n> LockTableCommand(LockStmt *lockstmt)\n> {\n> ! \tRelation\trel;\n> ! \tint\t\t\taclresult;\n> \n> ! \trel = heap_openr(lockstmt->relname, NoLock);\n> \n> ! \tif (rel->rd_rel->relkind != RELKIND_RELATION)\n> ! \t\telog(ERROR, \"LOCK TABLE: %s is not a table\", lockstmt->relname);\n> \n> ! \tif (lockstmt->mode == AccessShareLock)\n> ! \t\taclresult = pg_aclcheck(lockstmt->relname, GetUserId(), ACL_SELECT);\n> ! \telse\n> ! \t\taclresult = pg_aclcheck(lockstmt->relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n> \n> ! \tif (aclresult != ACLCHECK_OK)\n> ! \t\telog(ERROR, \"LOCK TABLE: permission denied\");\n> \n> ! \tLockRelation(rel, lockstmt->mode);\n> \n> ! \theap_close(rel, NoLock);\t/* close rel, keep lock */\n> }\n> \n> \n> --- 1993,2096 ----\n> void\n> LockTableCommand(LockStmt *lockstmt)\n> {\n> ! \tint relCnt;\n> ! \n> ! \trelCnt = length(lockstmt -> rellist);\n> ! \n> ! \t/* Handle a single relation lock specially to avoid overhead on\n> likely the\n> ! \t most common case */\n> ! \n> ! \tif(relCnt == 1)\n> ! \t{\n> ! \n> ! \t\t/* Locking a single table */\n> ! \n> ! \t\tRelation\trel;\n> ! \t\tint\t\t\taclresult;\n> ! \t\tchar *relname;\n> ! \n> ! \t\trelname = strVal(lfirst(lockstmt->rellist));\n> ! \n> ! \t\tfreeList(lockstmt->rellist);\n> ! \n> ! \t\trel = heap_openr(relname, NoLock);\n> ! \n> ! \t\tif (rel->rd_rel->relkind != RELKIND_RELATION)\n> ! \t\t\telog(ERROR, \"LOCK TABLE: %s is not a table\", relname);\n> ! \n> ! \t\tif (lockstmt->mode == AccessShareLock)\n> ! \t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\t\tACL_SELECT);\n> ! \t\telse\n> ! \t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n> ! \n> ! \t\tif (aclresult != ACLCHECK_OK)\n> ! \t\t\telog(ERROR, \"LOCK TABLE: permission denied\");\n> ! \n> ! 
\t\tLockRelation(rel, lockstmt->mode);\n> ! \n> ! \t\tpfree(relname);\n> ! \n> ! \t\theap_close(rel, NoLock);\t/* close rel, keep lock */\n> ! \t} \n> ! \telse \n> ! \t{\n> ! \t\tList *p;\n> ! \t\tRelation *RelationArray;\n> ! \t\tRelation *pRel;\n> ! \n> ! \t\t/* Locking multiple tables */\n> ! \n> ! \t\t/* Create an array of relations */\n> ! \n> ! \t\tRelationArray = palloc(relCnt * sizeof(Relation));\n> ! \t\tpRel = RelationArray;\n> ! \n> ! \t\t/* Iterate over the list and populate the relation array */\n> ! \n> ! \t\tforeach(p, lockstmt->rellist)\n> ! \t\t{\n> ! \t\t\tchar* relname = strVal(lfirst(p));\n> ! \t\t\tint\t\t\taclresult;\n> ! \n> ! \t\t\t*pRel = heap_openr(relname, NoLock);\n> ! \n> ! \t\t\tif ((*pRel)->rd_rel->relkind != RELKIND_RELATION)\n> ! \t\t\t\telog(ERROR, \"LOCK TABLE: %s is not a table\", \n> ! \t\t\t\t\t relname);\n> ! \n> ! \t\t\tif (lockstmt->mode == AccessShareLock)\n> ! \t\t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\t\t\tACL_SELECT);\n> ! \t\t\telse\n> ! \t\t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n> ! \n> ! \t\t\tif (aclresult != ACLCHECK_OK)\n> ! \t\t\t\telog(ERROR, \"LOCK TABLE: permission denied\");\n> \n> ! \t\t\tpRel++;\n> ! \t\t\tpfree(relname);\n> ! \t\t}\n> \n> ! \t\t/* Now, lock all the relations, closing each after it is locked\n> ! \t\t (Keeping the locks)\n> ! \t\t */\n> \n> ! \t\tfor(pRel = RelationArray;\n> ! \t\t\tpRel < RelationArray + relCnt;\n> ! \t\t\tpRel++)\n> ! \t\t\t{\n> ! \t\t\t\tLockRelation(*pRel, lockstmt->mode);\n> \n> ! \t\t\t\theap_close(*pRel, NoLock);\n> ! \t\t\t}\n> \n> ! \t\t/* Free the relation array */\n> \n> ! \t\tpfree(RelationArray);\n> ! 
\t}\n> }\n> \n> \n> Index: src/backend/nodes/copyfuncs.c\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/backend/nodes/copyfuncs.c,v\n> retrieving revision 1.148\n> diff -c -p -r1.148 copyfuncs.c\n> *** src/backend/nodes/copyfuncs.c\t2001/07/16 19:07:37\t1.148\n> --- src/backend/nodes/copyfuncs.c\t2001/08/02 21:39:44\n> *************** _copyLockStmt(LockStmt *from)\n> *** 2425,2432 ****\n> {\n> \tLockStmt *newnode = makeNode(LockStmt);\n> \n> ! \tif (from->relname)\n> ! \t\tnewnode->relname = pstrdup(from->relname);\n> \tnewnode->mode = from->mode;\n> \n> \treturn newnode;\n> --- 2425,2432 ----\n> {\n> \tLockStmt *newnode = makeNode(LockStmt);\n> \n> ! \tNode_Copy(from, newnode, rellist);\n> ! \t\n> \tnewnode->mode = from->mode;\n> \n> \treturn newnode;\n> Index: src/backend/nodes/equalfuncs.c\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/backend/nodes/equalfuncs.c,v\n> retrieving revision 1.96\n> diff -c -p -r1.96 equalfuncs.c\n> *** src/backend/nodes/equalfuncs.c\t2001/07/16 19:07:38\t1.96\n> --- src/backend/nodes/equalfuncs.c\t2001/08/02 21:39:44\n> *************** _equalDropUserStmt(DropUserStmt *a, Drop\n> *** 1283,1289 ****\n> static bool\n> _equalLockStmt(LockStmt *a, LockStmt *b)\n> {\n> ! \tif (!equalstr(a->relname, b->relname))\n> \t\treturn false;\n> \tif (a->mode != b->mode)\n> \t\treturn false;\n> --- 1283,1289 ----\n> static bool\n> _equalLockStmt(LockStmt *a, LockStmt *b)\n> {\n> ! 
\tif (!equal(a->rellist, b->rellist))\n> \t\treturn false;\n> \tif (a->mode != b->mode)\n> \t\treturn false;\n> Index: src/backend/parser/gram.y\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/gram.y,v\n> retrieving revision 2.238\n> diff -c -p -r2.238 gram.y\n> *** src/backend/parser/gram.y\t2001/07/16 19:07:40\t2.238\n> --- src/backend/parser/gram.y\t2001/08/02 21:39:46\n> *************** DeleteStmt: DELETE FROM relation_expr w\n> *** 3280,3290 ****\n> \t\t\t\t}\n> \t\t;\n> \n> ! LockStmt:\tLOCK_P opt_table relation_name opt_lock\n> \t\t\t\t{\n> \t\t\t\t\tLockStmt *n = makeNode(LockStmt);\n> \n> ! \t\t\t\t\tn->relname = $3;\n> \t\t\t\t\tn->mode = $4;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> --- 3280,3290 ----\n> \t\t\t\t}\n> \t\t;\n> \n> ! LockStmt:\tLOCK_P opt_table relation_name_list opt_lock\n> \t\t\t\t{\n> \t\t\t\t\tLockStmt *n = makeNode(LockStmt);\n> \n> ! \t\t\t\t\tn->rellist = $3;\n> \t\t\t\t\tn->mode = $4;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> Index: src/include/nodes/parsenodes.h\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\n> retrieving revision 1.136\n> diff -c -p -r1.136 parsenodes.h\n> *** src/include/nodes/parsenodes.h\t2001/07/16 19:07:40\t1.136\n> --- src/include/nodes/parsenodes.h\t2001/08/02 21:39:46\n> *************** typedef struct VariableResetStmt\n> *** 760,766 ****\n> typedef struct LockStmt\n> {\n> \tNodeTag\t\ttype;\n> ! \tchar\t *relname;\t\t/* relation to lock */\n> \tint\t\t\tmode;\t\t\t/* lock mode */\n> } LockStmt;\n> \n> --- 760,766 ----\n> typedef struct LockStmt\n> {\n> \tNodeTag\t\ttype;\n> ! 
\tList\t *rellist;\t\t/* relations to lock */\n> \tint\t\t\tmode;\t\t\t/* lock mode */\n> } LockStmt;\n> \n> Index: src/interfaces/ecpg/preproc/preproc.y\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/interfaces/ecpg/preproc/preproc.y,v\n> retrieving revision 1.146\n> diff -c -p -r1.146 preproc.y\n> *** src/interfaces/ecpg/preproc/preproc.y\t2001/07/16 05:07:00\t1.146\n> --- src/interfaces/ecpg/preproc/preproc.y\t2001/08/02 21:39:49\n> *************** DeleteStmt: DELETE FROM relation_expr w\n> *** 2421,2427 ****\n> \t\t\t\t}\n> \t\t;\n> \n> ! LockStmt: LOCK_P opt_table relation_name opt_lock\n> \t\t\t\t{\n> \t\t\t\t\t$$ = cat_str(4, make_str(\"lock\"), $2, $3, $4);\n> \t\t\t\t}\n> --- 2421,2427 ----\n> \t\t\t\t}\n> \t\t;\n> \n> ! LockStmt: LOCK_P opt_table relation_name_list opt_lock\n> \t\t\t\t{\n> \t\t\t\t\t$$ = cat_str(4, make_str(\"lock\"), $2, $3, $4);\n> \t\t\t\t}\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Aug 2001 15:39:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Patch applied.\n\nIdly looking this over again, I noticed a big OOPS:\n\n>> ! \t\tfreeList(lockstmt->rellist);\n\n>> ! \t\tpfree(relname);\n\n>> ! \t\t\tpfree(relname);\n\nIt is most definitely NOT the executor's business to release pieces of\nthe querytree; this will certainly break plpgsql functions, for example,\nwhere the same querytree is executed repeatedly.\nBruce, please remove those lines.\n\nAnother thing I am concerned about now that I look more closely is that\nthe multi-rel case code opens the relations without any lock, and then\nassumes they'll stick around while it opens and access-checks the rest.\nThis will fail if someone else drops one of the rels meanwhile. I think\nthe entire routine should be reduced to a simple loop that opens, locks,\nand closes the rels one at a time. The extra code bulk to do it this\nway isn't buying us anything at all.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Aug 2001 15:51:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\" "
},
{
"msg_contents": "I have backed out the entire patch. I think it is best for the author\nto make the suggested changes and resubmit. Reversed patch attached.\n\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Patch applied.\n> \n> Idly looking this over again, I noticed a big OOPS:\n> \n> >> ! \t\tfreeList(lockstmt->rellist);\n> \n> >> ! \t\tpfree(relname);\n> \n> >> ! \t\t\tpfree(relname);\n> \n> It is most definitely NOT the executor's business to release pieces of\n> the querytree; this will certainly break plpgsql functions, for example,\n> where the same querytree is executed repeatedly.\n> Bruce, please remove those lines.\n> \n> Another thing I am concerned about now that I look more closely is that\n> the multi-rel case code opens the relations without any lock, and then\n> assumes they'll stick around while it opens and access-checks the rest.\n> This will fail if someone else drops one of the rels meanwhile. I think\n> the entire routine should be reduced to a simple loop that opens, locks,\n> and closes the rels one at a time. The extra code bulk to do it this\n> way isn't buying us anything at all.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nTom Lane wrote:\n> \n> Neil Padgett <npadgett@redhat.com> writes:\n> >> This seems fine to me (and in fact I thought we'd already agreed to it).\n> \n> > Here's the patch then. =)\n> \n> [ quick mental checklist... ] You forgot the documentation updates.\n> Otherwise it looks good.\n\nOk -- I made a go at the documentation. I'm no SGML wizard, so if\nsomeone could check my changes that would be helpful.\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. 
E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n\nIndex: doc/src/sgml/ref/lock.sgml\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/lock.sgml,v\nretrieving revision 1.24\ndiff -c -p -r1.24 lock.sgml\n*** doc/src/sgml/ref/lock.sgml\t2001/07/09 22:18:33\t1.24\n--- doc/src/sgml/ref/lock.sgml\t2001/08/02 21:39:42\n*************** Postgres documentation\n*** 15,21 ****\n LOCK\n </refname>\n <refpurpose>\n! Explicitly lock a table inside a transaction\n </refpurpose>\n </refnamediv>\n <refsynopsisdiv>\n--- 15,21 ----\n LOCK\n </refname>\n <refpurpose>\n! Explicitly lock a table / tables inside a transaction\n </refpurpose>\n </refnamediv>\n <refsynopsisdiv>\n*************** Postgres documentation\n*** 23,30 ****\n <date>2001-07-09</date>\n </refsynopsisdivinfo>\n <synopsis>\n! LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable>\n! LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable> IN <replaceable class=\"PARAMETER\">lockmode</replaceable> MODE\n \n where <replaceable class=\"PARAMETER\">lockmode</replaceable> is one of:\n \n--- 23,30 ----\n <date>2001-07-09</date>\n </refsynopsisdivinfo>\n <synopsis>\n! LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable> [,...]\n! LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable> [,...] 
IN <replaceable class=\"PARAMETER\">lockmode</replaceable> MODE\n \n where <replaceable class=\"PARAMETER\">lockmode</replaceable> is one of:\n \n*************** ERROR <replaceable class=\"PARAMETER\">nam\n*** 373,378 ****\n--- 373,379 ----\n An example for this rule was given previously when discussing the \n use of SHARE ROW EXCLUSIVE mode rather than SHARE mode.\n </para>\n+ \n </listitem>\n </itemizedlist>\n \n*************** ERROR <replaceable class=\"PARAMETER\">nam\n*** 383,388 ****\n--- 384,395 ----\n </para>\n </note>\n \n+ <para>\n+ When locking multiple tables, the command LOCK a, b; is equivalent to LOCK\n+ a; LOCK b;. The tables are locked one-by-one in the order specified in the\n+ <command>LOCK</command> command.\n+ </para>\n+ \n <refsect2 id=\"R2-SQL-LOCK-3\">\n <refsect2info>\n <date>1999-06-08</date>\n*************** ERROR <replaceable class=\"PARAMETER\">nam\n*** 406,411 ****\n--- 413,419 ----\n <para>\n <command>LOCK</command> works only inside transactions.\n </para>\n+ \n </refsect2>\n </refsect1>\n \nIndex: src/backend/commands/command.c\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/backend/commands/command.c,v\nretrieving revision 1.136\ndiff -c -p -r1.136 command.c\n*** src/backend/commands/command.c\t2001/07/16 05:06:57\t1.136\n--- src/backend/commands/command.c\t2001/08/02 21:39:43\n*************** needs_toast_table(Relation rel)\n*** 1984,1991 ****\n \t\tMAXALIGN(data_length);\n \treturn (tuple_length > TOAST_TUPLE_THRESHOLD);\n }\n! \n! \n /*\n *\n * LOCK TABLE\n--- 1984,1990 ----\n \t\tMAXALIGN(data_length);\n \treturn (tuple_length > TOAST_TUPLE_THRESHOLD);\n }\n! \t\n /*\n *\n * LOCK TABLE\n*************** needs_toast_table(Relation rel)\n*** 1994,2019 ****\n void\n LockTableCommand(LockStmt *lockstmt)\n {\n! \tRelation\trel;\n! \tint\t\t\taclresult;\n \n! \trel = heap_openr(lockstmt->relname, NoLock);\n \n! \tif (rel->rd_rel->relkind != RELKIND_RELATION)\n! 
\t\telog(ERROR, \"LOCK TABLE: %s is not a table\", lockstmt->relname);\n \n! \tif (lockstmt->mode == AccessShareLock)\n! \t\taclresult = pg_aclcheck(lockstmt->relname, GetUserId(), ACL_SELECT);\n! \telse\n! \t\taclresult = pg_aclcheck(lockstmt->relname, GetUserId(),\n! \t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n \n! \tif (aclresult != ACLCHECK_OK)\n! \t\telog(ERROR, \"LOCK TABLE: permission denied\");\n \n! \tLockRelation(rel, lockstmt->mode);\n \n! \theap_close(rel, NoLock);\t/* close rel, keep lock */\n }\n \n \n--- 1993,2096 ----\n void\n LockTableCommand(LockStmt *lockstmt)\n {\n! \tint relCnt;\n! \n! \trelCnt = length(lockstmt -> rellist);\n! \n! \t/* Handle a single relation lock specially to avoid overhead on likely the\n! \t most common case */\n! \n! \tif(relCnt == 1)\n! \t{\n! \n! \t\t/* Locking a single table */\n! \n! \t\tRelation\trel;\n! \t\tint\t\t\taclresult;\n! \t\tchar *relname;\n! \n! \t\trelname = strVal(lfirst(lockstmt->rellist));\n! \n! \t\tfreeList(lockstmt->rellist);\n! \n! \t\trel = heap_openr(relname, NoLock);\n! \n! \t\tif (rel->rd_rel->relkind != RELKIND_RELATION)\n! \t\t\telog(ERROR, \"LOCK TABLE: %s is not a table\", relname);\n! \n! \t\tif (lockstmt->mode == AccessShareLock)\n! \t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n! \t\t\t\t\t\t\t\t\tACL_SELECT);\n! \t\telse\n! \t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n! \t\t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n! \n! \t\tif (aclresult != ACLCHECK_OK)\n! \t\t\telog(ERROR, \"LOCK TABLE: permission denied\");\n! \n! \t\tLockRelation(rel, lockstmt->mode);\n! \n! \t\tpfree(relname);\n! \n! \t\theap_close(rel, NoLock);\t/* close rel, keep lock */\n! \t} \n! \telse \n! \t{\n! \t\tList *p;\n! \t\tRelation *RelationArray;\n! \t\tRelation *pRel;\n! \n! \t\t/* Locking multiple tables */\n! \n! \t\t/* Create an array of relations */\n! \n! \t\tRelationArray = palloc(relCnt * sizeof(Relation));\n! \t\tpRel = RelationArray;\n! \n! 
\t\t/* Iterate over the list and populate the relation array */\n! \n! \t\tforeach(p, lockstmt->rellist)\n! \t\t{\n! \t\t\tchar* relname = strVal(lfirst(p));\n! \t\t\tint\t\t\taclresult;\n! \n! \t\t\t*pRel = heap_openr(relname, NoLock);\n! \n! \t\t\tif ((*pRel)->rd_rel->relkind != RELKIND_RELATION)\n! \t\t\t\telog(ERROR, \"LOCK TABLE: %s is not a table\", \n! \t\t\t\t\t relname);\n! \n! \t\t\tif (lockstmt->mode == AccessShareLock)\n! \t\t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n! \t\t\t\t\t\t\t\t\t\tACL_SELECT);\n! \t\t\telse\n! \t\t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n! \t\t\t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n! \n! \t\t\tif (aclresult != ACLCHECK_OK)\n! \t\t\t\telog(ERROR, \"LOCK TABLE: permission denied\");\n \n! \t\t\tpRel++;\n! \t\t\tpfree(relname);\n! \t\t}\n \n! \t\t/* Now, lock all the relations, closing each after it is locked\n! \t\t (Keeping the locks)\n! \t\t */\n \n! \t\tfor(pRel = RelationArray;\n! \t\t\tpRel < RelationArray + relCnt;\n! \t\t\tpRel++)\n! \t\t\t{\n! \t\t\t\tLockRelation(*pRel, lockstmt->mode);\n \n! \t\t\t\theap_close(*pRel, NoLock);\n! \t\t\t}\n \n! \t\t/* Free the relation array */\n \n! \t\tpfree(RelationArray);\n! \t}\n }\n \n \nIndex: src/backend/nodes/copyfuncs.c\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/backend/nodes/copyfuncs.c,v\nretrieving revision 1.148\ndiff -c -p -r1.148 copyfuncs.c\n*** src/backend/nodes/copyfuncs.c\t2001/07/16 19:07:37\t1.148\n--- src/backend/nodes/copyfuncs.c\t2001/08/02 21:39:44\n*************** _copyLockStmt(LockStmt *from)\n*** 2425,2432 ****\n {\n \tLockStmt *newnode = makeNode(LockStmt);\n \n! \tif (from->relname)\n! \t\tnewnode->relname = pstrdup(from->relname);\n \tnewnode->mode = from->mode;\n \n \treturn newnode;\n--- 2425,2432 ----\n {\n \tLockStmt *newnode = makeNode(LockStmt);\n \n! \tNode_Copy(from, newnode, rellist);\n! 
\t\n \tnewnode->mode = from->mode;\n \n \treturn newnode;\nIndex: src/backend/nodes/equalfuncs.c\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/backend/nodes/equalfuncs.c,v\nretrieving revision 1.96\ndiff -c -p -r1.96 equalfuncs.c\n*** src/backend/nodes/equalfuncs.c\t2001/07/16 19:07:38\t1.96\n--- src/backend/nodes/equalfuncs.c\t2001/08/02 21:39:44\n*************** _equalDropUserStmt(DropUserStmt *a, Drop\n*** 1283,1289 ****\n static bool\n _equalLockStmt(LockStmt *a, LockStmt *b)\n {\n! \tif (!equalstr(a->relname, b->relname))\n \t\treturn false;\n \tif (a->mode != b->mode)\n \t\treturn false;\n--- 1283,1289 ----\n static bool\n _equalLockStmt(LockStmt *a, LockStmt *b)\n {\n! \tif (!equal(a->rellist, b->rellist))\n \t\treturn false;\n \tif (a->mode != b->mode)\n \t\treturn false;\nIndex: src/backend/parser/gram.y\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.238\ndiff -c -p -r2.238 gram.y\n*** src/backend/parser/gram.y\t2001/07/16 19:07:40\t2.238\n--- src/backend/parser/gram.y\t2001/08/02 21:39:46\n*************** DeleteStmt: DELETE FROM relation_expr w\n*** 3280,3290 ****\n \t\t\t\t}\n \t\t;\n \n! LockStmt:\tLOCK_P opt_table relation_name opt_lock\n \t\t\t\t{\n \t\t\t\t\tLockStmt *n = makeNode(LockStmt);\n \n! \t\t\t\t\tn->relname = $3;\n \t\t\t\t\tn->mode = $4;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\n--- 3280,3290 ----\n \t\t\t\t}\n \t\t;\n \n! LockStmt:\tLOCK_P opt_table relation_name_list opt_lock\n \t\t\t\t{\n \t\t\t\t\tLockStmt *n = makeNode(LockStmt);\n \n! 
\t\t\t\t\tn->rellist = $3;\n \t\t\t\t\tn->mode = $4;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\nIndex: src/include/nodes/parsenodes.h\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\nretrieving revision 1.136\ndiff -c -p -r1.136 parsenodes.h\n*** src/include/nodes/parsenodes.h\t2001/07/16 19:07:40\t1.136\n--- src/include/nodes/parsenodes.h\t2001/08/02 21:39:46\n*************** typedef struct VariableResetStmt\n*** 760,766 ****\n typedef struct LockStmt\n {\n \tNodeTag\t\ttype;\n! \tchar\t *relname;\t\t/* relation to lock */\n \tint\t\t\tmode;\t\t\t/* lock mode */\n } LockStmt;\n \n--- 760,766 ----\n typedef struct LockStmt\n {\n \tNodeTag\t\ttype;\n! \tList\t *rellist;\t\t/* relations to lock */\n \tint\t\t\tmode;\t\t\t/* lock mode */\n } LockStmt;\n \nIndex: src/interfaces/ecpg/preproc/preproc.y\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/interfaces/ecpg/preproc/preproc.y,v\nretrieving revision 1.146\ndiff -c -p -r1.146 preproc.y\n*** src/interfaces/ecpg/preproc/preproc.y\t2001/07/16 05:07:00\t1.146\n--- src/interfaces/ecpg/preproc/preproc.y\t2001/08/02 21:39:49\n*************** DeleteStmt: DELETE FROM relation_expr w\n*** 2421,2427 ****\n \t\t\t\t}\n \t\t;\n \n! LockStmt: LOCK_P opt_table relation_name opt_lock\n \t\t\t\t{\n \t\t\t\t\t$$ = cat_str(4, make_str(\"lock\"), $2, $3, $4);\n \t\t\t\t}\n--- 2421,2427 ----\n \t\t\t\t}\n \t\t;\n \n! LockStmt: LOCK_P opt_table relation_name_list opt_lock\n \t\t\t\t{\n \t\t\t\t\t$$ = cat_str(4, make_str(\"lock\"), $2, $3, $4);\n \t\t\t\t}",
"msg_date": "Sat, 4 Aug 2001 18:03:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Patch applied.\n> \n> Idly looking this over again, I noticed a big OOPS:\n> \n> >> ! freeList(lockstmt->rellist);\n> \n> >> ! pfree(relname);\n> \n> >> ! pfree(relname);\n> \n> It is most definitely NOT the executor's business to release pieces of\n> the querytree; this will certainly break plpgsql functions, for example,\n> where the same querytree is executed repeatedly.\n\nThanks for having a look, Tom. I've taken you advice into account, and\nI've reworked the patch.\n\nA new patch is below.\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n\nIndex: doc/src/sgml/ref/lock.sgml\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/lock.sgml,v\nretrieving revision 1.26\ndiff -c -p -r1.26 lock.sgml\n*** doc/src/sgml/ref/lock.sgml\t2001/08/04 22:01:38\t1.26\n--- doc/src/sgml/ref/lock.sgml\t2001/08/07 18:34:23\n*************** Postgres documentation\n*** 15,21 ****\n LOCK\n </refname>\n <refpurpose>\n! Explicitly lock a table inside a transaction\n </refpurpose>\n </refnamediv>\n <refsynopsisdiv>\n--- 15,21 ----\n LOCK\n </refname>\n <refpurpose>\n! Explicitly lock a table / tables inside a transaction\n </refpurpose>\n </refnamediv>\n <refsynopsisdiv>\n*************** Postgres documentation\n*** 23,30 ****\n <date>2001-07-09</date>\n </refsynopsisdivinfo>\n <synopsis>\n! LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable>\n! LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable> IN\n<replaceable class=\"PARAMETER\">lockmode</replaceable> MODE\n \n where <replaceable class=\"PARAMETER\">lockmode</replaceable> is one of:\n \n--- 23,30 ----\n <date>2001-07-09</date>\n </refsynopsisdivinfo>\n <synopsis>\n! LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable> [,\n...]\n! 
LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable> [,\n...] IN <replaceable class=\"PARAMETER\">lockmode</replaceable> MODE\n \n where <replaceable class=\"PARAMETER\">lockmode</replaceable> is one of:\n \n*************** ERROR <replaceable class=\"PARAMETER\">nam\n*** 373,378 ****\n--- 373,379 ----\n An example for this rule was given previously when discussing the \n use of SHARE ROW EXCLUSIVE mode rather than SHARE mode.\n </para>\n+ \n </listitem>\n </itemizedlist>\n \n*************** ERROR <replaceable class=\"PARAMETER\">nam\n*** 383,388 ****\n--- 384,395 ----\n </para>\n </note>\n \n+ <para>\n+ When locking multiple tables, the command LOCK a, b; is equivalent\nto LOCK\n+ a; LOCK b;. The tables are locked one-by-one in the order specified\nin the\n+ <command>LOCK</command> command.\n+ </para>\n+ \n <refsect2 id=\"R2-SQL-LOCK-3\">\n <refsect2info>\n <date>1999-06-08</date>\n*************** ERROR <replaceable class=\"PARAMETER\">nam\n*** 406,411 ****\n--- 413,419 ----\n <para>\n <command>LOCK</command> works only inside transactions.\n </para>\n+ \n </refsect2>\n </refsect1>\n \nIndex: src/backend/commands/command.c\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/backend/commands/command.c,v\nretrieving revision 1.138\ndiff -c -p -r1.138 command.c\n*** src/backend/commands/command.c\t2001/08/04 22:01:38\t1.138\n--- src/backend/commands/command.c\t2001/08/07 18:34:24\n*************** needs_toast_table(Relation rel)\n*** 1984,1991 ****\n \t\tMAXALIGN(data_length);\n \treturn (tuple_length > TOAST_TUPLE_THRESHOLD);\n }\n! \n! \n /*\n *\n * LOCK TABLE\n--- 1984,1990 ----\n \t\tMAXALIGN(data_length);\n \treturn (tuple_length > TOAST_TUPLE_THRESHOLD);\n }\n! \t\n /*\n *\n * LOCK TABLE\n*************** needs_toast_table(Relation rel)\n*** 1994,2019 ****\n void\n LockTableCommand(LockStmt *lockstmt)\n {\n! \tRelation\trel;\n! \tint\t\t\taclresult;\n! \n! 
\trel = heap_openr(lockstmt->relname, NoLock);\n! \n! \tif (rel->rd_rel->relkind != RELKIND_RELATION)\n! \t\telog(ERROR, \"LOCK TABLE: %s is not a table\", lockstmt->relname);\n! \n! \tif (lockstmt->mode == AccessShareLock)\n! \t\taclresult = pg_aclcheck(lockstmt->relname, GetUserId(), ACL_SELECT);\n! \telse\n! \t\taclresult = pg_aclcheck(lockstmt->relname, GetUserId(),\n! \t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n! \n! \tif (aclresult != ACLCHECK_OK)\n! \t\telog(ERROR, \"LOCK TABLE: permission denied\");\n! \n! \tLockRelation(rel, lockstmt->mode);\n! \n! \theap_close(rel, NoLock);\t/* close rel, keep lock */\n }\n \n \n--- 1993,2030 ----\n void\n LockTableCommand(LockStmt *lockstmt)\n {\n! \tList *p;\n! \tRelation rel;\n! \t\n! \t/* Iterate over the list and open, lock, and close the relations\n! \t one at a time\n! \t */\n! \n! \t\tforeach(p, lockstmt->rellist)\n! \t\t{\n! \t\t\tchar* relname = strVal(lfirst(p));\n! \t\t\tint\t\t\taclresult;\n! \t\t\t\n! \t\t\trel = heap_openr(relname, NoLock);\n! \n! \t\t\tif (rel->rd_rel->relkind != RELKIND_RELATION)\n! \t\t\t\telog(ERROR, \"LOCK TABLE: %s is not a table\", \n! \t\t\t\t\t relname);\n! \t\t\t\n! \t\t\tif (lockstmt->mode == AccessShareLock)\n! \t\t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n! \t\t\t\t\t\t\t\t\t\tACL_SELECT);\n! \t\t\telse\n! \t\t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n! \t\t\t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n! \n! \t\t\tif (aclresult != ACLCHECK_OK)\n! \t\t\t\telog(ERROR, \"LOCK TABLE: permission denied\");\n! \n! \t\t\tLockRelation(rel, lockstmt->mode);\n! \t\t\t\n! \t\t\theap_close(rel, NoLock);\t/* close rel, keep lock */\n! 
\t\t}\n }\n \n \nIndex: src/backend/nodes/copyfuncs.c\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/backend/nodes/copyfuncs.c,v\nretrieving revision 1.150\ndiff -c -p -r1.150 copyfuncs.c\n*** src/backend/nodes/copyfuncs.c\t2001/08/04 22:01:38\t1.150\n--- src/backend/nodes/copyfuncs.c\t2001/08/07 18:34:24\n*************** _copyLockStmt(LockStmt *from)\n*** 2425,2432 ****\n {\n \tLockStmt *newnode = makeNode(LockStmt);\n \n! \tif (from->relname)\n! \t\tnewnode->relname = pstrdup(from->relname);\n \tnewnode->mode = from->mode;\n \n \treturn newnode;\n--- 2425,2432 ----\n {\n \tLockStmt *newnode = makeNode(LockStmt);\n \n! \tNode_Copy(from, newnode, rellist);\n! \t\n \tnewnode->mode = from->mode;\n \n \treturn newnode;\nIndex: src/backend/nodes/equalfuncs.c\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/backend/nodes/equalfuncs.c,v\nretrieving revision 1.98\ndiff -c -p -r1.98 equalfuncs.c\n*** src/backend/nodes/equalfuncs.c\t2001/08/04 22:01:38\t1.98\n--- src/backend/nodes/equalfuncs.c\t2001/08/07 18:34:24\n*************** _equalDropUserStmt(DropUserStmt *a, Drop\n*** 1283,1289 ****\n static bool\n _equalLockStmt(LockStmt *a, LockStmt *b)\n {\n! \tif (!equalstr(a->relname, b->relname))\n \t\treturn false;\n \tif (a->mode != b->mode)\n \t\treturn false;\n--- 1283,1289 ----\n static bool\n _equalLockStmt(LockStmt *a, LockStmt *b)\n {\n! 
\tif (!equal(a->rellist, b->rellist))\n \t\treturn false;\n \tif (a->mode != b->mode)\n \t\treturn false;\nIndex: src/backend/parser/gram.y\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.241\ndiff -c -p -r2.241 gram.y\n*** src/backend/parser/gram.y\t2001/08/06 05:42:48\t2.241\n--- src/backend/parser/gram.y\t2001/08/07 18:34:30\n*************** DeleteStmt: DELETE FROM relation_expr w\n*** 3281,3291 ****\n \t\t\t\t}\n \t\t;\n \n! LockStmt:\tLOCK_P opt_table relation_name opt_lock\n \t\t\t\t{\n \t\t\t\t\tLockStmt *n = makeNode(LockStmt);\n \n! \t\t\t\t\tn->relname = $3;\n \t\t\t\t\tn->mode = $4;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\n--- 3281,3291 ----\n \t\t\t\t}\n \t\t;\n \n! LockStmt:\tLOCK_P opt_table relation_name_list opt_lock\n \t\t\t\t{\n \t\t\t\t\tLockStmt *n = makeNode(LockStmt);\n \n! \t\t\t\t\tn->rellist = $3;\n \t\t\t\t\tn->mode = $4;\n \t\t\t\t\t$$ = (Node *)n;\n \t\t\t\t}\nIndex: src/include/nodes/parsenodes.h\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\nretrieving revision 1.138\ndiff -c -p -r1.138 parsenodes.h\n*** src/include/nodes/parsenodes.h\t2001/08/04 22:01:39\t1.138\n--- src/include/nodes/parsenodes.h\t2001/08/07 18:34:30\n*************** typedef struct VariableResetStmt\n*** 760,766 ****\n typedef struct LockStmt\n {\n \tNodeTag\t\ttype;\n! \tchar\t *relname;\t\t/* relation to lock */\n \tint\t\t\tmode;\t\t\t/* lock mode */\n } LockStmt;\n \n--- 760,766 ----\n typedef struct LockStmt\n {\n \tNodeTag\t\ttype;\n! 
\tList\t *rellist;\t\t/* relations to lock */\n \tint\t\t\tmode;\t\t\t/* lock mode */\n } LockStmt;\n \nIndex: src/interfaces/ecpg/preproc/preproc.y\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/interfaces/ecpg/preproc/preproc.y,v\nretrieving revision 1.148\ndiff -c -p -r1.148 preproc.y\n*** src/interfaces/ecpg/preproc/preproc.y\t2001/08/04 22:01:39\t1.148\n--- src/interfaces/ecpg/preproc/preproc.y\t2001/08/07 18:34:32\n*************** DeleteStmt: DELETE FROM relation_expr w\n*** 2421,2427 ****\n \t\t\t\t}\n \t\t;\n \n! LockStmt: LOCK_P opt_table relation_name opt_lock\n \t\t\t\t{\n \t\t\t\t\t$$ = cat_str(4, make_str(\"lock\"), $2, $3, $4);\n \t\t\t\t}\n--- 2421,2427 ----\n \t\t\t\t}\n \t\t;\n \n! LockStmt: LOCK_P opt_table relation_name_list opt_lock\n \t\t\t\t{\n \t\t\t\t\t$$ = cat_str(4, make_str(\"lock\"), $2, $3, $4);\n \t\t\t\t}\n",
"msg_date": "Tue, 07 Aug 2001 14:48:02 -0400",
"msg_from": "Neil Padgett <npadgett@redhat.com>",
"msg_from_op": true,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Tom Lane wrote:\n> > \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Patch applied.\n> > \n> > Idly looking this over again, I noticed a big OOPS:\n> > \n> > >> ! freeList(lockstmt->rellist);\n> > \n> > >> ! pfree(relname);\n> > \n> > >> ! pfree(relname);\n> > \n> > It is most definitely NOT the executor's business to release pieces of\n> > the querytree; this will certainly break plpgsql functions, for example,\n> > where the same querytree is executed repeatedly.\n> \n> Thanks for having a look, Tom. I've taken you advice into account, and\n> I've reworked the patch.\n> \n> A new patch is below.\n> \n> Neil\n> \n> -- \n> Neil Padgett\n> Red Hat Canada Ltd. E-Mail: npadgett@redhat.com\n> 2323 Yonge Street, Suite #300, \n> Toronto, ON M4P 2C9\n> \n> Index: doc/src/sgml/ref/lock.sgml\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/lock.sgml,v\n> retrieving revision 1.26\n> diff -c -p -r1.26 lock.sgml\n> *** doc/src/sgml/ref/lock.sgml\t2001/08/04 22:01:38\t1.26\n> --- doc/src/sgml/ref/lock.sgml\t2001/08/07 18:34:23\n> *************** Postgres documentation\n> *** 15,21 ****\n> LOCK\n> </refname>\n> <refpurpose>\n> ! Explicitly lock a table inside a transaction\n> </refpurpose>\n> </refnamediv>\n> <refsynopsisdiv>\n> --- 15,21 ----\n> LOCK\n> </refname>\n> <refpurpose>\n> ! Explicitly lock a table / tables inside a transaction\n> </refpurpose>\n> </refnamediv>\n> <refsynopsisdiv>\n> *************** Postgres documentation\n> *** 23,30 ****\n> <date>2001-07-09</date>\n> </refsynopsisdivinfo>\n> <synopsis>\n> ! LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable>\n> ! 
LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable> IN\n> <replaceable class=\"PARAMETER\">lockmode</replaceable> MODE\n> \n> where <replaceable class=\"PARAMETER\">lockmode</replaceable> is one of:\n> \n> --- 23,30 ----\n> <date>2001-07-09</date>\n> </refsynopsisdivinfo>\n> <synopsis>\n> ! LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable> [,\n> ...]\n> ! LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable> [,\n> ...] IN <replaceable class=\"PARAMETER\">lockmode</replaceable> MODE\n> \n> where <replaceable class=\"PARAMETER\">lockmode</replaceable> is one of:\n> \n> *************** ERROR <replaceable class=\"PARAMETER\">nam\n> *** 373,378 ****\n> --- 373,379 ----\n> An example for this rule was given previously when discussing the \n> use of SHARE ROW EXCLUSIVE mode rather than SHARE mode.\n> </para>\n> + \n> </listitem>\n> </itemizedlist>\n> \n> *************** ERROR <replaceable class=\"PARAMETER\">nam\n> *** 383,388 ****\n> --- 384,395 ----\n> </para>\n> </note>\n> \n> + <para>\n> + When locking multiple tables, the command LOCK a, b; is equivalent\n> to LOCK\n> + a; LOCK b;. 
The tables are locked one-by-one in the order specified\n> in the\n> + <command>LOCK</command> command.\n> + </para>\n> + \n> <refsect2 id=\"R2-SQL-LOCK-3\">\n> <refsect2info>\n> <date>1999-06-08</date>\n> *************** ERROR <replaceable class=\"PARAMETER\">nam\n> *** 406,411 ****\n> --- 413,419 ----\n> <para>\n> <command>LOCK</command> works only inside transactions.\n> </para>\n> + \n> </refsect2>\n> </refsect1>\n> \n> Index: src/backend/commands/command.c\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/backend/commands/command.c,v\n> retrieving revision 1.138\n> diff -c -p -r1.138 command.c\n> *** src/backend/commands/command.c\t2001/08/04 22:01:38\t1.138\n> --- src/backend/commands/command.c\t2001/08/07 18:34:24\n> *************** needs_toast_table(Relation rel)\n> *** 1984,1991 ****\n> \t\tMAXALIGN(data_length);\n> \treturn (tuple_length > TOAST_TUPLE_THRESHOLD);\n> }\n> ! \n> ! \n> /*\n> *\n> * LOCK TABLE\n> --- 1984,1990 ----\n> \t\tMAXALIGN(data_length);\n> \treturn (tuple_length > TOAST_TUPLE_THRESHOLD);\n> }\n> ! \t\n> /*\n> *\n> * LOCK TABLE\n> *************** needs_toast_table(Relation rel)\n> *** 1994,2019 ****\n> void\n> LockTableCommand(LockStmt *lockstmt)\n> {\n> ! \tRelation\trel;\n> ! \tint\t\t\taclresult;\n> ! \n> ! \trel = heap_openr(lockstmt->relname, NoLock);\n> ! \n> ! \tif (rel->rd_rel->relkind != RELKIND_RELATION)\n> ! \t\telog(ERROR, \"LOCK TABLE: %s is not a table\", lockstmt->relname);\n> ! \n> ! \tif (lockstmt->mode == AccessShareLock)\n> ! \t\taclresult = pg_aclcheck(lockstmt->relname, GetUserId(), ACL_SELECT);\n> ! \telse\n> ! \t\taclresult = pg_aclcheck(lockstmt->relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n> ! \n> ! \tif (aclresult != ACLCHECK_OK)\n> ! \t\telog(ERROR, \"LOCK TABLE: permission denied\");\n> ! \n> ! \tLockRelation(rel, lockstmt->mode);\n> ! \n> ! 
\theap_close(rel, NoLock);\t/* close rel, keep lock */\n> }\n> \n> \n> --- 1993,2030 ----\n> void\n> LockTableCommand(LockStmt *lockstmt)\n> {\n> ! \tList *p;\n> ! \tRelation rel;\n> ! \t\n> ! \t/* Iterate over the list and open, lock, and close the relations\n> ! \t one at a time\n> ! \t */\n> ! \n> ! \t\tforeach(p, lockstmt->rellist)\n> ! \t\t{\n> ! \t\t\tchar* relname = strVal(lfirst(p));\n> ! \t\t\tint\t\t\taclresult;\n> ! \t\t\t\n> ! \t\t\trel = heap_openr(relname, NoLock);\n> ! \n> ! \t\t\tif (rel->rd_rel->relkind != RELKIND_RELATION)\n> ! \t\t\t\telog(ERROR, \"LOCK TABLE: %s is not a table\", \n> ! \t\t\t\t\t relname);\n> ! \t\t\t\n> ! \t\t\tif (lockstmt->mode == AccessShareLock)\n> ! \t\t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\t\t\tACL_SELECT);\n> ! \t\t\telse\n> ! \t\t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n> ! \n> ! \t\t\tif (aclresult != ACLCHECK_OK)\n> ! \t\t\t\telog(ERROR, \"LOCK TABLE: permission denied\");\n> ! \n> ! \t\t\tLockRelation(rel, lockstmt->mode);\n> ! \t\t\t\n> ! \t\t\theap_close(rel, NoLock);\t/* close rel, keep lock */\n> ! \t\t}\n> }\n> \n> \n> Index: src/backend/nodes/copyfuncs.c\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/backend/nodes/copyfuncs.c,v\n> retrieving revision 1.150\n> diff -c -p -r1.150 copyfuncs.c\n> *** src/backend/nodes/copyfuncs.c\t2001/08/04 22:01:38\t1.150\n> --- src/backend/nodes/copyfuncs.c\t2001/08/07 18:34:24\n> *************** _copyLockStmt(LockStmt *from)\n> *** 2425,2432 ****\n> {\n> \tLockStmt *newnode = makeNode(LockStmt);\n> \n> ! \tif (from->relname)\n> ! \t\tnewnode->relname = pstrdup(from->relname);\n> \tnewnode->mode = from->mode;\n> \n> \treturn newnode;\n> --- 2425,2432 ----\n> {\n> \tLockStmt *newnode = makeNode(LockStmt);\n> \n> ! \tNode_Copy(from, newnode, rellist);\n> ! 
\t\n> \tnewnode->mode = from->mode;\n> \n> \treturn newnode;\n> Index: src/backend/nodes/equalfuncs.c\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/backend/nodes/equalfuncs.c,v\n> retrieving revision 1.98\n> diff -c -p -r1.98 equalfuncs.c\n> *** src/backend/nodes/equalfuncs.c\t2001/08/04 22:01:38\t1.98\n> --- src/backend/nodes/equalfuncs.c\t2001/08/07 18:34:24\n> *************** _equalDropUserStmt(DropUserStmt *a, Drop\n> *** 1283,1289 ****\n> static bool\n> _equalLockStmt(LockStmt *a, LockStmt *b)\n> {\n> ! \tif (!equalstr(a->relname, b->relname))\n> \t\treturn false;\n> \tif (a->mode != b->mode)\n> \t\treturn false;\n> --- 1283,1289 ----\n> static bool\n> _equalLockStmt(LockStmt *a, LockStmt *b)\n> {\n> ! \tif (!equal(a->rellist, b->rellist))\n> \t\treturn false;\n> \tif (a->mode != b->mode)\n> \t\treturn false;\n> Index: src/backend/parser/gram.y\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/gram.y,v\n> retrieving revision 2.241\n> diff -c -p -r2.241 gram.y\n> *** src/backend/parser/gram.y\t2001/08/06 05:42:48\t2.241\n> --- src/backend/parser/gram.y\t2001/08/07 18:34:30\n> *************** DeleteStmt: DELETE FROM relation_expr w\n> *** 3281,3291 ****\n> \t\t\t\t}\n> \t\t;\n> \n> ! LockStmt:\tLOCK_P opt_table relation_name opt_lock\n> \t\t\t\t{\n> \t\t\t\t\tLockStmt *n = makeNode(LockStmt);\n> \n> ! \t\t\t\t\tn->relname = $3;\n> \t\t\t\t\tn->mode = $4;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> --- 3281,3291 ----\n> \t\t\t\t}\n> \t\t;\n> \n> ! LockStmt:\tLOCK_P opt_table relation_name_list opt_lock\n> \t\t\t\t{\n> \t\t\t\t\tLockStmt *n = makeNode(LockStmt);\n> \n> ! 
\t\t\t\t\tn->rellist = $3;\n> \t\t\t\t\tn->mode = $4;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> Index: src/include/nodes/parsenodes.h\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\n> retrieving revision 1.138\n> diff -c -p -r1.138 parsenodes.h\n> *** src/include/nodes/parsenodes.h\t2001/08/04 22:01:39\t1.138\n> --- src/include/nodes/parsenodes.h\t2001/08/07 18:34:30\n> *************** typedef struct VariableResetStmt\n> *** 760,766 ****\n> typedef struct LockStmt\n> {\n> \tNodeTag\t\ttype;\n> ! \tchar\t *relname;\t\t/* relation to lock */\n> \tint\t\t\tmode;\t\t\t/* lock mode */\n> } LockStmt;\n> \n> --- 760,766 ----\n> typedef struct LockStmt\n> {\n> \tNodeTag\t\ttype;\n> ! \tList\t *rellist;\t\t/* relations to lock */\n> \tint\t\t\tmode;\t\t\t/* lock mode */\n> } LockStmt;\n> \n> Index: src/interfaces/ecpg/preproc/preproc.y\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/interfaces/ecpg/preproc/preproc.y,v\n> retrieving revision 1.148\n> diff -c -p -r1.148 preproc.y\n> *** src/interfaces/ecpg/preproc/preproc.y\t2001/08/04 22:01:39\t1.148\n> --- src/interfaces/ecpg/preproc/preproc.y\t2001/08/07 18:34:32\n> *************** DeleteStmt: DELETE FROM relation_expr w\n> *** 2421,2427 ****\n> \t\t\t\t}\n> \t\t;\n> \n> ! LockStmt: LOCK_P opt_table relation_name opt_lock\n> \t\t\t\t{\n> \t\t\t\t\t$$ = cat_str(4, make_str(\"lock\"), $2, $3, $4);\n> \t\t\t\t}\n> --- 2421,2427 ----\n> \t\t\t\t}\n> \t\t;\n> \n> ! 
LockStmt: LOCK_P opt_table relation_name_list opt_lock\n> \t\t\t\t{\n> \t\t\t\t\t$$ = cat_str(4, make_str(\"lock\"), $2, $3, $4);\n> \t\t\t\t}\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 8 Aug 2001 11:20:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n\n> Tom Lane wrote:\n> > \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Patch applied.\n> > \n> > Idly looking this over again, I noticed a big OOPS:\n> > \n> > >> ! freeList(lockstmt->rellist);\n> > \n> > >> ! pfree(relname);\n> > \n> > >> ! pfree(relname);\n> > \n> > It is most definitely NOT the executor's business to release pieces of\n> > the querytree; this will certainly break plpgsql functions, for example,\n> > where the same querytree is executed repeatedly.\n> \n> Thanks for having a look, Tom. I've taken you advice into account, and\n> I've reworked the patch.\n> \n> A new patch is below.\n> \n> Neil\n> \n> -- \n> Neil Padgett\n> Red Hat Canada Ltd. E-Mail: npadgett@redhat.com\n> 2323 Yonge Street, Suite #300, \n> Toronto, ON M4P 2C9\n> \n> Index: doc/src/sgml/ref/lock.sgml\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/lock.sgml,v\n> retrieving revision 1.26\n> diff -c -p -r1.26 lock.sgml\n> *** doc/src/sgml/ref/lock.sgml\t2001/08/04 22:01:38\t1.26\n> --- doc/src/sgml/ref/lock.sgml\t2001/08/07 18:34:23\n> *************** Postgres documentation\n> *** 15,21 ****\n> LOCK\n> </refname>\n> <refpurpose>\n> ! Explicitly lock a table inside a transaction\n> </refpurpose>\n> </refnamediv>\n> <refsynopsisdiv>\n> --- 15,21 ----\n> LOCK\n> </refname>\n> <refpurpose>\n> ! Explicitly lock a table / tables inside a transaction\n> </refpurpose>\n> </refnamediv>\n> <refsynopsisdiv>\n> *************** Postgres documentation\n> *** 23,30 ****\n> <date>2001-07-09</date>\n> </refsynopsisdivinfo>\n> <synopsis>\n> ! LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable>\n> ! 
LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable> IN\n> <replaceable class=\"PARAMETER\">lockmode</replaceable> MODE\n> \n> where <replaceable class=\"PARAMETER\">lockmode</replaceable> is one of:\n> \n> --- 23,30 ----\n> <date>2001-07-09</date>\n> </refsynopsisdivinfo>\n> <synopsis>\n> ! LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable> [,\n> ...]\n> ! LOCK [ TABLE ] <replaceable class=\"PARAMETER\">name</replaceable> [,\n> ...] IN <replaceable class=\"PARAMETER\">lockmode</replaceable> MODE\n> \n> where <replaceable class=\"PARAMETER\">lockmode</replaceable> is one of:\n> \n> *************** ERROR <replaceable class=\"PARAMETER\">nam\n> *** 373,378 ****\n> --- 373,379 ----\n> An example for this rule was given previously when discussing the \n> use of SHARE ROW EXCLUSIVE mode rather than SHARE mode.\n> </para>\n> + \n> </listitem>\n> </itemizedlist>\n> \n> *************** ERROR <replaceable class=\"PARAMETER\">nam\n> *** 383,388 ****\n> --- 384,395 ----\n> </para>\n> </note>\n> \n> + <para>\n> + When locking multiple tables, the command LOCK a, b; is equivalent\n> to LOCK\n> + a; LOCK b;. 
The tables are locked one-by-one in the order specified\n> in the\n> + <command>LOCK</command> command.\n> + </para>\n> + \n> <refsect2 id=\"R2-SQL-LOCK-3\">\n> <refsect2info>\n> <date>1999-06-08</date>\n> *************** ERROR <replaceable class=\"PARAMETER\">nam\n> *** 406,411 ****\n> --- 413,419 ----\n> <para>\n> <command>LOCK</command> works only inside transactions.\n> </para>\n> + \n> </refsect2>\n> </refsect1>\n> \n> Index: src/backend/commands/command.c\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/backend/commands/command.c,v\n> retrieving revision 1.138\n> diff -c -p -r1.138 command.c\n> *** src/backend/commands/command.c\t2001/08/04 22:01:38\t1.138\n> --- src/backend/commands/command.c\t2001/08/07 18:34:24\n> *************** needs_toast_table(Relation rel)\n> *** 1984,1991 ****\n> \t\tMAXALIGN(data_length);\n> \treturn (tuple_length > TOAST_TUPLE_THRESHOLD);\n> }\n> ! \n> ! \n> /*\n> *\n> * LOCK TABLE\n> --- 1984,1990 ----\n> \t\tMAXALIGN(data_length);\n> \treturn (tuple_length > TOAST_TUPLE_THRESHOLD);\n> }\n> ! \t\n> /*\n> *\n> * LOCK TABLE\n> *************** needs_toast_table(Relation rel)\n> *** 1994,2019 ****\n> void\n> LockTableCommand(LockStmt *lockstmt)\n> {\n> ! \tRelation\trel;\n> ! \tint\t\t\taclresult;\n> ! \n> ! \trel = heap_openr(lockstmt->relname, NoLock);\n> ! \n> ! \tif (rel->rd_rel->relkind != RELKIND_RELATION)\n> ! \t\telog(ERROR, \"LOCK TABLE: %s is not a table\", lockstmt->relname);\n> ! \n> ! \tif (lockstmt->mode == AccessShareLock)\n> ! \t\taclresult = pg_aclcheck(lockstmt->relname, GetUserId(), ACL_SELECT);\n> ! \telse\n> ! \t\taclresult = pg_aclcheck(lockstmt->relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n> ! \n> ! \tif (aclresult != ACLCHECK_OK)\n> ! \t\telog(ERROR, \"LOCK TABLE: permission denied\");\n> ! \n> ! \tLockRelation(rel, lockstmt->mode);\n> ! \n> ! 
\theap_close(rel, NoLock);\t/* close rel, keep lock */\n> }\n> \n> \n> --- 1993,2030 ----\n> void\n> LockTableCommand(LockStmt *lockstmt)\n> {\n> ! \tList *p;\n> ! \tRelation rel;\n> ! \t\n> ! \t/* Iterate over the list and open, lock, and close the relations\n> ! \t one at a time\n> ! \t */\n> ! \n> ! \t\tforeach(p, lockstmt->rellist)\n> ! \t\t{\n> ! \t\t\tchar* relname = strVal(lfirst(p));\n> ! \t\t\tint\t\t\taclresult;\n> ! \t\t\t\n> ! \t\t\trel = heap_openr(relname, NoLock);\n> ! \n> ! \t\t\tif (rel->rd_rel->relkind != RELKIND_RELATION)\n> ! \t\t\t\telog(ERROR, \"LOCK TABLE: %s is not a table\", \n> ! \t\t\t\t\t relname);\n> ! \t\t\t\n> ! \t\t\tif (lockstmt->mode == AccessShareLock)\n> ! \t\t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\t\t\tACL_SELECT);\n> ! \t\t\telse\n> ! \t\t\t\taclresult = pg_aclcheck(relname, GetUserId(),\n> ! \t\t\t\t\t\t\t\t\t\tACL_UPDATE | ACL_DELETE);\n> ! \n> ! \t\t\tif (aclresult != ACLCHECK_OK)\n> ! \t\t\t\telog(ERROR, \"LOCK TABLE: permission denied\");\n> ! \n> ! \t\t\tLockRelation(rel, lockstmt->mode);\n> ! \t\t\t\n> ! \t\t\theap_close(rel, NoLock);\t/* close rel, keep lock */\n> ! \t\t}\n> }\n> \n> \n> Index: src/backend/nodes/copyfuncs.c\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/backend/nodes/copyfuncs.c,v\n> retrieving revision 1.150\n> diff -c -p -r1.150 copyfuncs.c\n> *** src/backend/nodes/copyfuncs.c\t2001/08/04 22:01:38\t1.150\n> --- src/backend/nodes/copyfuncs.c\t2001/08/07 18:34:24\n> *************** _copyLockStmt(LockStmt *from)\n> *** 2425,2432 ****\n> {\n> \tLockStmt *newnode = makeNode(LockStmt);\n> \n> ! \tif (from->relname)\n> ! \t\tnewnode->relname = pstrdup(from->relname);\n> \tnewnode->mode = from->mode;\n> \n> \treturn newnode;\n> --- 2425,2432 ----\n> {\n> \tLockStmt *newnode = makeNode(LockStmt);\n> \n> ! \tNode_Copy(from, newnode, rellist);\n> ! 
\t\n> \tnewnode->mode = from->mode;\n> \n> \treturn newnode;\n> Index: src/backend/nodes/equalfuncs.c\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/backend/nodes/equalfuncs.c,v\n> retrieving revision 1.98\n> diff -c -p -r1.98 equalfuncs.c\n> *** src/backend/nodes/equalfuncs.c\t2001/08/04 22:01:38\t1.98\n> --- src/backend/nodes/equalfuncs.c\t2001/08/07 18:34:24\n> *************** _equalDropUserStmt(DropUserStmt *a, Drop\n> *** 1283,1289 ****\n> static bool\n> _equalLockStmt(LockStmt *a, LockStmt *b)\n> {\n> ! \tif (!equalstr(a->relname, b->relname))\n> \t\treturn false;\n> \tif (a->mode != b->mode)\n> \t\treturn false;\n> --- 1283,1289 ----\n> static bool\n> _equalLockStmt(LockStmt *a, LockStmt *b)\n> {\n> ! \tif (!equal(a->rellist, b->rellist))\n> \t\treturn false;\n> \tif (a->mode != b->mode)\n> \t\treturn false;\n> Index: src/backend/parser/gram.y\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/gram.y,v\n> retrieving revision 2.241\n> diff -c -p -r2.241 gram.y\n> *** src/backend/parser/gram.y\t2001/08/06 05:42:48\t2.241\n> --- src/backend/parser/gram.y\t2001/08/07 18:34:30\n> *************** DeleteStmt: DELETE FROM relation_expr w\n> *** 3281,3291 ****\n> \t\t\t\t}\n> \t\t;\n> \n> ! LockStmt:\tLOCK_P opt_table relation_name opt_lock\n> \t\t\t\t{\n> \t\t\t\t\tLockStmt *n = makeNode(LockStmt);\n> \n> ! \t\t\t\t\tn->relname = $3;\n> \t\t\t\t\tn->mode = $4;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> --- 3281,3291 ----\n> \t\t\t\t}\n> \t\t;\n> \n> ! LockStmt:\tLOCK_P opt_table relation_name_list opt_lock\n> \t\t\t\t{\n> \t\t\t\t\tLockStmt *n = makeNode(LockStmt);\n> \n> ! 
\t\t\t\t\tn->rellist = $3;\n> \t\t\t\t\tn->mode = $4;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> Index: src/include/nodes/parsenodes.h\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\n> retrieving revision 1.138\n> diff -c -p -r1.138 parsenodes.h\n> *** src/include/nodes/parsenodes.h\t2001/08/04 22:01:39\t1.138\n> --- src/include/nodes/parsenodes.h\t2001/08/07 18:34:30\n> *************** typedef struct VariableResetStmt\n> *** 760,766 ****\n> typedef struct LockStmt\n> {\n> \tNodeTag\t\ttype;\n> ! \tchar\t *relname;\t\t/* relation to lock */\n> \tint\t\t\tmode;\t\t\t/* lock mode */\n> } LockStmt;\n> \n> --- 760,766 ----\n> typedef struct LockStmt\n> {\n> \tNodeTag\t\ttype;\n> ! \tList\t *rellist;\t\t/* relations to lock */\n> \tint\t\t\tmode;\t\t\t/* lock mode */\n> } LockStmt;\n> \n> Index: src/interfaces/ecpg/preproc/preproc.y\n> ===================================================================\n> RCS file:\n> /home/projects/pgsql/cvsroot/pgsql/src/interfaces/ecpg/preproc/preproc.y,v\n> retrieving revision 1.148\n> diff -c -p -r1.148 preproc.y\n> *** src/interfaces/ecpg/preproc/preproc.y\t2001/08/04 22:01:39\t1.148\n> --- src/interfaces/ecpg/preproc/preproc.y\t2001/08/07 18:34:32\n> *************** DeleteStmt: DELETE FROM relation_expr w\n> *** 2421,2427 ****\n> \t\t\t\t}\n> \t\t;\n> \n> ! LockStmt: LOCK_P opt_table relation_name opt_lock\n> \t\t\t\t{\n> \t\t\t\t\t$$ = cat_str(4, make_str(\"lock\"), $2, $3, $4);\n> \t\t\t\t}\n> --- 2421,2427 ----\n> \t\t\t\t}\n> \t\t;\n> \n> ! 
LockStmt: LOCK_P opt_table relation_name_list opt_lock\n> \t\t\t\t{\n> \t\t\t\t\t$$ = cat_str(4, make_str(\"lock\"), $2, $3, $4);\n> \t\t\t\t}\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Aug 2001 10:07:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Revised Patch to allow multiple table locks in \"Unison\""
}
] |
[
{
"msg_contents": "pgsql-odbc doesn't seem a right place to post.\nI forwarded my reply to pgsql-hackers also.\n\nMichael Rudolph wrote:\n> \n> Hi,\n> \n> I have a performance-problem with my Postgresql-database: up to now I\n> had centura sqlbase but for some reasons I have to change to Postgres.\n> After copying the data of an existing database I made some tests for\n> performance which where very disappointing. There is one table, let's\n> say table1, with about 11.0000 rows. On doing a \"select distinct var1\n> from table1\" via ODBC-Driver, I had to wait about 15 sec for the result.\n> The same select on centura sqlbase lasted about 3 sec. Ok, I thought, it\n> might be the ODBC-Driver with that bad performance and I did that query\n> directly with psql. Not much better, it lasted around 10 sec.\n> \n\nHow many rows are returned by the query in reality ?\nDoes the table *table1* have an index on the column *var1*\nin your centura sqlbase ?\n\n> This isn't what I can give to my users because in my application I have\n> to do a lot of queries of that kind. The users would have a lot of time\n> to hang around, waiting for the application, and thinking, how to kill\n> me ;-) .\n> \n> I made some more tests with indices and with more indices but the\n> improvement is not worth to mention.\n> \n> I am now at a point, wondering, if the transfer from centura to postgres\n> is the right way but I can't imagine why the performance of centura is\n> so much better. The conditions are nearly the same (256 MB RAM, Pentium\n> III) beside of the OS, which is Novell Netware for Centura and Linux\n> (Kernel 2.4.0) for Postgres. The version of Postgres is 7.0.3.\n> \n> Does anybody have an idea what else I can do? Thank yo very much for\n> your help.\n> \n> Michael\n",
"msg_date": "Thu, 26 Jul 2001 10:49:22 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Slow Performance in PostgreSQL"
},
{
"msg_contents": "> How many rows are returned by the query in reality ?\n> Does the table *table1* have an index on the column *var1*\n> in your centura sqlbase ?\n\n... and did you do a \"vacuum analyze\"? What is the schema? What is the\nquery? What is the result of \"explain\" for that query?\n\nAll of these things are relevant for any inquiry regarding poor\nperformance. I'm sure you will get acceptable performance once things\nare adjusted.\n\n - Thomas\n",
"msg_date": "Thu, 26 Jul 2001 04:45:50 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Slow Performance in PostgreSQL"
},
{
"msg_contents": "Michael Rudolph wrote:\n> \n> To let all of you know the status of my problem: I got on one large\n> step when realized, that I forgot a \"vacuum analyze\" after copying the\n> data. When I did this, my performance gain was huge - but didn't reach\n> the performance of centura at all. It is still a bit slower. Maybe I\n> can optimize the ODBC-Connection in any way, because there is still\n> one unsolved question when watching the postgres-logfile: Every query\n> is done two times (I don't know why) and both queries of one type need\n> the same execution time. So I think, if I manage to reduce that load,\n> I can get an acceptable performance.\n> \n\nCould you turn on mylog debug though it generates a lot of\ndebug output ? To turn on it, please add the Windows registry\nentry\nHKEY_LOCAL_MACHINE\\SOFTWARE\\ODBC\\ODBCINST.INI\\PostgreSQL\\Debug\nas 1. To turn off mylog debug, please set the entry to 0 or\nremove it.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 27 Jul 2001 19:05:59 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Slow Performance in PostgreSQL"
}
] |
[
{
"msg_contents": "Hello all,\n\n Is anybody trying to solve the 8191 bytes query limit from libpq\nwindows port ???\n We've discussed this topic on \"Large queries - again\" thread and it\nseems like nobody got interested on fixing it.\n All Windows applications that rely on libpq are broken because of\nthis issue (ODBC applications are fine btw).\n I can also do any kind of testing under Windows (and actually I'll\ndo it anyway). I wonder if this limitation also applies to the unix libpq\nlibrary ???\n Jan, Tom, Bruce - any news on this ?\n\nBest Regards,\nSteve Howe\n\n\n",
"msg_date": "Thu, 26 Jul 2001 01:24:42 -0300",
"msg_from": "\"Steve Howe\" <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "LIBPQ on Windows and large Queries"
},
{
"msg_contents": "\"Steve Howe\" <howe@carcass.dhs.org> writes:\n> Is anybody trying to solve the 8191 bytes query limit from libpq\n> windows port ???\n\nI think it's *your* responsibility, at least to identify what's going\non. This is an open source project, and that means you can and should\nfix problems that affect you.\n\nFWIW, if the problem is real (which I still misdoubt), it seems like\nit would have to be related to needing to flush multiple output\nbufferloads during a single PQsendQuery. This works just fine on\neverything but Windows --- why would it fail there? And why would\nno one but you have complained of it before? I have no ideas.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Jul 2001 01:36:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LIBPQ on Windows and large Queries "
},
{
"msg_contents": "At 01:36 27.07.2001 -0400, you wrote:\n>\"Steve Howe\" <howe@carcass.dhs.org> writes:\n> > Is anybody trying to solve the 8191 bytes query limit from libpq\n> > windows port ???\n>\n>I think it's *your* responsibility, at least to identify what's going\n>on. This is an open source project, and that means you can and should\n>fix problems that affect you.\n>\n>FWIW, if the problem is real (which I still misdoubt), it seems like\n>it would have to be related to needing to flush multiple output\n>bufferloads during a single PQsendQuery. This works just fine on\n>everything but Windows --- why would it fail there? And why would\n>no one but you have complained of it before? I have no ideas.\n\n[...]\n\nTo go on about this, I use psql 7.1.2 for toying around on Win2K, and have not\nhad this problem. I just evaluated using TOASTED blobs (written in a text \ncolumn\nas base64 encoded).\n\nI did use the cygwin libpq though. Probably I'll get around compiling \nnative libpq\nand try that with my test cases.\n\nGreetings,\n Joerg\n\n",
"msg_date": "Fri, 27 Jul 2001 10:14:47 +0200",
"msg_from": "Joerg Hessdoerfer <Joerg.Hessdoerfer@sea-gmbh.com>",
"msg_from_op": false,
"msg_subject": "Re: LIBPQ on Windows and large Queries "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Steve Howe\" <howe@carcass.dhs.org> writes:\n> > Is anybody trying to solve the 8191 bytes query limit from libpq\n> > windows port ???\n> \n> FWIW, if the problem is real (which I still misdoubt), \n\nYes it's real.\nError handling seems the cause. When \"Error: pqReadData() --\nread() failed: errno=0 No error\" occurs WSAGetLastError()\nreturns WSAEWOULDBLOCK. If EWOULDBLOCK exists and errno ==\nEWOULDBLOCK, pqReadData() returns 0 or 1 not -1.\nI added the code errno = WSAGetLastError(); and \n#define\tEWOULDBLOCK WSAEWOULDBLOCK.\nAfter that I encountered another error \"pqFlush() -- \ncouldn't send data: errno=0\". Then WSAGetLastError() also\nreturns WSAEWOULDBLOCK. After adding *errno = WSAGetLastError();*\nthe insertion was successful.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 27 Jul 2001 18:13:54 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: LIBPQ on Windows and large Queries"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Yes it's real.\n> Error handling seems the cause.\n\nHmm, are you working with CVS tip or 7.1.* sources? I'm wondering\nhow this interacts with the recent patch to #define errno as\nWSAGetLastError() on WIN32...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 29 Jul 2001 16:05:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LIBPQ on Windows and large Queries "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Yes it's real.\n> > Error handling seems the cause.\n> \n> Hmm, are you working with CVS tip or 7.1.* sources? I'm wondering\n> how this interacts with the recent patch to #define errno as\n> WSAGetLastError() on WIN32...\n> \n\nOops sorry, I could have no cvs access for 3 weeks.\nWell haven't this problem solved already ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Mon, 30 Jul 2001 08:55:35 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: LIBPQ on Windows and large Queries"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Well haven't this problem solved already ?\n\nI'm not sure. Steve, have you tried current sources?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 29 Jul 2001 19:57:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LIBPQ on Windows and large Queries "
},
{
"msg_contents": "Hello all,\n\n I was in a trip and just arrived, and will do it real soon.\n\n\nBest Regards,\nSteve Howe\n\n----- Original Message ----- \nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Hiroshi Inoue\" <Inoue@tpf.co.jp>\nCc: \"Steve Howe\" <howe@carcass.dhs.org>; <pgsql-hackers@postgresql.org>\nSent: Sunday, July 29, 2001 8:57 PM\nSubject: Re: [HACKERS] LIBPQ on Windows and large Queries \n\n\n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Well haven't this problem solved already ?\n> \n> I'm not sure. Steve, have you tried current sources?\n> \n> regards, tom lane\n> \n\n",
"msg_date": "Wed, 1 Aug 2001 02:20:23 -0300",
"msg_from": "\"Steve Howe\" <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: LIBPQ on Windows and large Queries "
}
] |
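Hiroshi's diagnosis in the thread above reduces to one pattern: on Windows, a full or empty socket buffer is reported through WSAGetLastError() as WSAEWOULDBLOCK, and unless that value is mapped back onto errno, libpq's pqFlush()/pqReadData() mistake a retryable would-block condition for a hard failure as soon as a query spans more than one output buffer. A minimal sketch of the retry-on-would-block loop, in Python rather than libpq's C (the socketpair, buffer sizes, and function name are illustrative assumptions, not code from the thread):

```python
import socket

def send_all_nonblocking(sock, peer, data):
    """Send all of `data` on a non-blocking socket, treating a
    would-block error as "flush and retry" rather than a failure --
    the distinction the Windows fix restores by assigning
    errno = WSAGetLastError() so EWOULDBLOCK is seen again."""
    sent = 0
    received = bytearray()
    while sent < len(data):
        try:
            sent += sock.send(data[sent:])
        except BlockingIOError:
            # Buffer full (EWOULDBLOCK): not an error -- let the
            # peer drain some bytes, then try the send again.
            try:
                received += peer.recv(65536)
            except BlockingIOError:
                pass
    while len(received) < len(data):      # drain what is still buffered
        try:
            received += peer.recv(65536)
        except BlockingIOError:
            pass
    return bytes(received)

a, b = socket.socketpair()                # stand-in for client and server
a.setblocking(False)
b.setblocking(False)
payload = b"x" * 1_000_000                # far larger than 8191 bytes
echoed = send_all_nonblocking(a, b, payload)
```

A loop of this shape never gives up on a would-block condition, which is why multi-buffer queries already worked on Unix; the reported Windows failure came from errno never being set to EWOULDBLOCK in the first place.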
[
{
"msg_contents": "Try this:\n\ntest=# create table parent(a int4 primary key);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'parent_pkey'\nfor table 'parent'\nCREATE\ntest=# alter table parent add column \"b\" int4 references parent(c) on delete\nset null;\nALTER\ntest=# \\d test\nDid not find any relation named \"test\".\ntest=# \\d parent\n Table \"parent\"\n Attribute | Type | Modifier\n-----------+---------+----------\n a | integer | not null\n b | integer |\nIndex: parent_pkey\n\n\nNotice how the reference to the non-existent column was allowed...\n\nNow I check the pg_trigger table:\n\ntest=# select * from pg_trigger;\n tgrelid | tgname | tgfoid | tgtype | tgenabled | tgisconstraint |\ntgconst\nrname | tgconstrrelid | tgdeferrable | tginitdeferred | tgnargs | tgattr |\ntgargs\n---------+----------------+--------+--------+-----------+----------------+--\n------\n------+---------------+--------------+----------------+---------+--------+--\n------\n 1260 | pg_sync_pg_pwd | 1689 | 29 | t | f |\n | 0 | f | f | 0 | |\n(1 row)\n\n...and it looks like the reference was never created...\n\nChris\n\n",
"msg_date": "Thu, 26 Jul 2001 12:45:26 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Bug in ADD COLUMN with REFERENCES"
}
] |
[
{
"msg_contents": "\n> > There is one table, let's\n> > say table1, with about 11.0000 rows. On doing a \"select distinct var1\n> > from table1\" via ODBC-Driver, I had to wait about 15 sec for the result.\n\n1. I assume following index exists: \n\tcreate index table1_x0 on table1 (var1);\n\n\"select distinct\" when executed through a btree index could benefit from\nan optimization as follows:\n\nfor each \"key\" that is already in a sorted list output skip the heap tuple\nlookup, since for a duplicate it is not necessary to know the tx status,\nnot sure if that is implementable though.\n\n(PostgreSQL needs to look up the tx status in the table for each key in index)\n\nThe performance difference is unfortunately to be expected, since Centura can\nprobably do the distinct by only looking at the index.\n\nYou could try to normalize your schema (pull those values out into an extra table) \nif you need such \"distinct\" queries to be fast.\n\nAndreas\n",
"msg_date": "Thu, 26 Jul 2001 09:18:36 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: Re: Slow Performance in PostgreSQL"
}
] |
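Andreas's suggestion above -- that a btree index scan yields keys in sorted order, so a "select distinct" could skip the per-tuple heap lookup for every key equal to the one just emitted -- can be illustrated with a small sketch (Python, with invented names and row counts; the real change would live in the backend's C executor):

```python
def distinct_via_sorted_index(sorted_keys):
    """Walk index keys in sorted order; duplicates are adjacent, so
    repeats can be skipped without the per-tuple heap lookup that
    PostgreSQL otherwise needs to check transaction visibility."""
    out = []
    heap_lookups = 0
    prev = object()            # sentinel that compares unequal to any key
    for key in sorted_keys:
        if key == prev:
            continue           # duplicate: no heap visit at all
        heap_lookups += 1      # one simulated visibility check per distinct key
        out.append(key)
        prev = key
    return out, heap_lookups

# ~11,000 index entries but only a few distinct values, as in the report:
keys = sorted(["x"] * 5000 + ["y"] * 5000 + ["z"] * 1000)
values, lookups = distinct_via_sorted_index(keys)
```

As Andreas himself notes, it is not obvious this is implementable as-is: at least one occurrence of each key must still be checked for transaction visibility, and if that tuple turns out to be dead the scan would have to fall back to the duplicates it skipped.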
[
{
"msg_contents": "Dear all ...\nI have a postgres problem \nI am developing application with c++\nto connect to postgres SQL.\n\nWhen I compiled my source.cc I found some error like this :\n\n/tmp/ccy63XDd.o: In function `main':\n/tmp/ccy63XDd.o(.text+0x70): undefined reference to `PQsetdbLogin'\n/tmp/ccy63XDd.o(.text+0x91): undefined reference to `PQstatus'\n/tmp/ccy63XDd.o(.text+0xc4): undefined reference to `PQerrorMessage'\ncollect2: ld returned 1 exit status\n\nPlease help me to fix the problem .....\nThank you for your attention ....\n\n-- \nZudi Iswanto\n",
"msg_date": "Thu, 26 Jul 2001 16:55:14 +0700",
"msg_from": "Zudi Iswanto <zudi@dnet.net.id>",
"msg_from_op": true,
"msg_subject": "None"
},
{
"msg_contents": "On Thu, Jul 26, 2001 at 04:55:14PM +0700, Zudi Iswanto wrote:\n> I am developing application with c++\n... \n> /tmp/ccy63XDd.o(.text+0x70): undefined reference to `PQsetdbLogin'\n> /tmp/ccy63XDd.o(.text+0x91): undefined reference to `PQstatus'\n> /tmp/ccy63XDd.o(.text+0xc4): undefined reference to `PQerrorMessage'\n\nDid you link libpq as well as libpq++ ? ie something like\n-L/usr/local/lib/pgsql -Wl,-R/usr/local/lib/pgsql -lpq++ -lpq\n\nCheers,\n\nPatrick\n",
"msg_date": "Thu, 26 Jul 2001 20:29:21 +0100",
"msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>",
"msg_from_op": false,
"msg_subject": "Re: "
}
] |
[
{
"msg_contents": "In the course of developing my XML parser hooks I've been using an\nexternal XML parser (expat) which is built as a shared library. The \nC functions I'm writing need to access functions within that library.\n\nIs it OK just to link the .so of my backend function against the expat\nlibrary? i.e. to do\n\ngcc -shared -lexpat -o pgxml.so pgxml.o \n\nas the link stage (it seems to work fine) -or is there a portability \nproblem with this?\n\nIF this is OK, would it be sensible to change the platform-specific \nmakefile %.so rule to allow the specification of extra instance specific \nflags i.e. (example from Makefile.linux)\n\n%.so: %.o\n $(CC) -shared -o $@ $<\n\nchanged to:\n\n%.so: %.o\n\t$(CC) -shared $(DLLINKFLAGS) -o $@ $< \n\nor something similar, which would prevent me from having to override\nthe global rule and allow greater portability.\n\nThanks\n\nJohn\n",
"msg_date": "Thu, 26 Jul 2001 10:15:10 +0000",
"msg_from": "\"John Gray\" <jgray@beansindustry.co.uk>",
"msg_from_op": true,
"msg_subject": "Linking a shared library against a C function"
},
{
"msg_contents": "John Gray writes:\n\n> Is it OK just to link the .so of my backend function against the expat\n> library? i.e. to do\n>\n> gcc -shared -lexpat -o pgxml.so pgxml.o\n>\n> as the link stage (it seems to work fine) -or is there a portability\n> problem with this?\n\nOn some platforms the dynamic loader will have to load the libexpat.so\nobject separately before the pgxml.so object, otherwise loading the latter\nwill fail with unresolved symbols. (This is either because the dynamic\nloader ignores the libraries dependencies, or because the system doesn't\nallow shared libraries to have dependencies at all.) In those cases the\nonly solution is to use something like libtool. However, those systems\nare getting rarer, so you ought to be safe as a contrib item anyway.\n\n> IF this is OK, would it be sensible to change the platform-specific\n> makefile %.so rule to allow the specification of extra instance specific\n> flags i.e. (example from Makefile.linux)\n>\n> %.so: %.o\n> $(CC) -shared -o $@ $<\n>\n> changed to:\n>\n> %.so: %.o\n> \t$(CC) -shared $(DLLINKFLAGS) -o $@ $<\n>\n> or something similar, which would prevent me from having to override\n> the global rule and allow greater portability.\n\nUse the Makefile.shlib interface. The rules in Makefile.port aren't very\npowerful.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 6 Aug 2001 02:21:08 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Linking a shared library against a C function"
}
] |
[
{
"msg_contents": "If I do something like that, psql claims:\n\nERROR: unexpected SELECT query in exec_stmt_execsql()\n\nwhy? How do I do this?\n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@tripnet.se\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Stockholm/Sweden\n\niodine NORAD Saddam Hussein Ft. Bragg colonel assassination Honduras\n$400 million in gold bullion Nazi killed Rule Psix World Trade Center\nradar plutonium explosion\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n",
"msg_date": "26 Jul 2001 12:46:16 +0200",
"msg_from": "Turbo Fredriksson <turbo@bayour.com>",
"msg_from_op": true,
"msg_subject": "plpgsql: 'SELECT * FROM tbl'"
}
] |
[
{
"msg_contents": "Just want to say I have been looking at the development version of 7.2 and I am\ncompletely impressed. The one huge stumbling block to a 24x7 deployment,\nvacuum, has been removed!! This is utterly fantastic! Is there a target date\nfor release? (sorry for asking, I know how irritating such questions are.)\n",
"msg_date": "Thu, 26 Jul 2001 08:04:36 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Release of 7.2"
},
{
"msg_contents": "What's new in 7.2? URL?\n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@tripnet.se\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Stockholm/Sweden\n\nRule Psix jihad PLO Iran arrangements nitrate Nazi class struggle\ntoluene spy plutonium Soviet Marxist tritium attack\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n",
"msg_date": "26 Jul 2001 14:30:02 +0200",
"msg_from": "Turbo Fredriksson <turbo@bayour.com>",
"msg_from_op": false,
"msg_subject": "Re: Release of 7.2"
},
{
"msg_contents": "Oops!! I looked at the title and it may have been misleading. Sorry.\n\nmlw wrote:\n> \n> Just want to say I have been looking at the development version of 7.2 and I am\n> completely impressed. The one huge stumbling block to a 24x7 deployment,\n> vacuum, has been removed!! This is utterly fantastic! Is there a target date\n> for release? (sorry for asking, I know how irritating such questions are.)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Thu, 26 Jul 2001 08:59:32 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Release of 7.2-Not released!!! Just a Question!"
},
{
"msg_contents": "\nSee the TODO list with dashes.\n\n> What's new in 7.2? URL?\n> \n> -- \n> Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n> ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n> / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n> _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@tripnet.se\n> \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Stockholm/Sweden\n> \n> Rule Psix jihad PLO Iran arrangements nitrate Nazi class struggle\n> toluene spy plutonium Soviet Marxist tritium attack\n> [See http://www.aclu.org/echelonwatch/index.html for more about this]\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 26 Jul 2001 17:55:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Release of 7.2"
}
] |
[
{
"msg_contents": "\tI'm surprised you are even able to run postgres in windows. I\ndidn't even know postgres supported windows. Could you kindly point me to\ninstructions on how to run it and build the postgres souce on windows? If\nnobody will try to fix it, then maybe we should just try it ourseleves and\npost some patch to it.\n\n-----Original Message-----\nFrom: Steve Howe [mailto:howe@carcass.dhs.org]\nSent: Wednesday, July 25, 2001 9:25 PM\nTo: pgsql-hackers@postgresql.org\nSubject: [HACKERS] LIBPQ on Windows and large Queries\n\n\nHello all,\n\n Is anybody trying to solve the 8191 bytes query limit from libpq\nwindows port ???\n We've discussed this topic on \"Large queries - again\" thread and it\nseems like nobody got interested on fixing it.\n All Windows applications that rely on libpq are broken because of\nthis issue (ODBC applications are fine btw).\n I can also do any kind of testing under Windows (and actually I'll\ndo it anyway). I wonder if this limitation also applies to the unix libpq\nlibrary ???\n Jan, Tom, Bruce - any news on this ?\n\nBest Regards,\nSteve Howe\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n",
"msg_date": "Thu, 26 Jul 2001 12:36:27 -0700",
"msg_from": "Khoa Do <kdo@stratacare.com>",
"msg_from_op": true,
"msg_subject": "RE: LIBPQ on Windows and large Queries"
},
{
"msg_contents": "On Thu, 26 Jul 2001, Khoa Do wrote:\n\n> \tI'm surprised you are even able to run postgres in windows. I\n> didn't even know postgres supported windows. Could you kindly point me to\n> instructions on how to run it and build the postgres souce on windows? If\n> nobody will try to fix it, then maybe we should just try it ourseleves and\n> post some patch to it.\n> \n> -----Original Message-----\n> From: Steve Howe [mailto:howe@carcass.dhs.org]\n> Sent: Wednesday, July 25, 2001 9:25 PM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] LIBPQ on Windows and large Queries\n> \n> \n> Hello all,\n> \n> Is anybody trying to solve the 8191 bytes query limit from libpq\n> windows port ???\n> We've discussed this topic on \"Large queries - again\" thread and it\n> seems like nobody got interested on fixing it.\n> All Windows applications that rely on libpq are broken because of\n> this issue (ODBC applications are fine btw).\n> I can also do any kind of testing under Windows (and actually I'll\n> do it anyway). I wonder if this limitation also applies to the unix libpq\n> library ???\n> Jan, Tom, Bruce - any news on this ?\n\nSteve's question was about the Postgres library (libpq.dll). I can't\nconfirm that is hasn't been TOAST'ed (for >8192 chars).\n\nPostgreSQL does, however, run just peachy under Windows. Has for a long\ntime w/Windows NT and Windows 2000; recently, works with Windows 98\ntoo. (Can't vouch for WinME, never touched the thing.)\n\n www.cygwin.com\n\nCan download it as part of the Cygwin package. You'll need to install\nCygIPC (easily found via google, a simple binary install). Whole thing is\npretty much of a snap nowadays.\n\nOf course, would you want to run a serious database under Windows 98?\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Thu, 26 Jul 2001 16:37:06 -0400 (EDT)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": false,
"msg_subject": "RE: LIBPQ on Windows and large Queries"
}
] |
[
{
"msg_contents": "I am attempting to compile PostgreSQL 7.1.2 on an IBM B50 running AIX 5.1.\nRunning configure works fine but when I go to the next step and run gmake\ndies with the following error. If someone could point me in the direction of\nwhat exactly is causing the error I would appreciate it greatly.\n\n<!-- BEGIN Error Message -->\ngmake[4]: Leaving directory\n`/downloads/postgresql/postgresql-7.1.2/src/backend/\nutils/time'\n/usr/bin/ld -r -o SUBSYS.o fmgrtab.o adt/SUBSYS.o cache/SUBSYS.o\nerror/SUBSYS.o\nfmgr/SUBSYS.o hash/SUBSYS.o init/SUBSYS.o misc/SUBSYS.o mmgr/SUBSYS.o\nsort/SUBSY\nS.o time/SUBSYS.o\ngmake[3]: Leaving directory\n`/downloads/postgresql/postgresql-7.1.2/src/backend/\nutils'\ngcc -O2 -pipe -Wall -Wmissing-prototypes -Wmissing-declarations\naccess/SUBSYS\n.o bootstrap/SUBSYS.o catalog/SUBSYS.o parser/SUBSYS.o commands/SUBSYS.o\nexecuto\nr/SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o nodes/SUBSYS.o\noptimizer/SU\nBSYS.o port/SUBSYS.o postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o\nstorage\n/SUBSYS.o tcop/SUBSYS.o\ntils/SUBSYS.o -lz -lPW -lld -lnsl -ldl -lreadline -o p\nostgres\nld: 0711-317 ERROR: Undefined symbol: .log\nld: 0711-317 ERROR: Undefined symbol: .ceil\nld: 0711-317 ERROR: Undefined symbol: .sqrt\nld: 0711-317 ERROR: Undefined symbol: .isnan\nld: 0711-317 ERROR: Undefined symbol: .floor\nld: 0711-317 ERROR: Undefined symbol: .pow\nld: 0711-317 ERROR: Undefined symbol: .exp\nld: 0711-317 ERROR: Undefined symbol: .log10\nld: 0711-317 ERROR: Undefined symbol: .acos\nld: 0711-317 ERROR: Undefined symbol: .asin\nld: 0711-317 ERROR: Undefined symbol: .atan\nld: 0711-317 ERROR: Undefined symbol: .atan2\nld: 0711-317 ERROR: Undefined symbol: .cos\nld: 0711-317 ERROR: Undefined symbol: .tan\nld: 0711-317 ERROR: Undefined symbol: .sin\nld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more\ninformation.\ncollect2: ld returned 8 exit status\ngmake[2]: *** [postgres] Error 1\ngmake[2]: Leaving 
directory\n`/downloads/postgresql/postgresql-7.1.2/src/backend'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/downloads/postgresql/postgresql-7.1.2/src'\ngmake: *** [all] Error 2\n<!-- END Error Message -->\n\n-Mark Esformes\nDeveloper\nwww.BigWhat.com\nBigWhat.com, Inc.\n410 Old Main Street\nBradenton, FL 34205\n941-747-2160\nmark@bigwhat.com\n\"A Different Kind of Search!\"\n\n",
"msg_date": "Thu, 26 Jul 2001 18:28:47 -0400",
"msg_from": "\"BigWhat.com\" <ml.postgres@bigwhat.com>",
"msg_from_op": true,
"msg_subject": "Failed compile PostgreSQL 7.1.2 on AIX 5.1"
},
{
"msg_contents": "On Thu, 26 Jul 2001, BigWhat.com wrote:\n\n> I am attempting to compile PostgreSQL 7.1.2 on an IBM B50 running AIX 5.1.\n\nI'm jealous. I saw a rack full of B50's and fell in love.. ;-)\n\n[Snip]\n\n> gcc -O2 -pipe -Wall -Wmissing-prototypes -Wmissing-declarations\n> access/SUBSYS\n> .o bootstrap/SUBSYS.o catalog/SUBSYS.o parser/SUBSYS.o commands/SUBSYS.o\n> executo\n> r/SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o nodes/SUBSYS.o\n> optimizer/SU\n> BSYS.o port/SUBSYS.o postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o\n> storage\n> /SUBSYS.o tcop/SUBSYS.o\n> tils/SUBSYS.o -lz -lPW -lld -lnsl -ldl -lreadline -o p\n> ostgres\n> ld: 0711-317 ERROR: Undefined symbol: .log\n> ld: 0711-317 ERROR: Undefined symbol: .ceil\n> ld: 0711-317 ERROR: Undefined symbol: .sqrt\n> ld: 0711-317 ERROR: Undefined symbol: .isnan\n> ld: 0711-317 ERROR: Undefined symbol: .floor\n> ld: 0711-317 ERROR: Undefined symbol: .pow\n> ld: 0711-317 ERROR: Undefined symbol: .exp\n> ld: 0711-317 ERROR: Undefined symbol: .log10\n> ld: 0711-317 ERROR: Undefined symbol: .acos\n> ld: 0711-317 ERROR: Undefined symbol: .asin\n> ld: 0711-317 ERROR: Undefined symbol: .atan\n> ld: 0711-317 ERROR: Undefined symbol: .atan2\n> ld: 0711-317 ERROR: Undefined symbol: .cos\n> ld: 0711-317 ERROR: Undefined symbol: .tan\n> ld: 0711-317 ERROR: Undefined symbol: .sin\n> ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more\n> information.\n\nIt looks like you need to add a -lm in there somewhere to include the math\nlibrary.\n\nexport LDFLAGS=\"-lm\" should do the job, unless someone else knows the\nexact place to put it.\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n",
"msg_date": "Thu, 26 Jul 2001 22:41:03 -0500 (CDT)",
"msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>",
"msg_from_op": false,
"msg_subject": "Re: Failed compile PostgreSQL 7.1.2 on AIX 5.1"
},
{
"msg_contents": "BigWhat.com writes:\n\n> ld: 0711-317 ERROR: Undefined symbol: .log\n> ld: 0711-317 ERROR: Undefined symbol: .ceil\n> ld: 0711-317 ERROR: Undefined symbol: .sqrt\n[...]\n\nLooks like configure failed to notice your math library (-lm). Please\ncheck the config.log file to see why.\n\n> ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more\n> information.\n\nWell...?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 5 Aug 2001 21:34:31 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Failed compile PostgreSQL 7.1.2 on AIX 5.1"
}
] |
[
{
"msg_contents": "I noticed that pltcl didn't have any way to get to SPI_lastoid like plpgsql does.. I started using pltcl a lot because I like to decide when and how my queries get planned.. so I put one together really quick\n\nSorry I don't have the original around to make a quick diff, but its a very small change... I think this should be in the next release, there's no reason not to have it.\n\nits a function with no expected arguments, so you can use it like:\nspi_exec \"INSERT INTO mytable(columns...) VALUES(values..)\"\nset oid [spi_lastoid]\nspi_exec \"SELECT mytable_id from mytable WHERE oid=$oid\"\n\nIt just didn't make sense for me to use plpgsql and pltcl, or just screw them both and use SPI from C\n\nthese changes are for src/pl/tcl/pltcl.c\n\n/* start C code */\n\n/* forward declaration */\nstatic int pltcl_SPI_lastoid(ClientData cdata, Tcl_Interp *interp,\n int argc, char *argv[]);\n\n/* this needs to go in in pltcl_init_interp with the rest of 'em */\n Tcl_CreateCommand(interp, \"spi_lastoid\",\n pltcl_SPI_lastoid, NULL, NULL);\n \n \n/**********************************************************************\n * pltcl_SPI_lastoid() - return the last oid. To\n * be used after insert queries\n **********************************************************************/\nstatic int\npltcl_SPI_lastoid(ClientData cdata, Tcl_Interp *interp,\n int argc, char *argv[])\n{\n char buf[64];\n sprintf(buf,\"%d\",SPI_lastoid);\n Tcl_SetResult(interp, buf, TCL_VOLATILE);\n return TCL_OK;\n}\n\n/* end C code */\n\n-bob\n",
"msg_date": "Thu, 26 Jul 2001 22:31:22 -0400",
"msg_from": "bob@redivi.com",
"msg_from_op": true,
"msg_subject": "pltcl - lastoid"
},
{
"msg_contents": "Attached is the patch you suggested, with a documentation addition. Is\nthis correct?\n\n> I noticed that pltcl didn't have any way to get to SPI_lastoid like plpgsql does.. I started using pltcl a lot because I like to decide when and how my queries get planned.. so I put one together really quick\n> \n> Sorry I don't have the original around to make a quick diff, but its a very small change... I think this should be in the next release, there's no reason not to have it.\n> \n> its a function with no expected arguments, so you can use it like:\n> spi_exec \"INSERT INTO mytable(columns...) VALUES(values..)\"\n> set oid [spi_lastoid]\n> spi_exec \"SELECT mytable_id from mytable WHERE oid=$oid\"\n> \n> It just didn't make sense for me to use plpgsql and pltcl, or just screw them both and use SPI from C\n> \n> these changes are for src/pl/tcl/pltcl.c\n> \n> /* start C code */\n> \n> /* forward declaration */\n> static int pltcl_SPI_lastoid(ClientData cdata, Tcl_Interp *interp,\n> int argc, char *argv[]);\n> \n> /* this needs to go in in pltcl_init_interp with the rest of 'em */\n> Tcl_CreateCommand(interp, \"spi_lastoid\",\n> pltcl_SPI_lastoid, NULL, NULL);\n> \n> \n> /**********************************************************************\n> * pltcl_SPI_lastoid() - return the last oid. 
To\n> * be used after insert queries\n> **********************************************************************/\n> static int\n> pltcl_SPI_lastoid(ClientData cdata, Tcl_Interp *interp,\n> int argc, char *argv[])\n> {\n> char buf[64];\n> sprintf(buf,\"%d\",SPI_lastoid);\n> Tcl_SetResult(interp, buf, TCL_VOLATILE);\n> return TCL_OK;\n> }\n> \n> /* end C code */\n> \n> -bob\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: doc/src/sgml/pltcl.sgml\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/pltcl.sgml,v\nretrieving revision 2.11\ndiff -c -r2.11 pltcl.sgml\n*** doc/src/sgml/pltcl.sgml\t2001/06/09 02:19:07\t2.11\n--- doc/src/sgml/pltcl.sgml\t2001/08/01 19:32:04\n***************\n*** 395,400 ****\n--- 395,412 ----\n </varlistentry>\n \n <varlistentry>\n+ <indexterm>\n+ <primary>spi_lastoid</primary>\n+ </indexterm>\n+ <term>spi_lastoid</term>\n+ <listitem>\n+ <para>\n+ \tReturns the OID of the last query if it was an INSERT.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n+ <varlistentry>\n <term>spi_exec ?-count <replaceable>n</replaceable>? ?-array <replaceable>name</replaceable>? 
<replaceable>query</replaceable> ?<replaceable>loop-body</replaceable>?</term>\n <listitem>\n <para>\nIndex: src/pl/tcl/pltcl.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/pl/tcl/pltcl.c,v\nretrieving revision 1.37\ndiff -c -r1.37 pltcl.c\n*** src/pl/tcl/pltcl.c\t2001/06/09 02:19:07\t1.37\n--- src/pl/tcl/pltcl.c\t2001/08/01 19:32:09\n***************\n*** 144,149 ****\n--- 144,151 ----\n \t\t\t\t\t int tupno, HeapTuple tuple, TupleDesc tupdesc);\n static void pltcl_build_tuple_argument(HeapTuple tuple, TupleDesc tupdesc,\n \t\t\t\t\t\t Tcl_DString *retval);\n+ static int pltcl_SPI_lastoid(ClientData cdata, Tcl_Interp *interp,\n+ \t\t\t\tint argc, char *argv[]);\n \n /*\n * This routine is a crock, and so is everyplace that calls it. The problem\n***************\n*** 251,257 ****\n \t\t\t\t\t pltcl_SPI_prepare, NULL, NULL);\n \tTcl_CreateCommand(interp, \"spi_execp\",\n \t\t\t\t\t pltcl_SPI_execp, NULL, NULL);\n! \n #ifdef ENABLE_PLTCL_UNKNOWN\n \t/************************************************************\n \t * Try to load the unknown procedure from pltcl_modules\n--- 253,261 ----\n \t\t\t\t\t pltcl_SPI_prepare, NULL, NULL);\n \tTcl_CreateCommand(interp, \"spi_execp\",\n \t\t\t\t\t pltcl_SPI_execp, NULL, NULL);\n! \tTcl_CreateCommand(interp, \"spi_lastoid\",\n! \t\t\t\t\t pltcl_SPI_lastoid, NULL, NULL);\n! \t\t\t\t\t \n #ifdef ENABLE_PLTCL_UNKNOWN\n \t/************************************************************\n \t * Try to load the unknown procedure from pltcl_modules\n***************\n*** 2270,2275 ****\n--- 2274,2294 ----\n \t ************************************************************/\n \tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n \tsprintf(buf, \"%d\", ntuples);\n+ \tTcl_SetResult(interp, buf, TCL_VOLATILE);\n+ \treturn TCL_OK;\n+ }\n+ \n+ \n+ /**********************************************************************\n+ * pltcl_SPI_lastoid() - return the last oid. 
To\n+ * be used after insert queries\n+ **********************************************************************/\n+ static int\n+ pltcl_SPI_lastoid(ClientData cdata, Tcl_Interp *interp,\n+ \t\t\t\t int argc, char *argv[])\n+ {\n+ \tchar buf[64];\n+ \tsprintf(buf,\"%d\",SPI_lastoid);\n \tTcl_SetResult(interp, buf, TCL_VOLATILE);\n \treturn TCL_OK;\n }",
"msg_date": "Wed, 1 Aug 2001 15:35:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pltcl - lastoid"
},
{
"msg_contents": "Kindly format OIDs with %u not %d ... otherwise it looks reasonable...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Aug 2001 15:57:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pltcl - lastoid "
},
{
"msg_contents": "> Kindly format OIDs with %u not %d ... otherwise it looks reasonable...\n\nChange made.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Aug 2001 17:19:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pltcl - lastoid"
},
{
"msg_contents": "bob@redivi.com wrote:\n> \n> I noticed that pltcl didn't have any way to get to SPI_lastoid like plpgsql does.. I started using pltcl a lot because I like to decide when and how my queries get planned.. so I put one together really quick\n> \n\nPlease note that OIDs may be optional in 7.2 though\nit doesn't seem a problem for you.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 02 Aug 2001 10:42:43 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pltcl - lastoid"
},
{
"msg_contents": "Patch applied. Thanks.\n\n> I noticed that pltcl didn't have any way to get to SPI_lastoid like plpgsql does.. I started using pltcl a lot because I like to decide when and how my queries get planned.. so I put one together really quick\n> \n> Sorry I don't have the original around to make a quick diff, but its a very small change... I think this should be in the next release, there's no reason not to have it.\n> \n> its a function with no expected arguments, so you can use it like:\n> spi_exec \"INSERT INTO mytable(columns...) VALUES(values..)\"\n> set oid [spi_lastoid]\n> spi_exec \"SELECT mytable_id from mytable WHERE oid=$oid\"\n> \n> It just didn't make sense for me to use plpgsql and pltcl, or just screw them both and use SPI from C\n> \n> these changes are for src/pl/tcl/pltcl.c\n> \n> /* start C code */\n> \n> /* forward declaration */\n> static int pltcl_SPI_lastoid(ClientData cdata, Tcl_Interp *interp,\n> int argc, char *argv[]);\n> \n> /* this needs to go in in pltcl_init_interp with the rest of 'em */\n> Tcl_CreateCommand(interp, \"spi_lastoid\",\n> pltcl_SPI_lastoid, NULL, NULL);\n> \n> \n> /**********************************************************************\n> * pltcl_SPI_lastoid() - return the last oid. To\n> * be used after insert queries\n> **********************************************************************/\n> static int\n> pltcl_SPI_lastoid(ClientData cdata, Tcl_Interp *interp,\n> int argc, char *argv[])\n> {\n> char buf[64];\n> sprintf(buf,\"%d\",SPI_lastoid);\n> Tcl_SetResult(interp, buf, TCL_VOLATILE);\n> return TCL_OK;\n> }\n> \n> /* end C code */\n> \n> -bob\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: doc/src/sgml/pltcl.sgml\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/pltcl.sgml,v\nretrieving revision 2.11\ndiff -c -r2.11 pltcl.sgml\n*** doc/src/sgml/pltcl.sgml\t2001/06/09 02:19:07\t2.11\n--- doc/src/sgml/pltcl.sgml\t2001/08/01 19:32:04\n***************\n*** 395,400 ****\n--- 395,412 ----\n </varlistentry>\n \n <varlistentry>\n+ <indexterm>\n+ <primary>spi_lastoid</primary>\n+ </indexterm>\n+ <term>spi_lastoid</term>\n+ <listitem>\n+ <para>\n+ \tReturns the OID of the last query if it was an INSERT.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n+ <varlistentry>\n <term>spi_exec ?-count <replaceable>n</replaceable>? ?-array <replaceable>name</replaceable>? <replaceable>query</replaceable> ?<replaceable>loop-body</replaceable>?</term>\n <listitem>\n <para>\nIndex: src/pl/tcl/pltcl.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/pl/tcl/pltcl.c,v\nretrieving revision 1.37\ndiff -c -r1.37 pltcl.c\n*** src/pl/tcl/pltcl.c\t2001/06/09 02:19:07\t1.37\n--- src/pl/tcl/pltcl.c\t2001/08/01 19:32:09\n***************\n*** 144,149 ****\n--- 144,151 ----\n \t\t\t\t\t int tupno, HeapTuple tuple, TupleDesc tupdesc);\n static void pltcl_build_tuple_argument(HeapTuple tuple, TupleDesc tupdesc,\n \t\t\t\t\t\t Tcl_DString *retval);\n+ static int pltcl_SPI_lastoid(ClientData cdata, Tcl_Interp *interp,\n+ \t\t\t\tint argc, char *argv[]);\n \n /*\n * This routine is a crock, and so is everyplace that calls it. The problem\n***************\n*** 251,257 ****\n \t\t\t\t\t pltcl_SPI_prepare, NULL, NULL);\n \tTcl_CreateCommand(interp, \"spi_execp\",\n \t\t\t\t\t pltcl_SPI_execp, NULL, NULL);\n! 
\n #ifdef ENABLE_PLTCL_UNKNOWN\n \t/************************************************************\n \t * Try to load the unknown procedure from pltcl_modules\n--- 253,261 ----\n \t\t\t\t\t pltcl_SPI_prepare, NULL, NULL);\n \tTcl_CreateCommand(interp, \"spi_execp\",\n \t\t\t\t\t pltcl_SPI_execp, NULL, NULL);\n! \tTcl_CreateCommand(interp, \"spi_lastoid\",\n! \t\t\t\t\t pltcl_SPI_lastoid, NULL, NULL);\n! \t\t\t\t\t \n #ifdef ENABLE_PLTCL_UNKNOWN\n \t/************************************************************\n \t * Try to load the unknown procedure from pltcl_modules\n***************\n*** 2270,2275 ****\n--- 2274,2294 ----\n \t ************************************************************/\n \tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n \tsprintf(buf, \"%d\", ntuples);\n+ \tTcl_SetResult(interp, buf, TCL_VOLATILE);\n+ \treturn TCL_OK;\n+ }\n+ \n+ \n+ /**********************************************************************\n+ * pltcl_SPI_lastoid() - return the last oid. To\n+ * be used after insert queries\n+ **********************************************************************/\n+ static int\n+ pltcl_SPI_lastoid(ClientData cdata, Tcl_Interp *interp,\n+ \t\t\t\t int argc, char *argv[])\n+ {\n+ \tchar buf[64];\n+ \tsprintf(buf,\"%u\",SPI_lastoid);\n \tTcl_SetResult(interp, buf, TCL_VOLATILE);\n \treturn TCL_OK;\n }",
"msg_date": "Thu, 2 Aug 2001 11:46:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pltcl - lastoid"
}
] |
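Tom Lane's "%u not %d" note in the thread above matters because PostgreSQL's `Oid` is an unsigned 32-bit type: formatted with `%d`, any OID above 2^31 - 1 would print as a negative number. A minimal C sketch of the point (the `Oid` typedef here is a stand-in for the real one in the PostgreSQL headers, not included here):

```c
#include <stdio.h>

/* Stand-in for PostgreSQL's Oid typedef: really an unsigned 32-bit int. */
typedef unsigned int Oid;

/* Format an OID the way the corrected pltcl_SPI_lastoid does, with %u.
 * With %d, OIDs above 2147483647 would render as negative numbers on
 * typical platforms where int is a signed 32-bit type. */
void format_oid(Oid oid, char *buf, size_t buflen)
{
    snprintf(buf, buflen, "%u", oid);
}
```

For example, `format_oid(3000000000u, buf, sizeof buf)` yields `"3000000000"`, while the original `%d` version would have produced a negative string for the same OID.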
[
{
"msg_contents": "Hi,\n I'm fighting with problem with indexes. I read documentation about\nperformance tips, about internal logic functions which are making decision\nif to use or not use indexes, etc. and I'm still failed. I'm not SQL\nguru and I don't know what to do now. My tables and indexes looks like\n...\n\nCREATE TABLE counters (\n line VARCHAR(64) NOT NULL,\n counterfrom INT8 NOT NULL,\n counterto INT8 NOT NULL,\n counterstamp TIMESTAMP NOT NULL,\n stamp TIMESTAMP NOT NULL DEFAULT 'now');\n\nCREATE INDEX counters_line_stamp ON counters (line, counterstamp); \n\n I have several other tables too with names static_counters_(hour|day|month).\nWhy? It's only for fast sumarization, so ...\n\n in counters - 5min counters for last hour, rows are moved into static_counters\n after hour sumarization in counters_hour table\n\n in counters_hour - last day hour sums, rows are moved into static_counters_\n hour table after day sumarization in counters_day\n\n in counters_day - last month days sums, rows are moved into static_counters_\n days table after month sumarization in counters_month\n\n I'm inserting about 300 rows into counters table in 5min period (fetching\ninfo from routers). Sumarization is doing everyhour with some internal logic\nand decision are made by hour info. There are about 3 milions rows in\nstatic_counters table and they are only for last month. It means, that when\nnext month begins, we moved this old data into tables counters_YYYY_MM, etc.\nI'm running VACUUM ANALYZE two times a day. Everything works fine, but I'm drawing graphs from static_counters and counters tables. For first graph I\nneed about 33 hour old data and for second graph I need about a week old data.\nI know, now there is a more data than I need in this table, but if I create a\ntable with needed values only, there is no indexes used too. 
Select for graphs\nlooks like ...\n\nnetacc=> EXPLAIN (SELECT SUM(counterfrom) AS from, SUM(counterto) AS to,\nfloor((985098900 - date_part('epoch', counterstamp)) / 300) AS sequence\nFROM counters WHERE line='absolonll' AND date_part('epoch', counterstamp)\n> 984978900 GROUP BY sequence, line) UNION (SELECT SUM(counterfrom) AS\nfrom, SUM(counterto) AS to, floor((985098900 - date_part('epoch',\ncounterstamp)) / 300) AS sequence FROM static_counters WHERE\nline='absolonll' AND date_part('epoch', counterstamp) > 984978900 GROUP BY\nsequence, line); NOTICE: QUERY PLAN:\n \nUnique (cost=67518.73..67525.44 rows=89 width=36)\n -> Sort (cost=67518.73..67518.73 rows=895 width=36)\n -> Append (cost=1860.01..67474.87 rows=895 width=36)\n -> Aggregate (cost=1860.01..1870.90 rows=109 width=36)\n -> Group (cost=1860.01..1865.46 rows=1089 width=36)\n -> Sort (cost=1860.01..1860.01 rows=1089\nwidth=36)\n -> Seq Scan on counters\n(cost=0.00..1805.10 rows=1089 width=36)\n -> Aggregate (cost=65525.38..65603.97 rows=786 width=36)\n -> Group (cost=65525.38..65564.67 rows=7858\nwidth=36)\n -> Sort (cost=65525.38..65525.38 rows=7858\nwidth=36)\n -> Seq Scan on static_counters\n(cost=0.00..65016.95 rows=7858 width=36)\n \nEXPLAIN\nnetacc=>\n\n ... Indexes are used when I have a few rows in table only :( Result of this\nselect is about ~105 rows in every way. Now, I don't know what to do, because\ndrawing of this two graphs is about 30 seconds and it's too much.\n Please, how may I change my solution for fast graphs drawings? May I\nsplit this table? Make table for each line? Upgrade HW?\n\n I'm running PostgreSQL 7.0.3 now on RedHat 6.2 linux box. HW of this box\nis Duron 700MHz, 384MB RAM, SCSI disk. May I upgrade PostgreSQL? May I\nupgrade RAM, CPU? I don't know what to do now and any help will be very\nappreciated.\n\n Thank you very much,\nking regards,\n Robert Vojta\n\n-- \n _\n |-| __ Robert Vojta <vojta-at-ipex.cz> -= Oo.oO =-\n |=| [Ll] IPEX, s.r.o.\n \"^\" ====`o",
"msg_date": "Fri, 27 Jul 2001 12:13:37 +0200",
"msg_from": "Robert Vojta <vojta@ipex.cz>",
"msg_from_op": true,
"msg_subject": "indexes and big tables"
},
{
"msg_contents": "\nOn Fri, 27 Jul 2001, Robert Vojta wrote:\n> netacc=> EXPLAIN (SELECT SUM(counterfrom) AS from, SUM(counterto) AS to,\n> floor((985098900 - date_part('epoch', counterstamp)) / 300) AS sequence\n> FROM counters WHERE line='absolonll' AND date_part('epoch', counterstamp)\n> > 984978900 GROUP BY sequence, line) UNION (SELECT SUM(counterfrom) AS\n> from, SUM(counterto) AS to, floor((985098900 - date_part('epoch',\n> counterstamp)) / 300) AS sequence FROM static_counters WHERE\n> line='absolonll' AND date_part('epoch', counterstamp) > 984978900 GROUP BY\n> sequence, line); NOTICE: QUERY PLAN:\n\nIs there any possibility of overlapping rows between the parts of the\nunion? If not, I'd suggest union all, since that might get rid of the top\nlevel unique and sort steps (probably not a huge gain, but might help).\n\n> Unique (cost=67518.73..67525.44 rows=89 width=36)\n> -> Sort (cost=67518.73..67518.73 rows=895 width=36)\n> -> Append (cost=1860.01..67474.87 rows=895 width=36)\n> -> Aggregate (cost=1860.01..1870.90 rows=109 width=36)\n> -> Group (cost=1860.01..1865.46 rows=1089 width=36)\n> -> Sort (cost=1860.01..1860.01 rows=1089\n> width=36)\n> -> Seq Scan on counters\n> (cost=0.00..1805.10 rows=1089 width=36)\n> -> Aggregate (cost=65525.38..65603.97 rows=786 width=36)\n> -> Group (cost=65525.38..65564.67 rows=7858\n> width=36)\n> -> Sort (cost=65525.38..65525.38 rows=7858\n> width=36)\n> -> Seq Scan on static_counters\n> (cost=0.00..65016.95 rows=7858 width=36)\n> \n> EXPLAIN\n> netacc=>\n\n",
"msg_date": "Fri, 27 Jul 2001 08:59:16 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: indexes and big tables"
},
{
"msg_contents": "> Is there any possibility of overlapping rows between the parts of the\n> union? If not, I'd suggest union all, since that might get rid of the top\n> level unique and sort steps (probably not a huge gain, but might help).\n\nHi,\n thanx for the response, there is a little possibility of overlapping rows.\nBut, I removed date_part function and it use index now. It's too slow, so\nI move all needed data into graphs_5m table and it works fine now, previous\ntime for graphs drawing was ~30-60seconds, today it's ~3-5 seconds. Thanx.\n\nBest regards,\n Robert\n\n-- \n _\n |-| __ Robert Vojta <vojta-at-ipex.cz> -= Oo.oO =-\n |=| [Ll] IPEX, s.r.o.\n \"^\" ====`o",
"msg_date": "Sat, 28 Jul 2001 12:49:01 +0200",
"msg_from": "Robert Vojta <vojta@ipex.cz>",
"msg_from_op": true,
"msg_subject": "Re: indexes and big tables"
}
] |
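The fix Robert describes (dropping `date_part`) and Andreas's suggestion in the follow-up thread below both work for the same reason: `counterstamp > '<timestamp literal>'` compares the indexed column directly, while `date_part('epoch', counterstamp) > 984978900` wraps the column in a function, which the 7.0-era planner cannot match against the `(line, counterstamp)` index. A small hedged C helper for turning the epoch cutoff an application already has into such a literal (this assumes POSIX `gmtime_r` and renders the timestamp in UTC; adjust if `counterstamp` is stored in local time):

```c
#include <stdio.h>
#include <time.h>

/* Turn a Unix epoch (seconds) into a 'YYYY-MM-DD HH:MM:SS' literal so a
 * query can be written as
 *     WHERE line = 'absolonll' AND counterstamp > '2001-03-19 05:15:00'
 * instead of
 *     WHERE line = 'absolonll'
 *       AND date_part('epoch', counterstamp) > 984978900
 * The first form is sargable: the bare column can be matched against the
 * counters_line_stamp index, while the function-wrapped form cannot. */
void epoch_to_timestamp_literal(time_t epoch, char *buf, size_t buflen)
{
    struct tm tm_utc;
    gmtime_r(&epoch, &tm_utc);                  /* POSIX; break down in UTC */
    strftime(buf, buflen, "%Y-%m-%d %H:%M:%S", &tm_utc);
}
```

For the cutoff in the thread, `epoch_to_timestamp_literal(984978900, ...)` produces `2001-03-19 05:15:00` (UTC), which can be spliced into the query as a quoted literal.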
[
{
"msg_contents": "\n> netacc=> EXPLAIN (SELECT SUM(counterfrom) AS from, \n> SUM(counterto) AS to,\n> floor((985098900 - date_part('epoch', counterstamp)) / 300) \n> AS sequence\n> FROM counters WHERE line='absolonll' AND date_part('epoch', \n> ) counterstamp > 984978900 GROUP BY sequence, line) ...\n\nI would guess the problem is the restriction on counterstamp, because \nwritten like that, it probably can't use the index.\n\ntry something where you avoid the use of the date_part function e.g.:\n\tAND counterstamp > '2001-07-26 00:00:00.0'\n\nAndreas\n",
"msg_date": "Fri, 27 Jul 2001 13:19:54 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: indexes and big tables"
},
{
"msg_contents": "> I would guess the problem is the restriction on counterstamp, because \n> written like that, it probably can't use the index.\n> \n> try something where you avoid the use of the date_part function e.g.:\n> \tAND counterstamp > '2001-07-26 00:00:00.0'\n\n I will try it, but it use the index when there is a few amount of rows.\nWhen I insert a lot of rows like me (in milions), index isn't used. I don't\nknow the number of rows which makes border between using and don't using\nindex and I can discover it if you want. Going to try your suggestions ...\n\nBest regards,\n Robert\n\n-- \n _\n |-| __ Robert Vojta <vojta-at-ipex.cz> -= Oo.oO =-\n |=| [Ll] IPEX, s.r.o.\n \"^\" ====`o",
"msg_date": "Fri, 27 Jul 2001 14:30:53 +0200",
"msg_from": "Robert Vojta <vojta@ipex.cz>",
"msg_from_op": false,
"msg_subject": "Re: indexes and big tables"
}
] |
[
{
"msg_contents": "I was looking over the todo list and saw that someone wanted to support XML. I\nhave some quick and dirty stuff that could be used.\n\nOK, what should the feature look like?\n\nShould it be grafted onto pg_dump or should a new utility pg_xml be created?\n\nHow strict should it be? A stricter parser is easier to write, one can use a\nlibrary, unfortunately most xml is crap and for the utility to be useful, it\nhas to be real fuzzy.\n\nAny input would be appreciated.\n\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Fri, 27 Jul 2001 07:40:38 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "From TODO, XML?"
},
{
"msg_contents": "In article <3B615336.D654E7E1@mohawksoft.com>, markw@mohawksoft.com (mlw)\nwrote:\n> I was looking over the todo list and saw that someone wanted to support\n> XML. I have some quick and dirty stuff that could be used.\n> \n\nI'm not clear from the TODO what that \"XML support\" might involve. The\nreference to pg_dump suggests an XML dump format for databases. That only\nmakes sense if we build an XML frontend that can load XML-based pg_dump\nfiles.\n\nI can't see any very useful application though, unless someone has a\nstandard for database dumps using XML -I'd have thought that our current\n\"list of SQL statements\" dump is fine (and useful if you speak SQL)\n\n> OK, what should the feature look like?\n> \n\nWhat's the feature for? The things I've been working on are trying to make\nan XML parser available in the backend, and to build some XML document\nmanipulation functions/operators. This is useful for what I'm doing (using\nXML documents as short packets of human and machine-readable descriptive\ndata) and may be useful to other people. This work hasn't progressed very\nfar (I did only spend an afternoon or so writing it though....):\n(available at http://www.cabbage.uklinux.net/pgxml.tar.gz)\n\nOne obvious (and current) topic is XQuery and we might ask whether PG\ncould/should implement it. I think some thinking would be needed on that\nbecause a) It involves having a second, non-SQL parser on the front-end\nand that could be quite a large undertaking and b) there's probably\n(from my initial reading) some discrepancy between the PG (and indeed\nSQL) data model and the XQuery one. If we could work round that, XQuery\n*might* be an attraction to people. 
Certainly the ability to form one XML\ndocument out of another via a query may be good for some projects.\n\nPerhaps if people interested in XML \"stuff\" could add here, we might flesh\nout a little more of what's desired.\n\n> Should it be grafted onto pg_dump or should a new utility pg_xml be\n> created?\n> \n> How strict should it be? A stricter parser is easier to write, one can\n> use a library, unfortunately most xml is crap and for the utility to be\n> useful, it has to be real fuzzy.\n> \n\nI don't think you really can write a non-strict XML parser. At least, not\nif you want the resulting DOM to be useful - violations of well-formedness\nprobably result in logical difficulties wth the document structure. i.e. \n\n<a>\n<b>text\n<c>more text</c>\n</a>\n\nIs <c> within <b>? Are <b> and <c> siblings? These are answerable with\nwell-formed XML -And they're very relevant questions to ask for many XML\nprocessing tasks. \n\n> Any input would be appreciated.\n> \n\nLikewise -I'd be very insterested to know what sort of things people were\ninterested in -as I've found an area where I have a need which others\nmight share. I'd like to contribute some effort into it.\n\nRegards\n\nJohn\n\n",
"msg_date": "Fri, 27 Jul 2001 18:30:40 +0000",
"msg_from": "jgray@beansindustry.co.uk",
"msg_from_op": false,
"msg_subject": "Re: From TODO, XML?"
},
{
"msg_contents": "jgray@beansindustry.co.uk wrote:\n> \n> In article <3B615336.D654E7E1@mohawksoft.com>, markw@mohawksoft.com (mlw)\n> wrote:\n> > I was looking over the todo list and saw that someone wanted to support\n> > XML. I have some quick and dirty stuff that could be used.\n> >\n> \n> I'm not clear from the TODO what that \"XML support\" might involve. The\n> reference to pg_dump suggests an XML dump format for databases. That only\n> makes sense if we build an XML frontend that can load XML-based pg_dump\n> files.\n> \n> I can't see any very useful application though, unless someone has a\n> standard for database dumps using XML -I'd have thought that our current\n> \"list of SQL statements\" dump is fine (and useful if you speak SQL)\n\nActually I have been thinking about a couple projects I have done. Vendors like\nto think XML is a way to distribute databases.\n\nSo a parser that can scan a DTD and make a usable create table (...) line would\nbe very helpful. One which could compare a DTD to an existing SQL table and map\nXML data correctly. (Or error if conversion from data to SQL types yields an\nerror.)\n\nDuring a database export, a SQL table could be used to create a DTD.\n\nI was thinking along the line of being able to use XML as a fairly portable\nimport/export feature. Having this ability, as a generic solution, would have\nmade several tasks MUCH easier.\n\nI would also like the XML parser to be fuzzy enough to take some bad XML\n(because ALL XML is bad), because a lot of vendors like to distribute data in\nbad XML.\n\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Fri, 27 Jul 2001 20:11:35 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: From TODO, XML?"
},
{
"msg_contents": "\nmarkw wrote:\n\n: [...] Actually I have been thinking about a couple projects I have\n: done. Vendors like to think XML is a way to distribute databases.\n\nI would find it very helpful to see a table of what sorts of XML\nfunctionality each major vendor supports.\n\n\n: So a parser that can scan a DTD and make a usable create table (...) \n: line would be very helpful. [...]\n\nHmm, but hierarchically structured documents such as XML don't map\nwell to a relational model. The former tend to be recursive (e.g.,\nhave more levels of containment than the one or two that might be\nmappable to tables and columns.)\n\n\n: During a database export, a SQL table could be used to create a DTD.\n: [...]\n\nThis mapping (relational model -> XML) is more straightforward.\n\n\n- FChE\n",
"msg_date": "27 Jul 2001 22:24:31 -0400",
"msg_from": "fche@redhat.com (Frank Ch. Eigler)",
"msg_from_op": false,
"msg_subject": "Re: From TODO, XML?"
},
{
"msg_contents": "\"Frank Ch. Eigler\" wrote:\n> \n> markw wrote:\n> \n> : [...] Actually I have been thinking about a couple projects I have\n> : done. Vendors like to think XML is a way to distribute databases.\n> \n> I would find it very helpful to see a table of what sorts of XML\n> functionality each major vendor supports.\n\nActually I was thinking of databases of data, not database systems.\n\n> \n> : So a parser that can scan a DTD and make a usable create table (...)\n> : line would be very helpful. [...]\n> \n> Hmm, but hierarchically structured documents such as XML don't map\n> well to a relational model. The former tend to be recursive (e.g.,\n> have more levels of containment than the one or two that might be\n> mappable to tables and columns.)\n\nYes!!! Exactly, being able to understand the recursive nature of XML and create\nrelations on the fly would be a very cool feature.\n\n> \n> : During a database export, a SQL table could be used to create a DTD.\n> : [...]\n> \n> This mapping (relational model -> XML) is more straightforward.\n\nTotally.\n",
"msg_date": "Sat, 28 Jul 2001 03:29:12 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: From TODO, XML?"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> wrote:\n\n\n> \"Frank Ch. Eigler\" wrote:\n> > : So a parser that can scan a DTD and make a usable create table (...)\n> > : line would be very helpful. [...]\n> >\n> > Hmm, but hierarchically structured documents such as XML don't map\n> > well to a relational model. The former tend to be recursive (e.g.,\n> > have more levels of containment than the one or two that might be\n> > mappable to tables and columns.)\n>\n> Yes!!! Exactly, being able to understand the recursive nature of XML and\ncreate\n> relations on the fly would be a very cool feature.\n\nI think there is a pretty straight forward mapping, except for one possible\nambiguity.\n\nIf an element, say <address>, is contained within another element, say\n<employee>, it could either be a column (or group of columns) in an Employee\ntable, or it could be a table Address which references Employee.\n\nWhen you say \"create relations on the fly\", what exactly do you mean? I can\nsee it would be handy to have CREATE TABLE statements written for you, but\nit seems likely that a human would want to edit them before the tables are\nactually created. You cannot infer much type information from the DTD. I\ndon't think there's a way to infer a primary key from a DTD, so you would\nwant to either specify one or add a serial column (or perhaps that would\nalways be done automatically). An XML schema would have more information,\nof course.\n\n\n\n\n\n",
"msg_date": "Sun, 29 Jul 2001 11:50:05 -0400",
"msg_from": "\"Ken Hirsch\" <kenhirsch@myself.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: From TODO, XML?"
},
{
"msg_contents": "Ken Hirsch wrote:\n> \n> mlw <markw@mohawksoft.com> wrote:\n> \n> > \"Frank Ch. Eigler\" wrote:\n> > > : So a parser that can scan a DTD and make a usable create table (...)\n> > > : line would be very helpful. [...]\n> > >\n> > > Hmm, but hierarchically structured documents such as XML don't map\n> > > well to a relational model. The former tend to be recursive (e.g.,\n> > > have more levels of containment than the one or two that might be\n> > > mappable to tables and columns.)\n> >\n> > Yes!!! Exactly, being able to understand the recursive nature of XML and\n> create\n> > relations on the fly would be a very cool feature.\n> \n> I think there is a pretty straight forward mapping, except for one possible\n> ambiguity.\n> \n> If an element, say <address>, is contained within another element, say\n> <employee>, it could either be a column (or group of columns) in an Employee\n> table, or it could be a table Address which references Employee.\n> \n> When you say \"create relations on the fly\", what exactly do you mean? I can\n> see it would be handy to have CREATE TABLE statements written for you, but\n> it seems likely that a human would want to edit them before the tables are\n> actually created. You cannot infer much type information from the DTD. I\n> don't think there's a way to infer a primary key from a DTD, so you would\n> want to either specify one or add a serial column (or perhaps that would\n> always be done automatically). An XML schema would have more information,\n> of course.\n\nI have been thinking about this. A lot of guessing would have to be done, of\ncourse. But, unless some extra information is specified, when you have an XML\nrecord, contained within another, the parser would have to generate its own\nprimary key and a sequence for each table. 
Obviously, the user should be able\nto specify the primary key for each table, but lacking that input, the XML\nparser/importer should do it automatically.\n\n\nSo this:\n\n<employee>\n<name>Bill</name>\n<position>Programmer</position>\n<address>\n\t<number>1290</number>\n\t<street>\n\t\t<name>Canton Ave</name>\n\t</street>\n\t\n\t<town>\n\t\t<name>Milton</name>\n\t</town>\n</address>\n</emplyee>\n\nThe above is almost impossible to convert to a relational format without\nadditional information or a good set of rules. However, we can determine which\nXML titles are \"containers\" and which are \"data.\" \"employee\" is a container\nbecause it has sub tags. \"position\" is \"data\" because it has no sub tags.\n\nWe can recursively scan this hierarchy, decide which are containers and which\nare data. Data gets assigned an appropriate SQL type and containers get\nseparated from the parent container, and an integer index is put in its place.\nFor each container, either a primary key is specified or created on the fly. \n\nWe insert sub containers first and pop back the primary key value, until we\nhave the whole record. The primary key could even be the OID.\n\nA second strategy is to concatenate the hierarchy into the field name, as\nstreet_name, town_name, and so on.\n\n\nWhat do you think?\n\n\n\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Sun, 29 Jul 2001 12:19:48 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: From TODO, XML?"
},
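The container-vs-data rule mlw describes is mechanical enough to sketch. A toy illustration in C, under stated assumptions: the hand-built tree stands in for a real parser's DOM, and the structures are made up for illustration. An element with child elements is a "container" and would become a table; an element with only text content is "data" and would become a column. Counting containers gives the number of tables the naive mapping would create.

```c
#include <stddef.h>

/* A toy DOM node, just enough to illustrate the container/data rule.
 * A real importer would walk the DOM produced by an actual XML parser. */
typedef struct XmlNode {
    const char *name;
    struct XmlNode **children;  /* NULL-terminated array, or NULL for data */
} XmlNode;

/* An element with at least one child element is a "container" (a table);
 * an element with no child elements is "data" (a column). */
int is_container(const XmlNode *node)
{
    return node->children != NULL && node->children[0] != NULL;
}

/* Count how many tables the naive mapping would create under a node:
 * one per container, recursively. */
int count_tables(const XmlNode *node)
{
    if (!is_container(node))
        return 0;
    int n = 1;
    for (XmlNode **c = node->children; *c != NULL; c++)
        n += count_tables(*c);
    return n;
}
```

Applied to the `<employee>` example in the message above, the rule classifies `employee`, `address`, `street`, and `town` as containers (four tables) and `name`, `position`, and `number` as data columns.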
{
"msg_contents": "On Sun, Jul 29, 2001 at 12:19:48PM -0400, mlw wrote:\n> \n> <employee>\n> <name>Bill</name>\n> <position>Programmer</position>\n> <address>\n> \t<number>1290</number>\n> \t<street>\n> \t\t<name>Canton Ave</name>\n> \t</street>\n> \t\n> \t<town>\n> \t\t<name>Milton</name>\n> \t</town>\n> </address>\n> </emplyee>\n> \n> The above is almost impossible to convert to a relational format without\n> additional information or a good set of rules. However, we can determine which\n> XML titles are \"containers\" and which are \"data.\" \"employee\" is a container\n> because it has sub tags. \"position\" is \"data\" because it has no sub tags.\n> \n> We can recursively scan this hierarchy, decide which are containers and which\n> are data. Data gets assigned an appropriate SQL type and containers get\n> separated from the parent container, and an integer index is put in its place.\n> For each container, either a primary key is specified or created on the fly. \n> \n> We insert sub containers first and pop back the primary key value, until we\n> have the whole record. The primary key could even be the OID.\n> \n> A second strategy is to concatenate the hierarchy into the field name, as\n> street_name, town_name, and so on.\n> \n> \n> What do you think?\n\nWhat about attributes on tags. They're data, certainly. Do they then\npromote the tag they're in to a container?\n\nRoss\n",
"msg_date": "Sun, 29 Jul 2001 14:46:57 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Re: From TODO, XML?"
},
{
"msg_contents": "\"Ross J. Reedstrom\" wrote:\n> \n> On Sun, Jul 29, 2001 at 12:19:48PM -0400, mlw wrote:\n> >\n> > <employee>\n> > <name>Bill</name>\n> > <position>Programmer</position>\n> > <address>\n> > <number>1290</number>\n> > <street>\n> > <name>Canton Ave</name>\n> > </street>\n> >\n> > <town>\n> > <name>Milton</name>\n> > </town>\n> > </address>\n> > </emplyee>\n> >\n> > The above is almost impossible to convert to a relational format without\n> > additional information or a good set of rules. However, we can determine which\n> > XML titles are \"containers\" and which are \"data.\" \"employee\" is a container\n> > because it has sub tags. \"position\" is \"data\" because it has no sub tags.\n> >\n> > We can recursively scan this hierarchy, decide which are containers and which\n> > are data. Data gets assigned an appropriate SQL type and containers get\n> > separated from the parent container, and an integer index is put in its place.\n> > For each container, either a primary key is specified or created on the fly.\n> >\n> > We insert sub containers first and pop back the primary key value, until we\n> > have the whole record. The primary key could even be the OID.\n> >\n> > A second strategy is to concatenate the hierarchy into the field name, as\n> > street_name, town_name, and so on.\n> >\n> >\n> > What do you think?\n> \n> What about attributes on tags. They're data, certainly. Do they then\n> promote the tag they're in to a container?\n\nAttribute tags are normally something you should know about before hand. There\nhas to be a number of tags which do not force a container. \n\nThis whole thing depends on a good DTD.\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Sun, 29 Jul 2001 16:09:08 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: From TODO, XML?"
},
{
"msg_contents": "> I was looking over the todo list and saw that someone wanted to support XML. I\n> have some quick and dirty stuff that could be used.\n> \n> OK, what should the feature look like?\n> \n> Should it be grafted onto pg_dump or should a new utility pg_xml be created?\n> \n> How strict should it be? A stricter parser is easier to write, one can use a\n> library, unfortunately most xml is crap and for the utility to be useful, it\n> has to be real fuzzy.\n> \n> Any input would be appreciated.\n\nThe updated TODO item is:\n\n\t* Add XML interface: psql, pg_dump, COPY, separate server (?)\n\nI am unsure where we want it. We could do COPY and hence a flag in\npg_dump, or psql like we do HTML from psql. A separate server would\naccept XML queries and return results.\n\nWhat do people want?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 29 Jul 2001 23:19:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: From TODO, XML?"
},
{
"msg_contents": "> > I would find it very helpful to see a table of what sorts of XML\n> > functionality each major vendor supports.\n> \n> Actually I was thinking of databases of data, not database systems.\n\nI think we can go two ways. Allow COPY/pg_dump to read/write XML, or\nwrite some perl scripts to convert XML to/from our pg_dump format. The\nlatter seems quite easy and fast.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 29 Jul 2001 23:27:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: From TODO, XML?"
},
{
"msg_contents": "> I have been fighting, for a while now, with idiot data vendors that think XML\n> is a cure all. The problem is that XML is a hierarchical format where as SQL is\n> a relational format.\n> \n> It would be good to get pg_dump to write an XML file and DTD, but getting\n> external sources of XML into PostgreSQL is WAY more complicated. If an XML\n> import is to be useful beyond just a different format for pg_dump, there has to\n> be some intelligent database construction based on the XML information.\n> \n> Go to mp3.com, and download some of their XML format data, first, it is bad\n> XML, second, it is hierarchical.\n> \n> I have managed to get several XML files into PostgreSQL by writing a parser,\n> and it is a huge hassle, the public parsers are too picky. I am thinking that a\n> fuzzy parser, combined with some intelligence and an XML DTD reader, could make\n> a very cool utility, one which I have not been able to find.\n> \n> Perhaps it is a two stage process? First pass creates a schema which can be\n> modified/corrected, the second pass loads the data.\n\nCan we accept only relational XML. Does that buy us anything? Are the\nother database vendors outputting heirchical XML? Are they using\nforeign/primary keys to do it?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 30 Jul 2001 00:00:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: From TODO, XML?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > I would find it very helpful to see a table of what sorts of XML\n> > > functionality each major vendor supports.\n> >\n> > Actually I was thinking of databases of data, not database systems.\n> \n> I think we can go two ways. Allow COPY/pg_dump to read/write XML, or\n> write some perl scripts to convert XML to/from our pg_dump format. The\n> latter seems quite easy and fast.\n\nI have been fighting, for a while now, with idiot data vendors that think XML\nis a cure all. The problem is that XML is a hierarchical format where as SQL is\na relational format.\n\nIt would be good to get pg_dump to write an XML file and DTD, but getting\nexternal sources of XML into PostgreSQL is WAY more complicated. If an XML\nimport is to be useful beyond just a different format for pg_dump, there has to\nbe some intelligent database construction based on the XML information.\n\nGo to mp3.com, and download some of their XML format data, first, it is bad\nXML, second, it is hierarchical.\n\nI have managed to get several XML files into PostgreSQL by writing a parser,\nand it is a huge hassle, the public parsers are too picky. I am thinking that a\nfuzzy parser, combined with some intelligence and an XML DTD reader, could make\na very cool utility, one which I have not been able to find.\n\nPerhaps it is a two stage process? First pass creates a schema which can be\nmodified/corrected, the second pass loads the data.\n",
"msg_date": "Mon, 30 Jul 2001 00:01:56 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: From TODO, XML?"
},
{
"msg_contents": "On Mon, 30 Jul 2001, mlw wrote:\n\n> Bruce Momjian wrote:\n> > \n> > > > I would find it very helpful to see a table of what sorts of XML\n> > > > functionality each major vendor supports.\n> > >\n> > > Actually I was thinking of databases of data, not database systems.\n> > \n> > I think we can go two ways. Allow COPY/pg_dump to read/write XML, or\n> > write some perl scripts to convert XML to/from our pg_dump format. The\n> > latter seems quite easy and fast.\n> \n\n> I have managed to get several XML files into PostgreSQL by writing a parser,\n> and it is a huge hassle, the public parsers are too picky. I am thinking that a\n> fuzzy parser, combined with some intelligence and an XML DTD reader, could make\n> a very cool utility, one which I have not been able to find.\n\nI have had the same problem. The best XML parser I could find was the\ngnome-xml library at xmlsoft.org (libxml). I am currently using this in C\nto replicate a client's legacy Notes system on to Postgres. In this case I\nwas lucky in as much as I had some input on the XML namespace etc. XML was\nused because they had already designed an XML based dump utility.\n\nHowever, the way XML is being used is very basic. Only creation of tables,\ninsert and delete are handled. Libxml works fine with this however,\nhandling DTD/XML parsing, UTF-8, UTF-16 and iso-8859-1, validation\netc.\n\nThe main problem then is that every vendor has a different xml name\nspace. If people really want to pursue this, the best thing to do would be\nto try to work with other open source database developers and design a\nsuitable XML namespace for open source databases. Naturally, there will be\nmuch contention here about the most suitable this and that. It will be\ndifficult to get a real spec going and will probably be much more\ntrouble than it is worth. As such, if this fails, then we cannot expect\nOracle, IBM, Sybase, MS and the rest to ever do it.\n\nPerhaps then it would be sufficient for pg_dump/restore to identify the\nname space of a given database dump and parse it according to that name\nspace. Based on command-line arguments, pg_restore/dump could either\ndie/ignore/transmogrify instructions in the XML which PG does not support \nor recognise. It would also be useful if pg_dump could dump data from\npostgres in the supported XML namespaces.\n\nSo it essentially comes down to how useful it will be and who has time to\ncode it up =) (as always).\n\n**Creative Solution**\n\nFor those who have too much time on their hands and have managed to\nuntangle some of the syntax in the W3C XSLT 1.0 specification, how about\nan XSL stylesheet to transform an XML based database dump from some third\nparty into (postgres) SQL. Erk! There would have to be an award for such a\nthing ;-).\n\nGavin\n\n",
"msg_date": "Mon, 30 Jul 2001 15:43:26 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Re: From TODO, XML?"
},
{
"msg_contents": "On Mon, Jul 30, 2001 at 03:43:26PM +1000, Gavin Sherry wrote:\n> On Mon, 30 Jul 2001, mlw wrote:\n> \n> I have had the same problem. The best XML parser I could find was the\n> gnome-xml library at xmlsoft.org (libxml). I am currently using this in C\n\n What happen if you use DOM type of XML parser for large file? A dump from\nSQL DB can be realy large. IMHO is for this (data dump from SQL DB) is\nbetter SAX type of XML parser.\n\n> an XSL stylesheet to transform an XML based database dump from some third\n\n Yes, it's right way how use XML.\n\n\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Mon, 30 Jul 2001 10:38:54 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Re: From TODO, XML?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > I have been fighting, for a while now, with idiot data vendors that think XML\n> > is a cure all. The problem is that XML is a hierarchical format where as SQL is\n> > a relational format.\n> >\n> > It would be good to get pg_dump to write an XML file and DTD, but getting\n> > external sources of XML into PostgreSQL is WAY more complicated. If an XML\n> > import is to be useful beyond just a different format for pg_dump, there has to\n> > be some intelligent database construction based on the XML information.\n> >\n> > Go to mp3.com, and download some of their XML format data, first, it is bad\n> > XML, second, it is hierarchical.\n> >\n> > I have managed to get several XML files into PostgreSQL by writing a parser,\n> > and it is a huge hassle, the public parsers are too picky. I am thinking that a\n> > fuzzy parser, combined with some intelligence and an XML DTD reader, could make\n> > a very cool utility, one which I have not been able to find.\n> >\n> > Perhaps it is a two stage process? First pass creates a schema which can be\n> > modified/corrected, the second pass loads the data.\n> \n> Can we accept only relational XML. Does that buy us anything? Are the\n> other database vendors outputting heirchical XML? Are they using\n> foreign/primary keys to do it?\n\nThen what's the point? Almost no one creates a non-hierarchical XML. For the\nutility to be usefull, beyond just a different format for pg_dump, it has to\ndeal with these issues and do the right thing.\n\n\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Mon, 30 Jul 2001 04:47:19 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: From TODO, XML?"
},
{
"msg_contents": "In article <200107300319.f6U3JGY24953@candle.pha.pa.us>,\npgman@candle.pha.pa.us (Bruce Momjian) wrote:\n\n> The updated TODO item is:\n> \n> \t* Add XML interface: psql, pg_dump, COPY, separate server (?)\n> \n> I am unsure where we want it. We could do COPY and hence a flag in\n> pg_dump, or psql like we do HTML from psql. A separate server would\n> accept XML queries and return results.\n> \n> What do people want?\n> \n\nI am interested in the side suggested by \"separate server\" -namely the\nextent that pg can provide a high-performance, transactional document\nstore with good query capabilities. \n\nOn the other hand, \"high-performance\" and \"transactional\" are not\nespecially necessary for my current work, so I'm looking at query\ncapabilities at present (thus the parser interface code I've done to\ndate). It's also worth pointing out that we (Azuli) are not working for a\nvery big market, so maybe best not to launch a huge project on account of\nmy peccadilloes with XML :)\n\n--\n\nJohn Gray\nAzuli IT\n",
"msg_date": "Mon, 30 Jul 2001 14:33:06 +0000",
"msg_from": "jgray@beansindustry.co.uk",
"msg_from_op": false,
"msg_subject": "Re: From TODO, XML?"
},
{
"msg_contents": "> > > I have managed to get several XML files into PostgreSQL by writing a parser,\n> > > and it is a huge hassle, the public parsers are too picky. I am thinking that a\n> > > fuzzy parser, combined with some intelligence and an XML DTD reader, could make\n> > > a very cool utility, one which I have not been able to find.\n> > >\n> > > Perhaps it is a two stage process? First pass creates a schema which can be\n> > > modified/corrected, the second pass loads the data.\n> > \n> > Can we accept only relational XML. Does that buy us anything? Are the\n> > other database vendors outputting heirchical XML? Are they using\n> > foreign/primary keys to do it?\n> \n> Then what's the point? Almost no one creates a non-hierarchical XML. For the\n> utility to be usefull, beyond just a different format for pg_dump, it has to\n> deal with these issues and do the right thing.\n\nOh, seems XML will be much more complicated than I thought.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 30 Jul 2001 10:43:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: From TODO, XML?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > > I have managed to get several XML files into PostgreSQL by writing a parser,\n> > > > and it is a huge hassle, the public parsers are too picky. I am thinking that a\n> > > > fuzzy parser, combined with some intelligence and an XML DTD reader, could make\n> > > > a very cool utility, one which I have not been able to find.\n> > > >\n> > > > Perhaps it is a two stage process? First pass creates a schema which can be\n> > > > modified/corrected, the second pass loads the data.\n> > >\n> > > Can we accept only relational XML. Does that buy us anything? Are the\n> > > other database vendors outputting heirchical XML? Are they using\n> > > foreign/primary keys to do it?\n> >\n> > Then what's the point? Almost no one creates a non-hierarchical XML. For the\n> > utility to be usefull, beyond just a different format for pg_dump, it has to\n> > deal with these issues and do the right thing.\n> \n> Oh, seems XML will be much more complicated than I thought.\n\nI think an XML \"output\" for pg_dump would be a pretty good/easy feature. It is\neasy to create XML.\n\n<record>\n<field1>bla bla</field1>\n<field2>foo bar</field2>\n</record>\n\nIs very easy to create. Of course a little work would be needed to take\ninformation of the field (column) types into a DTD, but the actual XML is not\nmuch more complicated than a printf, i.e. printf(\"<%s>%s</%s>\", name, data,\nname); \n\nThe real issue is reading XML. Postgres can make a DTD and XML file which can\nbe read by a strict parser, but that does not imply the inverse.\n\nAttached is an XML file from MP3.com. For an XML import to be anything but\nmarket/feature list candy, it should be able to import this sort of data,\nbecause when people say XML, this is the sort of data they are thinking about.\n\nIf you take the time to examine the file, you will see it represents four or\nfive distinct tables in a relational database. These tables are Song,\nArtist,\n[Cdlist,] Cd, and Genre.\n\nSong has a number of fields, plus foreign keys: Artist, Cdlist, and Genre.\nCdlist would have to have a synthetic primary key (OID, sequence?), which\ntables Cd and Song would reference. Cdlist would probably never be used in a\nquery.\n\nI think it is doable as a project (which everyone will be glad to have and\ncomplain about), but I think it is far more complicated than a one or two day\nmodification of pg_dump.\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com",
"msg_date": "Tue, 31 Jul 2001 09:31:46 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: From TODO, XML?"
},
{
"msg_contents": "Hi,\n\nWhy don't use the excellent DBIx-XML_RDB perl module ? Give it the query\nit will return XML output as you sample. With some hack you can do what you\nwant...\n\nRegards\n\nGilles DAROLD\n\nmlw wrote:\n\n> Bruce Momjian wrote:\n> >\n> > > > > I have managed to get several XML files into PostgreSQL by writing a parser,\n> > > > > and it is a huge hassle, the public parsers are too picky. I am thinking that a\n> > > > > fuzzy parser, combined with some intelligence and an XML DTD reader, could make\n> > > > > a very cool utility, one which I have not been able to find.\n> > > > >\n> > > > > Perhaps it is a two stage process? First pass creates a schema which can be\n> > > > > modified/corrected, the second pass loads the data.\n> > > >\n> > > > Can we accept only relational XML. Does that buy us anything? Are the\n> > > > other database vendors outputting heirchical XML? Are they using\n> > > > foreign/primary keys to do it?\n> > >\n> > > Then what's the point? Almost no one creates a non-hierarchical XML. For the\n> > > utility to be usefull, beyond just a different format for pg_dump, it has to\n> > > deal with these issues and do the right thing.\n> >\n> > Oh, seems XML will be much more complicated than I thought.\n>\n> I think an XML \"output\" for pg_dump would be a pretty good/easy feature. It is\n> easy to create XML.\n>\n> <record>\n> <field1>bla bla</field1>\n> <field2>foo bar</field2>\n> </record>\n>\n\n",
"msg_date": "Tue, 31 Jul 2001 17:02:41 +0200",
"msg_from": "Gilles DAROLD <gilles@darold.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: From TODO, XML?"
},
{
"msg_contents": "Gilles DAROLD wrote:\n> \n> Hi,\n> \n> Why don't use the excellent DBIx-XML_RDB perl module ? Give it the query\n> it will return XML output as you sample. With some hack you can do what you\n> want...\n> \nThe point I was trying to make is that XML is trivial to create. It is much\nmore difficult to read. I think pg_dump is a \"better\" place for the export, in\nthat all the logic to find the fields, types, and data are already there. Were\na simple option presented, --xml, it would probably be easier to add xml export\nto pg_dump than create a new utility, but then again, I didn't write pg_dump\nand therefore do not know for sure.\n\n\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Tue, 31 Jul 2001 12:19:34 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: From TODO, XML?"
}
] |
[
{
"msg_contents": "> The index is only used for the line= part of the where clause\n> with your query. With many rows the \"line=\" is not selective enough \n> to justify the index.\n\nHi,\n I tried your suggestion about 'AND counterstamp > '2001-07-26 00:00:00.0' and\nit works and index is used :) But, whole query run for 10 sec (was 30s) and\nit's too much, I need about 1 sec. May I optimize my tables, queries or may I\nupgrade something from my HW (duron 700, 384MB RAM, slow scsi disk :( )? I\ndo not want solution, some hint in which part may I focus and I will go through\ndocumentation again, thank you very much.\n\nBest regards,\n Robert\n\n-- \n _\n |-| __ Robert Vojta <vojta-at-ipex.cz> -= Oo.oO =-\n |=| [Ll] IPEX, s.r.o.\n \"^\" ====`o",
"msg_date": "Fri, 27 Jul 2001 14:57:13 +0200",
"msg_from": "Robert Vojta <vojta@ipex.cz>",
"msg_from_op": true,
"msg_subject": "Re: indexes and big tables"
},
{
"msg_contents": "> The index is only used for the line= part of the where clause\n> with your query. With many rows the \"line=\" is not selective enough \n> to justify the index.\n\n I tried move only needed data into new table and change query into ...\n\nnetacc=> EXPLAIN SELECT counterfrom AS from, counterto AS to,\nfloor((980000000 - date_part('epoch', counterstamp)) / 300) AS sequence\nFROM graphs_5m WHERE line='absolonll'; NOTICE: QUERY PLAN:\n \nIndex Scan using graphs_5m_idx on graphs_5m (cost=0.00..58.38 rows=29\nwidth=24)\n \nEXPLAIN\n\n and query runs for 3-5 seconds. Any idea how to make it faster? I think,\nthat now it's ready to HW upgrade for faster result ...\n\nBest regards,\n Robert\n\n-- \n _\n |-| __ Robert Vojta <vojta-at-ipex.cz> -= Oo.oO =-\n |=| [Ll] IPEX, s.r.o.\n \"^\" ====`o",
"msg_date": "Fri, 27 Jul 2001 15:02:26 +0200",
"msg_from": "Robert Vojta <vojta@ipex.cz>",
"msg_from_op": true,
"msg_subject": "Re: indexes and big tables"
}
] |
[
{
"msg_contents": "Hi\n\nPostgreSQL7.1 is now running on AIX5L( S85, 6GB memory, 6CPU), which was\nrunning on Linux before(Pentium3, 2CPU, as far as I\nremember.......sorry......).\nThe performance( on AIX5L ) is just half as good as the one( on Linux ).\nI compiled PostgreSQL on AIX5L of course.\nI haven't configured it when migrating to AIX5L though.\nAre there any problems in not tuning when migrating it to AIX5L?\nWhat should I check first?\nI can't make a head or tail of it:(\nHelp!!\n\nBest regards,\nShuichi\n\n\n\n\n\n\n",
"msg_date": "Fri, 27 Jul 2001 23:57:16 +0900",
"msg_from": "\"Leslie\" <leslie@boreas.dti.ne.jp>",
"msg_from_op": true,
"msg_subject": "PostgreSQL7.1 on AIX5L is running with too poor ferformance"
},
{
"msg_contents": "\"Leslie\" <leslie@boreas.dti.ne.jp> writes:\n> PostgreSQL7.1 is now running on AIX5L( S85, 6GB memory, 6CPU), which was\n> running on Linux before(Pentium3, 2CPU, as far as I\n> remember.......sorry......).\n> The performance( on AIX5L ) is just half as good as the one( on Linux ).\n\nHmm ... is the AIX compilation selecting an appropriate TAS\nimplementation for spinlocks? If it's falling back to semaphore-based\nspinlocks, I can easily believe that you might take a 2X performance\nhit. Possibly s_lock.h needs some additional #if tests for AIX5.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Jul 2001 00:36:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL7.1 on AIX5L is running with too poor ferformance "
},
{
"msg_contents": "> \"Leslie\" <leslie@boreas.dti.ne.jp> writes:\n> > PostgreSQL7.1 is now running on AIX5L( S85, 6GB memory, 6CPU), which was\n> > running on Linux before(Pentium3, 2CPU, as far as I\n> > remember.......sorry......).\n> > The performance( on AIX5L ) is just half as good as the one( on Linux ).\n> \n> Hmm ... is the AIX compilation selecting an appropriate TAS\n> implementation for spinlocks? \n\nI think yes. I have compiled 7.1 on an AIX5L box and found that TAS()\nwas replaced by:\n\n\t cs((int *) (lock), 0, 1)\n\n> If it's falling back to semaphore-based\n> spinlocks, I can easily believe that you might take a 2X performance\n> hit. Possibly s_lock.h needs some additional #if tests for AIX5.\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 30 Jul 2001 22:18:07 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL7.1 on AIX5L is running with too poor\n ferformance"
}
] |
[
{
"msg_contents": "I got a mailbox full for Peter, so here is information.\n\nLarry ROsenman\n----- Forwarded message from Larry Rosenman <ler@lerctr.org> -----\n\nFrom: Larry Rosenman <ler@lerctr.org>\nSubject: Caldera OpenUNIX 8\nDate: Fri, 27 Jul 2001 11:58:01 -0500\nMessage-ID: <20010727115801.A2965@lerami.lerctr.org>\nUser-Agent: Mutt/1.3.19i\nX-Mailer: Mutt http://www.mutt.org/\nTo: peter_e@gmx.net\n\nlerami.lerctr.org is now running Caldera's OpenUNIX 8 operating system\nwhich is UnixWare 7.1.1+fixes+Linux Kernel Personality. \n\nThe tools (cc, et al) have just bug fixes. \n\nTHere is also a LINUX mode, which is the full OpenLinux 3.1 userland\non top of the UnixWare kernel with call mapping. \n\nIf you have questions, let me know.\n\nLER\n\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n----- End forwarded message -----\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 27 Jul 2001 11:59:09 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "(forw) Caldera OpenUNIX 8"
}
] |
[
{
"msg_contents": "I believe Caldera has submitted changes to the autoconf people to\nupdate config/config.guess to support OpenUNIX 8. \n\nOur current stuff BREAKS unless you use the SCOMPAT magic to look like\na UnixWare 7.1.1 box. \n\nWho needs to pick up an update? \n\nLarry \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 27 Jul 2001 14:49:13 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "config.guess on OpenUnix 8..."
},
{
"msg_contents": "Larry Rosenman writes:\n\n> I believe Caldera has submitted changes to the autoconf people to\n> update config/config.guess to support OpenUNIX 8.\n\nFor one thing, the autoconf people don't maintain config.{guess,sub}.\n(They don't much care for this confusion either.) Updates should be\nobtained from the ftp site mentioned later in this thread or the CVS\nserver at subversions.gnu.org. Both are stable and the ftp copy is\nsync'ed frequently.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 5 Aug 2001 21:49:27 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: config.guess on OpenUnix 8..."
}
] |
[
{
"msg_contents": "Skip the patch for configure.in in that last one, use this in it's\nplace (I missed one sysv5uw). \n\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749",
"msg_date": "Fri, 27 Jul 2001 16:47:06 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "OU8..."
},
{
"msg_contents": "Your patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Skip the patch for configure.in in that last one, use this in it's\n> place (I missed one sysv5uw). \n> \n> \n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 29 Jul 2001 23:31:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OU8..."
},
{
"msg_contents": "\nPatch applied. Thanks. Still needs updated autoconf.\n\n\n> Skip the patch for configure.in in that last one, use this in it's\n> place (I missed one sysv5uw). \n> \n> \n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 30 Jul 2001 11:01:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OU8..."
}
] |
[
{
"msg_contents": "Dear all ....\nI have problem with Cgicc implementation in c++;\nCould you tell me how to redirect from one page to certain page\nwhen I enter the submit button\n-- \nZudi Iswanto\n",
"msg_date": "Sat, 28 Jul 2001 15:34:15 +0700",
"msg_from": "Zudi Iswanto <zudi@dnet.net.id>",
"msg_from_op": true,
"msg_subject": "None"
}
] |
[
{
"msg_contents": "Hi all,\n\nThis is the situation: You are doing a big query, but you want the results\non the web page to be paginated. ie. The user can click page 1, 2, etc.\n\nSo, you need know how many rows total would be returned, but you also only\nneed a small fraction of them.\n\nWhat is an efficient way of doing this?\n\nIt seems to me that using a CURSOR would be advantageous, however once a\nCURSOR is opened, how do you get the full row count?\n\nie. Can you do this:?\n\n1. Declare a cursor\n2. Find the total number of rows returned\n3. Fetch the subset of the rows that are required\n4. Construct a pagination based on the info from 2 and 3.\n\nIf this can't be done - how do you do it? Is the only way to repeat the\nwhole query twice, the first time doing a count(*) instead of the select\nvariables?\n\nThanks,\n\nChris\n\n\n",
"msg_date": "Mon, 30 Jul 2001 17:07:26 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Portal question"
},
{
"msg_contents": "I've used select count(), then a select LIMIT/OFFSET for the pages.. A\ncursor might be a better idea though I don't think you can get the total\nnumber of rows without count()'ing them.\n\nGood luck!\n\n-Mitch\n\n----- Original Message -----\nFrom: \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>\nTo: \"Hackers\" <pgsql-hackers@postgresql.org>\nSent: Monday, July 30, 2001 5:07 AM\nSubject: [HACKERS] Portal question\n\n\n> Hi all,\n>\n> This is the situation: You are doing a big query, but you want the\nresults\n> on the web page to be paginated. ie. The user can click page 1, 2, etc.\n>\n> So, you need know how many rows total would be returned, but you also only\n> need a small fraction of them.\n>\n> What is an efficient way of doing this?\n>\n> It seems to me that using a CURSOR would be advantageous, however once a\n> CURSOR is opened, how do you get the full row count?\n>\n> ie. Can you do this:?\n>\n> 1. Declare a cursor\n> 2. Find the total number of rows returned\n> 3. Fetch the subset of the rows that are required\n> 4. Construct a pagination based on the info from 2 and 3.\n>\n> If this can't be done - how do you do it? Is the only way to repeat the\n> whole query twice, the first time doing a count(*) instead of the select\n> variables?\n>\n> Thanks,\n>\n> Chris\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Mon, 30 Jul 2001 10:42:47 -0400",
"msg_from": "\"Mitch Vincent\" <mvincent@cablespeed.com>",
"msg_from_op": false,
"msg_subject": "Re: Portal question"
}
] |
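The thread settles on running the query twice, once as `count(*)` and once with `LIMIT`/`OFFSET`. Either way, the page arithmetic is the same; here is a minimal Python sketch of it (the SQL strings and table name are hypothetical illustrations, only the arithmetic is exercised):

```python
import math

def page_window(total_rows, page_size, page):
    """Map a 1-based page number onto (num_pages, limit, offset)."""
    num_pages = max(1, math.ceil(total_rows / page_size))
    page = min(max(page, 1), num_pages)          # clamp to a valid page
    offset = (page - 1) * page_size
    limit = min(page_size, total_rows - offset) if total_rows else 0
    return num_pages, limit, offset

# The two-query pattern from the thread (table/column names are made up):
COUNT_SQL = "SELECT count(*) FROM items"
PAGE_SQL = "SELECT id, title FROM items ORDER BY id LIMIT %s OFFSET %s"
```

With a cursor instead of `LIMIT`/`OFFSET`, the same arithmetic would drive the fetch positions.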
[
{
"msg_contents": "CVSROOT:\t/home/projects/pgsql/cvsroot\nModule name:\tpgsql\nChanges by:\tmomjian@hub.org\t01/07/30 10:50:24\n\nModified files:\n\tsrc/backend/libpq: hba.c \n\tsrc/backend/postmaster: postmaster.c \n\tsrc/backend/tcop: postgres.c \n\tsrc/include/libpq: hba.h \n\nLog message:\n\tLoad pg_hba.conf and pg_ident.conf on startup and SIGHUP into List of\n\tLists, and use that for user validation.\n\t\n\tBruce Momjian\n\n",
"msg_date": "Mon, 30 Jul 2001 10:50:25 -0400 (EDT)",
"msg_from": "Bruce Momjian - CVS <momjian@hub.org>",
"msg_from_op": true,
"msg_subject": "pgsql/src backend/libpq/hba.c backend/postmast ..."
},
{
"msg_contents": "Bruce Momjian - CVS <momjian@hub.org> writes:\n> \tLoad pg_hba.conf and pg_ident.conf on startup and SIGHUP into List of\n> \tLists, and use that for user validation.\n\nWhile this should be a nice speedup, it bothers me somewhat that the old\nbehavior of reacting immediately to pg_hba.conf and pg_ident.conf\nupdates has been changed. (And you didn't update the documentation to\nsay so --- tsk tsk.)\n\nWould it make sense to do fstat calls on these files and reload whenever\nwe observe that the file modification time has changed? That'd be an\nadditional kernel call per connection attempt, so I'm not at all sure\nI want to do it ... but it ought to be debated. Comments anyone?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 Jul 2001 19:05:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "pg_hba.conf pre-parsing change"
},
{
"msg_contents": "> Bruce Momjian - CVS <momjian@hub.org> writes:\n> > \tLoad pg_hba.conf and pg_ident.conf on startup and SIGHUP into List of\n> > \tLists, and use that for user validation.\n> \n> While this should be a nice speedup, it bothers me somewhat that the old\n> behavior of reacting immediately to pg_hba.conf and pg_ident.conf\n> updates has been changed. (And you didn't update the documentation to\n> say so --- tsk tsk.)\n\nOh, I didn't realize we documented that.\n\n> Would it make sense to do fstat calls on these files and reload whenever\n> we observe that the file modification time has changed? That'd be an\n> additional kernel call per connection attempt, so I'm not at all sure\n> I want to do it ... but it ought to be debated. Comments anyone?\n\nWe could, but we don't with postgresql.conf so it made sense to keep the\nbehavior the same for the two files.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 31 Jul 2001 19:14:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_hba.conf pre-parsing change"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Would it make sense to do fstat calls on these files and reload whenever\n>> we observe that the file modification time has changed? That'd be an\n>> additional kernel call per connection attempt, so I'm not at all sure\n>> I want to do it ... but it ought to be debated. Comments anyone?\n\n> We could, but we don't with postgresql.conf so it made sense to keep the\n> behavior the same for the two files.\n\nI'm inclined to agree --- for one thing, this allows one to edit the\nfiles in place without worrying that the postmaster will pick up a\npartially-edited file. But I wanted to throw the issue out to pghackers\nto see if anyone would be really unhappy about having to SIGHUP the\npostmaster after changing the authorization conf files.\n\nIn any case, if we don't change the code, the change in behavior from\nprior releases needs to be documented...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 Jul 2001 19:20:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_hba.conf pre-parsing change "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Would it make sense to do fstat calls on these files and reload whenever\n> >> we observe that the file modification time has changed? That'd be an\n> >> additional kernel call per connection attempt, so I'm not at all sure\n> >> I want to do it ... but it ought to be debated. Comments anyone?\n> \n> > We could, but we don't with postgresql.conf so it made sense to keep the\n> > behavior the same for the two files.\n> \n> I'm inclined to agree --- for one thing, this allows one to edit the\n> files in place without worrying that the postmaster will pick up a\n> partially-edited file. But I wanted to throw the issue out to pghackers\n> to see if anyone would be really unhappy about having to SIGHUP the\n> postmaster after changing the authorization conf files.\n\nOK.\n\n> In any case, if we don't change the code, the change in behavior from\n> prior releases needs to be documented...\n\nYou mean in the SGML or in the release highlight text?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 31 Jul 2001 19:39:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: pg_hba.conf pre-parsing change"
},
{
"msg_contents": "On Tuesday 31 July 2001 19:20, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > We could, but we don't with postgresql.conf so it made sense to keep the\n> > behavior the same for the two files.\n\n> I'm inclined to agree --- for one thing, this allows one to edit the\n> files in place without worrying that the postmaster will pick up a\n> partially-edited file. But I wanted to throw the issue out to pghackers\n> to see if anyone would be really unhappy about having to SIGHUP the\n> postmaster after changing the authorization conf files.\n\nHmmm... \n\nI much prefer having to SIGHUP postmaster -- that is semistandard daemon \nbehavior, no? If enough people want the other behavior, add a \npostgresql.conf setting to activate 'modification notification' for config \nfiles.\n\nIf nothing else, an 'accidental' pg_hba.conf corruption, deletion, or \nmalicious change has to have a 'confirmation' step before a running \npostmaster sees the changes.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 31 Jul 2001 19:46:57 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: pg_hba.conf pre-parsing change"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> In any case, if we don't change the code, the change in behavior from\n>> prior releases needs to be documented...\n\n> You mean in the SGML or in the release highlight text?\n\nBoth. client_auth.sgml specifically states that editing the file is\nsufficient to make changes, and I think that it'd better be mentioned\nin the release notes too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 Jul 2001 19:54:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: pg_hba.conf pre-parsing change "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> In any case, if we don't change the code, the change in behavior from\n> >> prior releases needs to be documented...\n> \n> > You mean in the SGML or in the release highlight text?\n> \n> Both. client_auth.sgml specifically states that editing the file is\n> sufficient to make changes, and I think that it'd better be mentioned\n> in the release notes too.\n\nGot it, and pg_hba.conf talked about it too. Also, I added a mention\nof SIGHUP to pg_ident.conf.sample.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 31 Jul 2001 20:52:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: pg_hba.conf pre-parsing change"
}
] |
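Tom's suggestion above (an extra `fstat` per connection attempt, reloading only when the file's modification time changes) can be sketched in a few lines. This is an illustrative Python model, not the backend's code; the class name and the comment-stripping parse are invented:

```python
import os

class WatchedFile:
    """Reload a config file only when its modification time changes,
    the stat-per-connection idea discussed in the thread."""
    def __init__(self, path):
        self.path = path
        self.mtime = None      # mtime of the copy we last parsed
        self.lines = []

    def maybe_reload(self):
        """Return True if the file was (re)loaded, False if unchanged."""
        mtime = os.stat(self.path).st_mtime
        if mtime == self.mtime:
            return False
        with open(self.path) as f:
            # keep non-blank, non-comment lines (toy parse)
            self.lines = [ln for ln in f
                          if ln.strip() and not ln.startswith("#")]
        self.mtime = mtime
        return True
```

The SIGHUP approach the thread prefers avoids the race this sketch has: a reload can pick up a half-edited file if the editor writes in place.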
[
{
"msg_contents": "set digest",
"msg_date": "Mon, 30 Jul 2001 11:52:59 -0400",
"msg_from": "\"BigWhat.com\" <ml.postgres@bigwhat.com>",
"msg_from_op": true,
"msg_subject": ""
}
] |
[
{
"msg_contents": "I have thought of a few new TODO performance items:\n\n1) Someone at O'Reilly suggested that we order our duplicate index\nentries by tid so if we are hitting the heap for lots of duplicates, the\nhits will be on sequential pages. Seems like a nice idea.\n\n2) After Tatsuo's report of running 1000 backends on pgbench and from a\nSolaris report, I think we should have a queue of backends waiting for a\nspinlock, rather than having them sleep and try again. This is\nparticularly important for multi-processor machines.\n\n3) I am reading the Solaris Internals book and there is mention of a\n\"free behind\" capability with large sequential scans. When a large\nsequential scan happens that would wipe out all the old cache entries,\nthe kernel detects this and places its previous pages first on the free\nlist. For our code, if we do a sequential scan of a table that is\nlarger than our buffer cache size, I think we should detect this and do\nthe same. See http://techdocs.postgresql.org for my performance paper\nfor an example.\n\nNew TODO entries are:\n\n\t* Order duplicate index entries by tid\n\t* Add queue of backends waiting for spinlock\n\t* Add free-behind capability for large sequential scans\n\nI will modify them with any comments people have.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 30 Jul 2001 12:49:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Performance TODO items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> New TODO entries are:\n> \t* Add queue of backends waiting for spinlock\n\nI already see:\n\n* Create spinlock sleepers queue so everyone doesn't wake up at once\n\n\nBTW, I agree with Vadim's opinion that we should add a new type of lock\n(intermediate between spinlocks and lockmanager locks) rather than try\nto add new semantics onto spinlocks. For example, it'd be very nice to\ndistinguish read-only and read-write access in this new kind of lock,\nbut we can't expect spinlocks to be able to do that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Jul 2001 15:24:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Performance TODO items "
},
{
"msg_contents": "> 3) I am reading the Solaris Internals book and there is mention of a\n> \"free behind\" capability with large sequential scans. When a large\n> sequential scan happens that would wipe out all the old cache entries,\n> the kernel detects this and places its previous pages first\n> on the free list. For our code, if we do a sequential scan of a table\n> that is larger than our buffer cache size, I think we should detect\n> this and do the same. See http://techdocs.postgresql.org for my\n> performance paper for an example.\n>\n> New TODO entries are:\n>\n> \t* Order duplicate index entries by tid\n> \t* Add queue of backends waiting for spinlock\n> \t* Add free-behind capability for large sequential scans\n\nSo why do we cache sequentially-read pages? Or at least not have an\noption to control it?\n\nOracle (to the best of my knowledge) does NOT cache pages read by a\nsequential index scan for at least two reasons/assumptions (two being\nall that I can recall):\n\n1. Caching pages for sequential scans over sufficiently large tables\nwill just cycle the cache. The pages that will be cached at the end of\nthe query will be the last N pages of the table, so when the same\nsequential query is run again, the scan from the beginning of the table\nwill start flushing the oldest cached pages which are more than likely\ngoing to be the ones that will be needed at the end of the scan, etc,\netc. In a multi-user environment, the effect is worse.\n\n2. Concurrent or consecutive queries in a dynamic database will not\ngenerate plans that use the same sequential scans, so they will tend to\nthrash the cache.\n\nNow there are some databases where the same general queries are run time\nafter time and caching the pages from sequential scans does make sense,\nbut in larger, enterprise-type systems, indices are created to help\nspeed up the most used queries and the sequential cache entries only\nserve to clutter the cache and flush the useful pages.\n\nIs there any way that caching pages read in by a sequential scan could\nbe made a configurable option?\n\nAny chance someone could run pgbench on a test system set up to not\ncache sequential reads?\n\nDarren\n\n",
"msg_date": "Mon, 30 Jul 2001 15:32:50 -0400",
"msg_from": "\"Darren King\" <darrenk@insightdist.com>",
"msg_from_op": false,
"msg_subject": "RE: Performance TODO items"
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > New TODO entries are:\n> > \t* Add queue of backends waiting for spinlock\n> \n> I already see:\n> \n> * Create spinlock sleepers queue so everyone doesn't wake up at once\n\nThat is an old copy of the TODO. I reworded it. You will only see this\nnow:\n\n\t* Improve spinlock code, perhaps with OS semaphores, sleeper queue, or\n\t spinning to obtain lock on multi-cpu systems\n\n> BTW, I agree with Vadim's opinion that we should add a new type of lock\n> (intermediate between spinlocks and lockmanager locks) rather than try\n> to add new semantics onto spinlocks. For example, it'd be very nice to\n> distinguish read-only and read-write access in this new kind of lock,\n> but we can't expect spinlocks to be able to do that.\n\nYes, I agree too. On a uniprocessor machine, if I can't get the\nspinlock, I want to yield the cpu, ideally for less than 10ms. On a\nmulti-cpu machine, if the lock is held by another CPU and that process\nis running, we want to spin waiting for the lock to free. If not, we\ncan sleep. We basically need some more sophisticated semantics around\nthese locks, or move to OS semaphores.\n\nIn fact, can't we sleep on an interruptible system call, and send\nsignals to processes when we release the lock? That way we don't have\nthe 10ms sleep limitation. One idea is to have a byte for each backend\nin shared memory and have each backend set the bit if it is waiting for\nthe semaphore. There would be no contention with multiple backends\nregistering their sleep at the same time.\n\nWe have seen reports of 4-cpu systems having poor performance while the\nsystem is only 12% busy, perhaps because the processes are all sleeping\nwaiting for the next tick.\n\nI think multi-cpu machines are going to give us new challenges.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 30 Jul 2001 17:10:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Performance TODO items"
},
{
"msg_contents": "> So why do we cache sequetially-read pages? Or at least not have an\n> option to control it?\n> \n> Oracle (to the best of my knowledge) does NOT cache pages read by a\n> sequential index scan for at least two reasons/assumptions (two being\n> all that I can recall):\n> \n> 1. Caching pages for sequential scans over sufficiently large tables\n> will just cycle the cache. The pages that will be cached at the end of\n> the query will be the last N pages of the table, so when the same\n> sequential query is run again, the scan from the beginning of the table\n> will start flushing the oldest cached pages which are more than likely\n> going to be the ones that will be needed at the end of the scan, etc,\n> etc. In a multi-user environment, the effect is worse.\n> \n> 2. Concurrent or consective queries in a dynamic database will not\n> generate plans that use the same sequential scans, so they will tend to\n> thrash the cache.\n> \n> Now there are some databases where the same general queries are run time\n> after time and caching the pages from sequential scans does make sense,\n> but in larger, enterprise-type systems, indices are created to help\n> speed up the most used queries and the sequential cache entries only\n> serve to clutter the cache and flush the useful pages.\n> \n> Is there any way that caching pages read in by a sequential scan could\n> be made a configurable-option?\n> \n> Any chance someone could run pgbench on a test system set up to not\n> cache sequential reads?\n\nYep, that is the issue. If the whole table fits in the cache, it is\ngreat. If not, it is useless or worse because it forces out other\npages. Right now the cache is oldest-out and doesn't know anything\nabout access patterns. 
We would need to get that info passed in the\ncache, probably some function parameter.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 30 Jul 2001 17:14:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Performance TODO items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I have thought of a few new TODO performance items:\n> 1) Someone at O'Reilly suggested that we order our duplicate index\n> entries by tid so if we are hitting the heap for lots of duplicates, the\n> hits will be on sequential pages. Seems like a nice idea.\n\nA more general solution is for indexscan to collect up a bunch of TIDs\nfrom the index, sort them in-memory by TID order, and then probe into\nthe heap with those TIDs. This is better than the above because you get\nnice ordering of the heap accesses across multiple key values, not just\namong the tuples with the same key. (In a unique or near-unique index,\nthe above idea is nearly worthless.)\n\nIn the best case there are few enough TIDs retrieved from the index that\nyou can just do this once, but even if there are lots of TIDs, it should\nbe a win to do this in batches of a few thousand TIDs. Essentially we\ndecouple indexscans into separate index-access and heap-access phases.\n\nOne big problem is that this doesn't interact well with concurrent VACUUM:\nour present solution for concurrent VACUUM assumes that indexscans hold\na pin on an index page until they've finished fetching the pointed-to\nheap tuples. Another objection is that we'd have a harder time\nimplementing the TODO item of marking an indextuple dead when its\nassociated heaptuple is dead. Anyone see a way around these problems?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Jul 2001 17:53:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Performance TODO items "
},
{
"msg_contents": "> A more general solution is for indexscan to collect up a bunch of TIDs\n> from the index, sort them in-memory by TID order, and then probe into\n> the heap with those TIDs. This is better than the above because you get\n> nice ordering of the heap accesses across multiple key values, not just\n> among the tuples with the same key. (In a unique or near-unique index,\n> the above idea is nearly worthless.)\n> \n> In the best case there are few enough TIDs retrieved from the index that\n> you can just do this once, but even if there are lots of TIDs, it should\n> be a win to do this in batches of a few thousand TIDs. Essentially we\n> decouple indexscans into separate index-access and heap-access phases.\n> \n> One big problem is that this doesn't interact well with concurrent VACUUM:\n> our present solution for concurrent VACUUM assumes that indexscans hold\n> a pin on an index page until they've finished fetching the pointed-to\n> heap tuples. Another objection is that we'd have a harder time\n> implementing the TODO item of marking an indextuple dead when its\n> associated heaptuple is dead. Anyone see a way around these problems?\n\nInteresting. I figured the cache could keep most pages in such a case. \nI was thinking more of helping file system readahead by requesting the\nearlier block first in a mult-block request. Not sure how valuable that\nwould be.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 30 Jul 2001 18:11:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Performance TODO items"
}
] |
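Tom's decoupled index-access/heap-access idea can be illustrated with a toy model: treat a TID as a `(block, offset)` pair, sort each batch before probing, and record which heap blocks get touched. Everything here is hypothetical scaffolding, not PostgreSQL code:

```python
from itertools import groupby

def heap_fetch_order(tids, batch_size=4):
    """Sort each batch of (block, offset) TIDs so heap probes within a
    batch touch pages in physical order; return the visited block list."""
    blocks_touched = []
    for i in range(0, len(tids), batch_size):
        batch = sorted(tids[i:i + batch_size])
        # consecutive TIDs on the same block become one page visit
        for block, _group in groupby(batch, key=lambda t: t[0]):
            blocks_touched.append(block)
    return blocks_touched

# Index returns TIDs in key order, scattered across heap pages:
tids = [(7, 1), (2, 3), (7, 2), (2, 1), (9, 5), (2, 8), (9, 1), (7, 4)]
```

With key-ordered probes (`batch_size=1`) these eight TIDs cost a page visit per tuple; sorting batches collapses neighboring probes onto the same page, and one big batch gives a fully sequential pass.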
[
{
"msg_contents": "> New TODO entries are:\n> \n> \t* Order duplicate index entries by tid\n\nIn other words - add tid to index key: very old idea.\n\n> \t* Add queue of backends waiting for spinlock\n\nWe shouldn't mix two different approaches for different\nkinds of short-time internal locks - in one cases we need in\nlight lmgr (when we're going to keep lock long enough, eg for IO)\nand in another cases we'd better to proceed with POSIX' mutex-es\nor semaphores instead of spinlocks. Queueing backends waiting\nfor spinlock sounds like nonsense - how are you going to protect\nsuch queue? With spinlocks? -:)\n\nVadim\n",
"msg_date": "Mon, 30 Jul 2001 10:12:22 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Performance TODO items"
},
{
"msg_contents": "> > New TODO entries are:\n> > \n> > \t* Order duplicate index entries by tid\n> \n> In other words - add tid to index key: very old idea.\n\nI was thinking during index creation, it would be nice to order them by\ntid, but not do lots of work to keep it that way.\n\n> > \t* Add queue of backends waiting for spinlock\n> \n> We shouldn't mix two different approaches for different\n> kinds of short-time internal locks - in one cases we need in\n> light lmgr (when we're going to keep lock long enough, eg for IO)\n> and in another cases we'd better to proceed with POSIX' mutex-es\n> or semaphores instead of spinlocks. Queueing backends waiting\n> for spinlock sounds like nonsense - how are you going to protect\n> such queue? With spinlocks? -:)\n\nYes, I guess so but hopefully we can spin waiting for the queue lock\nrather than sleep. We could use POSIX spinlocks/semaphores now but we\ndon't because of performance, right?\n\nShould we be spinning waiting for spinlock on multi-cpu machines? Is\nthat the answer?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 30 Jul 2001 13:15:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Performance TODO items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Should we be spinning waiting for spinlock on multi-cpu machines? Is\n> that the answer?\n\nA multi-CPU machine is actually the only case where a true spinlock\n*does* make sense. On a single CPU you might as well yield the CPU\nimmediately, because you have no chance of getting the lock until the\ncurrent holder is allowed to run again. On a multi CPU it's a\nreasonable guess that the current holder is running on one of the other\nCPUs and may release the lock soon (\"soon\" == less than a process\ndispatch cycle, hence busy-wait is better than release CPU).\n\nWe are currently using spinlocks for a lot of situations where the mean\ntime spent holding the lock is probably larger than \"soon\" as defined\nabove. We should have a different lock implementation for those cases.\nTrue spinlocks should be reserved for protecting code where the maximum\ntime spent holding the lock is guaranteed to be very short.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Jul 2001 15:29:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Performance TODO items "
}
] |
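Tom's rule of thumb (spin only when another CPU might release the lock within less than a dispatch cycle, otherwise yield immediately) can be sketched in user space. This is a hedged Python model, not the backend's spinlock code: `threading.Lock` with `blocking=False` merely stands in for an atomic test-and-set, and the spin count is arbitrary:

```python
import os
import threading
import time

NCPU = os.cpu_count() or 1

class SpinThenYieldLock:
    """Test-and-set lock that busy-waits briefly on multi-CPU machines
    and yields immediately on a single CPU (illustrative sketch)."""
    def __init__(self, spins=1000):
        self._flag = threading.Lock()       # stands in for test-and-set
        self.spins = spins if NCPU > 1 else 0

    def acquire(self):
        # spin phase: cheap retries, hoping the holder is on another CPU
        for _ in range(self.spins):
            if self._flag.acquire(blocking=False):
                return
        # yield phase: give up the CPU between attempts
        while not self._flag.acquire(blocking=False):
            time.sleep(0)

    def release(self):
        self._flag.release()
```

On a uniprocessor (`os.cpu_count() == 1`) the spin phase is skipped entirely, matching the point that spinning there only burns the holder's timeslice.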
[
{
"msg_contents": "> > > \t* Order duplicate index entries by tid\n> > \n> > In other words - add tid to index key: very old idea.\n> \n> I was thinking during index creation, it would be nice to\n> order them by tid, but not do lots of work to keep it that way.\n\nI hear this \"not do lots of work\" so often from you -:)\nDays of simplicity are gone, Bruce. To continue, this project\nrequires more and more complex solutions.\n\n> > > \t* Add queue of backends waiting for spinlock\n> > \n> > We shouldn't mix two different approaches for different\n> > kinds of short-time internal locks - in one cases we need in\n> > light lmgr (when we're going to keep lock long enough, eg for IO)\n> > and in another cases we'd better to proceed with POSIX' mutex-es\n> > or semaphores instead of spinlocks. Queueing backends waiting\n> > for spinlock sounds like nonsense - how are you going to protect\n> > such queue? With spinlocks? -:)\n> \n> Yes, I guess so but hopefully we can spin waiting for the queue lock\n> rather than sleep. We could use POSIX spinlocks/semaphores now but we\n> don't because of performance, right?\n\nNo. As long as no one proved with test that mutexes are bad for\nperformance...\nFunny, such test would require ~ 1 day of work.\n\n> Should we be spinning waiting for spinlock on multi-cpu machines? Is\n> that the answer?\n\nWhat do you mean?\n\nVadim\n",
"msg_date": "Mon, 30 Jul 2001 10:45:30 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Performance TODO items"
},
{
"msg_contents": "> > > > \t* Order duplicate index entries by tid\n> > > \n> > > In other words - add tid to index key: very old idea.\n> > \n> > I was thinking during index creation, it would be nice to\n> > order them by tid, but not do lots of work to keep it that way.\n> \n> I hear this \"not do lots of work\" so often from you -:)\n> Days of simplicity are gone, Bruce. To continue, this project\n> requires more and more complex solutions.\n\nYep. I can dream. :-)\n\n> > > > \t* Add queue of backends waiting for spinlock\n> > > \n> > > We shouldn't mix two different approaches for different\n> > > kinds of short-time internal locks - in one cases we need in\n> > > light lmgr (when we're going to keep lock long enough, eg for IO)\n> > > and in another cases we'd better to proceed with POSIX' mutex-es\n> > > or semaphores instead of spinlocks. Queueing backends waiting\n> > > for spinlock sounds like nonsense - how are you going to protect\n> > > such queue? With spinlocks? -:)\n> > \n> > Yes, I guess so but hopefully we can spin waiting for the queue lock\n> > rather than sleep. We could use POSIX spinlocks/semaphores now but we\n> > don't because of performance, right?\n> \n> No. As long as no one proved with test that mutexes are bad for\n> performance...\n> Funny, such test would require ~ 1 day of work.\n\nGood question. I know the number of function calls to spinlock stuff is\nhuge. Seems real semaphores may be a big win on multi-cpu boxes.\n\n> > Should we be spinning waiting for spinlock on multi-cpu machines? Is\n> > that the answer?\n> \n> What do you mean?\n\nOn a single-cpu machine, if you can't get the spinlock, you should just\nsleep and let the process who had it continue. On a multi-cpu machine,\nyou perhaps should just keep trying, hoping that the process who holds\nit finishes soon. 
I wonder if we should find some kind of test on\npostmaster startup that would test to see if we have multiple cpu's and\nchange from spinlock sleeping to spinlock spin-waiting.\n\nAdd to this the fact we can't sleep for less than one clock tick on most\nmachines, 10ms, and the spinlock stuff needs work.\n\nTODO updated:\n\n* Improve spinlock code, perhaps with OS semaphores, sleeper queue, or \n  spinning to obtain lock on multi-cpu systems\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 30 Jul 2001 13:58:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Performance TODO items"
},
{
"msg_contents": "On Mon, 30 Jul 2001, Bruce Momjian wrote:\n\n> * Improve spinlock code, perhaps with OS semaphores, sleeper queue, or\n> spining to obtain lock on multi-cpu systems\n\nYou may be interested in a discussion which happened over on\nlinux-kernel a few months ago.\n\nQuite a lot of people want a lightweight userspace semaphore,\nand for pretty much the same reasons.\n\nLinus proposed a pretty interesting solution which has the\nsame minimal overhead as the current spinlocks in the non-\ncontention case, but avoids the spin where there's contention:\n\nhttp://www.mail-archive.com/linux-kernel%40vger.kernel.org/msg39615.html\n\nMatthew.\n\n",
"msg_date": "Tue, 31 Jul 2001 10:12:14 +0100 (BST)",
"msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>",
"msg_from_op": false,
"msg_subject": "Re: Performance TODO items"
},
{
"msg_contents": "> On Mon, 30 Jul 2001, Bruce Momjian wrote:\n> \n> > * Improve spinlock code, perhaps with OS semaphores, sleeper queue, or\n> > spining to obtain lock on multi-cpu systems\n> \n> You may be interested in a discussion which happened over on\n> linux-kernel a few months ago.\n> \n> Quite a lot of people want a lightweight userspace semaphore,\n> and for pretty much the same reasons.\n> \n> Linus proposed a pretty interesting solution which has the\n> same minimal overhead as the current spinlocks in the non-\n> contention case, but avoids the spin where there's contention:\n> \n> http://www.mail-archive.com/linux-kernel%40vger.kernel.org/msg39615.html\n\nYes, many OS's have user-space spinlocks, for the same performance\nreasons (no kernel call).\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 31 Jul 2001 09:18:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Performance TODO items"
}
] |
[
{
"msg_contents": "> > > We could use POSIX spinlocks/semaphores now but we\n> > > don't because of performance, right?\n> > \n> > No. As long as no one proved with test that mutexes are bad for\n> > performance...\n> > Funny, such test would require ~ 1 day of work.\n> \n> Good question. I know the number of function calls to spinlock stuff\n> is huge. Seems real semaphores may be a big win on multi-cpu boxes.\n\nOk, being tired of endless discussions I'll try to use mutexes instead\nof spinlocks and run pgbench on my Solaris WS 10 and E4500 (4 CPU) boxes.\n\nVadim\n",
"msg_date": "Mon, 30 Jul 2001 11:17:44 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Performance TODO items"
},
{
"msg_contents": "> > > > We could use POSIX spinlocks/semaphores now but we\n> > > > don't because of performance, right?\n> > > \n> > > No. As long as no one proved with test that mutexes are bad for\n> > > performance...\n> > > Funny, such test would require ~ 1 day of work.\n> > \n> > Good question. I know the number of function calls to spinlock stuff\n> > is huge. Seems real semaphores may be a big win on multi-cpu boxes.\n> \n> Ok, being tired of endless discussions I'll try to use mutexes instead\n> of spinlocks and run pgbench on my Solaris WS 10 and E4500 (4 CPU) boxes.\n\nI have updated the TODO list with:\n\n * Improve spinlock code \n o use SysV semaphores or queue of backends waiting on the lock\n o wakeup sleeper or sleep for less than one clock tick \n o spin for lock on multi-cpu machines, yield on single cpu machines\n o read/write locks\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 5 Sep 2001 20:03:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Performance TODO items"
}
] |
[
{
"msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010730 09:45]:\n> > * Bruce Momjian <pgman@candle.pha.pa.us> [010729 22:36]:\n> > > \n> > > I can patch configure.in, but not config.*. That comes from autoconf.\n> > The problem is the config.guess and config.sub NEED TO BE UPDATED \n> > to recognize OpenUNIX 8. \n> > \n> > How do we get autoconf and the config/* directory updated in OUR CVS? \n> > \n> \n> Marc has to update autoconf on the CVS server.\nErr, we ship config.guess and config.sub in our CVS, what I did was \npull those two files from ftp://ftp.gnu.org/gnu/config per the\ninstructions when I picked up OU8. \n\nWhy can't we just update that? \n\nLER\n\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 30 Jul 2001 17:41:43 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: OpenUnix 8 Patchj"
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> Err, we ship config.guess and config.sub in our CVS, what I did was \n> pull those two files from ftp://ftp.gnu.org/gnu/config per the\n> instructions when I picked up OU8. \n\n> Why can't we just update that? \n\nIf autoconf releases were happening on a regular basis, we could get\naway with just tracking the released version of autoconf for these\nfiles. However, they aren't and we can't. Our past practice has been\nto pull config.guess and config.sub from the autoconf project's CVS tree\nwhenever we approach a release, and that's what I think we should do\nagain as 7.2 approaches.\n\nPeter E. is probably more up on this than I am --- let's postpone the\ndiscussion till he returns from vacation. I see no reason to be in\na hurry about it.\n\nMeanwhile, Larry, if you have any issues with the versions of these\nfiles that are on the GNU servers then you need to talk to the upstream\nfolks. I definitely do not believe in distributing copies that we've\nmodified for ourselves.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Jul 2001 19:03:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: OpenUnix 8 Patchj "
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010730 18:03]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > Err, we ship config.guess and config.sub in our CVS, what I did was \n> > pull those two files from ftp://ftp.gnu.org/gnu/config per the\n> > instructions when I picked up OU8. \n> \n> > Why can't we just update that? \n> \n> If autoconf releases were happening on a regular basis, we could get\n> away with just tracking the released version of autoconf for these\n> files. However, they aren't and we can't. Our past practice has been\n> to pull config.guess and config.sub from the autoconf project's CVS tree\n> whenever we approach a release, and that's what I think we should do\n> again as 7.2 approaches.\n> \n> Peter E. is probably more up on this than I am --- let's postpone the\n> discussion till he returns from vacation. I see no reason to be in\n> a hurry about it.\n> \n> Meanwhile, Larry, if you have any issues with the versions of these\n> files that are on the GNU servers then you need to talk to the upstream\n> folks. I definitely do not believe in distributing copies that we've\n> modified for ourselves.\nI didn't modify them. I just pulled them from\nftp://ftp.gnu.org/pub/gnu/config and diff'ed them against our CVS to\ngenerate the patch. \n\nI didn't touch byte one. \n\nThey DID pick up the necessary change for OpenUnix 8. \n\nLarry\n> \n> \t\t\tregards, tom lane\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 30 Jul 2001 18:04:53 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: Re: OpenUnix 8 Patchj"
},
{
"msg_contents": "I wrote:\n> If autoconf releases were happening on a regular basis, we could get\n> away with just tracking the released version of autoconf for these\n> files. However, they aren't and we can't.\n\nJust moments after writing that, I was startled to read on another\nmailing list that the long-mythical Autoconf 2.50 is released!\n\nWe should probably consider updating from autoconf 2.13 as our project\nstandard to 2.50. However, I'd recommend waiting till Peter E. returns\nfrom his vacation to see what his opinion about it is. IIRC, he's been\nfollowing that project, which I have not been.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Jul 2001 19:34:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Autoconf 2.50 is out (was Re: Re: OpenUnix 8 Patch)"
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010730 18:34]:\n> I wrote:\n> > If autoconf releases were happening on a regular basis, we could get\n> > away with just tracking the released version of autoconf for these\n> > files. However, they aren't and we can't.\n> \n> Just moments after writing that, I was startled to read on another\n> mailing list that the long-mythical Autoconf 2.50 is released!\n> \n> We should probably consider updating from autoconf 2.13 as our project\n> standard to 2.50. However, I'd recommend waiting till Peter E. returns\n> from his vacation to see what his opinion about it is. IIRC, he's been\n> following that project, which I have not been.\nI also see LOTS of complaints about compat issues, which is why I just\npulled the 2 files from ftp://ftp.gnu.org/pub/gnu/config when I\ngenerated that patch. Looks like they updated them again today:\n\n$ ftp ftp.gnu.org\nConnected to gnuftp.gnu.org.\n220 ProFTPD 1.2.0pre10 Server (ProFTPD) [gnuftp.gnu.org]\n331 Anonymous login ok, send your complete e-mail address as password.\n230-If you have any problems with the GNU software or its downloading,\nplease\n refer your questions to <gnu@gnu.org>.\n \n There are several mirrors of this archive, a complete list can be\nfound on\n http://www.gnu.org/order/ftp.html. Please use one of the mirrors if\npossible.\n \n Archives of the GNU mailing lists can be found in\n ftp://ftp-mailing-list-archives.gnu.org/.\n \n Please note that the directory structure on ftp.gnu.org was\nredisorganzied\n fairly recently, such that there is a directory for each program.\nOne side\n effect of this is that if you cd into the gnu directory, and do\n > ls emacs*\n you will get a list of all the files in the emacs directory, but it\nwill not\n be obvious from the ls output that you have to `cd emacs' before you\ncan\n download those files.\n \n Note further the non-GNU programs that were formerly in gnu/ have\nmoved to\n gnu/non-gnu/. 
Most of them were just pointers in the format\nprogram.README.\n If you are looking for such a file, be sure to check gnu/non-gnu/.\n230 Anonymous access granted, restrictions apply.\nRemote system type is UNIX.\nUsing binary mode to transfer files.\nftp> cd pub/gnu/config\n250 CWD command successful.\nftp> pwd\n257 \"/gnu/config\" is current directory.\nftp> dir\n200 PORT command successful.\n150 Opening ASCII mode data connection for file list.\n-rw-r--r-- 1 ftp ftp 38214 Jul 30 14:00 config.guess\n-rw-r--r-- 1 ftp ftp 27872 Jul 30 14:00 config.sub\n226-Transfer complete.\n226 Quotas off\nftp> \n\n\nLarry\n> \n> \t\t\tregards, tom lane\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 30 Jul 2001 18:37:24 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: Autoconf 2.50 is out (was Re: Re: OpenUnix 8 Patch)"
},
{
"msg_contents": "> * Bruce Momjian <pgman@candle.pha.pa.us> [010730 09:45]:\n> > > * Bruce Momjian <pgman@candle.pha.pa.us> [010729 22:36]:\n> > > > \n> > > > I can patch configure.in, but not config.*. That comes from autoconf.\n> > > The problem is the config.guess and config.sub NEED TO BE UPDATED \n> > > to recognize OpenUNIX 8. \n> > > \n> > > How do we get autoconf and the config/* directory updated in OUR CVS? \n> > > \n> > \n> > Marc has to update autoconf on the CVS server.\n> Err, we ship config.guess and config.sub in our CVS, what I did was \n> pull those two files from ftp://ftp.gnu.org/gnu/config per the\n> instructions when I picked up OU8. \n> \n> Why can't we just update that? \n\nBecause when I run autoconf for another configure.in change, your\nchanges get blown away.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 30 Jul 2001 19:42:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OpenUnix 8 Patchj"
},
{
"msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010730 18:42]:\n> > * Bruce Momjian <pgman@candle.pha.pa.us> [010730 09:45]:\n> > > > * Bruce Momjian <pgman@candle.pha.pa.us> [010729 22:36]:\n> > > > > \n> > > > > I can patch configure.in, but not config.*. That comes from autoconf.\n> > > > The problem is the config.guess and config.sub NEED TO BE UPDATED \n> > > > to recognize OpenUNIX 8. \n> > > > \n> > > > How do we get autoconf and the config/* directory updated in OUR CVS? \n> > > > \n> > > \n> > > Marc has to update autoconf on the CVS server.\n> > Err, we ship config.guess and config.sub in our CVS, what I did was \n> > pull those two files from ftp://ftp.gnu.org/gnu/config per the\n> > instructions when I picked up OU8. \n> > \n> > Why can't we just update that? \n> \n> Because when I run autoconf for another configure.in change, your\n> changes get blown away.\nno they won't. the changes are in our source tree......\n\nLER\n\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 30 Jul 2001 18:46:37 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: OpenUnix 8 Patchj"
},
{
"msg_contents": "> > Meanwhile, Larry, if you have any issues with the versions of these\n> > files that are on the GNU servers then you need to talk to the upstream\n> > folks. I definitely do not believe in distributing copies that we've\n> > modified for ourselves.\n> I didn't modify them. I just pulled them from\n> ftp://ftp.gnu.org/pub/gnu/config and diff'ed them against our CVS to\n> generate the patch. \n> \n> I didn't touch byte one. \n> \n> They DID pick up the necessary change for OpenUnix 8. \n\nWhich I think means we just need to update autoconf on hub.org. At\nleast that is where I run autoconf so I am sure I have our standard\nversion.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 30 Jul 2001 19:48:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: OpenUnix 8 Patchj"
},
{
"msg_contents": "> * Bruce Momjian <pgman@candle.pha.pa.us> [010730 18:42]:\n> > > * Bruce Momjian <pgman@candle.pha.pa.us> [010730 09:45]:\n> > > > > * Bruce Momjian <pgman@candle.pha.pa.us> [010729 22:36]:\n> > > > > > \n> > > > > > I can patch configure.in, but not config.*. That comes from autoconf.\n> > > > > The problem is the config.guess and config.sub NEED TO BE UPDATED \n> > > > > to recognize OpenUNIX 8. \n> > > > > \n> > > > > How do we get autoconf and the config/* directory updated in OUR CVS? \n> > > > > \n> > > > \n> > > > Marc has to update autoconf on the CVS server.\n> > > Err, we ship config.guess and config.sub in our CVS, what I did was \n> > > pull those two files from ftp://ftp.gnu.org/gnu/config per the\n> > > instructions when I picked up OU8. \n> > > \n> > > Why can't we just update that? \n> > \n> > Because when I run autoconf for another configure.in change, your\n> > changes get blown away.\n> no they won't. the changes are in our source tree......\n\nSo how did they get their originally. I thought they were generated by\nrunning configure, or by autoconf.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 30 Jul 2001 19:49:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OpenUnix 8 Patchj"
},
{
"msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010730 19:10]:\n> > * Bruce Momjian <pgman@candle.pha.pa.us> [010730 18:42]:\n> > > > * Bruce Momjian <pgman@candle.pha.pa.us> [010730 09:45]:\n> > > > > > * Bruce Momjian <pgman@candle.pha.pa.us> [010729 22:36]:\n> > > > > > > \n> > > > > > > I can patch configure.in, but not config.*. That comes from autoconf.\n> > > > > > The problem is the config.guess and config.sub NEED TO BE UPDATED \n> > > > > > to recognize OpenUNIX 8. \n> > > > > > \n> > > > > > How do we get autoconf and the config/* directory updated in OUR CVS? \n> > > > > > \n> > > > > \n> > > > > Marc has to update autoconf on the CVS server.\n> > > > Err, we ship config.guess and config.sub in our CVS, what I did was \n> > > > pull those two files from ftp://ftp.gnu.org/gnu/config per the\n> > > > instructions when I picked up OU8. \n> > > > \n> > > > Why can't we just update that? \n> > > \n> > > Because when I run autoconf for another configure.in change, your\n> > > changes get blown away.\n> > no they won't. the changes are in our source tree......\n> \n> So how did they get their originally. I thought they were generated by\n> running configure, or by autoconf.\nNope, they just need to be there. I've run autoconf to pick up MY\nchange to test it, and the files are still the one's I ftp'd from \ngnu.org. \n\nI belive they just need to be there. \n\n\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 30 Jul 2001 19:11:48 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: OpenUnix 8 Patchj"
},
{
"msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010730 19:10]:\n> > > Meanwhile, Larry, if you have any issues with the versions of these\n> > > files that are on the GNU servers then you need to talk to the upstream\n> > > folks. I definitely do not believe in distributing copies that we've\n> > > modified for ourselves.\n> > I didn't modify them. I just pulled them from\n> > ftp://ftp.gnu.org/pub/gnu/config and diff'ed them against our CVS to\n> > generate the patch. \n> > \n> > I didn't touch byte one. \n> > \n> > They DID pick up the necessary change for OpenUnix 8. \n> \n> Which I think means we just need to update autoconf on hub.org. At\n> least that is where I run autoconf so I am sure I have our standard\n> version.\nDon't think so, as I said elsewhere, there are compat issues with \n2.50 of Autoconf. \n\nIf you would TRY my patch you would see that it works. \n\nI *DO* need others to test, however, as there *MAY* be other changes\nin naming from what we had to these. \n\nLER\n\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 30 Jul 2001 19:13:02 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: Re: OpenUnix 8 Patchj"
},
{
"msg_contents": "> > > > Because when I run autoconf for another configure.in change, your\n> > > > changes get blown away.\n> > > no they won't. the changes are in our source tree......\n> > \n> > So how did they get their originally. I thought they were generated by\n> > running configure, or by autoconf.\n> Nope, they just need to be there. I've run autoconf to pick up MY\n> change to test it, and the files are still the one's I ftp'd from \n> gnu.org. \n> \n> I belive they just need to be there. \n\nWe need to figure out who handles autoconf updates and let them handle\nthose files. Tom seems to think it is Peter E, who is on vacation.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 30 Jul 2001 20:14:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OpenUnix 8 Patchj"
},
{
"msg_contents": "> > Which I think means we just need to update autoconf on hub.org. At\n> > least that is where I run autoconf so I am sure I have our standard\n> > version.\n> Don't think so, as I said elsewhere, there are compat issues with \n> 2.50 of Autoconf. \n> \n> If you would TRY my patch you would see that it works. \n> \n> I *DO* need others to test, however, as there *MAY* be other changes\n> in naming from what we had to these. \n\nI just can't go in there and start changing those files, especially\nsince I don't know where they originally came from or who added them. \nLet's wait and see if the proper person appears. If no one shows up, we\nwill have to struggle through it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 30 Jul 2001 20:15:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: OpenUnix 8 Patchj"
},
{
"msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010730 19:15]:\n> > > Which I think means we just need to update autoconf on hub.org. At\n> > > least that is where I run autoconf so I am sure I have our standard\n> > > version.\n> > Don't think so, as I said elsewhere, there are compat issues with \n> > 2.50 of Autoconf. \n> > \n> > If you would TRY my patch you would see that it works. \n> > \n> > I *DO* need others to test, however, as there *MAY* be other changes\n> > in naming from what we had to these. \n> \n> I just can't go in there and start changing those files, especially\n> since I don't know where they originally came from or who added them. \n> Let's wait and see if the proper person appears. If no one shows up, we\n> will have to struggle through it.\nOK, but I'm broken for CVS updates until this gets in for OU8. \n\nLER\n\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 30 Jul 2001 19:17:34 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: Re: OpenUnix 8 Patchj"
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n>> Which I think means we just need to update autoconf on hub.org. At\n>> least that is where I run autoconf so I am sure I have our standard\n>> version.\n\n> Don't think so, as I said elsewhere, there are compat issues with \n> 2.50 of Autoconf. \n\nIt'd certainly be folly to switch without Peter's goahead --- unless\nsomeone else wants to take over his position as configure-meister...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Jul 2001 20:56:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: OpenUnix 8 Patchj "
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010730 19:56]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> >> Which I think means we just need to update autoconf on hub.org. At\n> >> least that is where I run autoconf so I am sure I have our standard\n> >> version.\n> \n> > Don't think so, as I said elsewhere, there are compat issues with \n> > 2.50 of Autoconf. \n> \n> It'd certainly be folly to switch without Peter's goahead --- unless\n> someone else wants to take over his position as configure-meister...\nWhen is Peter expected back? \n\nLER\n\n> \n> \t\t\tregards, tom lane\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 30 Jul 2001 19:57:28 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: Re: OpenUnix 8 Patchj"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Err, we ship config.guess and config.sub in our CVS, what I did was \n>> pull those two files from ftp://ftp.gnu.org/gnu/config per the\n>> instructions when I picked up OU8. \n>> \n>> Why can't we just update that? \n\n> Because when I run autoconf for another configure.in change, your\n> changes get blown away.\n\nYou're missing the point, Bruce. config.guess and config.sub are not\nconfigure outputs, they are independent source files.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Jul 2001 20:57:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: OpenUnix 8 Patchj "
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> When is Peter expected back? \n\nSometime in August, I think. Check the archives for his last few\npostings --- he mentioned what his schedule was before he left, IIRC.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Jul 2001 20:59:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: OpenUnix 8 Patchj "
},
{
"msg_contents": "Tom Lane writes:\n\n> Just moments after writing that, I was startled to read on another\n> mailing list that the long-mythical Autoconf 2.50 is released!\n\nLast I checked 2.51 was also released. AC 2.50 had some quality issues in\nmy mind which were probably fixed by now. If we see a need we can update;\nI suppose it depends on the release schedule. (Note that some non-trivial\npatches will be needed before 2.5x will work on our configure.in. I have\nthese mostly worked out.)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 5 Aug 2001 21:55:24 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Autoconf 2.50 is out (was Re: [PATCHES] Re: OpenUnix 8 Patch)"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Last I checked 2.51 was also released. AC 2.50 had some quality issues in\n> my mind which were probably fixed by now. If we see a need we can update;\n> I suppose it depends on the release schedule. (Note that some non-trivial\n> patches will be needed before 2.5x will work on our configure.in. I have\n> these mostly worked out.)\n\nI think it's your call whether to do it now or wait. It'd be nice not\nto have the project's version of Autoconf be a moving target, however\n--- if the various developers who commit configure changes don't all\nhave the same Autoconf that's on hub.org, we have problems. If you\nthink there's likely to be a 2.52 soon, maybe we should wait.\n\nAnother thing to ask is what the newer autoconf will buy us. I remember\nyou mentioning some things that sounded nice, but I forget details.\n\nAs far as schedule goes, I'm still thinking 7.2 beta by the end of\nAugust, but that's just MHO.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 05 Aug 2001 16:17:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Autoconf 2.50 is out (was Re: [PATCHES] Re: OpenUnix 8 Patch)"
},
{
"msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [010805 14:51]:\n> Tom Lane writes:\n> \n> > Just moments after writing that, I was startled to read on another\n> > mailing list that the long-mythical Autoconf 2.50 is released!\n> \n> Last I checked 2.51 was also released. AC 2.50 had some quality issues in\n> my mind which were probably fixed by now. If we see a need we can update;\n> I suppose it depends on the release schedule. (Note that some non-trivial\n> patches will be needed before 2.5x will work on our configure.in. I have\n> these mostly worked out.)\nThe patch I submitted was from the FTP site. Can you at least commit\nthose? \n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 5 Aug 2001 19:23:38 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: Autoconf 2.50 is out (was Re: [PATCHES] Re: OpenUnix 8 Patch)"
}
] |
[
{
"msg_contents": "\nThe developer's corner will soon be going away. I'm in the process of\nputting together a developer's site. Different URL, different look,\nbeta announcements will be there, regression database will be there,\ndevelopement docs, etc. If you want a sneak preview:\n\n\thttp://developer.postgresql.org/\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 30 Jul 2001 19:16:01 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "developer's website"
},
{
"msg_contents": "> The developer's corner will soon be going away. I'm in the process of\n> putting together a developer's site. Different URL, different look,\n> beta announcements will be there, regression database will be there,\n> developement docs, etc. If you want a sneak preview:\n>\n> \thttp://developer.postgresql.org/\n\nRight now, a good part of what I mirror (and my traffic) for the web\nsite are of the devel site. Will a third set of mirrors (www / ftp /\ndevel) need to be setup, or will the main site handle all the load?\n\n- Brandon\n\n----------------------------------------------------------------------------\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n",
"msg_date": "Mon, 30 Jul 2001 19:43:40 -0400 (EDT)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": false,
"msg_subject": "Re: developer's website"
},
{
"msg_contents": "On Mon, 30 Jul 2001, bpalmer wrote:\n\n> > The developer's corner will soon be going away. I'm in the process of\n> > putting together a developer's site. Different URL, different look,\n> > beta announcements will be there, regression database will be there,\n> > developement docs, etc. If you want a sneak preview:\n> >\n> > \thttp://developer.postgresql.org/\n>\n> Right now, a good part of what I mirror (and my traffic) for the web\n> site are of the devel site. Will a third set of mirrors (www / ftp /\n> devel) need to be setup, or will the main site handle all the load?\n\nDon't know just yet for sure.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 30 Jul 2001 19:51:33 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "Re: developer's website"
}
] |
[
{
"msg_contents": "I sent the email below to the creator of contrib/vacuumlo/ with no reply\njust yet.\n\nIs it possible to get his code included in the main vacuumdb program for\nsupport to vacuum orphaned large objects?\n\nOr... Any suggestions, what do people think?\n\nThanks.\n\n---------- Forwarded message ----------\nDate: Tue, 24 Jul 2001 09:45:01 +1000 (EST)\nFrom: Grant <grant@conprojan.com.au>\nTo: peter@retep.org.uk\nSubject: vacuumlo.\n\nG'day,\n\nI've recently discovered that LO's do not get deleted when a record\nreferencing that OID is removed. I'm assuming you created the program and\ndo you think it is possible to get this included as an argument for\nvacuumdb?\n\nvacuumdb -o <db>\n\nOr something along those lines so scanning for orphan large objects could\nbe done from the main vacuum binary?\n\nWhat do you think?\n\nThanks.\n\n\n\n",
"msg_date": "Tue, 31 Jul 2001 10:14:50 +1000 (EST)",
"msg_from": "Grant <grant@conprojan.com.au>",
"msg_from_op": true,
"msg_subject": "vacuumlo."
},
{
"msg_contents": "Grant <grant@conprojan.com.au> writes:\n> Is it possible to get [vacuumlo] included in the main vacuumdb program for\n> support to vacuum orphaned large objects?\n\nHmm. I'm not convinced that vacuumlo is ready for prime time...\nin particular, how safe is it in the presence of concurrent\ntransactions that might be adding or removing LOs?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Jul 2001 21:29:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: vacuumlo. "
},
{
"msg_contents": "> > Is it possible to get [vacuumlo] included in the main vacuumdb program for\n> > support to vacuum orphaned large objects?\n> \n> Hmm. I'm not convinced that vacuumlo is ready for prime time...\n> in particular, how safe is it in the presence of concurrent\n> transactions that might be adding or removing LOs?\n\nI see large objects for each database are stored in pg_largeobject referenced\nby the loid. So when I delete a file from a table containing an oid type I have\nto make sure to delete the matching row(s) from pg_largeobject.\n\nCan you see a scenario where a programmer would forget to delete the data from\npg_largeobject and the database becoming very large filled with orphaned large\nobjects? Or am I on the wrong track?\n\n",
"msg_date": "Tue, 31 Jul 2001 11:52:10 +1000 (EST)",
"msg_from": "Grant <grant@conprojan.com.au>",
"msg_from_op": true,
"msg_subject": "Re: vacuumlo. "
},
{
"msg_contents": "Grant <grant@conprojan.com.au> writes:\n> Can you see a scenario where a programmer would forget to delete the\n> data from pg_largeobject and the database becoming very large filled\n> with orphaned large objects?\n\nSure. My point wasn't that the functionality isn't needed, it's that\nI'm not sure vacuumlo does it well enough to be ready to promote to\nthe status of mainstream code. It needs more review and testing before\nwe can move it out of /contrib.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Jul 2001 22:21:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: vacuumlo. "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Grant <grant@conprojan.com.au> writes:\n> > Can you see a scenario where a programmer would forget to delete the\n> > data from pg_largeobject and the database becoming very large filled\n> > with orphaned large objects?\n> \n> Sure. My point wasn't that the functionality isn't needed, it's that\n> I'm not sure vacuumlo does it well enough to be ready to promote to\n> the status of mainstream code. It needs more review and testing before\n> we can move it out of /contrib.\n> \n\nIIRC vacuumlo doesn't take the type lo(see contrib/lo) into\naccount. I'm suspicious if vacuumlo is reliable.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 31 Jul 2001 13:16:23 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: vacuumlo."
},
{
"msg_contents": "> > > Can you see a scenario where a programmer would forget to delete the\n> > > data from pg_largeobject and the database becoming very large filled\n> > > with orphaned large objects?\n> > \n> > Sure. My point wasn't that the functionality isn't needed, it's that\n> > I'm not sure vacuumlo does it well enough to be ready to promote to\n> > the status of mainstream code. It needs more review and testing before\n> > we can move it out of /contrib.\n> > \n> \n> IIRC vacuumlo doesn't take the type lo(see contrib/lo) into\n> account. I'm suspicious if vacuumlo is reliable.\n\nThis was my round about way of asking if something to combat this issue\ncan be placed in the to do list. :)\n\nThanks.\n\n",
"msg_date": "Tue, 31 Jul 2001 14:25:00 +1000 (EST)",
"msg_from": "Grant <grant@conprojan.com.au>",
"msg_from_op": true,
"msg_subject": "Re: vacuumlo."
},
{
"msg_contents": "> > > > Can you see a scenario where a programmer would forget to delete the\n> > > > data from pg_largeobject and the database becoming very large filled\n> > > > with orphaned large objects?\n> > > \n> > > Sure. My point wasn't that the functionality isn't needed, it's that\n> > > I'm not sure vacuumlo does it well enough to be ready to promote to\n> > > the status of mainstream code. It needs more review and testing before\n> > > we can move it out of /contrib.\n> > > \n> > \n> > IIRC vacuumlo doesn't take the type lo(see contrib/lo) into\n> > account. I'm suspicious if vacuumlo is reliable.\n> \n> This was my round about way of asking if something to combat this issue\n> can be placed in the to do list. :)\n\nAdded to TODO:\n\n\t* Improve vacuum of large objects (/contrib/vacuumlo)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 31 Jul 2001 09:21:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: vacuumlo."
},
{
"msg_contents": "> > > > IIRC vacuumlo doesn't take the type lo(see contrib/lo) into\n> > > > account. I'm suspicious if vacuumlo is reliable.\n> > >\n> > > This was my round about way of asking if something to combat this issue\n> > > can be placed in the to do list. :)\n> > \n> > Added to TODO:\n> > \n> > * Improve vacuum of large objects (/contrib/vacuumlo)\n> > \n> \n> Is it possible for vacuumlo to be moved out of /contrib ?\n> As far as I see, there's no perfect solution for vacuumlo.\n\nNot sure myself. Let's see what others say.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 31 Jul 2001 21:25:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: vacuumlo."
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > > > Can you see a scenario where a programmer would forget to delete the\n> > > > > data from pg_largeobject and the database becoming very large filled\n> > > > > with orphaned large objects?\n> > > >\n> > > > Sure. My point wasn't that the functionality isn't needed, it's that\n> > > > I'm not sure vacuumlo does it well enough to be ready to promote to\n> > > > the status of mainstream code. It needs more review and testing before\n> > > > we can move it out of /contrib.\n> > > >\n> > >\n> > > IIRC vacuumlo doesn't take the type lo(see contrib/lo) into\n> > > account. I'm suspicious if vacuumlo is reliable.\n> >\n> > This was my round about way of asking if something to combat this issue\n> > can be placed in the to do list. :)\n> \n> Added to TODO:\n> \n> * Improve vacuum of large objects (/contrib/vacuumlo)\n> \n\nIs it possible for vacuumlo to be moved out of /contrib ?\nAs far as I see, there's no perfect solution for vacuumlo.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 01 Aug 2001 10:25:54 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: vacuumlo."
}
] |
[
{
"msg_contents": "Is it just me or is an address on the hackers list whose mail is handled\nby wmail.metro.taejon.kr nonexistent?\n\nOn Tue, 31 Jul 2001, Mail Delivery Subsystem wrote:\n\n> The original message was received at Tue, 31 Jul 2001 14:25:00 +1000 (EST)\n> from IDENT:grant@conprojan.com.au\n> \n> -- The following addresses had permanent fatal errors --\n> pgsql-hackers@postgresql.org\n\n",
"msg_date": "Tue, 31 Jul 2001 15:23:25 +1000 (EST)",
"msg_from": "Grant <grant@conprojan.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Returned mail: User unknown"
},
{
"msg_contents": "Grant <grant@conprojan.com.au> writes:\n> Is it just me or is an address on the hackers list who's mail is handled\n> by wmail.metro.taejon.kr not existant?\n\nI've had to institute a sendmail access block against that site :-(\nIt bounces a useless complaint for every damn posting I make. What's\nworse is that it looks like it's trying to deliver extra copies to\nthe people named in the To:/CC: lines --- if it somehow fails to fail\nto deliver those copies, it's spamming.\n\nYo Marc, are you awake? These losers should be blocked from our lists\npermanently (or at least till they install less-broken mail software).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 Jul 2001 01:50:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Returned mail: User unknown "
},
{
"msg_contents": "> Grant <grant@conprojan.com.au> writes:\n> > Is it just me or is an address on the hackers list who's mail is handled\n> > by wmail.metro.taejon.kr not existant?\n> \n> I've had to institute a sendmail access block against that site :-(\n> It bounces a useless complaint for every damn posting I make. What's\n> worse is that it looks like it's trying to deliver extra copies to\n> the people named in the To:/CC: lines --- if it somehow fails to fail\n> to deliver those copies, it's spamming.\n> \n> Yo Marc, are you awake? These losers should be blocked from our lists\n> permanently (or at least till they install less-broken mail software).\n\nI already reported it to him yesterday. I have blocked them via\nsendmail here too, and sent mail to their postmaster.\n\nThe strange part of it is that these emails arrive in my mailbox marked\nas \"already read\" which is kind of eerie. I see the new mail, but it\nsays I already read it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 31 Jul 2001 09:23:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Returned mail: User unknown"
},
{
"msg_contents": "\nShould be fixed ... meant to respond as soon as I fixed it, but got onto\nsomething else at the time :(\n\nOn Tue, 31 Jul 2001, Bruce Momjian wrote:\n\n> > Grant <grant@conprojan.com.au> writes:\n> > > Is it just me or is an address on the hackers list who's mail is handled\n> > > by wmail.metro.taejon.kr not existant?\n> >\n> > I've had to institute a sendmail access block against that site :-(\n> > It bounces a useless complaint for every damn posting I make. What's\n> > worse is that it looks like it's trying to deliver extra copies to\n> > the people named in the To:/CC: lines --- if it somehow fails to fail\n> > to deliver those copies, it's spamming.\n> >\n> > Yo Marc, are you awake? These losers should be blocked from our lists\n> > permanently (or at least till they install less-broken mail software).\n>\n> I already reported it to him yesterday. I have blocked them via\n> sendmail here too, and sent mail to their postmaster.\n>\n> The strange part of it is that these emails arrives in my mailbox marked\n> as \"already read\" which is kind of errie. I see the new mail, but it\n> says I already read it.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n",
"msg_date": "Tue, 31 Jul 2001 12:34:38 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: Returned mail: User unknown"
}
] |
[
{
"msg_contents": "Anyone else getting these? Are these supposed to go to list subscribers?\n\nTim\n\n-------- Original Message --------\nSubject: Majordomo Delivery Error\nDate: Tue, 31 Jul 2001 09:43:58 -0400 (EDT)\nFrom: pgsql-hackers-owner+M11605@postgresql.org\nTo: pgsql-hackers-owner+M11605@postgresql.org\n\n\n\nThis message was created automatically by mail delivery software.\nA Majordomo message could not be delivered to the following addresses:\n\n dana@pixelenvy.ca:\n 450 4.7.1 ... Can not check MX records for recipient host pixelenvy.ca\n\n mail2db@mail2db.circumsolutions.com:\n 450 4.7.1 ... Can not check MX records for recipient host mail2db.circumsolutions.com\n\n tjmailarch@techjockey.net:\n 450 4.7.1 ... Can not check MX records for recipient host techjockey.net\n\n-- Original message omitted --\n\n\n-- \nTimothy H. Keitt\nDepartment of Ecology and Evolution\nState University of New York at Stony Brook\nStony Brook, New York 11794 USA\nPhone: 631-632-1101, FAX: 631-632-7626\nhttp://life.bio.sunysb.edu/ee/keitt/\n\n\n",
"msg_date": "Tue, 31 Jul 2001 09:48:52 -0400",
"msg_from": "\"Timothy H. Keitt\" <tklistaddr@keittlab.bio.sunysb.edu>",
"msg_from_op": true,
"msg_subject": "[Fwd: Majordomo Delivery Error]"
},
{
"msg_contents": "\"Timothy H. Keitt\" <tklistaddr@keittlab.bio.sunysb.edu> writes:\n> Anyone else getting these? Are these supposed to go to list subscribers?\n\nWas it a bounceback from wmail.metro.taejon.kr? They seem to have some\nrather broken mail delivery software in place there. Bruce and I have\nboth asked Marc to remove that address from the PG mail lists, but Marc's\nnot responding (might be off on vacation or some such...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 Jul 2001 10:56:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: Majordomo Delivery Error] "
},
{
"msg_contents": "On Tue, 31 Jul 2001, Tom Lane wrote:\n\n> \"Timothy H. Keitt\" <tklistaddr@keittlab.bio.sunysb.edu> writes:\n> > Anyone else getting these? Are these supposed to go to list subscribers?\n>\n> Was it a bounceback from wmail.metro.taejon.kr? They seem to have some\n> rather broken mail delivery software in place there. Bruce and I have\n> both asked Marc to remove that address from the PG mail lists, but Marc's\n> not responding (might be off on vacation or some such...)\n\nI just got a note from him, he got it.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 31 Jul 2001 11:56:34 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: Majordomo Delivery Error] "
}
] |
[
{
"msg_contents": "Hi,\n\nAccidentally I've run out of disk space during intensive database write\noperations. At the beginning the backend went to recovery mode a few times,\nand continued the work, but finally it died terribly\n\t( Segfault at backend/utils/mmgr/mcxt.c 222 )\n\nAfter this event, the recovery has stopped this way:\n\n[version: postgres 7.1 beta6\nOK, I know this is very old... I'll do the upgrade soon, but I am\ninterested in what this error message means, could this happen with the\nnewer versions and so on.....]\n\n\npgdevel@road:/home/postgreSQL/postgres7.1$ postgres -O -P -Ddata performance\nDEBUG: database system shutdown was interrupted at 2001-07-31 16:05:04 CEST\nDEBUG: CheckPoint record at (1, 2941129720)\nDEBUG: Redo record at (1, 2936012956); Undo record at (1, 2936012808); Shutdown FALSE\nDEBUG: NextTransactionId: 5699520; NextOid: 17895453\nDEBUG: database system was not properly shut down; automatic recovery in progress...\nDEBUG: redo starts at (1, 2936012956)\nDEBUG: open(logfile 1 seg 179) failed: No such file or directory\nDEBUG: redo done at (1, 3003121540)\nFATAL 2: XLogWrite: write request is past end of log\npgdevel@road:/home/postgreSQL/postgres7.1$\n\n\nAttila\n\n",
"msg_date": "Tue, 31 Jul 2001 16:27:55 +0200 (CEST)",
"msg_from": "Meszaros Attila <tilla@draconis.csoma.elte.hu>",
"msg_from_op": true,
"msg_subject": "Recovery error"
},
{
"msg_contents": "Meszaros Attila <tilla@draconis.elte.hu> writes:\n> Accidentally I've ran out of disk space during intensive database write\n> operations. At the begining the backend went to recovery mode a few times,\n> and continued the work, but finally it has terribly died \n> \t( Segfault at backend/utils/mmgr/mcxt.c 222 )\n\nA backtrace from that segfault might be interesting, if you can provide\none.\n\n> [version: postgres 7.1 beta6\n> OK, I know this is very old... I do the upgrade soon,\n\nI suggest \"now\", not \"soon\" :-(. The betas had some bugs in logfile\nrecovery, and I think you've been bit by one. This looks familiar:\n\n> FATAL 2: XLogWrite: write request is past end of log\n\nYou might be able to recover your data by updating to 7.1.2 and running\ncontrib/pg_resetxlog before you start the new postmaster.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Aug 2001 14:10:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recovery error "
}
] |
[
{
"msg_contents": "\nhello all\nI have a postgresql 7.0\nand I'm trying to update to 7.1.2 using rpms\nbut some files are missing\nlike:\nlibcrypto.so.0\nlibssl.so.0\n\ndoes anyone know what package I can find these files in?\n\nthanks...\n",
"msg_date": "31 Jul 2001 18:24:57 -0000",
"msg_from": "\"gabriel\" <gabriel@workingnetsp.com.br>",
"msg_from_op": true,
"msg_subject": "Update to 7.1.2 Question..."
},
{
"msg_contents": "\"gabriel\" <gabriel@workingnetsp.com.br> writes:\n\n> hello all\n> I have a postgresql 7.0\n> and I'm trying to update to 7.1.2 using rpms\n> but some files is missing\n> like:\n> libcrypto.so.0\n> libssl.so.0\n> \n> anyone knows what package i can find this files??\n\nopenssl\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "31 Jul 2001 15:06:31 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Update to 7.1.2 Question..."
},
{
"msg_contents": "gabriel wrote:\n> \n> hello all\n> I have a postgresql 7.0\n> and I'm trying to update to 7.1.2 using rpms\n> but some files is missing\n> like:\n> libcrypto.so.0\n> libssl.so.0\n> \n> anyone knows what package i can find this files??\n> \n> thanks...\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\nThese should be available on RedHat's update pages.\n\nlibssl.so comes from OpenSSL.\nlibcrypto.so comes from OpenSSL as well.\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Tue, 31 Jul 2001 16:19:53 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Update to 7.1.2 Question..."
}
] |
[
{
"msg_contents": "(My last try at this vanished into the aether, so I am repeating...)\nPatchers,\nI hope using an attachment is not breaking list protocol...\nPostGIS is a GIS extension to PostgreSQL 7.1.x. Lots of info is at\nhttp://postgis.refractions.net\nThe source is set up to compile cleanly from under 'contrib'. The only\nMakefile quirk is a switch to allow the environment variable PGSQL_SRC\nto override the usual contrib defaults about where the pgsql source tree\nis and where installation should happen.\nThanks,\nPaul\n\n-- \n __\n /\n | Paul Ramsey\n | Refractions Research\n | Email: pramsey@refractions.net\n | Phone: (250) 885-0632\n \\_",
"msg_date": "Tue, 31 Jul 2001 16:59:49 -0700",
"msg_from": "Paul Ramsey <pramsey@refractions.net>",
"msg_from_op": true,
"msg_subject": "contrib/postgis spatial extensions"
},
{
"msg_contents": "> (My last try at this vanished into the aether, so I am repeating...)\n> Patchers,\n> I hope using an attachment is not breaking list protocol...\n> PostGIS is a GIS extension to PostgreSQL 7.1.x. Lots of info is at\n> http://postgis.refractions.net\n> The source is set up to compile cleanly from under 'contrib'. The only\n> Makefile quirk is a switch to allow the environment variable PGSQL_SRC\n> to override the usual contrib defaults about where the pgsql source tree\n> is and where installation should happen.\n> Thanks,\n> Paul\n> \n\nYes, I have kept your original email. Can I have a vote on whether this\nshould be in contrib? Our logic of putting plugin stuff and conversion\nstuff in contrib clearly matches this patch.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 7 Aug 2001 12:54:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/postgis spatial extensions"
},
{
"msg_contents": "\n\nBruce Momjian wrote:\n> \n> > (My last try at this vanished into the aether, so I am repeating...)\n> > Patchers,\n> > I hope using an attachment is not breaking list protocol...\n> > PostGIS is a GIS extension to PostgreSQL 7.1.x. Lots of info is at\n> > http://postgis.refractions.net\n> > The source is set up to compile cleanly from under 'contrib'. The only\n> > Makefile quirk is a switch to allow the environment variable PGSQL_SRC\n> > to override the usual contrib defaults about where the pgsql source tree\n> > is and where installation should happen.\n> > Thanks,\n> > Paul\n> >\n> \n> Yes, I have kept your original email. Can I have a vote on whether this\n> should be in contrib? Our logic of putting plugin stuff and conversion\n> stuff in contrib clearly matches this patch.\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \n __\n /\n | Paul Ramsey\n | Refractions Research\n | Email: pramsey@refractions.net\n | Phone: (250) 885-0632\n \\_\n",
"msg_date": "Tue, 07 Aug 2001 10:21:53 -0700",
"msg_from": "Paul Ramsey <pramsey@refractions.net>",
"msg_from_op": true,
"msg_subject": "Re: contrib/postgis spatial extensions"
},
{
"msg_contents": "I'm for adding this to contrib, but is there any chance of getting it\nreleased under our BSD-style license, not GPL? If it's GPL then it\nwill never be a candidate to move out of contrib and into the main tree,\nwhich seems like something we'd like to do with it eventually. The GPL\nlicense forbids us from merging GPL'd code into BSD-licensed code,\nso the best we can do with GPL'd code is keep it at arm's length in\nthe contrib area.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Aug 2001 14:17:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/postgis spatial extensions "
},
{
"msg_contents": "> I'm for adding this to contrib, but is there any chance of getting it\n> released under our BSD-style license, not GPL? If it's GPL then it\n> will never be a candidate to move out of contrib and into the main tree,\n> which seems like something we'd like to do with it eventually. The GPL\n> license forbids us from merging GPL'd code into BSD-licensed code,\n> so the best we can do with GPL'd code is keep it at arm's length in\n> the contrib area.\n\nOne thing that made me hesitate is this. These numbers are in 1k:\n\t\n\t93 ./doc/html\n\t147 ./doc\n\t58 ./examples/wkb_reader\n\t59 ./examples\n\t21 ./jdbc/org/postgis\n\t22 ./jdbc/org\n\t5 ./jdbc/examples\n\t30 ./jdbc\n\t136 ./loader\n\t164 ./regress\n\t760 .\n\nThe package is 760k. I know GIS is a major feature, but that size had\nme concerned.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 7 Aug 2001 14:26:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/postgis spatial extensions"
},
{
"msg_contents": "I just did a little scruft hunt, and the only thing which is expendable\nis 'regress', which is not really any good right now. (just doesn't\nwork, we need to redo it). Everything else has some kind of reasonable\npurpose... \nGenerally, it as you said: this is a major feature and has a\nconcommitant quantity of code.\n\nBruce Momjian wrote:\n\n> The package is 760k. I know GIS is a major feature, but that size had\n> me concerned.\n\n-- \n __\n /\n | Paul Ramsey\n | Refractions Research\n | Email: pramsey@refractions.net\n | Phone: (250) 885-0632\n \\_\n",
"msg_date": "Tue, 07 Aug 2001 11:46:36 -0700",
"msg_from": "Paul Ramsey <pramsey@refractions.net>",
"msg_from_op": true,
"msg_subject": "Re: contrib/postgis spatial extensions"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> The package is 760k. I know GIS is a major feature, but that size had\n> me concerned.\n\nWell, 18k of that is the GPL COPYING file ;-)\n\nSeriously, the size doesn't bother me too much. Possibly some space could\nbe shaved by decreasing the size of the regression test data. And maybe\nwe don't need to include both XML and HTML versions of the docs. And we\ndefinitely don't need an executable file for examples/wkb_reader/readwkb;\nperhaps there are some other files that don't belong in a source\ndistribution?\n\nBut it's under 200K compressed as-is, and given the amount of\nfunctionality added that seems like a worthwhile tradeoff. The existing\ngeometric datatypes in PG are really only proof-of-concept, academic-toy\nquality. This looks like the beginning of a considerably superior\nreplacement.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Aug 2001 14:54:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/postgis spatial extensions "
},
{
"msg_contents": "This is something which has been discussed on our list in the past. The\nmain argument for BSD to date has been as you noted: to get eventual\nmainline pgsql inclusion. The main argument against, in my view, is that\nwe want to be a place where open geomatics code can aggregate. At the\nmoment, most geospatial algorithms are only available under commercial\nlicencing regimes: until we get a critical mass of open projects and\nworkable code, inertia will continue to point us towards closed source\nand development.\n\nWe'll talk about this a bit on the list and see what the PostGIS users\nthink.\n\nTom Lane wrote:\n> \n> I'm for adding this to contrib, but is there any chance of getting it\n> released under our BSD-style license, not GPL? If it's GPL then it\n> will never be a candidate to move out of contrib and into the main tree,\n> which seems like something we'd like to do with it eventually. The GPL\n> license forbids us from merging GPL'd code into BSD-licensed code,\n> so the best we can do with GPL'd code is keep it at arm's length in\n> the contrib area.\n> \n> regards, tom lane\n\n-- \n __\n /\n | Paul Ramsey\n | Refractions Research\n | Email: pramsey@refractions.net\n | Phone: (250) 885-0632\n \\_\n",
"msg_date": "Tue, 07 Aug 2001 11:54:10 -0700",
"msg_from": "Paul Ramsey <pramsey@refractions.net>",
"msg_from_op": true,
"msg_subject": "Re: contrib/postgis spatial extensions"
},
{
"msg_contents": "Paul Ramsey <pramsey@refractions.net> writes:\n> This is something which has been discussed on our list in the past. The\n> main argument for BSD to date has been as you noted: to get eventual\n> mainline pgsql inclusion. The main argument against, in my view, is that\n> we want to be a place where open geomatics code can aggregate.\n\nA fair point. The BSD theory about this is that open source process\nis enough better than closed source that you can achieve critical mass\nanyway, but if you haven't seen that happen for yourself it's hard to\ntake on faith.\n\nA possible compromise is to accept postgis as a contrib item under GPL\nnow, with the thought that someday (after you feel you've achieved\ncritical mass) you might relicense it as BSD so we can put it in the\nmainline postgres code. However that bothers me somewhat: your current\nand near-future contributors might say that they contributed on the\nunderstanding that the license is GPL, and they don't want their\ncontributions relicensed. In practice it's awfully tough to change\nthe license of an established project without making people unhappy.\nYou're probably better off making your decision and sticking to it.\n\n> We'll talk about this a bit on the list and see what the PostGIS users\n> think.\n\nFair enough. Thanks for listening.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Aug 2001 15:06:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/postgis spatial extensions "
},
{
"msg_contents": "Paul Ramsey <pramsey@refractions.net> writes:\n> ... We don't mind living in contrib, and\n> since we are going to have different release cycles would probably find\n> it easier to move our development along as an extension than as an\n> integral part of the distribution.\n\nOkay, we'll plan on playing it that way indefinitely. Thanks for\nthinking about the issues, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Aug 2001 17:49:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/postgis spatial extensions "
},
{
"msg_contents": "OK, on the licencing issue, we have decided to keep the PostGIS\nextension as GPL. We think it is important that at least one part of the\nopen source GIS community be GPL. And it doesn't hurt that Dave and I\nare both Communists. <wink> :) We don't mind living in contrib, and\nsince we are going to have different release cycles would probably find\nit easier to move our development along as an extension than as an\nintegral part of the distribution.\n\nOn the size side, I'll go through and blow away all the redundancies\nwhich Tom listed and any more I can find for the package to go under\ncontrib. Then I'll post a new tarball for consideration. \n\nThanks guys,\nPaul\n\nTom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The package is 760k. I know GIS is a major feature, but that size had\n> > me concerned.\n> \n> Well, 18k of that is the GPL COPYING file ;-)\n> \n> Seriously, the size doesn't bother me too much. Possibly some space could\n> be shaved by decreasing the size of the regression test data. And maybe\n> we don't need to include both XML and HTML versions of the docs. And we\n> definitely don't need an executable file for examples/wkb_reader/readwkb;\n> perhaps there are some other files that don't belong in a source\n> distribution?\n> \n> But it's under 200K compressed as-is, and given the amount of\n> functionality added that seems like a worthwhile tradeoff. The existing\n> geometric datatypes in PG are really only proof-of-concept, academic-toy\n> quality. This looks like the beginning of a considerably superior\n> replacement.\n> \n> regards, tom lane\n\n-- \n __\n /\n | Paul Ramsey\n | Refractions Research\n | Email: pramsey@refractions.net\n | Phone: (250) 885-0632\n \\_\n",
"msg_date": "Wed, 08 Aug 2001 14:50:43 -0700",
"msg_from": "Paul Ramsey <pramsey@refractions.net>",
"msg_from_op": true,
"msg_subject": "Re: contrib/postgis spatial extensions"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> (My last try at this vanished into the aether, so I am repeating...)\n> Patchers,\n> I hope using an attachment is not breaking list protocol...\n> PostGIS is a GIS extension to PostgreSQL 7.1.x. Lots of info is at\n> http://postgis.refractions.net\n> The source is set up to compile cleanly from under 'contrib'. The only\n> Makefile quirk is a switch to allow the environment variable PGSQL_SRC\n> to override the usual contrib defaults about where the pgsql source tree\n> is and where installation should happen.\n> Thanks,\n> Paul\n> \n> -- \n> __\n> /\n> | Paul Ramsey\n> | Refractions Research\n> | Email: pramsey@refractions.net\n> | Phone: (250) 885-0632\n> \\_\n\n[ application/x-gzip is not supported, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 8 Aug 2001 17:58:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/postgis spatial extensions"
},
{
"msg_contents": "\nOops, hold until I receive new, smaller one.\n\n> (My last try at this vanished into the aether, so I am repeating...)\n> Patchers,\n> I hope using an attachment is not breaking list protocol...\n> PostGIS is a GIS extension to PostgreSQL 7.1.x. Lots of info is at\n> http://postgis.refractions.net\n> The source is set up to compile cleanly from under 'contrib'. The only\n> Makefile quirk is a switch to allow the environment variable PGSQL_SRC\n> to override the usual contrib defaults about where the pgsql source tree\n> is and where installation should happen.\n> Thanks,\n> Paul\n> \n> -- \n> __\n> /\n> | Paul Ramsey\n> | Refractions Research\n> | Email: pramsey@refractions.net\n> | Phone: (250) 885-0632\n> \\_\n\n[ application/x-gzip is not supported, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 8 Aug 2001 17:59:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/postgis spatial extensions"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> OK, silly binaries removed, HTML docs only, broken regress removed and\n> (added bonus) accurate uninstall SQL added.\n> 105K compressed, 640K uncompressed.\n> \n> Bruce Momjian wrote:\n> > \n> > Oops, hold until I receive new, smaller one.\n> > \n> > > (My last try at this vanished into the aether, so I am repeating...)\n> > > Patchers,\n> > > I hope using an attachment is not breaking list protocol...\n> > > PostGIS is a GIS extension to PostgreSQL 7.1.x. Lots of info is at\n> > > http://postgis.refractions.net\n> > > The source is set up to compile cleanly from under 'contrib'. The only\n> > > Makefile quirk is a switch to allow the environment variable PGSQL_SRC\n> > > to override the usual contrib defaults about where the pgsql source tree\n> > > is and where installation should happen.\n> > > Thanks,\n> > > Paul\n> \n> -- \n> __\n> /\n> | Paul Ramsey\n> | Refractions Research\n> | Email: pramsey@refractions.net\n> | Phone: (250) 885-0632\n> \\_\n\n[ application/x-gzip is not supported, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 8 Aug 2001 18:20:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/postgis spatial extensions"
},
{
"msg_contents": "> Tom Lane writes:\n> \n> > I'm for adding this to contrib, but is there any chance of getting it\n> > released under our BSD-style license, not GPL? If it's GPL then it\n> > will never be a candidate to move out of contrib and into the main tree,\n> > which seems like something we'd like to do with it eventually. The GPL\n> > license forbids us from merging GPL'd code into BSD-licensed code,\n> > so the best we can do with GPL'd code is keep it at arm's length in\n> > the contrib area.\n> \n> I think I have a problem with this approach. If you think these types are\n> the way to go for PostgreSQL then they should be available under a\n> BSD-style license or we should not support them by shipping them in the\n> distribution. You are thereby effectively declaring the development on\n> the existing geometry types obsolete in favour of something not as free as\n> we like.\n> \n> In particular, what kind of situation, legal and otherwise, are you\n> creating for someone who may in the future want to implement a free\n> version of these types?\n> \n> This PostGIS project seems big enough that they can handle their own\n> releases. I have a general problem with everything that comes our way\n> being stuffed in contrib (imagine Perl shipping the whole CPAN in its\n> tarball), but half a meg seems to be too much.\n\nOK, patch on hold.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 8 Aug 2001 18:24:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/postgis spatial extensions"
},
{
"msg_contents": "Tom Lane writes:\n\n> I'm for adding this to contrib, but is there any chance of getting it\n> released under our BSD-style license, not GPL? If it's GPL then it\n> will never be a candidate to move out of contrib and into the main tree,\n> which seems like something we'd like to do with it eventually. The GPL\n> license forbids us from merging GPL'd code into BSD-licensed code,\n> so the best we can do with GPL'd code is keep it at arm's length in\n> the contrib area.\n\nI think I have a problem with this approach. If you think these types are\nthe way to go for PostgreSQL then they should be available under a\nBSD-style license or we should not support them by shipping them in the\ndistribution. You are thereby effectively declaring the development on\nthe existing geometry types obsolete in favour of something not as free as\nwe like.\n\nIn particular, what kind of situation, legal and otherwise, are you\ncreating for someone who may in the future want to implement a free\nversion of these types?\n\nThis PostGIS project seems big enough that they can handle their own\nreleases. I have a general problem with everything that comes our way\nbeing stuffed in contrib (imagine Perl shipping the whole CPAN in its\ntarball), but half a meg seems to be too much.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 9 Aug 2001 00:24:28 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: contrib/postgis spatial extensions "
},
{
"msg_contents": "OK, silly binaries removed, HTML docs only, broken regress removed and\n(added bonus) accurate uninstall SQL added.\n105K compressed, 640K uncompressed.\n\nBruce Momjian wrote:\n> \n> Oops, hold until I receive new, smaller one.\n> \n> > (My last try at this vanished into the aether, so I am repeating...)\n> > Patchers,\n> > I hope using an attachment is not breaking list protocol...\n> > PostGIS is a GIS extension to PostgreSQL 7.1.x. Lots of info is at\n> > http://postgis.refractions.net\n> > The source is set up to compile cleanly from under 'contrib'. The only\n> > Makefile quirk is a switch to allow the environment variable PGSQL_SRC\n> > to override the usual contrib defaults about where the pgsql source tree\n> > is and where installation should happen.\n> > Thanks,\n> > Paul\n\n-- \n __\n /\n | Paul Ramsey\n | Refractions Research\n | Email: pramsey@refractions.net\n | Phone: (250) 885-0632\n \\_",
"msg_date": "Wed, 08 Aug 2001 15:29:48 -0700",
"msg_from": "Paul Ramsey <pramsey@refractions.net>",
"msg_from_op": true,
"msg_subject": "Re: contrib/postgis spatial extensions"
},
{
"msg_contents": "\n> In particular, what kind of situation, legal and otherwise, are you\n> creating for someone who may in the future want to implement a free\n> version of these types?\n\n Well, legal problems none at all: to paraphrase a bad rap tune, \npeople can \"do what they like, be how they like\". Our existence does \nnot create a legal constraint on people to do whatever they want with \ntheir own code.\n Other problems: just the problem of our being there first. Most \npeople won't be sufficiently loyal to the BSD licence to want to \nreimplement the whole OpenGIS spec when there's already an open source \nversion around.\n\n> This PostGIS project seems big enough that they can handle their own\n> releases. I have a general problem with everything that comes our way\n> being stuffed in contrib (imagine Perl shipping the whole CPAN in its\n> tarball), but half a meg seems to be too much.\n\n This is a valid point, but the line around where things get added or\nnot seems fuzzy. GIS support is as useful as soundex and probabilistic\nmatching functions; it is just inconveniently more involved and \ntherefore bigger. Is size the determinant? Licensing? Both? This is\nsomething which probably should be hashed out in general. What \nconstitutes the core product and (just as important) what are the\npackaging standards for non-core modules which will allow them to\nbe added to the core with a minimum of effort (CPAN is an excellent\nexample)?\n\n-- \n __\n /\n | Paul Ramsey\n | Refractions Research\n | Email: pramsey@refractions.net\n | Phone: (250) 885-0632\n \\_\n",
"msg_date": "Wed, 08 Aug 2001 16:25:02 -0700",
"msg_from": "Paul Ramsey <pramsey@refractions.net>",
"msg_from_op": true,
"msg_subject": "Re: contrib/postgis spatial extensions"
},
{
"msg_contents": "Paul Ramsey writes:\n\n> Otherwise problems, just the problem of our being there first: most\n> people won't be sufficiently loyal to the BSD licence to want to\n> reimplement the whole OpenGIS spec when there's already an open source\n> version around.\n\nThat's exactly my point. We at PostgreSQL have confirmed many times that\nthe BSD license is not just a historical accident but a desired feature of\nour product. I think we should not compromise that by effectively\nendorsing and supporting a partial replacement for our product that does\nnot meet these standards.\n\n> This is\n> something which probably should be hashed out in general. What\n> constitues the core product and (just as important) what are the\n> packaging standards for non-core modules which will allow them to\n> be added to the core with a minimum of effort (CPAN is an excellent\n> example).\n\nGood points. I think before long we need some sort of extension\nrepository as well.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 9 Aug 2001 17:24:45 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: contrib/postgis spatial extensions"
},
{
"msg_contents": "> Paul Ramsey writes:\n> \n> > Otherwise problems, just the problem of our being there first: most\n> > people won't be sufficiently loyal to the BSD licence to want to\n> > reimplement the whole OpenGIS spec when there's already an open source\n> > version around.\n> \n> That's exactly my point. We at PostgreSQL have confirmed many times that\n> the BSD license is not just a historical accident but a desired feature of\n> our product. I think we should not compromise that by effectively\n> endorsing and supporting a partial replacement for our product that does\n> not meet these standards.\n\nOK, I have thought about this for a day, and I have some ideas. First,\nlet me say that GIS capability would be a major PostgreSQL feature, and\nwould showcase our extensibility.\n\nSecond, let me mention that our license is designed to allow a company\nto take PostgreSQL, spend lots of time adding some neat data type, and\nthen sell a closed version to recoup their expenses. We also want to be\nconsiderate of others who don't want their work used in this way and\nwant their code GPL'ed.\n\nWith that said, I think there are three issues with the GIS patch:\n\n\tsize\n\tlicense (GPL)\n\tduplication of existing types\n\nLet me suggest a solution. What if we took the part of the GIS code\nthat duplicated our existing code (geometric types) and replaced what we\nhad in the core distribution with the GIS version. The geometric types\nare one of the few areas that have been untended over the years. Seems\na new implementation, based on the GIS specification, would be a great\nidea. 
We would have to add some backward compatibility stuff to help\npeople load their old data and port their applications, but it may be a\nbig win for PostgreSQL.\n\nSecond, we could give the GIS folks CVS permission so they could\nmaintain the new geometric types.\n\nThird, we could take the remaining GIS-specific part of PostGIS and move\nit into /contrib with a GPL.\n\nThis would tie the PostGIS project closer to PostgreSQL, giving them\ngreater visibility and increase the use of PostGIS. This makes a\nnon-GPL GIS on top of PostgreSQL even less likely because PostGIS will\nbe much more visible and GIS people will be directly involved with core\nPostgreSQL features.\n\nIt also reduces the size of the patch, because we are removing existing\ncode that was never really maintained.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 9 Aug 2001 23:31:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/postgis spatial extensions"
},
{
"msg_contents": "[ why is this thread hiding in -patches? It should be on -hackers or\n -general, methinks. ]\n\nBruce Momjian <pgman@candle.pha.pa.us> writes:\n> Let me suggest a solution. What if we took the part of the GIS code\n> that duplicated our existing code (geometric types) and replaced what we\n> had in the core distribution with the GIS version.\n\nThis is a complete nonstarter unless the GIS guys are willing to accept\nBSD licensing of that part of their code; which I doubt given Paul's\nprior comments.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Aug 2001 23:47:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/postgis spatial extensions "
},
{
"msg_contents": "> [ why is this thread hiding in -patches? It should be on -hackers or\n> -general, methinks. ]\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Let me suggest a solution. What if we took the part of the GIS code\n> > that duplicated our existing code (geometric types) and replaced what we\n> > had in the core distribution with the GIS version.\n> \n> This is a complete nonstarter unless the GIS guys are willing to accept\n> BSD licensing of that part of their code; which I doubt given Paul's\n> prior comments.\n\nI talked to him on the phone today, and he was interested. My basic\nreasoning was that if they don't want a commercial GIS developed on\nPostgreSQL, their best defense is to get involved with the core code,\nand make their GPL implementation in /contrib so popular that it doesn't\nmake any sense for a company to make one.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Aug 2001 00:13:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/postgis spatial extensions"
},
{
"msg_contents": "> It also reduces the size of the patch, because we are removing existing\n> code that was never really maintained.\n\nYou are forgetting the work I and others have put into it (mostly three\nor four years ago iirc). But I'm sure that the GIS code is an\nimprovement ;)\n\n - Thomas\n",
"msg_date": "Fri, 10 Aug 2001 04:19:13 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: contrib/postgis spatial extensions"
},
{
"msg_contents": "> > It also reduces the size of the patch, because we are removing existing\n> > code that was never really maintained.\n> \n> You are forgetting the work I and others have put into it (mostly three\n> or four years ago iirc). But I'm sure that the GIS code is an\n> improvement ;)\n\nLet me add that PostGIS has worked closely with Oleg and all their\nindexing is based on GIST.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Aug 2001 11:07:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/postgis spatial extensions"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> [ why is this thread hiding in -patches? It should be on -hackers or\n> -general, methinks. ]\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Let me suggest a solution. What if we took the part of the GIS code\n> > that duplicated our existing code (geometric types) and replaced what we\n> > had in the core distribution with the GIS version.\n> \n> This is a complete nonstarter unless the GIS guys are willing to accept\n> BSD licensing of that part of their code; which I doubt given Paul's\n> prior comments.\n> \n> regards, tom lane\n\nHi Tom,\nI have discussed this with Dave Blasby, who has done all of the\nprogramming to date (and will no doubt pop up here soon to put his oar\nin). There are a few issues germane to us in this:\n\n1) Protection of important intellectual property under the GPL so that a\ncore of geospatial algorithms can begin to coalesce.\n2) Promotion of PostGIS as a central OpenGIS component (the University\nof Minnesota Mapserver is another) which will hopefully bring our\nbusiness some consulting work over time.\n3) Promotion of PostgreSQL/PostGIS as an open-source alternative to\nthings like OracleSpatial or SDE/Oracle.\n\nOur feeling is that the basic database objects and their hooks into GiST\nare not the core of IP we are interested in protecting. The most\nimportant code for PostGIS and open source GIS is not yet incorporated:\nit is the overlay, union, and binary predicate algorithms specified by the\nOpenGIS spec. Those are the bits we want to have GPL'ed. 
We are not\naverse to having the objects and spatial indexing under BSD and in the\ncore pgsql distribution, but would like the rest of the OpenGIS Simple\nFeature Spec to be part of a GPL package (the functions, the supporting\ntriggers and consistency maintenance devices, blah blah blah).\n\nSo,\n\n1) we can achieve by maintaining the important OpenGIS algorithms in an\nexternal package while the objects and indexes are brought into the\npgsql main tree\n2) and 3) are better served by being part of the main tree, where\neveryone can use the main objects, and the savants can learn about\nOpenGIS and move on to the complete package.\n\nNow, why would you want these objects?\n\n- they are toastable, so one of the big GIS usability bugaboos with the\nold geometries is gone\n- they are indexable, using GiST, and do lossy indexing, so the \"large\npolygon\" bugaboo is not a problem\n- they follow an existing spec for GIS-in-a-database\n- they support polygons-with-holes\n- 3d coordinates supported\n\nWhy don't you want these objects?\n\n- some of the existing functionality is missing, because it is not in the\nOpenGIS spec\n- no circles, or arcs\n- different canonical representations (e.g., a point is 'POINT(1 2)', not\n'(1,2)')\n- superannuation of a lot of the operator notation\nin short...\n- not backward compatible\n\nI'm sure there are other reasons as well.\n\nSomething I would like Dave to comment on is how cleanly we can split\nthe object/indexing from the OpenGIS spec'ed support tables and\nreference systems. I am thinking about the canonical representation in\nparticular, which could be pretty ugly with the SRS id's hanging in\nthere for no purpose. The OpenGIS spec is at\nhttp://www.opengis.org/techno/specs/99-049.pdf\n",
"msg_date": "Mon, 13 Aug 2001 12:31:31 -0700",
"msg_from": "Paul Ramsey <pramsey@refractions.net>",
"msg_from_op": true,
"msg_subject": "PostGIS spatial extensions"
},
{
"msg_contents": "> Something I would like Dave to comment on is how cleanly we can split\n> the object/indexing from the OpenGIS spec'ed support tables and\n> reference systems. I am thinking about the canonical representation in\n> particular, which could be pretty ugly with the SRS id's hanging in\n> there for no purpose. The OpenGIS spec is at\n> http://www.opengis.org/techno/specs/99-049.pdf\n\nI am thinking we can turn off the prefix tags with some postgresql.conf\noption.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Aug 2001 15:53:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] PostGIS spatial extensions"
},
{
"msg_contents": "I think it would be great for PostgreSQL to be an 'OpenGIS Simple\nFeature Specification for SQL' compliant database with robust spatial\noperators right out of the box. \n\nCurrently, PostGIS implements most of the OpenGIS specification. The\nunimplemented portions are the important ones: spatial operators (the DE-9IM\nspatial relationship matrix) and boolean functions (union, intersection,\nXOR, etc.). Since these are extremely difficult algorithms, the PostGIS\nteam will probably translate the JTS (Java Topology Suite) to C++. The\nJTS is a soon-to-be-released robust Java implementation of the OpenGIS\nsimple feature type. Vivid Solutions (cf.\nhttp://www.vividsolutions.com/jts/jtshome.htm) will be releasing it\nunder the LGPL. JTS is the only open-source robust spatial library I've\never heard of. The PostGIS developers and Vivid Solutions want this to\nremain Free Software and not be co-opted and closed.\n\nSince PostgreSQL cannot have LGPL code in its core, this would make it\nimpossible to ever have a fully-compliant PostGIS in its core. In fact,\nit's unlikely that anyone will spend the huge effort required to create a\nBSD-equivalent spatial library when there is already an LGPL one\navailable. \n\nThis leaves the option of creating a semi-compliant OpenGIS core inside\nPostgreSQL and having an LGPL add-on for the complex spatial operations\n(making a fully compliant implementation).\n\nThe next question is, of course, what does 'semi-compliant' mean? Or,\nmore interesting, why would you want a semi-compliant database? For\nmost people's simple tasks, the built-in geometry types are adequate. \nThose interested in doing more complex tasks will probably want the full\nOpenGIS implementation. \n\nA few people have suggested that we simplify PostGIS, release it as BSD,\nand use that in the core of PostgreSQL. 
The simplified PostGIS would\nhave the basic types, indexing, and a few operations (for those following\nPostGIS development, this is very much like version 0.5 and earlier). \nThe 'full' PostGIS (with JTS) would have the entire OpenGIS spec. \n\nUnfortunately, this is easier said than done. The full implementation\nrequires a properly maintained metadata table (with information about\nevery geometry column in the DB), a spatial referencing system table\n(info about each map projection used), and each geometry must have\nspatial referencing information. The JTS may also require precision\ngrid (offset/scale) information in each geometry. This would make it\nreally difficult (and confusing) to upgrade to the fully compliant\nversion from the partially compliant version - friction I don't want. \n\nSecondly, as Paul has already pointed out, there wouldn't be very many\noperations you could do on these objects. \n\ndave\n\nFor those reading the OpenGIS spec, PostGIS is most accurately described\nas \"SQL92 with Geometry Types Implementation of FeatureTables\".\n",
"msg_date": "Mon, 13 Aug 2001 15:57:10 -0700",
"msg_from": "Dave Blasby <dblasby@refractions.net>",
"msg_from_op": false,
"msg_subject": "Re: PostGIS spatial extensions"
},
{
"msg_contents": "Dave Blasby <dblasby@refractions.net> writes:\n> [snip] Vivid Solutions (cf.\n> http://www.vividsolutions.com/jts/jtshome.htm) will be releasing it\n> under the LGPL.\n> [snip]\n> This leaves the option for creating a semi-compliant OpenGIS core inside\n> PostgreSQL and having a LGPL add-on for the complex spatial operations\n> (making a fully compliant implementation).\n\nUm, the tarfile that Paul sent us contained the GPL license, not LGPL.\nThere's a pretty substantial difference. Please clarify exactly which\nlicense you intend to use.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Aug 2001 19:13:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostGIS spatial extensions "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Dave Blasby <dblasby@refractions.net> writes:\n> > [snip] Vivid Solutions (cf.\n> > http://www.vividsolutions.com/jts/jtshome.htm) will be releasing it\n> > under the LGPL.\n> > [snip]\n> > This leaves the option of creating a semi-compliant OpenGIS core inside\n> > PostgreSQL and having an LGPL add-on for the complex spatial operations\n> > (making a fully compliant implementation).\n> \n> Um, the tarfile that Paul sent us contained the GPL license, not LGPL.\n> There's a pretty substantial difference. Please clarify exactly which\n> license you intend to use.\n\nPostGIS is currently released under the GPL, and is developed by\nRefractions Research.\nJTS (Java Topology Suite) will be released under the LGPL, and is\ndeveloped by Vivid Solutions.\n\nJTS hasn't been released yet, and it will need to be converted to C++\nbefore it can be incorporated into PostGIS.\n\nEven if PostGIS is converted to BSD at some point in the future, it will\nalways have an LGPL\ncomponent if we decide to use the JTS to do the complex spatial\nrelations and operations.\n\nSorry for the confusion,\n\ndave\n",
"msg_date": "Mon, 13 Aug 2001 16:41:16 -0700",
"msg_from": "Dave Blasby <dblasby@refractions.net>",
"msg_from_op": false,
"msg_subject": "Re: PostGIS spatial extensions"
},
{
"msg_contents": "Dave Blasby wrote:\n> \n\n> The next question is, of course, what does 'semi-compliant' mean? Or,\n> more interesting, why would you want a semi-compliant database? For\n> most people's simple tasks, the built-in geometry types are adequate.\n> Those interested in doing more complex tasks will probably want the full\n> OpenGIS implementation.\n\nI would argue that for most people's simple tasks the built-in geometry\ntypes are in fact not adequate. The fact that they choke on large\nobjects and are mostly not indexable (polygons and boxes excepted)\nshould be enough to discourage most people with GIS intentions.\n\nI would tend to say that a semi-compliant database would be good enough\nto hack with, but not good enough to plug-n-play with an existing\nOpenGIS client. It would include the objects, indexing and accessors.\nDump data in, search real fast, dump data out.\n\nA more philosophical question would be whether a semi-compliant database\nis desirable from a public good point of view: semi-compliant\ninfrastructure will encourage non-standard applications, which will in\nturn weaken the raison d'etre of the standard in the first place.\n\n> A few people have suggested that we simplify PostGIS, release it as BSD,\n> and use that in the core of PostgreSQL. The simplified PostGIS would\n> have the basic types, indexing, and a few operations (for those following\n> PostGIS development, this is very much like version 0.5 and earlier).\n> The 'full' PostGIS (with JTS) would have the entire OpenGIS spec.\n> \n> Unfortunately, this is easier said than done. The full implementation\n> requires a properly maintained metadata table (with information about\n> every geometry column in the DB), a spatial referencing system table\n> (info about each map projection used), and each geometry must have\n> spatial referencing information. The JTS may also require precision\n> grid (offset/scale) information in each geometry. 
This would make it\n> really difficult (and confusing) to upgrade to the fully compliant\n> version from the partially compliant version - friction I don't want.\n> \n> Secondly, as Paul has already pointed out, there wouldn't be very many\n> operations you could do on these objects.\n\nYou forgot to finish your thought :) \"Therefore, I do not think we\nshould cleave the distribution into a BSD core and GPL support package.\"\nI am not opposed to that philosophically, but I really do think that\nBruce's suggestion regarding becoming more integrated has merit for\nacceptance by the larger PgSQL community. Being a good neighbor means\nboth receiving and giving.\n\nPerhaps we could back up at this point and revisit 'contrib' ... at what\npoint in the size/licence/redundancy spectrum do we become reasonable\ncandidates for 'contrib', if ever? The current tenor seems to be that at\n600K/GPL/point-line-polygon we are \"too big\"/\"too restrictive and/or too\nfree\"/\"overlapping\". Would moving on any of those axes be sufficient, or\ndo we have to address all three (practically speaking, I do not think there\nis anything to be done about size)?\n",
"msg_date": "Mon, 13 Aug 2001 17:08:54 -0700",
"msg_from": "Paul Ramsey <pramsey@refractions.net>",
"msg_from_op": true,
"msg_subject": "Re: PostGIS spatial extensions"
},
{
"msg_contents": "Paul Ramsey writes:\n\n> Perhaps we could back up at this point and revisit 'contrib' ... at what\n> point in the size/licence/redundancy spectrum do we become reasonable\n> candidates for 'contrib', if ever? The current tenor seems to be that at\n> 600K/GPL/point-line-polygon we are \"too big\"/\"too restrictive and/or too\n> free\"/\"overlapping\". Would moving on any of those axes be sufficient, or\n> do we have to address all three (practically speaking, I do not think there\n> is anything to be done about size)?\n\nHistorically, contrib was the place for small pieces of code that a)\ncould/would/should not go into the core for some reason, b) were\nunreasonable to distribute otherwise (too small, not general enough), and\nc) served as examples of how to use the type/function extension features.\n\nYou satisfy a), you do not satisfy b), and I doubt that c) is still\napplicable.\n\nProjects that are as organized, professional, and value-adding as yours\ncan surely stand on their own. I compare this to the recently released\nOpenFTS. If we start including projects of this size we'd explode in size\nand maintenance overhead.\n\nI don't want to give the impression that I don't like you guys. It's just\nthat we have to realize that there is a *lot* of coding using PostgreSQL\nthese days, and it's unreasonable to include all of this in our\ndistribution, while at the other end people are crying about removing the\ndocumentation from the tarball because it's too big already.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 14 Aug 2001 23:21:28 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Re: PostGIS spatial extensions"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> Projects that are as organized, professional, and value-adding as yours\n> can surely stand on their own. I compare this to the recently released\n> OpenFTS. If we start including projects of this size we'd explode in size\n> and maintenance overhead.\n\nFair enough... perhaps we should then turn to some kind of discussion on\npackaging standards for postgresql extensions? \n\n- One of the things we have run up against is that for most linux\ndistributions, the postgresql-devel package does not include postgres.h\nin the header package. This is not necessary for client-side programs,\nbut it is for server-side extensions. So people cannot compile our\nextension without jettisoning their RPM version of postgresql and moving\nto the tarball.\n- Compile our own RPM you say? Yes and no. We could provide an SRPM, but\nthen we have the same problem: absent a complete postgresql source tree,\nwe cannot compile. And even if we *do* provide our own RPM...\n- Where should extensions be installed by default? The RPM package has\nsome rules, the tarball has some other rules. Should extensions spread\nthemselves out over the postgresql tree (libs under lib, docs under doc,\netc) or should they be self-contained (postgis/lib postgis/doc) under\nsome other location?\n\nIn order to provide a rational RPM source package I ended up having to\nprovide a complete SRPM of postgresql with the postgis stuff bundled in.\nYou must build the whole package in order to get the postgis component.\nThe issue of the extension's dependence on the core is pretty important. \n\n-- \n __\n /\n | Paul Ramsey\n | Refractions Research\n | Email: pramsey@refractions.net\n | Phone: (250) 885-0632\n \\_\n",
"msg_date": "Tue, 14 Aug 2001 17:20:20 -0700",
"msg_from": "Paul Ramsey <pramsey@refractions.net>",
"msg_from_op": true,
"msg_subject": "Re: PostGIS spatial extensions"
},
{
"msg_contents": " Projects that are as organized, professional, and value-adding as yours is\n can surely stand on their own. I compare this to the recently released\n OpenFTS. If we start including projects of this size we'd explode in size\n and maintenance overhead.\n\nDoesn't this discussion indicate that the time is fast approaching, if\nnot already past, for some type of system for handling installation of\n3rd party software?\n\nIt would seem that two prerequisites would need to be satisfied to do\nthis:\n\n- Definition and implementation of the interface to be provided for\n extensions. Presumably, this would involve defining a well-designed\n set of public header files and associated libraries at the right\n level of granularity. For example, encapsulating each type in its\n own header file with a standardized set of operations defined in a\n server-side library would be extremely valuable. The library (or\n libraries) could be used to link the backend and installed for\n extensions to take advantage of preexisting types when others need\n to construct new more complex ones.\n\n- Definition and implementation of a consistent extension management\n system for retrieving, compiling, and installing extensions. This\n could even be used for installing the system itself, thereby making\n the entire operation of managing the software consistent.\n\nI point out that the NetBSD pkgsrc system[1] does the latter in an\nextremely flexible and well-designed manner, and has been a major\nfoundation for the openpackages project. It even includes 7 distinct\npackages[2] for different elements of PostgreSQL, not including a\nnumber of other packages broken out for different interfaces. 
The\nsame system could be adopted for managing 3rd party extensions.\n\nHaving been involved in defining what header files to install, and\nhaving been actively involved in developing new types for use in our\ninstallation, I can say that external packaging of PostgreSQL, local\nextension of PostgreSQL, and management of 3rd party software would be\ngreatly enhanced by an effort to address the two prerequisites\nmentioned above.\n\nCheers,\nBrook\n\n---------------------------------------------------------------------------\n[1] http://www.netbsd.org/Documentation/software/packages.html\n[2] ftp://ftp.netbsd.org/pub/NetBSD/packages/pkgsrc/databases/README.html\n",
"msg_date": "Wed, 15 Aug 2001 09:38:37 -0600 (MDT)",
"msg_from": "Brook Milligan <brook@biology.nmsu.edu>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Re: PostGIS spatial extensions"
},
{
"msg_contents": "Paul Ramsey writes:\n\n> - One of the things we have run up against is that for most linux\n> distributions, the postgresql-devel package does not include postgres.h\n> in the header package. This is not necessary for client-side programs,\n> but it is for server-side extensions. So people cannot compile our\n> extension without jettisoning their RPM version of postgresql and moving\n> to the tarball.\n\nThe 7.1 RPMs should contain the server side headers somewhere. Earlier\nversions only included a not very well defined subset of them.\n\n> - Where should extensions be installed by default? The RPM package has\n> some rules, the tarball has some other rules. Should extensions spread\n> themselves out over the postgresql tree (libs under lib, docs under doc,\n> etc) or should they be self-contained (postgis/lib postgis/doc) under\n> some other location?\n\nThis is a matter taste, or of the file system standard of the system you\nuse. If you use autoconf and thus the GNU layout for your source package\nthen the default is going to end up something like\n\n/usr/local/lib/postgis/postgis.so\n/usr/local/share/postgis/install-postgis.sql\n\nFor binary distributions you'd fiddly with the configure --xxxdir flags a\nlittle.\n\nMaybe you had in mind some sort of standard layout under a standard\ndirectory, such as /usr/lib/postgresql/site-stuff (cf. perl), but this\nsort of a arrangement is a major pain. For instance, it won't allow\nnon-root installs.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 15 Aug 2001 18:09:37 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: PostGIS spatial extensions"
},
{
"msg_contents": "\n\nPeter Eisentraut wrote:\n\n> The 7.1 RPMs should contain the server side headers somewhere. Earlier\n> versions only included a not very well defined subset of them.\n\nIndeed they do (nice!), which brings me to a different question: \n\n1 - I download the tarball\n2 - ./configure ; make ; make install \n3 - Delete the source tree\n\nI now have a complete working pgsql installation, with all the libs to\nrun the server, and all the headers to build custom clients, but *not*\nenough headers to build server extensions, because postgres.h is\nmissing. However, if I have an RPM-based installation, I *will* have\nthe server headers I need. Why do we discriminate against people who\ncompile from the tarball?\n\n> This is a matter taste, or of the file system standard of the system you\n> use. If you use autoconf and thus the GNU layout for your source package\n> then the default is going to end up something like\n> \n> /usr/local/lib/postgis/postgis.so\n> /usr/local/share/postgis/install-postgis.sql\n> \n> For binary distributions you'd fiddly with the configure --xxxdir flags a\n> little.\n> \n> Maybe you had in mind some sort of standard layout under a standard\n> directory, such as /usr/lib/postgresql/site-stuff (cf. perl), but this\n> sort of a arrangement is a major pain. For instance, it won't allow\n> non-root installs.\n\nI am tempted to start moving the postgis release to a completely\nindependant package (not living in contrib by default), with its own\nconfigure script, etc etc, but until the availability of postgres.h is\nresolved that might be ill-advised.\n",
"msg_date": "Wed, 15 Aug 2001 10:13:55 -0700",
"msg_from": "Paul Ramsey <pramsey@refractions.net>",
"msg_from_op": true,
"msg_subject": "Re: PostGIS spatial extensions"
},
{
"msg_contents": "Brook Milligan writes:\n\n> Doesn't this discussion indicate that the time is fast approaching, if\n> not already past, for some type of system for handling installation of\n> 3rd party software?\n\nYes.\n\n> - Definition and implementation of the interface to be provided for\n> extensions. Presumably, this would involve defining a well-designed\n> set of public header files and associated libraries at the right\n> level of granularity.\n\nThis is not going to happen. One of the features of the extensibility is\nthat you can extend pretty much everything.\n\n> - Definition and implementation of a consistent extension management\n> system for retrieving, compiling, and installing extensions.\n\nRetrieving and compiling are solved problems. I don't want us to get\ninvolved in creating build tools.\n\nAs for installing the extensions into the server, there's clearly some\nroom for improvement. I was just thinking that we could have a simple\npackage management system where any number of functions, tables, types,\netc. could belong to a package and then you can remove everything with one\ncommand. I think this is what SQL might mean with modules.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 15 Aug 2001 19:53:09 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Re: PostGIS spatial extensions"
},
{
"msg_contents": "I would take a hard look at R's extension packaging system \n(www.r-project.org). Its the best in the business. It consolidates all \naspects of creating packages, including configuring, building, run-time \nlinking, documentation and testing. It also allows non-root users to \ninstall packages in their own account.\n\nTim\n\nPeter Eisentraut wrote:\n\n>Paul Ramsey writes:\n>\n>>- One of the things we have run up against is that for most linux\n>>distributions, the postgresql-devel package does not include postgres.h\n>>in the header package. This is not necessary for client-side programs,\n>>but it is for server-side extensions. So people cannot compile our\n>>extension without jettisoning their RPM version of postgresql and moving\n>>to the tarball.\n>>\n>\n>The 7.1 RPMs should contain the server side headers somewhere. Earlier\n>versions only included a not very well defined subset of them.\n>\n>>- Where should extensions be installed by default? The RPM package has\n>>some rules, the tarball has some other rules. Should extensions spread\n>>themselves out over the postgresql tree (libs under lib, docs under doc,\n>>etc) or should they be self-contained (postgis/lib postgis/doc) under\n>>some other location?\n>>\n>\n>This is a matter taste, or of the file system standard of the system you\n>use. If you use autoconf and thus the GNU layout for your source package\n>then the default is going to end up something like\n>\n>/usr/local/lib/postgis/postgis.so\n>/usr/local/share/postgis/install-postgis.sql\n>\n>For binary distributions you'd fiddly with the configure --xxxdir flags a\n>little.\n>\n>Maybe you had in mind some sort of standard layout under a standard\n>directory, such as /usr/lib/postgresql/site-stuff (cf. perl), but this\n>sort of a arrangement is a major pain. For instance, it won't allow\n>non-root installs.\n>\n\n-- \nTimothy H. 
Keitt\nDepartment of Ecology and Evolution\nState University of New York at Stony Brook\nStony Brook, New York 11794 USA\nPhone: 631-632-1101, FAX: 631-632-7626\nhttp://life.bio.sunysb.edu/ee/keitt/\n\n\n\n",
"msg_date": "Wed, 15 Aug 2001 17:07:07 -0400",
"msg_from": "\"Timothy H. Keitt\" <tklistaddr@keittlab.bio.sunysb.edu>",
"msg_from_op": false,
"msg_subject": "Re: PostGIS spatial extensions"
},
{
"msg_contents": "Paul Ramsey <pramsey@refractions.net> writes:\n> However, if I have an RPM-based installation, I *will* have\n> the server headers I need. Why do we discriminate against people who\n> compile from the tarball?\n\nWe don't. We do, however, assume that they read the installation\ninstructions:\n\n The standard install installs only the header files needed for client application development. If you plan to do any\n server-side program development (such as custom functions or datatypes written in C), then you may want to install\n the entire PostgreSQL include tree into your target include directory. To do that, enter \n\n gmake install-all-headers\n\n This adds a megabyte or two to the install footprint, and is only useful if you don't plan to keep the whole source tree\n around for reference. (If you do, you can just use the source's include directory when building server-side software.) \n\nIf Peter's notion of installing server-side headers into a separate\nsubdirectory pans out, it might be worth thinking about installing all\nheaders all the time. Right now I'd vote against it, on the grounds\nthat it adds too much include-file clutter for something that very few\npeople need.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Aug 2001 17:46:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostGIS spatial extensions "
}
] |
[
{
"msg_contents": "Curious if anyone has done any work on client side connection pooling\nrecently? I'm thinking pooling multiplexed against transaction commits?\n\nAZ\n\n\n\n",
"msg_date": "Wed, 1 Aug 2001 01:44:17 -0400",
"msg_from": "\"August Zajonc\" <junk-pgsql@aontic.com>",
"msg_from_op": true,
"msg_subject": "Client Side Connection Pooling"
},
{
"msg_contents": "\"August Zajonc\" <junk-pgsql@aontic.com> writes:\n\n> Curious if anyone has done any work on client side connection pooling\n> recently? I'm thinking pooling multiplexed against transaction commits?\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nWhat does this phrase mean exactly?\n\n-Doug\n-- \nFree Dmitry Sklyarov! \nhttp://www.freesklyarov.org/ \n\nWe will return to our regularly scheduled signature shortly.\n",
"msg_date": "07 Aug 2001 13:43:04 -0400",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Client Side Connection Pooling"
},
{
"msg_contents": "Connection pooling can be done two places. Server side or client side,\nthough client side in reality may be a middle-tier layer, not an actual\napplication.\n\nOne possible pooling model is to have a bunch of worker connections opened\nto the pgsql instance. Then as sql statements arrive the they are routed\nthrough an available connection that is open but not doing any work. So 100\ninbound connection may be \"multiplexed\" to 10 outbound connections to the\npgsql instance.\n\nOne issue is if a transaction is started with a BEGIN statement, or if the\nisolation level is serializable or something. During the life time of a\ntransaction it is important not to multiplex otherwise statements appear to\nbe part of a transaction they don't belong to, or commits commit on a\ndifferent connection then a BEGIN was started on. Since pgsql defaults to an\nautocommit model, most normal sql statements can be multiplexed willy-nilly,\nbut formally it more proper to say they are multiplexed on transaction\nboundries (and there just happens to be a transaction commit behind most\nstatements).\n\nOr something like that,\n\nAugust\n\nThis assumes transactions are defined along the connection.\n\n\n\"Doug McNaught\" <doug@wireboard.com> wrote in message\nnews:m31ymnlqaf.fsf@belphigor.mcnaught.org...\n> \"August Zajonc\" <junk-pgsql@aontic.com> writes:\n>\n> > Curious if anyone has done any work on client side connection pooling\n> > recently? 
I'm thinking pooling multiplexed against transaction commits?\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n>\n> What does this phrase mean exactly?\n>\n> -Doug\n> --\n> Free Dmitry Sklyarov!\n> http://www.freesklyarov.org/\n>\n> We will return to our regularly scheduled signature shortly.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n\n",
"msg_date": "Tue, 7 Aug 2001 15:43:15 -0400",
"msg_from": "\"August Zajonc\" <junk-pgsql@aontic.com>",
"msg_from_op": true,
"msg_subject": "Re: Client Side Connection Pooling"
},
{
"msg_contents": "> Curious if anyone has done any work on client side connection pooling\n> recently? I'm thinking pooling multiplexed against transaction\n> commits?\n\nI did some work on an abstracted DB API (supports PostgreSQL, Oracle and\nMySQL natively), with pooling and auto reconnect which I'd be happy to send\nyou / post here.\n\nAlternatively, you can take advantage of libdbi, which I wish I had known\nabout, say 2 months earlier :)\n\nhttp://libdbi.sourceforge.net/docs/\n\nbtw - what on earth does \"multiplexed against transaction commits\" mean? The\ndefinition on dictionary.com suggests you may mean a transaction commit may\nreturn multiple connections to the pool? I really have no idea what you mean\n:)\n\nCheers,\n\n\nMark Pritchard\n\n",
"msg_date": "Wed, 8 Aug 2001 08:50:18 +1000",
"msg_from": "\"Mark Pritchard\" <mark@tangent.net.au>",
"msg_from_op": false,
"msg_subject": "RE: Client Side Connection Pooling"
},
{
"msg_contents": "Most pooling is in essense a form of multiplexing. For transactions this can\nbe a bad thing.\n\n10 connections -> pooler -> 2 worker connections\n\nIncoming connection #1 (I1) issues BEGIN WORK\nI1 statement passed to outgoing #1 -> O1\n\nI2 and I3 statements flow through to -> O2\nI4 statement UPDATE Row_never_to_be_rolled_back_ever goes through -> O1\nI1 issues rollback -> O1\nI4 statement rolledback along with I1 statements...\n\nBy only allowing a switch on commits, you avoid this (multiplexing against\ntransaction commits). There's probably a proper term for this. Send away\ndidn't know libdbi did pooling, otherwise they look like they have a nice\nthing going.\n\nAZ\n\n> -----Original Message-----\n> From: Mark Pritchard [mailto:mark@tangent.net.au]\n> Sent: Tuesday, August 07, 2001 6:50 PM\n> To: August Zajonc; pgsql-hackers@postgresql.org\n> Subject: RE: [HACKERS] Client Side Connection Pooling\n>\n>\n> > Curious if anyone has done any work on client side connection pooling\n> > recently? I'm thinking pooling multiplexed against transaction\n> > commits?\n>\n> I did some work on an abstracted DB API (supports PostgreSQL, Oracle and\n> MySQL natively), with pooling and auto reconnect which I'd be\n> happy to send\n> you / post here.\n>\n> Alternatively, you can take advantage of libdbi, which I wish I had known\n> about, say 2 months earlier :)\n>\n> http://libdbi.sourceforge.net/docs/\n>\n> btw - what on earth does \"multiplexed against transaction\n> commits\" mean? The\n> definition on dictionary.com suggests you may mean a transaction\n> commit may\n> return multiple connections to the pool? I really have no idea\n> what you mean\n> :)\n>\n> Cheers,\n>\n>\n> Mark Pritchard\n>\n>\n\n",
"msg_date": "Tue, 7 Aug 2001 19:05:19 -0400",
"msg_from": "\"August Zajonc\" <augustz@bigfoot.com>",
"msg_from_op": false,
"msg_subject": "RE: Client Side Connection Pooling"
},
{
"msg_contents": "\"August Zajonc\" <junk-pgsql@aontic.com> writes:\n\n> One possible pooling model is to have a bunch of worker connections opened\n> to the pgsql instance. Then as sql statements arrive the they are routed\n> through an available connection that is open but not doing any work. So 100\n> inbound connection may be \"multiplexed\" to 10 outbound connections to the\n> pgsql instance.\n\n[very lucid explanation snipped]\n\nThanks, makes perfect sense. Really, almost any pooling system can be \nlooked at that way, since you have N threads that may need\nconnections, and M connections available. Of course a thread needs to \nhang on to a connection throughout any transactions it creates.\n\n-Doug\n-- \nFree Dmitry Sklyarov! \nhttp://www.freesklyarov.org/ \n\nWe will return to our regularly scheduled signature shortly.\n",
"msg_date": "07 Aug 2001 23:25:15 -0400",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Client Side Connection Pooling"
}
] |
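The policy this thread converges on (many client sessions multiplexed over a few server connections, but a session pinned to one connection from BEGIN until COMMIT/ROLLBACK) can be sketched roughly as follows. This is an illustrative model only: the `Connection` class and session identifiers are stand-ins rather than real libpq connections, and the pool assumes an idle connection is always available instead of queueing.

```python
class Connection:
    """Stand-in for one real server connection (e.g. a libpq socket)."""
    def __init__(self, conn_id):
        self.conn_id = conn_id


class TxAwarePool:
    """Route statements from many sessions over few connections, pinning a
    session to one connection for the life of an explicit transaction, so a
    ROLLBACK can never undo another session's work (the I1/I4 hazard)."""

    def __init__(self, size):
        self.free = [Connection(i) for i in range(size)]
        self.pinned = {}  # session -> Connection held by an open transaction

    def execute(self, session, sql):
        verb = sql.strip().split()[0].upper()
        if session in self.pinned:
            conn = self.pinned[session]        # inside a transaction: reuse pin
        else:
            conn = self.free.pop()             # any idle connection will do
            if verb == "BEGIN":
                self.pinned[session] = conn    # pin until the transaction ends
        # ... the statement would be sent to the server over `conn` here ...
        if verb in ("COMMIT", "ROLLBACK"):     # transaction boundary: unpin
            self.pinned.pop(session, None)
            self.free.append(conn)
        elif session not in self.pinned:
            self.free.append(conn)             # plain autocommit statement
        return conn.conn_id                    # which connection carried it
```

Because a pinned connection is removed from the free list, a statement like I4's UPDATE can never be routed onto the connection carrying I1's open transaction, which is exactly the scenario August's example warns about.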
[
{
"msg_contents": "Sorry for posting this messages into the list.\nIt was intended for Peter E., but it looks like\nhis personal mailbox is over quota... Hopefully,\nhe will scan through the posts in the list once\nhe's back from vacation, and the message won't get \nlost.\n\nSerguei\n\n----- Original Message ----- \nFrom: Serguei Mokhov <sa_mokho@alcor.concordia.ca>\nTo: Peter Eisentraut <peter_e@gmx.net>\nSent: Wednesday, August 01, 2001 1:38 AM\nSubject: Re: [HACKERS] Translators wanted\n\n\n> Hello Peter,\n> \n> There was a little typo in line 73 in the original file libpq.pot:\n> \n> #: fe-connect.c:713\n> #, c-format\n> msgid \"could not socket to non-blocking mode: %s\\n\"\n> \n> missing the word 'set' between 'not' & 'socket'... Despite\n> I'm not a native English speaker/writer, I strongly believe\n> it should be there:\n> \n> msgid \"could not set socket to non-blocking mode: %s\\n\"\n> \n> I corrected the message in my translations; however, I \n> didn't update the sources.\n> \n> The .PO file is sent to the pgsql-patches list.\n> \n> By the time you're back from vacation,\n> I might have some other things translated...\n> \n> Have a good day,\n> Serguei\n> \n> ----- Original Message ----- \n> From: Peter Eisentraut <peter_e@gmx.net>\n> To: <pgsql-general@postgresql.org>\n> Sent: Sunday, July 15, 2001 6:13 PM\n> Subject: [HACKERS] Translators wanted\n> \n> \n> > Those of you who wanted to help translating the messages of PostgreSQL\n> > programs and libraries, you can get started now. I've put up a page\n> > explaining things a bit, with links to pages that explain things a bit\n> > more, at\n> > \n> > http://www.ca.postgresql.org/~petere/nls.html\n> > \n> > Please arrange yourselves with other volunteering speakers of your\n> > language. Results should be sent to the pgsql-patches list.\n> > \n> > You have a few days to ask me questions about this, then I'll be off on\n> > vacation and looking forward to a lot of progress when I get back. ;-)\n\n\n",
"msg_date": "Wed, 1 Aug 2001 02:01:24 -0400",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": true,
"msg_subject": "Fw: Translators wanted"
},
{
"msg_contents": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca> writes:\n>> Hello Peter,\n>> \n>> There was a little typo in line 73 in the original file libpq.pot:\n>> \n>> #: fe-connect.c:713\n>> #, c-format\n>> msgid \"could not socket to non-blocking mode: %s\\n\"\n>> \n>> missing the word 'set' between 'not' & 'socket'... \n\nYes. Peter noticed and fixed that typo in the fe-connect.c original a\nfew weeks ago --- but it looks like he forgot to update de.po to match.\nI've committed the change to CVS. Thanks!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Aug 2001 10:10:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Translators wanted "
},
{
"msg_contents": "Tom Lane writes:\n\n> Yes. Peter noticed and fixed that typo in the fe-connect.c original a\n> few weeks ago --- but it looks like he forgot to update de.po to match.\n> I've committed the change to CVS. Thanks!\n\nPlease, do not feel obligated to propogate every message fix to every po\nfile immediately. We'd never get any work done this way. (And depending\non the character sets used you might fail horribly anyway.) Translation\nteams that want to maintain a complete, high quality translation will do\nmessage catalog merges frequently, and it's only a couple of key strokes\n(in Emacs anway) to correct these sort of things then.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 5 Aug 2001 22:10:38 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Translators wanted "
}
] |
[
{
"msg_contents": "The same applies as to my previous post...\nSorry again.\n\nS.\n\n----- Original Message ----- \nFrom: Serguei Mokhov <sa_mokho@alcor.concordia.ca>\nTo: Peter Eisentraut <peter_e@gmx.net>\nSent: Wednesday, August 01, 2001 1:50 AM\nSubject: Re: [HACKERS] Translators wanted\n\n\n> ----- Original Message ----- \n> From: Peter Eisentraut <peter_e@gmx.net>\n> To: Serguei Mokhov <sa_mokho@alcor.concordia.ca>\n> Cc: <pgsql-general@postgresql.org>\n> Sent: Monday, July 16, 2001 4:03 PM\n> \n> \n> > Serguei Mokhov writes:\n> > \n> > > Are there people working on the translation into the Russian language?\n> > > If yes, then what messages are you working on and what encoding are you using?\n> > > I can start translating the messages, just want to make sure so that we\n> > > don't duplicate the effort.\n> > \n> > Use the koi8-r encoding unless you have strong reasons against it.\n> \n> Well, the KOI8-R is the standard encoding, no objection. However, Win32 apps\n> use Windows-1251, which is pretty common on Win machines (e.g. pgAdmin tool\n> on Windows will have to have messages in this exactly encoding), or console\n> Windows apps by historical reasons (from DOS) use the 866 code page. If people\n> write standard Windows or console client, which rely on the messages will\n> get garbage most likely or will switch back to English ones.\n> \n> I can send the translated messages in the all mentioned encodings, but the\n> problem is how will you place those files in the tree (according to the naming\n> conventions ll[_RR].po one can have only one language per region per component)\n> and plus, backbends probably have no way to know what kind of clients are\n> connected to them and which encoding is more appropriate for the given client\n> in the same language... 
These problems prevent different clients with the same\n> language but different encoding schemes equally well display those messages to the\n> user unless someone is willing (and has ideas how) to find a solution to the problems.\n> \n> Serguei\n\n\n",
"msg_date": "Wed, 1 Aug 2001 02:02:13 -0400",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": true,
"msg_subject": "Fw: Translators wanted"
}
] |
[
{
"msg_contents": "Is the only way to create DB in a C code to connect to Template1 and then exec the SQL string \"CREATE DATABASE databasename\" ?\nCan I create DB without connecting to template1 ?\n\n\n\n\n\n\n\nIs the only way to create DB in a C code to connect \nto Template1 and then exec the SQL string \"CREATE DATABASE databasename\" \n?\nCan I create DB without connecting to template1 \n?",
"msg_date": "Wed, 1 Aug 2001 09:30:37 +0200",
"msg_from": "\"Lorenzo De Vito\" <lorenzodevito@email.it>",
"msg_from_op": true,
"msg_subject": "Creating DB"
}
] |
[
{
"msg_contents": "Dear all :\n\nhelp me please : \nI compile my c++ program to connect Postgres Sql with command line :\nc++ -I /usr/local/pgsql/include -L /usr/local/pgsql/lib -lecpg -lpq -g -o capek.cgi capek.cc\n\nand I ve got eror \n\n\" Segmentation fault (core dumped) \"\n\ncould any body tell me what happen ..\n\nZudi Iswanto\n",
"msg_date": "Wed, 1 Aug 2001 17:12:25 +0700",
"msg_from": "Zudi Iswanto <zudi@dnet.net.id>",
"msg_from_op": true,
"msg_subject": "ECPG eror ..."
},
{
"msg_contents": "On Wed, Aug 01, 2001 at 05:12:25PM +0700, Zudi Iswanto wrote:\n> I compile my c++ program to connect Postgres Sql with command line :\n> c++ -I /usr/local/pgsql/include -L /usr/local/pgsql/lib -lecpg -lpq -g -o capek.cgi capek.cc\n> \n> and I ve got eror \n> \n> \" Segmentation fault (core dumped) \"\n> \n\nWhen do you get this? During compilation? During execution? \n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Wed, 1 Aug 2001 17:28:36 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: ECPG eror ..."
}
] |
[
{
"msg_contents": "sometimes i'm getting:\n\nNOTICE: Child itemid in update-chain marked as unused - can't\ncontinue repair_frag\n\nduring a simple \"vacuum\", db is online.\npg version 7.1, on debian linux kernel 2.4.\nwhat's the problem?\n\nthanks,\nvalter m.\n\n_________________________________________________________________\nGet your FREE download of MSN Explorer at http://explorer.msn.com/intl.asp\n\n",
"msg_date": "Wed, 01 Aug 2001 14:52:00 +0200",
"msg_from": "\"V. M.\" <txian@hotmail.com>",
"msg_from_op": true,
"msg_subject": "NOTICE: Child itemid in update-chain marked as unused..."
},
{
"msg_contents": "\"V. M.\" <txian@hotmail.com> writes:\n> sometimes i'm getting:\n> NOTICE: Child itemid in update-chain marked as unused - can't\n> continue repair_frag\n\n> during a simple \"vacuum\", db is online.\n> pg version 7.1, on debian linux kernel 2.4.\n> what's the problem?\n\nThe source code says:\n\n /*\n * This means that in the middle of chain there\n * was tuple updated by older (than XmaxRecent)\n * xaction and this tuple is already deleted by\n * me. Actually, upper part of chain should be\n * removed and seems that this should be handled\n * in scan_heap(), but it's not implemented at the\n * moment and so we just stop shrinking here.\n */\n\nIn short, an unimplemented special case in VACUUM's logic that tries to\ncompact out free space by moving tuples around.\n\nMost people never see this message though. There must be something\nunusual about the pattern of updates being done on this particular\ntable, that you see it often.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Aug 2001 10:24:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NOTICE: Child itemid in update-chain marked as unused... "
}
] |
[
{
"msg_contents": "\nWhile looking at what needs to be done with some\nof the referential actions to make them work\nbetter under deferred constraints, I noticed something\nwhich I think is a bug.\n\nsszabo=> create table base (a int unique);\nNOTICE: CREATE TABLE/UNIQUE will create implicit index 'base_a_key' for\ntable 'base'\nCREATE\nsszabo=> create table deriv (a int references base(a) on update cascade, b\nint);\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\ncheck(s)\nCREATE\nsszabo=> drop index base_a_key;\nDROP\n/* Note: the reason I drop the unique index is because\nof the brokenness of our unique constraint for the a=a+1\nupdate below, not because I don't want the constraint. */\nsszabo=> insert into base values (2);\nINSERT 783232 1\nsszabo=> insert into base values (3);\nINSERT 783233 1\nsszabo=> insert into deriv values (2,1);\nINSERT 783234 1\nsszabo=> insert into deriv values (3,1);\nINSERT 783235 1\nsszabo=> update base set a=a+1;\nUPDATE 2\nsszabo=> select * from deriv;\n a | b \n---+---\n 4 | 1\n 4 | 1\n(2 rows)\n\nThe output from the select, should I believe be (3,1), (4,1)\nnot (4,1), (4,1). I think we're violating General Rule 4 (I think\nthat's it) on the referential constraint definition (\"For every \nrow of the referenced table, its matching rows, unique matching \nrows, and non-unique matching rows are determined immediately\nbefore the execution of any SQL-statement. No new matching\nrows are added during the execution of that SQL-statement.\")\nbecause when the update cascade gets done for the 2 row, we've\nchanged the (2,1) to (3,1) which then gets hit by the update\ncascade on the 3 row. \n\nI was wondering if you had any thoughts on an easy way around\nit within what we have. :)\n\n",
"msg_date": "Wed, 1 Aug 2001 10:10:54 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": true,
"msg_subject": "Problem with FK referential actions"
},
{
"msg_contents": "Stephan Szabo wrote:\n> The output from the select, should I believe be (3,1), (4,1)\n> not (4,1), (4,1). I think we're violating General Rule 4 (I think\n> that's it) on the referential constraint definition (\"For every\n> row of the referenced table, its matching rows, unique matching\n> rows, and non-unique matching rows are determined immediately\n> before the execution of any SQL-statement. No new matching\n> rows are added during the execution of that SQL-statement.\")\n> because when the update cascade gets done for the 2 row, we've\n> changed the (2,1) to (3,1) which then gets hit by the update\n> cascade on the 3 row.\n>\n> I was wondering if you had any thoughts on an easy way around\n> it within what we have. :)\n\n I think you're right in that it is a bug and where the\n problem is. Now to get around it isn't easy. Especially in\n the deferred constraint area, it is important that the\n triggers see the changes made during all commands. But for\n the cascade to hit the right rows only, the scans (done with\n key qualification) would have to be done with a scan command\n counter equal to the original queries command counter.\n\n The old (more buggy?) behaviour should've been this annoying\n \"triggered data change violation\". But some folks thought\n it'd be a good idea to rip out that bug. See, these are the\n days when you miss the old bugs :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 1 Aug 2001 13:48:22 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with FK referential actions"
},
{
"msg_contents": "On Wed, 1 Aug 2001, Jan Wieck wrote:\n\n> Stephan Szabo wrote:\n> > The output from the select, should I believe be (3,1), (4,1)\n> > not (4,1), (4,1). I think we're violating General Rule 4 (I think\n> > that's it) on the referential constraint definition (\"For every\n> > row of the referenced table, its matching rows, unique matching\n> > rows, and non-unique matching rows are determined immediately\n> > before the execution of any SQL-statement. No new matching\n> > rows are added during the execution of that SQL-statement.\")\n> > because when the update cascade gets done for the 2 row, we've\n> > changed the (2,1) to (3,1) which then gets hit by the update\n> > cascade on the 3 row.\n> >\n> > I was wondering if you had any thoughts on an easy way around\n> > it within what we have. :)\n> \n> I think you're right in that it is a bug and where the\n> problem is. Now to get around it isn't easy. Especially in\n> the deferred constraint area, it is important that the\n> triggers see the changes made during all commands. But for\n> the cascade to hit the right rows only, the scans (done with\n> key qualification) would have to be done with a scan command\n> counter equal to the original queries command counter.\n\nI was afraid you were going to say something like that (basically\ntravelling short periods backwards in time). :( Is this something\nthat already can be done or would it require new support structure?\n\nAlso, I'm unconvinced that referential actions on deferred constraints\nactually defer, or at least the rows they act on don't defer (excepting\nno action of course) given general rule 4, unless the statement\nit's referring to is the commit, but then general rule 5 (for example)\ndoesn't make sense since the pk rows aren't marked for deletion during \nthe commit (unless I'm really missing something).\n\n> The old (more buggy?) behaviour should've been this annoying\n> \"triggered data change violation\". 
But some folks thought\n> it'd be a good idea to rip out that bug. See, these are the\n> days when you miss the old bugs :-)\n\n:) Actually, I think that technically what we're doing actually \nis a triggered data change violation as well, since it's within\none statement. \n\n\n",
"msg_date": "Wed, 1 Aug 2001 12:10:14 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": true,
"msg_subject": "Re: Problem with FK referential actions"
}
] |
[
{
"msg_contents": "I am not sure if people noticed the signature lines, but the Toronto Red\nHat developers have started submitting patches based on TODO items. \nTheir involvement will help PostgreSQL improve even faster. Welcome\naboard folks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Aug 2001 13:29:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Red Hat developers"
}
] |
[
{
"msg_contents": "Given Hiroshi's objections, and the likelihood of compatibility problems\nfor existing applications, I am now thinking that it's not a good idea to\nturn off OID generation by default. (At least not for 7.2 --- maybe in\nsome future release we could change the default.)\n\nBased on the discussion so far, here is an attempt to flesh out the\ndetails of what to do with OIDs for 7.2:\n\n1. Add an optional clause \"WITH OIDS\" or \"WITHOUT OIDS\" to CREATE TABLE.\nThe default behavior will be WITH OIDS.\n\nNote: there was some discussion of a GUC variable to control the default.\nI'm leaning against this, mainly because having one would mean that\npg_dump *must* write WITH OIDS or WITHOUT OIDS in every CREATE TABLE;\nelse it couldn't be sure that the database schema would be correctly\nreconstructed. That would create dump-script portability problems and\nnegate some of the point of having a GUC variable in the first place.\nSo I'm thinking a fixed default is better.\n\nNote: an alternative syntax possibility is to make it look like the \"with\"\noption clauses for functions and indexes: \"WITH (oids)\" or \"WITH (noOids)\".\nThis is uglier today, but would start to look more attractive if we invent\nadditional CREATE TABLE options in the future --- there'd be a place to\nput 'em. Comments?\n\n2. A child table will be forced to have OIDs if any of its parents do,\neven if WITHOUT OIDS is specified in the child's CREATE command. This is\non the theory that the OID ought to act like an inherited column.\n\n3. For a table without OIDs, no entry will be made in pg_attribute for\nthe OID column, so an attempt to reference the OID column will draw a\n\"no such column\" error. (An alternative is to allow OID to read as nulls,\nbut it seemed that people preferred the error to be raised.)\n\n4. When inserting into an OID-less table, the INSERT result string will\nalways show 0 for the OID.\n\n5. 
A \"relhasoids\" boolean column will be added to pg_class to signal\nwhether a table has OIDs or not.\n\n6. COPY out WITH OIDS will ignore the \"WITH OIDS\" specification if the\ntable has no OIDs. (Alternative possibility: raise an error --- is that\nbetter?) COPY in WITH OIDS will silently drop the incoming OID values.\n\n7. Physical tuple headers won't change. If no OIDs are assigned for a\nparticular table, the OID field in the header will be left zero.\n\n8. OID generation will be disabled for those system tables that don't need\nit --- pg_listener, pg_largeobject, and pg_attribute being some major\noffenders that consume lots of OIDs.\n\n9. To continue to support COMMENT ON COLUMN when columns have no OIDs,\npg_description will be modified so that its primary key is (object type,\nobject OID, column number) --- this also solves the problem that comments\nbreak if there are duplicate OIDs in different system tables. The object\ntype is the OID of the system catalog in which the object OID appears.\nThe column number field will be zero for all object types except columns.\nFor a column comment, the object type and OID fields will refer to the\nparent table, and column number will be nonzero.\n\n10. pg_dump will be modified to do the appropriate things with OIDs. Are\nthere any other application programs that need to change?\n\n\nWe had also talked about adding an INSERT ... RETURNING feature to allow\napplications to eliminate their dependence on looking at the OID returned\nby an INSERT command. I think this is a good idea, but there are still\na number of unsolved issues about how it should interact with rules.\nAccordingly, I'm not going to try to include it in this batch of work.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Aug 2001 13:42:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "OID wraparound: summary and proposal"
},
{
"msg_contents": "On Wed, 1 Aug 2001, Tom Lane wrote:\n\n> Based on the discussion so far, here is an attempt to flesh out the\n> details of what to do with OIDs for 7.2:\n> \n> 1. Add an optional clause \"WITH OIDS\" or \"WITHOUT OIDS\" to CREATE TABLE.\n> The default behavior will be WITH OIDS.\n> \n> Note: there was some discussion of a GUC variable to control the default.\n> \n> Note: an alternative syntax possibility is to make it look like the \"with\"\n> option clauses for functions and indexes: \"WITH (oids)\" or \"WITH (noOids)\".\n> This is uglier today, but would start to look more attractive if we invent\n> additional CREATE TABLE options in the future --- there'd be a place to\n> put 'em. Comments?\n\nI think a fixed default and placing it in parentheses are probably good \nideas.\n\n> 3. For a table without OIDs, no entry will be made in pg_attribute for\n> the OID column, so an attempt to reference the OID column will draw a\n> \"no such column\" error. (An alternative is to allow OID to read as nulls,\n> but it seemed that people preferred the error to be raised.)\n\nOkay, at least the foreign key stuff will need to change (since it does a\nselect oid), but I don't think it ever does anything with that except\ncheck for existence, so I could probably make it select 1 as a reasonable\nreplacement.\n\n\n",
"msg_date": "Wed, 1 Aug 2001 12:21:19 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "> Given Hiroshi's objections, and the likelihood of compatibility problems\n> for existing applications, I am now thinking that it's not a good idea to\n> turn off OID generation by default. (At least not for 7.2 --- maybe in\n> some future release we could change the default.)\n\nThis seems good. People with oid concerns usually have 1-2 huge tables\nand the rest are small.\n\n> Based on the discussion so far, here is an attempt to flesh out the\n> details of what to do with OIDs for 7.2:\n> \n> 1. Add an optional clause \"WITH OIDS\" or \"WITHOUT OIDS\" to CREATE TABLE.\n> The default behavior will be WITH OIDS.\n\nMakes sense.\n\n> Note: there was some discussion of a GUC variable to control the default.\n> I'm leaning against this, mainly because having one would mean that\n> pg_dump *must* write WITH OIDS or WITHOUT OIDS in every CREATE TABLE;\n> else it couldn't be sure that the database schema would be correctly\n> reconstructed. That would create dump-script portability problems and\n> negate some of the point of having a GUC variable in the first place.\n> So I'm thinking a fixed default is better.\n\nGood point.\n\n> Note: an alternative syntax possibility is to make it look like the \"with\"\n> option clauses for functions and indexes: \"WITH (oids)\" or \"WITH (noOids)\".\n> This is uglier today, but would start to look more attractive if we invent\n> additional CREATE TABLE options in the future --- there'd be a place to\n> put 'em. Comments?\n\nI don't like the parens. Looks ugly and I am not used to seeing them\nused that way. I can imagine later using WITH NOOIDS, NOBIBBLE, BABBLE.\nMaybe the syntax should be WITH OID, WITH NOOID?\n\n\n> 2. A child table will be forced to have OIDs if any of its parents do,\n> even if WITHOUT OIDS is specified in the child's CREATE command. This is\n> on the theory that the OID ought to act like an inherited column.\n\nGood point.\n\n> 3. 
For a table without OIDs, no entry will be made in pg_attribute for\n> the OID column, so an attempt to reference the OID column will draw a\n> \"no such column\" error. (An alternative is to allow OID to read as nulls,\n> but it seemed that people preferred the error to be raised.)\n\nMakes sense.\n\n> 6. COPY out WITH OIDS will ignore the \"WITH OIDS\" specification if the\n> table has no OIDs. (Alternative possibility: raise an error --- is that\n> better?) COPY in WITH OIDS will silently drop the incoming OID values.\n\nObviously, the case here is that COPY WITH OIDS alone on a non-oid table\nshould throw an error, while pg_dump -o should work on a database with\nmixed oid/non-oid. I think the right thing would be to have pg_dump\ncheck pg_class.relhasoids and issue a proper COPY statement to match the\nexisting table.\n\n> 7. Physical tuple headers won't change. If no OIDs are assigned for a\n> particular table, the OID field in the header will be left zero.\n> \n> 8. OID generation will be disabled for those system tables that don't need\n> it --- pg_listener, pg_largeobject, and pg_attribute being some major\n> offenders that consume lots of OIDs.\n> \n> 9. To continue to support COMMENT ON COLUMN when columns have no OIDs,\n> pg_description will be modified so that its primary key is (object type,\n> object OID, column number) --- this also solves the problem that comments\n> break if there are duplicate OIDs in different system tables. The object\n> type is the OID of the system catalog in which the object OID appears.\n> The column number field will be zero for all object types except columns.\n> For a column comment, the object type and OID fields will refer to the\n> parent table, and column number will be nonzero.\n\nSounds like a hack. I still prefer pg_attribute to have oids. Can we\nhave temp tables have no pg_attribute oids? 
A hack on a hack?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Aug 2001 16:05:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> 6. COPY out WITH OIDS will ignore the \"WITH OIDS\" specification if the\n>> table has no OIDs. (Alternative possibility: raise an error --- is that\n>> better?) COPY in WITH OIDS will silently drop the incoming OID values.\n\n> Obviously, the case here is that COPY WITH OIDS alone on a non-oid table\n> should throw an error, while pg_dump -o should work on a database with\n> mixed oid/non-oid. I think the right thing would be to have pg_dump\n> check pg_class.relhasoids and issue a proper COPY statement to match the\n> existing table.\n\npg_dump clearly will need to do that, so it isn't really going to be the\nissue. The question is what to do when a less-clueful app issues a COPY\nWITH OIDS on an OID-less table. For input, I see no downside to just\nignoring the incoming OIDs. For output, I can see three reasonable\npossibilities:\n\n\tA. Pretend WITH OIDS wasn't mentioned. This might seem to be\n\t\"do the right thing\", but a rather strong objection is that the\n\tapp will not get back the data it was expecting.\n\n\tB. Return NULLs or 0s for the OIDs column.\n\n\tC. Raise an error and refuse to do the copy at all.\n\nC is probably the most conservative answer.\n\n>> 9. To continue to support COMMENT ON COLUMN when columns have no OIDs,\n>> pg_description will be modified so that its primary key is (object type,\n>> object OID, column number) --- this also solves the problem that comments\n>> break if there are duplicate OIDs in different system tables. The object\n>> type is the OID of the system catalog in which the object OID appears.\n>> The column number field will be zero for all object types except columns.\n>> For a column comment, the object type and OID fields will refer to the\n>> parent table, and column number will be nonzero.\n\n> Sounds like a hack.\n\nHow so? pg_description is broken anyway given that we don't enforce OID\nuniqueness across system catalogs. 
Also, in the future we could\nconsider overloading the <column number> column to have meanings for\nother object types. I could imagine using it to attach documentation to\neach of the input arguments of a function, for example.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Aug 2001 16:14:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: OID wraparound: summary and proposal "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> 6. COPY out WITH OIDS will ignore the \"WITH OIDS\" specification if the\n> >> table has no OIDs. (Alternative possibility: raise an error --- is that\n> >> better?) COPY in WITH OIDS will silently drop the incoming OID values.\n> \n> > Obviously, the case here is that COPY WITH OIDS alone on a non-oid table\n> > should throw an error, while pg_dump -o should work on a database with\n> > mixed oid/non-oid. I think the right thing would be to have pg_dump\n> > check pg_class.relhasoids and issue a proper COPY statement to match the\n> > existing table.\n> \n> pg_dump clearly will need to do that, so it isn't really going to be the\n> issue. The question is what to do when a less-clueful app issues a COPY\n> WITH OIDS on an OID-less table. For input, I see no downside to just\n> ignoring the incoming OIDs. For output, I can see three reasonable\n> possibilities:\n> \n> \tA. Pretend WITH OIDS wasn't mentioned. This might seem to be\n> \t\"do the right thing\", but a rather strong objection is that the\n> \tapp will not get back the data it was expecting.\n> \n> \tB. Return NULLs or 0s for the OIDs column.\n> \n> \tC. Raise an error and refuse to do the copy at all.\n> \n> C is probably the most conservative answer.\n\nIf we fail on load, we should fail on dump. Why not fail on COPY WITH\nOIDS on a non-oid table?\n\n\n\n> >> 9. To continue to support COMMENT ON COLUMN when columns have no OIDs,\n> >> pg_description will be modified so that its primary key is (object type,\n> >> object OID, column number) --- this also solves the problem that comments\n> >> break if there are duplicate OIDs in different system tables. 
The object\n> >> type is the OID of the system catalog in which the object OID appears.\n> >> The column number field will be zero for all object types except columns.\n> >> For a column comment, the object type and OID fields will refer to the\n> >> parent table, and column number will be nonzero.\n> \n> > Sounds like a hack.\n> \n> How so? pg_description is broken anyway given that we don't enforce OID\n> uniqueness across system catalogs. Also, in the future we could\n\nWe have a script to detect them and the oid counter is unique. In what\nway do we not enforce it?\n\n> consider overloading the <column number> column to have meanings for\n> other object types. I could imagine using it to attach documentation to\n> each of the input arguments of a function, for example.\n\nInteresting idea.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Aug 2001 17:21:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> For input, I see no downside to just\n>> ignoring the incoming OIDs. For output, I can see three reasonable\n>> possibilities:\n>> \n>> A. Pretend WITH OIDS wasn't mentioned. This might seem to be\n>> \"do the right thing\", but a rather strong objection is that the\n>> app will not get back the data it was expecting.\n>> \n>> B. Return NULLs or 0s for the OIDs column.\n>> \n>> C. Raise an error and refuse to do the copy at all.\n>> \n>> C is probably the most conservative answer.\n\n> If we fail on load, we should fail on dump. Why not fail on COPY WITH\n> OIDS on a non-oid table?\n\nI'm confused --- I was proposing that we *not* fail on load. What's the\npoint of failing on load?\n\n>> How so? pg_description is broken anyway given that we don't enforce OID\n>> uniqueness across system catalogs. Also, in the future we could\n\n> We have a script to detect them and the oid counter it unique. In what\n> way do we not enforce it.\n\nIn a running system, once the OID counter wraps around there's no\nguarantee that you won't have duplicate OIDs in different system\ntables. The only enforcement mechanism we have is the unique indexes,\nand those will only check per-table. However, that's fine --- it's\nas much as we need. For everything except pg_description, that is.\nSince pg_description currently makes an unchecked and uncheckable\nassumption of global uniqueness of OIDs, it's broken.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Aug 2001 18:09:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: OID wraparound: summary and proposal "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Given Hiroshi's objections, and the likelihood of compatibility problems\n> for existing applications, I am now thinking that it's not a good idea to\n> turn off OID generation by default. (At least not for 7.2 --- maybe in\n> some future release we could change the default.)\n> \n\nWould OIDs be globally unique or per table ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 02 Aug 2001 08:28:56 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> \n>> Given Hiroshi's objections, and the likelihood of compatibility problems\n>> for existing applications, I am now thinking that it's not a good idea to\n>> turn off OID generation by default.\n\n> Would OIDs be globally unique or per table ?\n\nSame as now: if you have a unique index on 'em, they're unique within a\ntable; otherwise, no guarantee at all (once the system wraps around).\n\nWe should document this state of affairs better, of course, but I'm not\nproposing to change it. The point here is just to let people suppress\nOIDs for tables that don't need them, and thereby postpone OID wraparound.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Aug 2001 19:32:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: OID wraparound: summary and proposal "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Tom Lane wrote:\n> >>\n> >> Given Hiroshi's objections, and the likelihood of compatibility problems\n> >> for existing applications, I am now thinking that it's not a good idea to\n> >> turn off OID generation by default.\n> \n> > Would OIDs be globally unique or per table ?\n> \n> Same as now: if you have a unique index on 'em, they're unique within a\n> table; otherwise, no guarantee at all (once the system wraps around).\n> \n\nOIDs per table seems more important than others.\n\nStrangely enough, I've seen no objection to optional OIDs\nother than mine. Probably it was my mistake to have formulated\na plan on the flimsy assumption. \n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 02 Aug 2001 09:58:14 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Maybe I'm being horribly stupid here, but....\n\nIf the thinking is that some tables can escape having an OID, thus meaning OIDs\ncan be controlled by table, how hard would it be to have an OID range on a per\ntable basis?\n\nWere each table to have its own notion of an OID, then OID wrap/depletion\nshould be minimal.\n\n\nTom Lane wrote:\n> \n> Given Hiroshi's objections, and the likelihood of compatibility problems\n> for existing applications, I am now thinking that it's not a good idea to\n> turn off OID generation by default. (At least not for 7.2 --- maybe in\n> some future release we could change the default.)\n> \n> Based on the discussion so far, here is an attempt to flesh out the\n> details of what to do with OIDs for 7.2:\n> \n> 1. Add an optional clause \"WITH OIDS\" or \"WITHOUT OIDS\" to CREATE TABLE.\n> The default behavior will be WITH OIDS.\n> \n> Note: there was some discussion of a GUC variable to control the default.\n> I'm leaning against this, mainly because having one would mean that\n> pg_dump *must* write WITH OIDS or WITHOUT OIDS in every CREATE TABLE;\n> else it couldn't be sure that the database schema would be correctly\n> reconstructed. That would create dump-script portability problems and\n> negate some of the point of having a GUC variable in the first place.\n> So I'm thinking a fixed default is better.\n> \n> Note: an alternative syntax possibility is to make it look like the \"with\"\n> option clauses for functions and indexes: \"WITH (oids)\" or \"WITH (noOids)\".\n> This is uglier today, but would start to look more attractive if we invent\n> additional CREATE TABLE options in the future --- there'd be a place to\n> put 'em. Comments?\n> \n> 2. A child table will be forced to have OIDs if any of its parents do,\n> even if WITHOUT OIDS is specified in the child's CREATE command. This is\n> on the theory that the OID ought to act like an inherited column.\n> \n> 3. 
For a table without OIDs, no entry will be made in pg_attribute for\n> the OID column, so an attempt to reference the OID column will draw a\n> \"no such column\" error. (An alternative is to allow OID to read as nulls,\n> but it seemed that people preferred the error to be raised.)\n> \n> 4. When inserting into an OID-less table, the INSERT result string will\n> always show 0 for the OID.\n> \n> 5. A \"relhasoids\" boolean column will be added to pg_class to signal\n> whether a table has OIDs or not.\n> \n> 6. COPY out WITH OIDS will ignore the \"WITH OIDS\" specification if the\n> table has no OIDs. (Alternative possibility: raise an error --- is that\n> better?) COPY in WITH OIDS will silently drop the incoming OID values.\n> \n> 7. Physical tuple headers won't change. If no OIDs are assigned for a\n> particular table, the OID field in the header will be left zero.\n> \n> 8. OID generation will be disabled for those system tables that don't need\n> it --- pg_listener, pg_largeobject, and pg_attribute being some major\n> offenders that consume lots of OIDs.\n> \n> 9. To continue to support COMMENT ON COLUMN when columns have no OIDs,\n> pg_description will be modified so that its primary key is (object type,\n> object OID, column number) --- this also solves the problem that comments\n> break if there are duplicate OIDs in different system tables. The object\n> type is the OID of the system catalog in which the object OID appears.\n> The column number field will be zero for all object types except columns.\n> For a column comment, the object type and OID fields will refer to the\n> parent table, and column number will be nonzero.\n> \n> 10. pg_dump will be modified to do the appropriate things with OIDs. Are\n> there any other application programs that need to change?\n> \n> We had also talked about adding an INSERT ... RETURNING feature to allow\n> applications to eliminate their dependence on looking at the OID returned\n> by an INSERT command. 
I think this is a good idea, but there are still\n> a number of unsolved issues about how it should interact with rules.\n> Accordingly, I'm not going to try to include it in this batch of work.\n> \n> Comments?\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Wed, 01 Aug 2001 21:06:19 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> how hard would it be to have an OID range on a per\n> table basis?\n\nThe existing OID generator is a system-wide counter, and couldn't\nreasonably be expected to do something like that.\n\nThere was some talk of (in essence) eliminating the present OID\ngenerator mechanism and giving each table its own sequence object for\ngenerating per-table OIDs. It's an interesting thought, but I'm\nconcerned about the overhead involved. At the very least we'd need to\nreimplement sequence objects in a lower-overhead fashion (eg, make 'em\nrows in a pg_sequence table rather than free-standing almost-tables).\n\nMight be worth doing someday, but I think it's orthogonal to what I'm\nproposing at present. There'd still be a need to suppress OID\ngeneration on tables that don't need OIDs and might have more than\n4 billion inserts during their lifetime.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Aug 2001 21:16:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: OID wraparound: summary and proposal "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> For input, I see no downside to just\n> >> ignoring the incoming OIDs. For output, I can see three reasonable\n> >> possibilities:\n> >> \n> >> A. Pretend WITH OIDS wasn't mentioned. This might seem to be\n> >> \"do the right thing\", but a rather strong objection is that the\n> >> app will not get back the data it was expecting.\n> >> \n> >> B. Return NULLs or 0s for the OIDs column.\n> >> \n> >> C. Raise an error and refuse to do the copy at all.\n> >> \n> >> C is probably the most conservative answer.\n> \n> > If we fail on load, we should fail on dump. Why not fail on COPY WITH\n> > OIDS on a non-oid table?\n> \n> I'm confused --- I was proposing that we *not* fail on load. What's the\n> point of failing on load?\n\nI meant to say we should fail on dump _and_ load. If we don't, we are\nthrowing away the oids they are loading because the table has no\noid column. Seems like something that should fail.\n\n\n> \n> >> How so? pg_description is broken anyway given that we don't enforce OID\n> >> uniqueness across system catalogs. Also, in the future we could\n> \n> > We have a script to detect them and the oid counter it unique. In what\n> > way do we not enforce it.\n> \n> In a running system, once the OID counter wraps around there's no\n> guarantee that you won't have duplicate OIDs in different system\n> tables. The only enforcement mechanism we have is the unique indexes,\n> and those will only check per-table. However, that's fine --- it's\n> as much as we need. For everything except pg_description, that is.\n> Since pg_description currently makes an unchecked and uncheckable\n> assumption of global uniqueness of OIDs, it's broken.\n\nIf you consider random table creation failures acceptable. In oid\nwraparound, whether pg_description could point to two rows with the same\noid is the smallest part of our problem. 
I think the whole idea we can\nrun reliably with an oid wraparound is questionable.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Aug 2001 21:50:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Tom Lane wrote:\n>\n> Given Hiroshi's objections, and the likelihood of compatibility problems\n> for existing applications, I am now thinking that it's not a good idea to\n> turn off OID generation by default. (At least not for 7.2 --- maybe in\n> some future release we could change the default.)\n>\n> Based on the discussion so far, here is an attempt to flesh out the\n> details of what to do with OIDs for 7.2:\n\nAlso OIDS should be promoted to 8-byte integers at some future time.\n\n>\n> 1. Add an optional clause \"WITH OIDS\" or \"WITHOUT OIDS\" to CREATE TABLE.\n> The default behavior will be WITH OIDS.\n>\n> Note: there was some discussion of a GUC variable to control the default.\n> I'm leaning against this, mainly because having one would mean that\n> pg_dump *must* write WITH OIDS or WITHOUT OIDS in every CREATE TABLE;\n> else it couldn't be sure that the database schema would be correctly\n> reconstructed. That would create dump-script portability problems and\n> negate some of the point of having a GUC variable in the first place.\n> So I'm thinking a fixed default is better.\n>\n> Note: an alternative syntax possibility is to make it look like the \"with\"\n> option clauses for functions and indexes: \"WITH (oids)\" or \"WITH (noOids)\".\n> This is uglier today, but would start to look more attractive if we invent\n> additional CREATE TABLE options in the future --- there'd be a place to\n> put 'em. Comments?\n>\n> 2. A child table will be forced to have OIDs if any of its parents do,\n> even if WITHOUT OIDS is specified in the child's CREATE command. This is\n> on the theory that the OID ought to act like an inherited column.\n>\n> 3. For a table without OIDs, no entry will be made in pg_attribute for\n> the OID column, so an attempt to reference the OID column will draw a\n> \"no such column\" error. (An alternative is to allow OID to read as nulls,\n> but it seemed that people preferred the error to be raised.)\n>\n> 4. 
When inserting into an OID-less table, the INSERT result string will\n> always show 0 for the OID.\n>\n> 5. A \"relhasoids\" boolean column will be added to pg_class to signal\n> whether a table has OIDs or not.\n>\n> 6. COPY out WITH OIDS will ignore the \"WITH OIDS\" specification if the\n> table has no OIDs. (Alternative possibility: raise an error --- is that\n> better?) COPY in WITH OIDS will silently drop the incoming OID values.\n>\n> 7. Physical tuple headers won't change. If no OIDs are assigned for a\n> particular table, the OID field in the header will be left zero.\n>\n> 8. OID generation will be disabled for those system tables that don't need\n> it --- pg_listener, pg_largeobject, and pg_attribute being some major\n> offenders that consume lots of OIDs.\n\n1-8 sounds good\n\n> 9. To continue to support COMMENT ON COLUMN when columns have no OIDs,\n> pg_description will be modified so that its primary key is (object type,\n> object OID, column number) --- this also solves the problem that comments\n> break if there are duplicate OIDs in different system tables. \n\nHm.. To me this sounds like allowing duplicates in an unique index in\ncase\nthere happen to be duplicate keys there ;)\n\nIMHO duplicate OID's in system tables should be treated as bug's - if\nthey \nare there they are meant to break stuff.\n\n> The object\n> type is the OID of the system catalog in which the object OID appears.\n> The column number field will be zero for all object types except columns.\n> For a column comment, the object type and OID fields will refer to the\n> parent table, and column number will be nonzero.\n\nWhat happens to columns added to inherited tables ?\n\n> 10. pg_dump will be modified to do the appropriate things with OIDs. Are\n> there any other application programs that need to change?\n>\n> We had also talked about adding an INSERT ... 
RETURNING feature to allow\n> applications to eliminate their dependence on looking at the OID returned\n> by an INSERT command. I think this is a good idea, but there are still\n> a number of unsolved issues about how it should interact with rules.\n> Accordingly, I'm not going to try to include it in this batch of work.\n>\n",
"msg_date": "Fri, 03 Aug 2001 01:08:54 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <hannu@tm.ee> writes:\n> >> 9. To continue to support COMMENT ON COLUMN when columns have no OIDs,\n> >> pg_description will be modified so that its primary key is (object type,\n> >> object OID, column number) --- this also solves the problem that comments\n> >> break if there are duplicate OIDs in different system tables.\n> \n> > Hm.. To me this sounds like allowing duplicates in an unique index in\n> > case there happen to be duplicate keys there ;)\n> \n> Unless you want to implement a global unique index \n\nOr insert/update trigger\n\n> that can enforce uniqueness of OIDs across all the system tables, I don't \n> think that approach is tenable. \n\nAs I wrote in another mail to this list, AFAIK OID is supposed to be \nObject Identifier - something that can be used to identify any object \nin a unique fashion.\n\nWhen (and if ;) we will implement SQL3's UNDER, we should, IMHO, make a\nprimary \nkey inherited and *unique* over all tables created UNDER the main table,\nmeaning \nthat we will need a way to have uniqueness constraint spanning multiple \ntables. \n\n( At least logically multiple tables, as IMHO UNDER with its single\ninheritance is \n best implemented in a single table with a bit more flexible column\nstructure. )\n\nAt that time we could theoretically inherit all system tables that have\nOID \ncolumn from table \"pg_system(oid oid primary key);\"\n\n> pg_description is broken as it stands. 
Bruce\n> doesn't like the \"column number\" part of my proposal --- I suppose he'd\n> rather see the pg_description key as just <object type, object OID> with\n> object type referring to pg_attribute if it's a comment on column.\n> That would work too as far as fixing the lack of uniqueness goes, but it\n> still leaves us with pg_attribute as a significant consumer of OIDs.\n\nThat would probably be a problem with 4-byte OIDs, there is an ample \nsupply of 8-byte ones\n\nI do like dropping OID from pg_listener, as it is a mostly empty and\nreally \nrapidly changing table, but I see little value in dropping oid from\npg_attribute.\n\nBTW, don't indexes, triggers or saved plans use OIDs from pg_attribute ?\n\n> Since the major point of this exercise (in my mind) is cutting the rate\n> of consumption of OIDs to postpone wraparound, I want to suppress OIDs\n> in pg_attribute, and to do that I have to add the column number to\n> pg_description.\n\nI still think that going to 8-byte OIDs would be the best use of your\ntime ;)\n\nIf you can make the size of oid's a compile time option, then even\nbetter.\n\nPostponing the wraparound by the means you describe may be a fools\nerrand anyway, \nas there are other ways to quickly consume oids that are very likely as\ncommon as \nthose involving pg_listener, pg_largeobject, and pg_attribute.\n\nAlso computers still get faster, and disks still get bigger at the rate\nI doubt \nyou will be able to match by finding ways to postpone the wraparound.\n\nSo here I'd like to contradict Vadim's claim that the time of simple\nsolutions is \nover for PostgreSQL - making OID bigger is at least conceptually simple,\nit's just \n\"a small matter of programming\" ;)\n\n--------------\nHannu\n",
"msg_date": "Fri, 03 Aug 2001 02:19:04 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "mlw wrote:\n> \n> I posted this question earlier, but it looks like it never made it on.\n> \n> If you can control the OIDs on a per table basis, and some tables need not even\n> have any, why not let each table have its own OID range? Essentially, each\n> record will be numbered relative to 0 on its table?\n> \n> That would really cut down the OID wrap around problem, and allow records to\n> have a notion of serialization.\n \nWhat would the meaning of such an \"OID\" be ?\n\nApart from duplicating the primary key that is ?\n\n------------------\nHannu\n",
"msg_date": "Fri, 03 Aug 2001 02:25:51 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Tom Lane wrote:\n>> Based on the discussion so far, here is an attempt to flesh out the\n>> details of what to do with OIDs for 7.2:\n\n> Also OIDS should be promoted to 8-byte integers at some future time.\n\nPerhaps, but I'm trying to focus on what to do for 7.2...\n\n>> 9. To continue to support COMMENT ON COLUMN when columns have no OIDs,\n>> pg_description will be modified so that its primary key is (object type,\n>> object OID, column number) --- this also solves the problem that comments\n>> break if there are duplicate OIDs in different system tables. \n\n> Hm.. To me this sounds like allowing duplicates in an unique index in\n> case there happen to be duplicate keys there ;)\n\nUnless you want to implement a global unique index that can enforce\nuniqueness of OIDs across all the system tables, I don't think that\napproach is tenable. pg_description is broken as it stands. Bruce\ndoesn't like the \"column number\" part of my proposal --- I suppose he'd\nrather see the pg_description key as just <object type, object OID> with\nobject type referring to pg_attribute if it's a comment on column.\nThat would work too as far as fixing the lack of uniqueness goes, but it\nstill leaves us with pg_attribute as a significant consumer of OIDs.\nSince the major point of this exercise (in my mind) is cutting the rate\nof consumption of OIDs to postpone wraparound, I want to suppress OIDs\nin pg_attribute, and to do that I have to add the column number to\npg_description.\n\n>> The column number field will be zero for all object types except columns.\n>> For a column comment, the object type and OID fields will refer to the\n>> parent table, and column number will be nonzero.\n\n> What happens to columns added to inherited tables ?\n\nUh, nothing as far as I can see. 
We don't presently support auto\ninheritance of comments-on-columns, if that's what you were asking for.\nOffhand, making that happen seems about equally easy with either\nrepresentation of pg_description, so I don't think it's an issue.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 19:37:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Re: OID wraparound: summary and proposal "
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> I posted this question earlier, but it looks like it never made it on.\n\nYou did post it, and I answered it: no can do with anything close to the\ncurrent implementation of the OID generator. We have one counter for\nthe whole system, not per-table state.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 19:41:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: OID wraparound: summary and proposal "
},
{
"msg_contents": "\nI posted this question earlier, but it looks like it never made it on.\n\nIf you can control the OIDs on a per table basis, and some tables need not even\nhave any, why not let each table have its own OID range? Essentially, each\nrecord will be numbered relative to 0 on its table?\n\nThat would really cut down the OID wrap around problem, and allow records to\nhave a notion of serialization.\n\nTom Lane wrote:\n> \n> Given Hiroshi's objections, and the likelihood of compatibility problems\n> for existing applications, I am now thinking that it's not a good idea to\n> turn off OID generation by default. (At least not for 7.2 --- maybe in\n> some future release we could change the default.)\n> \n> Based on the discussion so far, here is an attempt to flesh out the\n> details of what to do with OIDs for 7.2:\n> \n> 1. Add an optional clause \"WITH OIDS\" or \"WITHOUT OIDS\" to CREATE TABLE.\n> The default behavior will be WITH OIDS.\n> \n> Note: there was some discussion of a GUC variable to control the default.\n> I'm leaning against this, mainly because having one would mean that\n> pg_dump *must* write WITH OIDS or WITHOUT OIDS in every CREATE TABLE;\n> else it couldn't be sure that the database schema would be correctly\n> reconstructed. That would create dump-script portability problems and\n> negate some of the point of having a GUC variable in the first place.\n> So I'm thinking a fixed default is better.\n> \n> Note: an alternative syntax possibility is to make it look like the \"with\"\n> option clauses for functions and indexes: \"WITH (oids)\" or \"WITH (noOids)\".\n> This is uglier today, but would start to look more attractive if we invent\n> additional CREATE TABLE options in the future --- there'd be a place to\n> put 'em. Comments?\n> \n> 2. A child table will be forced to have OIDs if any of its parents do,\n> even if WITHOUT OIDS is specified in the child's CREATE command. 
This is\n> on the theory that the OID ought to act like an inherited column.\n> \n> 3. For a table without OIDs, no entry will be made in pg_attribute for\n> the OID column, so an attempt to reference the OID column will draw a\n> \"no such column\" error. (An alternative is to allow OID to read as nulls,\n> but it seemed that people preferred the error to be raised.)\n> \n> 4. When inserting into an OID-less table, the INSERT result string will\n> always show 0 for the OID.\n> \n> 5. A \"relhasoids\" boolean column will be added to pg_class to signal\n> whether a table has OIDs or not.\n> \n> 6. COPY out WITH OIDS will ignore the \"WITH OIDS\" specification if the\n> table has no OIDs. (Alternative possibility: raise an error --- is that\n> better?) COPY in WITH OIDS will silently drop the incoming OID values.\n> \n> 7. Physical tuple headers won't change. If no OIDs are assigned for a\n> particular table, the OID field in the header will be left zero.\n> \n> 8. OID generation will be disabled for those system tables that don't need\n> it --- pg_listener, pg_largeobject, and pg_attribute being some major\n> offenders that consume lots of OIDs.\n> \n> 9. To continue to support COMMENT ON COLUMN when columns have no OIDs,\n> pg_description will be modified so that its primary key is (object type,\n> object OID, column number) --- this also solves the problem that comments\n> break if there are duplicate OIDs in different system tables. The object\n> type is the OID of the system catalog in which the object OID appears.\n> The column number field will be zero for all object types except columns.\n> For a column comment, the object type and OID fields will refer to the\n> parent table, and column number will be nonzero.\n> \n> 10. pg_dump will be modified to do the appropriate things with OIDs. Are\n> there any other application programs that need to change?\n> \n> We had also talked about adding an INSERT ... 
RETURNING feature to allow\n> applications to eliminate their dependence on looking at the OID returned\n> by an INSERT command. I think this is a good idea, but there are still\n> a number of unsolved issues about how it should interact with rules.\n> Accordingly, I'm not going to try to include it in this batch of work.\n> \n> Comments?\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Thu, 02 Aug 2001 19:42:39 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> That would probably be a problem with 4-byte OIDs, there is an ample \n> supply of 8-byte ones\n\nSure, but I think we are still a few years away from being able to\nassume that every platform of interest can support 8-byte OIDs (and\nfurthermore, won't see a significant performance degradation --- keep\nin mind that widening Datum to 8 bytes is a change that affects all\ndatatypes not just Oid). There's also the Oids-are-in-the-wire-protocol\nproblem. In short, that's a long-term solution not a near-term one.\n\n> BTW, don't indexes, triggers or saved plans use OIDs from pg_attribute ?\n\nNope. pg_description is the only offender.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 20:40:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Re: OID wraparound: summary and proposal "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > I posted this question earlier, but it looks like it never made it on.\n> \n> You did post it,\nSorry, it never got to me.\n\n> and I answered it: no can do with anything close to the\n> current implementation of the OID generator. We have one counter for\n> the whole system, not per-table state.\n\nThat's a bummer. The concept of a ROWID is really useful, especially for those\nthat come from an Oracle background, or porting Oracle queries.\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Fri, 03 Aug 2001 07:52:14 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Hannu Krosing wrote:\n> \n> mlw wrote:\n> >\n> > I posted this question earlier, but it looks like it never made it on.\n> >\n> > If you can control the OIDs on a per table basis, and some tables need not even\n> > have any, why not let each table have its own OID range? Essentially, each\n> > record will be numbered relative to 0 on its table?\n> >\n> > That would really cut down the OID wrap around problem, and allow records to\n> > have a notion of serialization.\n> \n> What would the meaning of such an \"OID\" be ?\n> \n> Apart from duplicating the primary key that is ?\n\nSome other databases have the notion of a ROWID which uniquely identifies a row\nwithin a table. OID can be used for that, but it means if you use it, you must\nlimit the size of your whole database system. The other alternative is to make\na column called \"rowid\" and a sequence for it and a default of\nnextval('table_rowid'). That means more work for those porting. \n\nMy thinking was that if the \"OID\" issue was being addressed, maybe it could be\nchanged quite a bit. The problem with the current OID is that it severely\nlimits the capacity of the database AND does not carry with it enough\ninformation.\n\nFor instance, as far as I can see, one can not take an OID and make any sort of\ndetermination about what it is. One also needs to know the table and the\ndatabase from which it was retrieved. So an OID is meaningless without the\ncontextual information. Why should it be a system wide limitation when it needs\nto be used in the context of a specific table?\n\nThat way PostgreSQL has a knowable 4B (or 2B signed) record limit per table,\nnot per system. One could create a new virtual OID like thing, called SYSOID,\nor something, which is a 64 bit value, the upper 4 bytes being the OID of the\ntable from the catalog, and the lower 4 bytes being the OID of the record.\n\nThe SYSOID would really tell you something! 
Given a SYSOID you could find the\ndatabase, the table, and the record.\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Fri, 03 Aug 2001 08:12:45 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Tom Lane wrote:\n> \n[Snipped]\n\nI think making \"WITHOUT OIDS\" the default for table creation is the right thing\nto do. Here is my reasoning:\n\nAn OID is a system wide limitation. 4B or 2B depending on sign-ness. (could\nthere be some bugs still lurking on high OIDs?) Since the OID is a shared\nsystem wide limited resource, it should be limited to \"system\" tables which\nrequire it.\n\nTo new comers to PostgreSQL this limitation will not be obvious until they hit\nit. Then they will kick themselves for not reading more carefully.\n\nAn OID does not add any real value to the database developer. Given an OID, one\ncan not determine anything about the record it represents. One also needs the\ntable and database from which it came, and even then one has to create an index\non the OID column on the table to get to the record efficiently. It can only\nindicate the order in which records were entered.\n\nIf people need something like OID for their tables, the documented \"preferred\nway\" could be:\ncreate sequence fubar;\ncreate table fubar\n(\n\trowid\tinteger\tdefault nextval('fubar_seq'),\n\t...\n);\n\nThen explain that they can use \"WITH OID\" but there is a system wide limit.\n\n\nOn a side note: I know it is probably a lot of work, and it has been shot down\nonce, but the notion of a rowid built into a table would be useful. It would\nsolve wrap around and keep the useful functionality of OID, and be more\nefficient and robust than using the sequence.\n",
"msg_date": "Fri, 03 Aug 2001 08:47:37 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "The analog of ROWID in PostgreSQL is TID rather than OID\nbecause TID is a physical address of a tuple within a table.\nHowever there's a significant difference. Unfortunately TID\nis transient. It is changed by UPDATE and VACUUM.\nThough TIDs are unavailable for critical use, OIDs could\ncompensate the drawback. TIDs and OIDs must help each\nother if PostgreSQL needs the concept like ROWID.\n\nregards,\nHiroshi Inoue\n\n> -----Original Message-----\n> From: mlw\n>\n> Hannu Krosing wrote:\n> >\n> > mlw wrote:\n> > >\n> > > I posted this question earlier, but it looks like it never made it on.\n> > >\n> > > If you can control the OIDs on a per table basis, and some\n> tables need not even\n> > > have any, why not let each table have its own OID range?\n> Essentially, each\n> > > record will be numbered relative to 0 on its table?\n> > >\n> > > That would really cut down the OID wrap around problem, and\n> allow records to\n> > > have a notion of serialization.\n> >\n> > What would the meaning of such an \"OID\" be ?\n> >\n> > Apart from duplicating the primary key that is ?\n>\n> Some other databases have the notion of a ROWID which uniquely\n> identifies a row\n> within a table. OID can be used for that, but it means if you use\n> it, you must\n> limit the size of your whole database system. The other\n> alternative is to make\n> a column called \"rowid\" and a sequence for it and a default of\n> nextval('table_rowid'). That means more work for those porting.\n>\n\n",
"msg_date": "Sun, 5 Aug 2001 23:03:07 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> The analog of ROWID in PostgreSQL is TID rather than OID\n> because TID is a physical address of a tuple within a table.\n> However there's a significant difference. Unfortunately TID\n> is transient. It is changed by UPDATE and VACUUM.\n> Though TIDs are unavailable for critical use, OIDs could\n> compensate the drawback. TIDs and OIDs must help each\n> other if PostgreSQL needs the concept like ROWID.\n\nThat is true now, but I am saying that it should not be true. Rather than have\na single limited global resource, the current OID, if possible, tables should\nget their own notion of an OID, like a ROWID.\n\nThe ability to eliminated OID from tables is a great step, but, if one needs a\nOID behavior on tables, then one has a limit of 2B-4B rows in an entire\ndatabase system for which all tables compete.\n\nYou have even said you need the notion of an OID for some ODBC cursor stuff you\nare doing. Thus eliminating OIDs is not an option for you. \n\nThe options are:\nNo OID on a table. This breaks any code that assumes an OID must always exist.\nUse OIDs on a table. This limits the size of the database, I have already had\nto drop and reload a database once because of OID depletion (3 months).\n\nIf OIDs can become the equivalent of a ROWID, then code designed that assumes\nOID are always valid will still work, and Postgres will not run out of OIDs in\nsystem wide sense.\n\nI know I won't be doing the work to make the changes, so I am sensitive to that\nissue, but as a PostgreSQL user, I can say that I have hit the OID limit once\nalready and will continue to hit it periodically. Getting rid of OIDs may not\nbe an option for me because I planning to do some replication across several\nboxes, and that means I would use OID or use a sequence and \"default\nnextval(...).\" \n\n\n\n\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Sun, 05 Aug 2001 10:59:48 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "mlw wrote:\n> \n> Hiroshi Inoue wrote:\n> >\n> > The analog of ROWID in PostgreSQL is TID rather than OID\n> > because TID is a physical address of a tuple within a table.\n> > However there's a significant difference. Unfortunately TID\n> > is transient. It is changed by UPDATE and VACUUM.\n> > Though TIDs are unavailable for critical use, OIDs could\n> > compensate the drawback. TIDs and OIDs must help each\n> > other if PostgreSQL needs the concept like ROWID.\n> \n> That is true now, but I am saying that it should not be true. Rather than have\n> a single limited global resource, the current OID, if possible, tables should\n> get their own notion of an OID, like a ROWID.\n> \n\nI've objected optional OID but never objected OIDs per table.\nOIDs per table is more important than others IMHO.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Mon, 06 Aug 2001 08:33:13 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "> Based on the discussion so far, here is an attempt to flesh out the\n> details of what to do with OIDs for 7.2:\n> \n> 1. Add an optional clause \"WITH OIDS\" or \"WITHOUT OIDS\" to CREATE TABLE.\n> The default behavior will be WITH OIDS.\n\nWhat about having an additional Oid generator which solely serves for\nsupplying user tables' per row Oids? It seems relatively easy to\nimplement, comparing with 64-bit Oids or Oid-less tables. I assume\nthat the Oid wraparound problem is not so serious with user tables.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 07 Aug 2001 17:24:12 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "> > Based on the discussion so far, here is an attempt to flesh out the\n> > details of what to do with OIDs for 7.2:\n> > \n> > 1. Add an optional clause \"WITH OIDS\" or \"WITHOUT OIDS\" to CREATE TABLE.\n> > The default behavior will be WITH OIDS.\n> \n> What about having an additional Oid generator which solely serves for\n> supplying user tables' per row Oids? It seems relatively easy to\n> implement, comparing with 64-bit Oids or Oid-less tables. I assume\n> that the Oid wraparound problem is not so serious with user tables.\n\nThis is a very interesting idea. Have two oid counters, one for system\ntables and another for user tables. It isolates problems with oid\nwraparound caused by large user tables.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 7 Aug 2001 11:28:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> This is a very interesting idea. Have two oid counters, one for system\n> tables and another for user tables. It isolates problems with oid\n> wraparound caused by large user tables.\n\nWell, it'd keep user-space wraparound from affecting the system tables,\nbut given that the system tables have adequate defenses already (ie,\nunique indexes) I'm not sure that there's any point. It'd not improve\nthe picture for user-table OID uniqueness by any measurable degree.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Aug 2001 12:15:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: OID wraparound: summary and proposal "
},
{
"msg_contents": "> Well, it'd keep user-space wraparound from affecting the system tables,\n> but given that the system tables have adequate defenses already (ie,\n> unique indexes) I'm not sure that there's any point. It'd not improve\n> the picture for user-table OID uniqueness by any measurable degree.\n\nBut from the point of users' view, it does not prevent \"create XXX\ncomand fails due to Oid wraparounding\" problems, no?\n\nAlso I am worried about the performance of the per table Oid\ngenerators. Even the system tables going to have that kind of\ngenerators? What would happend if there are 5k tables in a database?\nIt's not very rare situation in a large installation.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 08 Aug 2001 10:22:39 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal "
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Also I am worried about the performance of the per table Oid\n> generators.\n\nI think performance would be a big problem if we tried to implement them\njust like sequences are done now. But a shared hashtable of Oid\ngenerators (each one handled roughly like the single Oid generator\ncurrently is) would probably work okay. We'd have to work out how to\nhave a backing disk table for this hashtable, since we couldn't expect\nto have room in shared memory for all generators at all times --- but we\ncould cache all the active generators in shared memory, I'd think.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Aug 2001 12:39:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: OID wraparound: summary and proposal "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > Also I am worried about the performance of the per table Oid\n> > generators.\n>\n> I think performance would be a big problem if we tried to implement them\n> just like sequences are done now. But a shared hashtable of Oid\n> generators (each one handled roughly like the single Oid generator\n> currently is) would probably work okay. We'd have to work out how to\n> have a backing disk table for this hashtable, since we couldn't expect\n> to have room in shared memory for all generators at all times --- but we\n> could cache all the active generators in shared memory, I'd think.\n\nMaybe I'm confused, is there no shared memory for each unique table in use?\nIf so, couldn't the Oid generator be stored there? if not, how does that\nwork?\n\nSecond, IMHO I think you are a bit too conservative with shared memory. If\none has so many active tables that their Oid generators wouldn't fit in\nshared memory, this would indicate a fairly large database, I think one\ncould be justified in requiring more resources than the minimum. PostgreSQL\nis already increasing in resource requirements. The introduction of WAL\nadded a lot of disk space for operation. A few K of shared RAM doesn't seem\nlike a lot. (Maybe I am jaded as I have bumped my shared memory to 128M)\n\nLastly, were PostgreSQL to have multiple Oid generators, each of these\ncould have its own spinlock or mutex, thus reducing competition. In an\nactive system with activity on multiple tables, this could improve\nperformance.\n\n",
"msg_date": "Wed, 08 Aug 2001 13:19:02 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Tom Lane wrote:\n> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > Also I am worried about the performance of the per table Oid\n> > generators.\n>\n> I think performance would be a big problem if we tried to implement them\n> just like sequences are done now. But a shared hashtable of Oid\n> generators (each one handled roughly like the single Oid generator\n> currently is) would probably work okay. We'd have to work out how to\n> have a backing disk table for this hashtable, since we couldn't expect\n> to have room in shared memory for all generators at all times --- but we\n> could cache all the active generators in shared memory, I'd think.\n\n Keep also in mind that actually the uniqueness of Oid's\n across all tables is used by TOAST to determine that a\n toasted value found in the new tuple is the same than in the\n old one on heap_update() or not. If we go for a separate Oid\n per table, an UPDATE with a subselect from another table\n could get misinterpreted in the toaster, not duplicating the\n value but referencing the external value in another tables\n toast-shadow table.\n\n It's no big deal, some additional checks of the va_toastrelid\n beeing the same as the target relations toast relation should\n do it.\n\n Now since toast needs the row Oid allways, I think the idea\n of making Oid's in user tables optional is dead.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 8 Aug 2001 14:00:45 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Jan Wieck <JanWieck@yahoo.com> writes:\n> Keep also in mind that actually the uniqueness of Oid's\n> across all tables is used by TOAST to determine that a\n> toasted value found in the new tuple is the same than in the\n> old one on heap_update() or not.\n> It's no big deal, some additional checks of the va_toastrelid\n> beeing the same as the target relations toast relation should\n> do it.\n\nGood point.\n\n> Now since toast needs the row Oid allways, I think the idea\n> of making Oid's in user tables optional is dead.\n\nWhy? I see where it's looking at the main-row OID and attno to decide\nif it's the same value or not, but this seems strange and wrong. Why\ndoesn't it just compare va_toastrelid and va_valueid? Considering that\nthe main point of this comparison is to distinguish values associated\nwith different versions of the same row, neither main row OID nor\nattribute number seem helpful. I don't see why we expend space on\nstoring va_rowid + va_attno at all.\n\nBTW, I've already completed implementing optional OIDs, so I'm not\ngoing to give up the idea lightly at this point ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Aug 2001 17:27:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: OID wraparound: summary and proposal "
}
] |
[
{
"msg_contents": "Is it possible to access tables in one database from another database if\nthey're in the same cluster? I don't seem to be able to do it; is there\nsomething I have to do or is it impossible?\n\nI.e.\nIf I have two databases accessible from the same postmaster; one called\ndb_one and the other called db_two.\n\n\n%psql -U postgres -p 5555 db_one\ndb_one=# select * from db_two.mytable;\n...\n\nor, from the other perspective;\n\n%psql -U postgres -p 5555 db_two\ndb_two=# select * from db_one.myothertable;\n...\n\nThanks,\ndave\n",
"msg_date": "Wed, 01 Aug 2001 10:55:11 -0700",
"msg_from": "Dave Blasby <dblasby@refractions.net>",
"msg_from_op": true,
"msg_subject": "Accessing different databases in a cluster"
},
{
"msg_contents": "On Wed, 1 Aug 2001, Dave Blasby wrote:\n\n> Is it possible to access tables in one database from another database if\n> they're in the same cluster? I dont seem to be able to do it; is there\n> something I have to do or is it impossible?\n\nNo, AFAIK, this isn't currently possible.\n\n",
"msg_date": "Wed, 1 Aug 2001 12:22:07 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Accessing different databases in a cluster"
},
{
"msg_contents": "At 12:22 PM 8/1/01 -0700, Stephan Szabo wrote:\n>On Wed, 1 Aug 2001, Dave Blasby wrote:\n>\n> > Is it possible to access tables in one database from another database if\n> > they're in the same cluster? I dont seem to be able to do it; is there\n> > something I have to do or is it impossible?\n>\n>No, AFAIK, this isn't currently possible.\nReally, you cannot do a \"select from\" one and \"insert into\" another?\n\nYou could probably rig it up through pipes, like a pg_dump piped to a \npg_restore.\n\n\n\n--\nNaomi Walker\nChief Information Officer\nEldorado Computing, Inc.\n602-604-3100 ext 242 \n\n",
"msg_date": "Wed, 01 Aug 2001 13:12:14 -0700",
"msg_from": "Naomi Walker <nwalker@eldocomp.com>",
"msg_from_op": false,
"msg_subject": "Re: Accessing different databases in a cluster"
},
{
"msg_contents": "On Wed, 1 Aug 2001, Naomi Walker wrote:\n\n> At 12:22 PM 8/1/01 -0700, Stephan Szabo wrote:\n> >On Wed, 1 Aug 2001, Dave Blasby wrote:\n> >\n> > > Is it possible to access tables in one database from another database if\n> > > they're in the same cluster? I dont seem to be able to do it; is there\n> > > something I have to do or is it impossible?\n> >\n> >No, AFAIK, this isn't currently possible.\n>\n> Really, you cannot do a \"select from\" one and \"insert into\" another?\n> \n> You could probably rig it up through pipes, like a pg_dump piped to a \n> pg_restore.\n\nTrue, but you can't do it entirely within postgres without writing your\nown functions. You could do some kind of replication, but then you'd\nreally only be accessing tables inside one database, just ones that were\nreplicated from another.\n\n",
"msg_date": "Wed, 1 Aug 2001 13:50:57 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Accessing different databases in a cluster"
}
] |
[
{
"msg_contents": "I have a function (plpgsql) and would like it to have access to the name\nof the current database. Unfortunately, I don't know how to ask the\nquestion.\n\nI've looked in the documentation, and I can get a list of possible\ndatabases from pg_database, but I don't know which one I'm currently in.\n\ndave\n",
"msg_date": "Wed, 01 Aug 2001 10:58:40 -0700",
"msg_from": "Dave Blasby <dblasby@refractions.net>",
"msg_from_op": true,
"msg_subject": "How to find the database name during run-time"
}
] |
[
{
"msg_contents": "Attached please find a patch to the input parser that yields better\nsyntax error reporting on parse errors. For example:\n\ntest=# SELECT * FRUM bob;\nERROR: parser: parse error at or near \"frum\"\n\nbecomes:\n\ntest=# SELECT * FRUM bob;\nERROR: parser: parse error at or near 'frum':\nSELECT * FRUM bob;\n ^\n\nI've also modified the regression tests accordingly.\n\nI haven't made the corresponding changes to the ecpg grammar -- I'm not\nsure whether changes like this are desirable there. Feedback welcome.\n\nComments?\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n\nIndex: src/backend/parser/scan.l\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/scan.l,v\nretrieving revision 1.88\ndiff -c -p -r1.88 scan.l\n*** src/backend/parser/scan.l\t2001/03/22 17:41:47\t1.88\n--- src/backend/parser/scan.l\t2001/08/01 17:43:53\n***************\n*** 37,42 ****\n--- 37,43 ----\n \n extern char *parseString;\n static char *parseCh;\n+ static int parseOffset;\n \n /* some versions of lex define this as a macro */\n #if defined(yywrap)\n*************** other\t\t\t.\n*** 254,262 ****\n */\n \n %%\n! {whitespace}\t{ /* ignore */ }\n \n! {xcstart}\t\t{\n \t\t\t\t\txcdepth = 0;\n \t\t\t\t\tBEGIN(xc);\n \t\t\t\t\t/* Put back any characters past slash-star; see above */\n--- 255,266 ----\n */\n \n %%\n! {whitespace}\t{ parseOffset += yyleng;\n! /* ignore */\n! }\n \n! {xcstart}\t\t{ \n! 
parseOffset += 2;\n \t\t\t\t\txcdepth = 0;\n \t\t\t\t\tBEGIN(xc);\n \t\t\t\t\t/* Put back any characters past slash-star; see above */\n*************** other\t\t\t.\n*** 264,293 ****\n \t\t\t\t}\n \n <xc>{xcstart}\t{\n \t\t\t\t\txcdepth++;\n \t\t\t\t\t/* Put back any characters past slash-star; see above */\n \t\t\t\t\tyyless(2);\n \t\t\t\t}\n \n <xc>{xcstop}\t{\n \t\t\t\t\tif (xcdepth <= 0)\n \t\t\t\t\t\tBEGIN(INITIAL);\n \t\t\t\t\telse\n \t\t\t\t\t\txcdepth--;\n \t\t\t\t}\n \n! <xc>{xcinside}\t{ /* ignore */ }\n \n! <xc>{op_chars}\t{ /* ignore */ }\n \n <xc><<EOF>>\t\t{ elog(ERROR, \"Unterminated /* comment\"); }\n \n! {xbitstart}\t\t{\n \t\t\t\t\tBEGIN(xbit);\n \t\t\t\t\tstartlit();\n \t\t\t\t\taddlit(\"b\", 1);\n \t\t\t\t}\n <xbit>{xbitstop}\t{\n \t\t\t\t\tBEGIN(INITIAL);\n \t\t\t\t\tif (literalbuf[strspn(literalbuf + 1, \"01\") + 1] != '\\0')\n \t\t\t\t\t\telog(ERROR, \"invalid bit string input: '%s'\",\n--- 268,303 ----\n \t\t\t\t}\n \n <xc>{xcstart}\t{\n+ parseOffset += 2;\n \t\t\t\t\txcdepth++;\n \t\t\t\t\t/* Put back any characters past slash-star; see above */\n \t\t\t\t\tyyless(2);\n \t\t\t\t}\n \n <xc>{xcstop}\t{\n+ parseOffset += yyleng;\n \t\t\t\t\tif (xcdepth <= 0)\n \t\t\t\t\t\tBEGIN(INITIAL);\n \t\t\t\t\telse\n \t\t\t\t\t\txcdepth--;\n \t\t\t\t}\n \n! <xc>{xcinside}\t{ parseOffset += yyleng;\n! /* ignore */ }\n \n! <xc>{op_chars}\t{ parseOffset += yyleng;\n! /* ignore */ }\n \n <xc><<EOF>>\t\t{ elog(ERROR, \"Unterminated /* comment\"); }\n \n! {xbitstart}\t\t{ \n! 
parseOffset += yyleng;\n \t\t\t\t\tBEGIN(xbit);\n \t\t\t\t\tstartlit();\n \t\t\t\t\taddlit(\"b\", 1);\n \t\t\t\t}\n <xbit>{xbitstop}\t{\n+ parseOffset += yyleng;\n \t\t\t\t\tBEGIN(INITIAL);\n \t\t\t\t\tif (literalbuf[strspn(literalbuf + 1, \"01\") + 1] != '\\0')\n \t\t\t\t\t\telog(ERROR, \"invalid bit string input: '%s'\",\n*************** other\t\t\t.\n*** 297,311 ****\n \t\t\t\t}\n <xh>{xhinside}\t|\n <xbit>{xbitinside}\t{\n \t\t\t\t\taddlit(yytext, yyleng);\n \t\t\t\t}\n <xh>{xhcat}\t\t|\n! <xbit>{xbitcat}\t\t{\n \t\t\t\t\t/* ignore */\n \t\t\t\t}\n <xbit><<EOF>>\t\t{ elog(ERROR, \"unterminated bit string literal\"); }\n \n {xhstart}\t\t{\n \t\t\t\t\tBEGIN(xh);\n \t\t\t\t\tstartlit();\n \t\t\t\t}\n--- 307,324 ----\n \t\t\t\t}\n <xh>{xhinside}\t|\n <xbit>{xbitinside}\t{\n+ parseOffset += yyleng;\n \t\t\t\t\taddlit(yytext, yyleng);\n \t\t\t\t}\n <xh>{xhcat}\t\t|\n! <xbit>{xbitcat}\t\t{ \n! parseOffset += yyleng;\n \t\t\t\t\t/* ignore */\n \t\t\t\t}\n <xbit><<EOF>>\t\t{ elog(ERROR, \"unterminated bit string literal\"); }\n \n {xhstart}\t\t{\n+ parseOffset += yyleng;\n \t\t\t\t\tBEGIN(xh);\n \t\t\t\t\tstartlit();\n \t\t\t\t}\n*************** other\t\t\t.\n*** 313,318 ****\n--- 326,332 ----\n \t\t\t\t\tlong val;\n \t\t\t\t\tchar* endptr;\n \n+ parseOffset += yyleng;\n \t\t\t\t\tBEGIN(INITIAL);\n \t\t\t\t\terrno = 0;\n \t\t\t\t\tval = strtol(literalbuf, &endptr, 16);\n*************** other\t\t\t.\n*** 330,339 ****\n--- 344,355 ----\n <xh><<EOF>>\t\t{ elog(ERROR, \"Unterminated hexadecimal integer\"); }\n \n {xqstart}\t\t{\n+ parseOffset += yyleng;\n \t\t\t\t\tBEGIN(xq);\n \t\t\t\t\tstartlit();\n \t\t\t\t}\n <xq>{xqstop}\t{\n+ parseOffset += yyleng;\n \t\t\t\t\tBEGIN(INITIAL);\n \t\t\t\t\tyylval.str = scanstr(literalbuf);\n \t\t\t\t\treturn SCONST;\n*************** other\t\t\t.\n*** 341,359 ****\n--- 357,379 ----\n <xq>{xqdouble}\t|\n <xq>{xqinside}\t|\n <xq>{xqliteral} {\n+ parseOffset += yyleng;\n \t\t\t\t\taddlit(yytext, yyleng);\n \t\t\t\t}\n <xq>{xqcat}\t\t{\n+ 
parseOffset += yyleng;\n \t\t\t\t\t/* ignore */\n \t\t\t\t}\n <xq><<EOF>>\t\t{ elog(ERROR, \"Unterminated quoted string\"); }\n \n \n {xdstart}\t\t{\n+ parseOffset += yyleng;\n \t\t\t\t\tBEGIN(xd);\n \t\t\t\t\tstartlit();\n \t\t\t\t}\n <xd>{xdstop}\t{\n+ parseOffset += yyleng;\n \t\t\t\t\tBEGIN(INITIAL);\n \t\t\t\t\tif (strlen(literalbuf) == 0)\n \t\t\t\t\t\telog(ERROR, \"zero-length delimited identifier\");\n*************** other\t\t\t.\n*** 375,391 ****\n \t\t\t\t\treturn IDENT;\n \t\t\t\t}\n <xd>{xddouble} {\n \t\t\t\t\taddlit(yytext, yyleng-1);\n \t\t\t\t}\n! <xd>{xdinside}\t{\n \t\t\t\t\taddlit(yytext, yyleng);\n \t\t\t\t}\n <xd><<EOF>>\t\t{ elog(ERROR, \"Unterminated quoted identifier\"); }\n \n! {typecast}\t\t{ return TYPECAST; }\n \n- {self}\t\t\t{ return yytext[0]; }\n- \n {operator}\t\t{\n \t\t\t\t\t/*\n \t\t\t\t\t * Check for embedded slash-star or dash-dash; those\n--- 395,417 ----\n \t\t\t\t\treturn IDENT;\n \t\t\t\t}\n <xd>{xddouble} {\n+ parseOffset += yyleng;\n \t\t\t\t\taddlit(yytext, yyleng-1);\n \t\t\t\t}\n! <xd>{xdinside}\t{ \n! parseOffset += yyleng;\n \t\t\t\t\taddlit(yytext, yyleng);\n \t\t\t\t}\n <xd><<EOF>>\t\t{ elog(ERROR, \"Unterminated quoted identifier\"); }\n \n! {typecast}\t\t{ \n! parseOffset += yyleng;\n! return TYPECAST; }\n! \n! {self}\t\t\t{ \n! parseOffset += yyleng;\n! return yytext[0]; }\n \n {operator}\t\t{\n \t\t\t\t\t/*\n \t\t\t\t\t * Check for embedded slash-star or dash-dash; those\n*************** other\t\t\t.\n*** 396,401 ****\n--- 422,429 ----\n \t\t\t\t\tint\t\tnchars = yyleng;\n \t\t\t\t\tchar *slashstar = strstr((char*)yytext, \"/*\");\n \t\t\t\t\tchar *dashdash = strstr((char*)yytext, \"--\");\n+ \t\t\t\t\t \n+ parseOffset += yyleng;\n \n \t\t\t\t\tif (slashstar && dashdash)\n \t\t\t\t\t{\n*************** other\t\t\t.\n*** 455,461 ****\n \t\t\t\t\treturn Op;\n \t\t\t\t}\n \n! 
{param}\t\t\t{\n \t\t\t\t\tyylval.ival = atol((char*)&yytext[1]);\n \t\t\t\t\treturn PARAM;\n \t\t\t\t}\n--- 483,490 ----\n \t\t\t\t\treturn Op;\n \t\t\t\t}\n \n! {param}\t\t\n{ \n! parseOffset += yyleng;\n \t\t\t\t\tyylval.ival = atol((char*)&yytext[1]);\n \t\t\t\t\treturn PARAM;\n \t\t\t\t}\n*************** other\t\t\t.\n*** 463,469 ****\n {integer}\t\t{\n \t\t\t\t\tlong val;\n \t\t\t\t\tchar* endptr;\n! \n \t\t\t\t\terrno = 0;\n \t\t\t\t\tval = strtol((char *)yytext, &endptr, 10);\n \t\t\t\t\tif (*endptr != '\\0' || errno == ERANGE\n--- 492,499 ----\n {integer}\t\t{\n \t\t\t\t\tlong val;\n \t\t\t\t\tchar* endptr;\n! \n! parseOffset += yyleng;\n \t\t\t\t\terrno = 0;\n \t\t\t\t\tval = strtol((char *)yytext, &endptr, 10);\n \t\t\t\t\tif (*endptr != '\\0' || errno == ERANGE\n*************** other\t\t\t.\n*** 480,490 ****\n \t\t\t\t\tyylval.ival = val;\n \t\t\t\t\treturn ICONST;\n \t\t\t\t}\n! {decimal}\t\t{\n \t\t\t\t\tyylval.str = pstrdup((char*)yytext);\n \t\t\t\t\treturn FCONST;\n \t\t\t\t}\n! {real}\t\t\t{\n \t\t\t\t\tyylval.str = pstrdup((char*)yytext);\n \t\t\t\t\treturn FCONST;\n \t\t\t\t}\n--- 510,522 ----\n \t\t\t\t\tyylval.ival = val;\n \t\t\t\t\treturn ICONST;\n \t\t\t\t}\n! {decimal}\t\n{ \n! parseOffset += yyleng;\n \t\t\t\t\tyylval.str = pstrdup((char*)yytext);\n \t\t\t\t\treturn FCONST;\n \t\t\t\t}\n! {real}\t\t\n{ \n! parseOffset += yyleng;\n \t\t\t\t\tyylval.str = pstrdup((char*)yytext);\n \t\t\t\t\treturn FCONST;\n \t\t\t\t}\n*************** other\t\t\t.\n*** 493,498 ****\n--- 525,532 ----\n {identifier}\t{\n \t\t\t\t\tScanKeyword\t *keyword;\n \t\t\t\t\tint\t\t\t\ti;\n+ \t\t\t\t\t \n+ \t\t\t\t\tparseOffset += yyleng;\n \n \t\t\t\t\t/* Is it a keyword? */\n \t\t\t\t\tkeyword = ScanKeywordLookup((char*) yytext);\n*************** other\t\t\t.\n*** 530,545 ****\n \t\t\t\t\treturn IDENT;\n \t\t\t\t}\n \n! {other}\t\t\t{ return yytext[0]; }\n \n %%\n \n void\n yyerror(const char *message)\n {\n! 
\telog(ERROR, \"parser: %s at or near \\\"%s\\\"\", message, yytext);\n }\n \n int\n yywrap(void)\n {\n--- 564,634 ----\n \t\t\t\t\treturn IDENT;\n \t\t\t\t}\n \n! {other}\t\t\t{ \n! parseOffset += yyleng;\n! return yytext[0]; \n! }\n \n %%\n \n void\n yyerror(const char *message)\n {\n! int errorOffset;\n! char *line;\n! char *endOfLine;\n! char *beginningOfLine; \n! size_t buffSize;\n! \n! /* Calculate the error's offset from the beginning of the input */\n! \n! errorOffset = parseOffset + 1 - yyleng;\n! \n! /* Find the beginning of the input line */\n! \n! for(beginningOfLine = parseString + errorOffset;\n! beginningOfLine > parseString;\n! beginningOfLine--)\n! if(*(beginningOfLine - 1) == '\\n' || \n! *(beginningOfLine - 1) == '\\r' ||\n! *(beginningOfLine - 1) == '\\f')\n! break;\n! \n! /* Find the end of the input line */\n! \n! for(endOfLine = parseString + errorOffset;\n! *endOfLine != '\\0';\n! endOfLine++)\n! if(*endOfLine == '\\n' ||\n! *endOfLine == '\\r' ||\n! *endOfLine == '\\f')\n! break;\n! \n! /* Calculate the offset of the error into the input line */\n! \n! errorOffset = errorOffset - (int)(beginningOfLine - parseString);\n! \n! /* Allocate a buffer for the line */\n! \n! buffSize = (endOfLine - beginningOfLine) + 1;\n! line = palloc(buffSize);\n! \n! /* Copy the line into the buffer */\n! \n! memcpy(line, beginningOfLine, buffSize);\n! *(line + buffSize - 1) = '\\0';\n! \n! /* Report the error */\n! \n! elog(ERROR, \"parser: %s at or near \\\"%s\\\":\\n%s\\n%*s\", \n! message, \n! yytext,\n! line, \n! errorOffset, \n! 
\"^\");\n }\n \n+ \n int\n yywrap(void)\n {\n*************** scanner_init(void)\n*** 557,562 ****\n--- 646,655 ----\n \t because input()/myinput() checks the non-nullness of parseCh\n \t to know when to pass the string to lex/flex */\n \tparseCh = NULL;\n+ \n+ \t/* Initialize the parse input offset -- used by enhanced syntax error\nreporting */\n+ \n+ \tparseOffset = 0;\n \n \t/* initialize literal buffer to a reasonable but expansible size */\n \tliteralalloc = 128;\nIndex: src/test/regress/expected/errors.out\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/test/regress/expected/errors.out,v\nretrieving revision 1.26\ndiff -c -p -r1.26 errors.out\n*** src/test/regress/expected/errors.out\t2000/11/08 22:10:03\t1.26\n--- src/test/regress/expected/errors.out\t2001/08/01 17:43:56\n*************** select 1\n*** 19,25 ****\n select\n -- no such relation \n select * from nonesuch;\n! ERROR: parser: parse error at or near \"select\"\n -- bad name in target list\n select nonesuch from pg_database;\n ERROR: Attribute 'nonesuch' not found\n--- 19,27 ----\n select\n -- no such relation \n select * from nonesuch;\n! ERROR: parser: parse error at or near \"select\":\n! select\n! ^\n -- bad name in target list\n select nonesuch from pg_database;\n ERROR: Attribute 'nonesuch' not found\n*************** select * from pg_database where pg_datab\n*** 31,37 ****\n ERROR: Attribute 'nonesuch' not found\n -- bad select distinct on syntax, distinct attribute missing\n select distinct on (foobar) from pg_database;\n! ERROR: parser: parse error at or near \"from\"\n -- bad select distinct on syntax, distinct attribute not in target\nlist\n select distinct on (foobar) * from pg_database;\n ERROR: Attribute 'foobar' not found\n--- 33,41 ----\n ERROR: Attribute 'nonesuch' not found\n -- bad select distinct on syntax, distinct attribute missing\n select distinct on (foobar) from pg_database;\n! 
ERROR: parser: parse error at or near \"from\":\n! select distinct on (foobar) from pg_database;\n! ^\n -- bad select distinct on syntax, distinct attribute not in target\nlist\n select distinct on (foobar) * from pg_database;\n ERROR: Attribute 'foobar' not found\n*************** ERROR: Attribute 'foobar' not found\n*** 39,46 ****\n -- DELETE\n \n -- missing relation name (this had better not wildcard!) \n delete from;\n! ERROR: parser: parse error at or near \";\"\n -- no such relation \n delete from nonesuch;\n ERROR: Relation 'nonesuch' does not exist\n--- 43,52 ----\n -- DELETE\n \n -- missing relation name (this had better not wildcard!) \n+ delete from;\n+ ERROR: parser: parse error at or near \";\":\n delete from;\n! ^\n -- no such relation \n delete from nonesuch;\n ERROR: Relation 'nonesuch' does not exist\n*************** ERROR: Relation 'nonesuch' does not exi\n*** 49,55 ****\n \n -- missing relation name (this had better not wildcard!) \n drop table;\n! ERROR: parser: parse error at or near \";\"\n -- no such relation \n drop table nonesuch;\n ERROR: table \"nonesuch\" does not exist\n--- 55,63 ----\n \n -- missing relation name (this had better not wildcard!) \n drop table;\n! ERROR: parser: parse error at or near \";\":\n! drop table;\n! ^\n -- no such relation \n drop table nonesuch;\n ERROR: table \"nonesuch\" does not exist\n*************** ERROR: table \"nonesuch\" does not exist\n*** 58,65 ****\n \n -- relation renaming \n -- missing relation name \n alter table rename;\n! ERROR: parser: parse error at or near \";\"\n -- no such relation \n alter table nonesuch rename to newnonesuch;\n ERROR: Relation \"nonesuch\" does not exist\n--- 66,75 ----\n \n -- relation renaming \n -- missing relation name \n+ alter table rename;\n+ ERROR: parser: parse error at or near \";\":\n alter table rename;\n! 
^\n -- no such relation \n alter table nonesuch rename to newnonesuch;\n ERROR: Relation \"nonesuch\" does not exist\n*************** ERROR: Define: \"basetype\" unspecified\n*** 116,125 ****\n \n -- missing index name \n drop index;\n! ERROR: parser: parse error at or near \";\"\n -- bad index name \n drop index 314159;\n! ERROR: parser: parse error at or near \"314159\"\n -- no such index \n drop index nonesuch;\n ERROR: index \"nonesuch\" does not exist\n--- 126,139 ----\n \n -- missing index name \n drop index;\n! ERROR: parser: parse error at or near \";\":\n! drop index;\n! ^\n -- bad index name \n+ drop index 314159;\n+ ERROR: parser: parse error at or near \"314159\":\n drop index 314159;\n! ^\n -- no such index \n drop index nonesuch;\n ERROR: index \"nonesuch\" does not exist\n*************** ERROR: index \"nonesuch\" does not exist\n*** 127,143 ****\n -- REMOVE AGGREGATE\n \n -- missing aggregate name \n drop aggregate;\n! ERROR: parser: parse error at or near \";\"\n -- bad aggregate name \n drop aggregate 314159;\n! ERROR: parser: parse error at or near \"314159\"\n -- no such aggregate \n drop aggregate nonesuch;\n! ERROR: parser: parse error at or near \";\"\n -- missing aggregate type\n drop aggregate newcnt1;\n! ERROR: parser: parse error at or near \";\"\n -- bad aggregate type\n drop aggregate newcnt nonesuch;\n ERROR: RemoveAggregate: type 'nonesuch' does not exist\n--- 141,165 ----\n -- REMOVE AGGREGATE\n \n -- missing aggregate name \n+ drop aggregate;\n+ ERROR: parser: parse error at or near \";\":\n drop aggregate;\n! ^\n -- bad aggregate name \n drop aggregate 314159;\n! ERROR: parser: parse error at or near \"314159\":\n! drop aggregate 314159;\n! ^\n -- no such aggregate \n+ drop aggregate nonesuch;\n+ ERROR: parser: parse error at or near \";\":\n drop aggregate nonesuch;\n! ^\n -- missing aggregate type\n drop aggregate newcnt1;\n! ERROR: parser: parse error at or near \";\":\n! drop aggregate newcnt1;\n! 
^\n -- bad aggregate type\n drop aggregate newcnt nonesuch;\n ERROR: RemoveAggregate: type 'nonesuch' does not exist\n*************** ERROR: RemoveAggregate: aggregate 'newc\n*** 148,158 ****\n -- REMOVE FUNCTION\n \n -- missing function name \n drop function ();\n! ERROR: parser: parse error at or near \"(\"\n -- bad function name \n drop function 314159();\n! ERROR: parser: parse error at or near \"314159\"\n -- no such function \n drop function nonesuch();\n ERROR: RemoveFunction: function 'nonesuch()' does not exist\n--- 170,184 ----\n -- REMOVE FUNCTION\n \n -- missing function name \n+ drop function ();\n+ ERROR: parser: parse error at or near \"(\":\n drop function ();\n! ^\n -- bad function name \n drop function 314159();\n! ERROR: parser: parse error at or near \"314159\":\n! drop function 314159();\n! ^\n -- no such function \n drop function nonesuch();\n ERROR: RemoveFunction: function 'nonesuch()' does not exist\n*************** ERROR: RemoveFunction: function 'nonesu\n*** 160,170 ****\n -- REMOVE TYPE\n \n -- missing type name \n drop type;\n! ERROR: parser: parse error at or near \";\"\n -- bad type name \n drop type 314159;\n! ERROR: parser: parse error at or near \"314159\"\n -- no such type \n drop type nonesuch;\n ERROR: RemoveType: type 'nonesuch' does not exist\n--- 186,200 ----\n -- REMOVE TYPE\n \n -- missing type name \n+ drop type;\n+ ERROR: parser: parse error at or near \";\":\n drop type;\n! ^\n -- bad type name \n+ drop type 314159;\n+ ERROR: parser: parse error at or near \"314159\":\n drop type 314159;\n! ^\n -- no such type \n drop type nonesuch;\n ERROR: RemoveType: type 'nonesuch' does not exist\n*************** ERROR: RemoveType: type 'nonesuch' does\n*** 173,194 ****\n \n -- missing everything \n drop operator;\n! ERROR: parser: parse error at or near \";\"\n -- bad operator name \n drop operator equals;\n! ERROR: parser: parse error at or near \"equals\"\n -- missing type list \n drop operator ===;\n! 
ERROR: parser: parse error at or near \";\"\n -- missing parentheses \n drop operator int4, int4;\n! ERROR: parser: parse error at or near \"int4\"\n -- missing operator name \n drop operator (int4, int4);\n! ERROR: parser: parse error at or near \"(\"\n -- missing type list contents \n drop operator === ();\n! ERROR: parser: parse error at or near \")\"\n -- no such operator \n drop operator === (int4);\n ERROR: parser: argument type missing (use NONE for unary operators)\n--- 203,236 ----\n \n -- missing everything \n drop operator;\n! ERROR: parser: parse error at or near \";\":\n! drop operator;\n! ^\n -- bad operator name \n+ drop operator equals;\n+ ERROR: parser: parse error at or near \"equals\":\n drop operator equals;\n! ^\n -- missing type list \n drop operator ===;\n! ERROR: parser: parse error at or near \";\":\n! drop operator ===;\n! ^\n -- missing parentheses \n+ drop operator int4, int4;\n+ ERROR: parser: parse error at or near \"int4\":\n drop operator int4, int4;\n! ^\n -- missing operator name \n drop operator (int4, int4);\n! ERROR: parser: parse error at or near \"(\":\n! drop operator (int4, int4);\n! ^\n -- missing type list contents \n+ drop operator === ();\n+ ERROR: parser: parse error at or near \")\":\n drop operator === ();\n! ^\n -- no such operator \n drop operator === (int4);\n ERROR: parser: argument type missing (use NONE for unary operators)\n*************** ERROR: RemoveOperator: binary operator \n*** 199,206 ****\n drop operator = (nonesuch);\n ERROR: parser: argument type missing (use NONE for unary operators)\n -- no such type1 \n drop operator = ( , int4);\n! 
ERROR: parser: parse error at or near \",\"\n -- no such type1 \n drop operator = (nonesuch, int4);\n ERROR: RemoveOperator: type 'nonesuch' does not exist\n--- 241,250 ----\n drop operator = (nonesuch);\n ERROR: parser: argument type missing (use NONE for unary operators)\n -- no such type1 \n+ drop operator = ( , int4);\n+ ERROR: parser: parse error at or near \",\":\n drop operator = ( , int4);\n! ^\n -- no such type1 \n drop operator = (nonesuch, int4);\n ERROR: RemoveOperator: type 'nonesuch' does not exist\n*************** drop operator = (int4, nonesuch);\n*** 209,233 ****\n ERROR: RemoveOperator: type 'nonesuch' does not exist\n -- no such type2 \n drop operator = (int4, );\n! ERROR: parser: parse error at or near \")\"\n --\n -- DROP RULE\n \n -- missing rule name \n drop rule;\n! ERROR: parser: parse error at or near \";\"\n -- bad rule name \n drop rule 314159;\n! ERROR: parser: parse error at or near \"314159\"\n -- no such rule \n drop rule nonesuch;\n ERROR: Rule or view \"nonesuch\" not found\n -- bad keyword \n drop tuple rule nonesuch;\n! ERROR: parser: parse error at or near \"tuple\"\n -- no such rule \n drop instance rule nonesuch;\n! ERROR: parser: parse error at or near \"instance\"\n -- no such rule \n drop rewrite rule nonesuch;\n! ERROR: parser: parse error at or near \"rewrite\"\n--- 253,289 ----\n ERROR: RemoveOperator: type 'nonesuch' does not exist\n -- no such type2 \n drop operator = (int4, );\n! ERROR: parser: parse error at or near \")\":\n! drop operator = (int4, );\n! ^\n --\n -- DROP RULE\n \n -- missing rule name \n+ drop rule;\n+ ERROR: parser: parse error at or near \";\":\n drop rule;\n! ^\n -- bad rule name \n+ drop rule 314159;\n+ ERROR: parser: parse error at or near \"314159\":\n drop rule 314159;\n! ^\n -- no such rule \n drop rule nonesuch;\n ERROR: Rule or view \"nonesuch\" not found\n -- bad keyword \n drop tuple rule nonesuch;\n! ERROR: parser: parse error at or near \"tuple\":\n! drop tuple rule nonesuch;\n! 
^\n -- no such rule \n+ drop instance rule nonesuch;\n+ ERROR: parser: parse error at or near \"instance\":\n drop instance rule nonesuch;\n! ^\n -- no such rule \n+ drop rewrite rule nonesuch;\n+ ERROR: parser: parse error at or near \"rewrite\":\n drop rewrite rule nonesuch;\n! ^\n\\ No newline at end of file\nIndex: src/test/regress/expected/strings.out\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/test/regress/expected/strings.out,v\nretrieving revision 1.10\ndiff -c -p -r1.10 strings.out\n*** src/test/regress/expected/strings.out\t2001/06/01 17:49:17\t1.10\n--- src/test/regress/expected/strings.out\t2001/08/01 17:43:56\n*************** SELECT 'first line'\n*** 17,23 ****\n ' - next line' /* this comment is not allowed here */\n ' - third line'\n \tAS \"Illegal comment within continuation\";\n! ERROR: parser: parse error at or near \"'\"\n --\n -- test conversions between various string types\n --\n--- 17,25 ----\n ' - next line' /* this comment is not allowed here */\n ' - third line'\n \tAS \"Illegal comment within continuation\";\n! ERROR: parser: parse error at or near \"'\":\n! ' - third line'\n! ^\n --\n -- test conversions between various string types\n --\nIndex: src/test/regress/output/constraints.source\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/test/regress/output/constraints.source,v\nretrieving revision 1.18\ndiff -c -p -r1.18 constraints.source\n*** src/test/regress/output/constraints.source\t2001/02/22 05:32:56\t1.18\n--- src/test/regress/output/constraints.source\t2001/08/01 17:43:56\n*************** SELECT '' AS four, * FROM DEFAULTEXPR_TB\n*** 45,56 ****\n -- syntax errors\n -- test for extraneous comma\n CREATE TABLE error_tbl (i int DEFAULT (100, ));\n! 
ERROR: parser: parse error at or near \",\"\n -- this will fail because gram.y uses b_expr not a_expr for defaults,\n -- to avoid a shift/reduce conflict that arises from NOT NULL being\n -- part of the column definition syntax:\n CREATE TABLE error_tbl (b1 bool DEFAULT 1 IN (1, 2));\n! ERROR: parser: parse error at or near \"IN\"\n -- this should work, however:\n CREATE TABLE error_tbl (b1 bool DEFAULT (1 IN (1, 2)));\n DROP TABLE error_tbl;\n--- 45,60 ----\n -- syntax errors\n -- test for extraneous comma\n CREATE TABLE error_tbl (i int DEFAULT (100, ));\n! ERROR: parser: parse error at or near \",\":\n! CREATE TABLE error_tbl (i int DEFAULT (100, ));\n! ^\n -- this will fail because gram.y uses b_expr not a_expr for defaults,\n -- to avoid a shift/reduce conflict that arises from NOT NULL being\n -- part of the column definition syntax:\n CREATE TABLE error_tbl (b1 bool DEFAULT 1 IN (1, 2));\n! ERROR: parser: parse error at or near \"IN\":\n! CREATE TABLE error_tbl (b1 bool DEFAULT 1 IN (1, 2));\n! ^\n -- this should work, however:\n CREATE TABLE error_tbl (b1 bool DEFAULT (1 IN (1, 2)));\n DROP TABLE error_tbl;\n",
"msg_date": "Wed, 01 Aug 2001 13:58:45 -0400",
"msg_from": "Neil Padgett <npadgett@redhat.com>",
"msg_from_op": true,
"msg_subject": "Patch for Improved Syntax Error Reporting"
},
{
"msg_contents": "Neil Padgett <npadgett@redhat.com> writes:\n> Attached please find a patch to the input parser that yields better\n> syntax error reporting on parse errors.\n\nThis has been discussed before (you guys really should spend more time\nreading the mail list archives) and in fact you are not the first to\nsubmit essentially this patch. The major objections to reporting syntax\nproblems this way, IIRC, are that (a) it makes unsupportable assumptions\nabout what the user interface looks like, for example the assumption\nthat the error message will be displayed in a fixed-width font; and\n(b) it becomes essentially useless when the input query exceeds a few\nlines in size.\n\nThe conclusion we had come to in previous discussion is that the error\noffset ought to be delivered to the client application as a separate\ncomponent of the error report, and the client ought to be responsible\nfor doing something appropriate with it --- which might, for example,\ninclude highlighting the offending word(s) if it's a GUI application\nthat has the input query in a window. psql couldn't do that, but might\nchoose to redisplay a few dozen characters around the position of the\nerror.\n\nIn any case, the limiting factor is not the parser change, which is\ntrivial, but our ability/willingness to change the on-the-wire protocol\nto allow error reports to contain multiple components. There are some\nother useful things that could be done once we did that. Again, I'd\nrecommend trawling the archives for a bit.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Aug 2001 14:33:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch for Improved Syntax Error Reporting "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Neil Padgett <npadgett@redhat.com> writes:\n> > Attached please find a patch to the input parser that yields better\n> > syntax error reporting on parse errors.\n> \n> This has been discussed before (you guys really should spend more time\n> reading the mail list archives) and in fact you are not the first to\n\nI've just read the relevant messages. Though, I do find the list\narchives a bit slow for any useful browsing -- can I grab a copy of them\nfrom somewhere? Setting up a mirror might be an idea. . .\n\n> submit essentially this patch. The major objections to reporting syntax\n> problems this way, IIRC, are that (a) it makes unsupportable assumptions\n> about what the user interface looks like, for example the assumption\n> that the error message will be displayed in a fixed-width font; and\n\nCan you cite a client which does not use a fixed-width font at this\ntime? I think most I've worked with use one for SQL interaction -- it is\npretty much \"the way things are done.\" I suppose, however, there could\nbe some clients which display error messages in a dialog box or\nsomething similar, so I do agree that this is something that does need\nto be handled, and which the patch doesn't address. See below for a\nsuggestion for this.\n\n> (b) it becomes essentially useless when the input query exceeds a few\n> lines in size.\n\nHow so? I grab out the line of interest when reporting the error.\n\n> \n> The conclusion we had come to in previous discussion is that the error\n> offset ought to be delivered to the client application as a separate\n> component of the error report, and the client ought to be responsible\n> for doing something appropriate with it --- which might, for example,\n> include highlighting the offending word(s) if it's a GUI application\n> that has the input query in a window. psql couldn't do that, but might\n> choose to redisplay a few dozen characters around the position of the\n> error.\n\nWell, perhaps the error message could be changed to something like this,\nwith a special string marking the parse error position?\n\ntest=# SELECT * FRUM bob;\nERROR: parser: parse error at or near 'frum':\nSELECT * ***FRUM bob;\n\nOr, perhaps better than a magic string:\n\ntest=# SELECT * FRUM bob;\nERROR: parser: parse error at or near 'frum' (index 10)\n \nThe latter is probably more useful, though it does place a burden on the\nclient to format and display an error message. But, the client program\ncan mark out the error in any way it sees fit. In fact, it could even\nleave the raw message in place and still the user will get something\nmore useful than the current output. No protocol change is required, but\nvery useful functionality is added.\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n",
"msg_date": "Wed, 01 Aug 2001 14:56:37 -0400",
"msg_from": "Neil Padgett <npadgett@redhat.com>",
"msg_from_op": true,
"msg_subject": "Re: Patch for Improved Syntax Error Reporting"
},
{
"msg_contents": "> Tom Lane wrote:\n> > \n> > Neil Padgett <npadgett@redhat.com> writes:\n> > > Attached please find a patch to the input parser that yields better\n> > > syntax error reporting on parse errors.\n> > \n> > This has been discussed before (you guys really should spend more time\n> > reading the mail list archives) and in fact you are not the first to\n\nTom, it is hard to imagine how they would even find relevant stuff on\nthis issue. The TODO.detail item is very vague. Would they start\nsearching for keywords in the mailing list search engine? Not sure what\nkeywords they would even use.\n\nIn fact, their solution is an improvement over what is in\nTODO.detail/yacc now.\n\n> I've just read the relevant messages. Though, I do find the list\n> archives a bit slow for any useful browsing -- can I grab a copy of them\n> from somewhere? Setting up a mirror might be an idea. . .\n\nThe whole internet seems slow today. I think it is that Code Red worm.\n\n> > submit essentially this patch. The major objections to reporting syntax\n> > problems this way, IIRC, are that (a) it makes unsupportable assumptions\n> > about what the user interface looks like, for example the assumption\n> > that the error message will be displayed in a fixed-width font; and\n> \n> Can you cite a client which does not use a fixed-width font at this\n> time? I think most I've worked with use one for SQL interaction -- it is\n> pretty much \"the way things are done.\" I suppose, however, there could\n> be some clients which display error messages in a dialog box or\n> something similar, so I do agree that this is something that does need\n> to be handled, and which the patch doesn't address. See below for a\n> suggestion for this.\n\nI know some people like a client-independent way of displaying errors,\nbut I like the direct approach of this patch, returning a string with\nthe error line highlighted and the location marked. I don't want to\npush added complexity into the client, especially when we don't even\nhave a client who has this need yet.\n\nIMHO, I am starting to see a lot of over-engineering demands made of\nthese patches. I think it is wasting time and is of little value to\naverage PostgreSQL users. Of course, others may disagree, but that is\nmy opinion.\n\nSo, my vote is to accept the patch as-is. When we have need for more\ncomplex reporting, we can add it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Aug 2001 15:24:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch for Improved Syntax Error Reporting"
},
{
"msg_contents": "Neil Padgett <npadgett@redhat.com> writes:\n> I've just read the relevant messages. Though, I do find the list\n> archives a bit slow for any useful browsing -- can I grab a copy of them\n> from somewhere? Setting up a mirror might be an idea. . .\n\nThe raw archives are under http://www.ca.postgresql.org/mhonarc/\nin monthly files, for example\nhttp://www.ca.postgresql.org/mhonarc/pgsql-patches/archive/pgsql-patches.200108\n\nI am not sure whether our mirror sites mirror them or not. In any case,\nyou should talk to Marc if you want to coordinate some sort of long-term\nmirroring arrangement.\n\n> Can you cite a client which does not use a fixed-width font at this\n> time? I think most I've worked with use one for SQL interaction -- it is\n> pretty much \"the way things are done.\"\n\nI'd believe that data is entered/displayed in fixed-width text; I'm less\nready to assume that for error messages, however.\n\n>> (b) it becomes essentially useless when the input query exceeds a few\n>> lines in size.\n\n> How so? I grab out the line of interest when reporting the error.\n\nMy apologies, I missed that aspect of your patch. An interesting\nsolution. However, it doesn't handle embedded tabs, and there is still\nthe objection that a client app might want to present the location info\nin a completely different fashion anyway.\n\n>> The conclusion we had come to in previous discussion is that the error\n>> offset ought to be delivered to the client application as a separate\n>> component of the error report,\n\n> Well, perhaps the error message could be changed to something like this,\n> with a special string marking the parse error position?\n\n> test=# SELECT * FRUM bob;\n> ERROR: parser: parse error at or near 'frum':\n> SELECT * ***FRUM bob;\n\nI was thinking something along the lines of\n\n\tERROR: message string just like now\n\tPOSITION: 42\n\tOTHERSTUFF: yadda yadda\n\nie, the error message string is now interpreted as keyworded lines,\nsomewhat like (say) mail headers. This would be workable for new\nclients, wouldn't break anything at the wire-protocol level, and would\nnot be totally useless if presented \"raw\" to a user by an old client.\nSee the archives for more info --- I think the last discussion was three\nor four months back when Peter E. started to make noises about fixing\nelog for internationalization.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Aug 2001 15:37:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch for Improved Syntax Error Reporting "
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom, it is hard to imagine how they would even find relevant stuff on\n> this issue. The TODO.detail item is very vague.\n\nI dunno that it's vague --- a quick look indicates that TODO.detail/elog\nhas most of the recent messages on the subject. (Neil, the \"recent\ndiscussion\" that I referred to seems to be in there, or most of it\nanyway, if you didn't see it in the archives yet.)\n\n> In fact, their solution is an improvement over what is in\n> TODO.detail/yacc now.\n\nAgreed, the idea of pulling out just the one line is an improvement over\nthe last patch. It's still going down the wrong path though. We should\nbe empowering client apps to highlight syntax errors properly, not\npresenting edited info in a way that might be useful to humans but will\nbe unintelligible to programs. If we go that route, it will be harder\nto do the right thing later.\n\n> I know some people like a client-independent way of displaying errors,\n> but I like the direct approach of this patch, returning a string with\n> the error line highlighted and the location marked. I don't want to\n> push added complexity into the client, especially when we don't even\n> have a client who has this need yet.\n\npgAdmin, phpAdmin, pgaccess, and friends don't count? We have GUI front\nends *today*, you know.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Aug 2001 15:50:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch for Improved Syntax Error Reporting "
},
{
"msg_contents": "> > In fact, their solution is an improvement over what is in\n> > TODO.detail/yacc now.\n> \n> Agreed, the idea of pulling out just the one line is an improvement over\n> the last patch. It's still going down the wrong path though. We should\n> be empowering client apps to highlight syntax errors properly, not\n> presenting edited info in a way that might be useful to humans but will\n> be unintelligible to programs. If we go that route, it will be harder\n> to do the right thing later.\n> \n> > I know some people like a client-independent way of displaying errors,\n> > but I like the direct approach of this patch, returning a string with\n> > the error line highlighted and the location marked. I don't want to\n> > push added complexity into the client, especially when we don't even\n> > have a client who has this need yet.\n> \n> pgAdmin, phpAdmin, pgaccess, and friends don't count? We have GUI front\n> ends *today*, you know.\n\nBut how do they display error messages now? Can't they just continue\ndoing that with this new code? Do we want to make them code their own\nerror handling, and for what little benefit? Let them figure out how to\ndisplay the error in fixed-width font and be done with it. I am sure\nthey have bigger things to do than colorize error locations.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Aug 2001 16:10:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch for Improved Syntax Error Reporting"
},
{
"msg_contents": "> > > In fact, their solution is an improvement over what is in\n> > > TODO.detail/yacc now.\n> > \n> > Agreed, the idea of pulling out just the one line is an improvement over\n> > the last patch. It's still going down the wrong path though. We should\n> > be empowering client apps to highlight syntax errors properly, not\n> > presenting edited info in a way that might be useful to humans but will\n> > be unintelligible to programs. If we go that route, it will be harder\n> > to do the right thing later.\n> > \n> > > I know some people like a client-independent way of displaying errors,\n> > > but I like the direct approach of this patch, returning a string with\n> > > the error line highlighted and the location marked. I don't want to\n> > > push added complexity into the client, especially when we don't even\n> > > have a client who has this need yet.\n> > \n> > pgAdmin, phpAdmin, pgaccess, and friends don't count? We have GUI front\n> > ends *today*, you know.\n> \n> But how do they display error messages now? Can't they just continue\n> doing that with this new code? Do we want to make them code their own\n> error handling, and for what little benefit? Let them figure out how to\n> display the error in fixed-width font and be done with it. I am sure\n> they have bigger things to do than colorize error locations.\n\nA bigger question is that if we decide to output just offset information\nin the message, we have to be sure _all_ the clients can interpret it or\nthe syntax information is confusing. Are we prepared to get all the\nclients updated at the same time we add the feature? Seems we should go\nwith a simple solution now and add later.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Aug 2001 17:23:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch for Improved Syntax Error Reporting"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > > In fact, their solution is an improvement over what is in\n> > > > TODO.detail/yacc now.\n> > >\n> > > Agreed, the idea of pulling out just the one line is an improvement over\n> > > the last patch. It's still going down the wrong path though. We should\n> > > be empowering client apps to highlight syntax errors properly, not\n> > > presenting edited info in a way that might be useful to humans but will\n> > > be unintelligible to programs. If we go that route, it will be harder\n> > > to do the right thing later.\n> > >\n> > > > I know some people like a client-independent way of displaying errors,\n> > > > but I like the direct approach of this patch, returning a string with\n> > > > the error line highlighted and the location marked. I don't want to\n> > > > push added complexity into the client, especially when we don't even\n> > > > have a client who has this need yet.\n> > >\n> > > pgAdmin, phpAdmin, pgaccess, and friends don't count? We have GUI front\n> > > ends *today*, you know.\n> >\n> > But how do they display error messages now? Can't they just continue\n> > doing that with this new code? Do we want to make them code their own\n> > error handling, and for what little benefit? Let them figure out how to\n> > display the error in fixed-width font and be done with it. I am sure\n> > they have bigger things to do than colorize error locations.\n> \n> A bigger question is that if we decide to output just offset information\n> in the message, we have to be sure _all_ the clients can interpret it or\n> the syntax information is confusing. Are we prepared to get all the\n> clients update at the same time we add the feature? Seems we should go\n> with a simple solution now and add later.\n> \n\nIf instead of printing:\n\nERROR: A parse error near \"foo\"\n\nwe print\n\nERROR: A parse error near \"foo\" (index=10)\n\nit should not affect any of the existing clients.\nFor the clients, this will be just text as before and they will print it as\nreceived. If some client wants, it may look for the index information and do\nwhatever is convenient for that interface (as an enhancement).\n\nSo I think this option is backward compatible.\n\n-- \nFernando Nasser\nRed Hat - Toronto E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Wed, 01 Aug 2001 17:34:22 -0400",
"msg_from": "Fernando Nasser <fnasser@cygnus.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch for Improved Syntax Error Reporting"
},
{
"msg_contents": "Fernando Nasser <fnasser@cygnus.com> writes:\n> If instead of printing:\n> ERROR: A parse error near \"foo\"\n> we print\n> ERROR: A parse error near \"foo\" (index=10)\n> it should not affect any of the existing clients.\n\nOne objection to this idea is that it doesn't play nicely with\nlocalization of error message texts. I'd sooner do\n\n\tERROR: A parse error near \"foo\"\n\tERRORLOCATION: 10\n\nwhich doesn't create any problems with localization (there's no\nparticular need to translate the keywords, since a client probably\nwouldn't show them to the user anyway). It's just as backward\ncompatible, and not that much uglier for an old client.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Aug 2001 18:22:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch for Improved Syntax Error Reporting "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Fernando Nasser <fnasser@cygnus.com> writes:\n> > If instead of printing:\n> > ERROR: A parse error near \"foo\"\n> > we print\n> > ERROR: A parse error near \"foo\" (index=10)\n> > it should not affect any of the existing clients.\n> \n> One objection to this idea is that it doesn't play nicely with\n> localization of error message texts. I'd sooner do\n> \n> ERROR: A parse error near \"foo\"\n> ERRORLOCATION: 10\n> \n> which doesn't create any problems with localization (there's no\n> particular need to translate the keywords, since a client probably\n> wouldn't show them to the user anyway). It's just as backward\n> compatible, and not that much uglier for an old client.\n\nI'm not sure how this format causes a problem with localization. The\ntrailing bracketed text isn't part of the message text -- it's just a\nset of fields and values. So, since the keywords aren't really intended\nfor raw display, they don't require translation. Parsing the format is\nno harder, and the raw output isn't as ugly as is a multi-line list of\nfields and values, IMHO. (I really dislike unnecessarily having gobs of\noutput lines in the message.)\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n",
"msg_date": "Wed, 01 Aug 2001 18:38:29 -0400",
"msg_from": "Neil Padgett <npadgett@redhat.com>",
"msg_from_op": true,
"msg_subject": "Re: Patch for Improved Syntax Error Reporting"
},
{
"msg_contents": "Neil Padgett <npadgett@redhat.com> writes:\n> I'm not sure how this format causes a problem with localization. The\n> trailing bracketed text isn't part of the message text -- it's just a\n> set of fields and values.\n\nIt *looks* like it is part of the message text --- to users, and to\nprograms that aren't specifically aware that it isn't. A user-friendly\nclient would need to take extra steps to strip out the (index=N) part\nto avoid complaints from users that their error messages aren't getting\nfully translated.\n\n> So, since the keywords aren't really intended\n> for raw display, they don't require translation. Parsing the format is\n> no harder, and the raw output isn't as ugly as is a multi-line list of\n> fields and values, IMHO. (I really dislike unnecessarily having gobs of\n> output lines in the message.)\n\nI don't much care for it either, and wouldn't propose it if this were\nthe sole application. However, we have other applications, as noted in\nthe previous discussion:\n\n--- distinguishing the actual error message from tips/hints about what\n to do about it. There are a fair number of these already, and right\n now there's just a very weak formatting convention that hints\n appear on a separate line.\n\n--- distinguishing a translatable (primary) error message from a\n maintainer error message that need not be translated. We have lots\n and lots of errors in the backend that could all fit under a single\n primary error code of \"Internal error, please report this to\n pgsql-bugs\", thus vastly reducing the burden on message translators.\n The maintainer error message (eg, \"foobar: unexpected node type 124\")\n still needs to appear, but it could be a secondary field.\n\n--- including backend file name and line number of the elog call, for\n easier debugging and unambiguous identification of an error source.\n\n--- severity level\n\n--- doubtless other ideas will occur to us once we have the capability.\n\nGiven all these potential benefits, I'm willing to endure the downside\nof slightly ugly-looking error reports in old clients. They'll still\n*work*, mind you, and indeed emit info that users might like to have.\nTo the extent that it's clutter, people will be motivated to update\ntheir clients. Doesn't seem like much of a downside.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Aug 2001 18:51:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch for Improved Syntax Error Reporting "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Neil Padgett <npadgett@redhat.com> writes:\n> > I'm not sure how this format causes a problem with localization. The\n> > trailing bracketed text isn't part of the message text -- it's just a\n> > set of fields and values.\n> \n> It *looks* like it is part of the message text --- to users, and to\n> programs that aren't specifically aware that it isn't. A user-friendly\n> client would need to take extra steps to strip out the (index=N) part\n> to avoid complaints from users that their error messages aren't getting\n> fully translated.\n> \n\nThis is exactly what I want. If you don't have a new client, it looks\nlike a message with some funk on the end. If you have a new, friendly\nclient, it will strip out the field/value list at the end. Exactly the\nsame as the multi-line list, really. And the translation complaint is\nequally applicable to either format.\n\n> > So, since the keywords aren't really intended\n> > for raw display, they don't require translation. Parsing the format is\n> > no harder, and the raw output isn't as ugly as is a multi-line list of\n> > fields and values, IMHO. (I really dislike unnecessarily having gobs of\n> > output lines in the message.)\n> \n> I don't much care for it either, and wouldn't propose it if this were\n> the sole application. However, we have other applications, as noted in\n> the previous discussion:\n> \n> --- distinguishing the actual error message from tips/hints about what\n> to do about it. There are a fair number of these already, and right\n> now there's just a very weak formatting convention that hints\n> appear on a separate line.\n\nI didn't know that such a convention exists already -- how would these\nhints look under your proposed new format?\n\n> \n> --- distinguishing a translatable (primary) error message from a\n> maintainer error message that need not be translated. We have lots\n> and lots of errors in the backend that could all fit under a single\n> primary error code of \"Internal error, please report this to\n> pgsql-bugs\", thus vastly reducing the burden on message translators.\n> The maintainer error message (eg, \"foobar: unexpected node type 124\")\n> still needs to appear, but it could be a secondary field.\n> \n\nWhy aren't we using numerics to do this? I recall reading something (in\nthe archives?) about them before, but I don't recall the outcome. Is\nanything like numerics being added to help with the localization\nefforts? It would seem this is the best way to handle primary errors,\nversus maintainer errors.\n\n> --- including backend file name and line number of the elog call, for\n> easier debugging and unambiguous identification of an error source.\n> \n> --- severity level\n> \n> --- doubtless other ideas will occur to us once we have the capability.\n\nHmm... You could do any of these with either format, but I'm starting to\nthink that with this many fields, any message in my suggested format is\nprobably going to wrap. So I'm pretty much sold on a multi-line format.\n(It might even be less ugly for messages with lots of fields!) \n\n> \n> Given all these potential benefits, I'm willing to endure the downside\n> of slightly ugly-looking error reports in old clients. They'll still\n> *work*, mind you, and indeed emit info that users might like to have.\n> To the extent that it's clutter, people will be motivated to update\n> their clients. Doesn't seem like much of a downside.\n>\n\nNo, I don't think so either. It seems that this new format makes sense.\nWould the elog call be changed to support passing in a list of\narguments? Or are you proposing we just hard code the field name / value\nlists into the messages? (a bad idea, IMHO) We should probably introduce\na new call, say, eelog (for enhanced error log) that takes such a list,\nand then we could define elog as a macro which calls eelog with suitable\ndefaults for use with \"legacy\" messages. Then, we wouldn't need to go\nafter every error message right away. (And in fact, probably, in the\ncase of some rare messages, need not ever.)\n\nThe question this brings up is whether a logging change can / should be\ntackled in this release. Specifically, with the current state of\ninternationalization work, is it best to do it now, or later? Or, for\nnow, should we just decide on an output format, and then hardcode the\nfield output for just the syntax error reporting, leaving everything\nelse to be tackled later?\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n",
"msg_date": "Wed, 01 Aug 2001 19:35:15 -0400",
"msg_from": "Neil Padgett <npadgett@redhat.com>",
"msg_from_op": true,
"msg_subject": "Re: Patch for Improved Syntax Error Reporting"
},
{
"msg_contents": "Neil Padgett <npadgett@redhat.com> writes:\n> This is exactly what I want. If you don't have a new client, it looks\n> like a message with some funk on the end. If you have a new, friendly\n> client, it will strip out the field/value list at the end. Exactly the\n> same as the multi-line list, really. And the translation complaint is\n> equally applicable to either format.\n\nAgreed --- as long as the *only* thing you want to add is a syntax error\nlocation, that way would be better. But it doesn't scale...\n\n>> --- distinguishing the actual error message from tips/hints about what\n>> to do about it. There are a fair number of these already, and right\n>> now there's just a very weak formatting convention that hints\n>> appear on a separate line.\n\n> I didn't know that such a convention exists already -- how would these\n> hints look under your proposed new format?\n\nWell, it's a pretty weak convention, but here are a couple of examples\nof what I'm talking about:\n\n elog(ERROR, \"_bt_getstackbuf: my bits moved right off the end of the world!\"\n \"\\n\\tRecreate index %s.\", RelationGetRelationName(rel));\n\n elog(ERROR, \"Left hand side of operator '%s' has an unknown type\"\n \"\\n\\tProbably a bad attribute name\", op);\n\n elog(ERROR, \"Unable to identify a %s operator '%s' for type '%s'\"\n \"\\n\\tYou may need to add parentheses or an explicit cast\",\n (is_left_op ? \"left\" : \"right\"),\n op, typeidTypeName(arg));\n\nIn all these cases, I'd call the added-on lines hints --- they're not\npart of the basic error message, and the hint may not be applicable\nto your particular situation. Without wanting to start an argument\nas to the validity of these particular hints, I do think that it'd\nbe a good idea to distinguish them from the primary error message.\nIn the first example, where I'd like to see us end up is something\nlike:\n\nERROR: Internal error, please report to pgsql-bugs\nDETAIL: my bits moved right off the end of the world!\nHINT: Recreate index foo\nCODELOCATION: _bt_getstackbuf: src/backend/access/nbtree/nbtinsert.c, line 551\n\nI'm not wedded to these particular keywords, but hopefully this will\nserve as an illustration that we're cramming a lot of stuff into an\nerror message already. Breaking it out into fields could render it\nmore intelligible, not less so --- and with an updated client, a user\ncould choose not to look at the info he doesn't want. Right now he\nhas no real prospect of suppressing unwanted info. An example is the\nsource routine name that we cram into many messages, as a poor man's\nsubstitute for accurate error location info. That's pretty much useless\nto non-hackers, and ought to be moved out to a secondary field.\n\n> Why aren't we using numerics to do this?\n\nWhy, thanks for reminding me. Adding a standardized error code (not\nmessage) that client programs could interpret is another thing that\nis on the TODO list. Seems like another application for a separable\nfield of an error report. I think we should keep such codes separate\nfrom the (localizable) message text, however. Peter E. made some cogent\narguments that trying to drive localization off error numbers would be\na losing proposition, as opposed to using gettext().\n\n> Would the elog call be changed to support passing in a list of\n> arguments?\n\nThat hadn't really been decided, but clearly some change of the elog\nAPI will be needed. I think there is more about this in the archives\nthan you will find in TODO.detail/elog.\n\n> We should probably introduce\n> a new call, say, eelog (for enhanced error log) that takes such a list,\n> and then we could define elog as a macro which calls eelog with suitable\n> defaults for use with \"legacy\" messages. Then, we wouldn't need to go\n> after every error message right away.\n\nYeah, the $64 question is how to avoid needing a \"big bang\" changeover\nof all the elog calls. Even if we wanted to try that, it'd be a\ncontinuing headache for user-added datatypes and such. I'd like to be\nable to leave the existing elog() API in place for a few releases, if\npossible.\n\n> The question this brings up is whether a logging change can / should be\n> tackled in this release. Specifically, with the current state of\n> internationalization work, is it best to do it now, or later?\n\nI'm still pointing towards 7.2 beta near the end of the month, which\nwould be a mighty tight schedule for anything ambitious in elog rework.\nOn the other hand, there's no harm in working out a design now with\nthe intention of starting to implement it in the 7.3 cycle.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Aug 2001 20:38:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch for Improved Syntax Error Reporting "
},
{
"msg_contents": "> But how do they display error messages now? Can't they just continue\n> doing that with this new code? Do we want to make them code their own\n> error handling, and for what little benefit? Let them figure out how to\n> display the error in fixed-width font and be done with it. I am sure\n> they have bigger things to do than colorize error locations.\n\nMy 2c:\n\nWhy not do tom's suggestion for the POSITION: n thing, and modify psql to\nstrip out that header, and output the relevant part of the sql with a caret\nhighlighting the error position.\n\nThis will make it so that writers of the guis can format errors how they\nlike, and users of the most popular text interface (psql) get human-readable\nresults...\n\nie. best of both worlds...\n\nChris\n\n",
"msg_date": "Thu, 2 Aug 2001 10:27:22 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: Patch for Improved Syntax Error Reporting"
},
{
"msg_contents": "> > But how do they display error messages now? Can't they just continue\n> > doing that with this new code? Do we want to make them code their own\n> > error handling, and for what little benefit? Let them figure out how to\n> > display the error in fixed-width font and be done with it. I am sure\n> > they have bigger things to do than colorize error locations.\n> \n> My 2c:\n> \n> Why not do tom's suggestion for the POSITION: n thing, and modify psql to\n> strip out that header, and output the relevant part of the sql with a caret\n> highlighting the error position.\n> \n> This will make it so that writers of the guis and format errors how they\n> like, and users of the most popular text interface (psql) get human-readable\n> results...\n> \n> ie. best of both worlds...\n\nOK, I withdraw my objection.\n\nAlso, I like the idea of adding Hints and Function/line numbers to the\noutput too. The offset of the error would work into that system.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Aug 2001 22:39:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch for Improved Syntax Error Reporting"
},
{
"msg_contents": "> > My 2c:\n> > \n> > Why not do tom's suggestion for the POSITION: n thing, and modify psql to\n> > strip out that header, and output the relevant part of the sql with a caret\n> > highlighting the error position.\n> > \n> > This will make it so that writers of the guis and format errors how they\n> > like, and users of the most popular text interface (psql) get human-readable\n> > results...\n> > \n> > ie. best of both worlds...\n> \n> OK, I withdraw my objection.\n> \n> Also, I like the idea of adding Hints and Function/line numbers to the\n> output too. The offset of the error would work into that system.\n\nI guess the thing that bothered me is that 90% of our interfaces are\njust going to throw the caret under the error line and this patch\nrequires us to modify all the client interfaces to do that, just to\nallow 10% to customize their display.\n\nNow, I know we are going to allow elog() to generate filename, line\nnumber, and function name as optional output information. We could have\na SET parameter like:\n\n\tSET SYSOUTPUT TO \"message, function, offset\"\n\nand this displays:\n\n\tERROR: lkjasdf\n\tFUNCTION: lkjasdf\n\tOFFSET: 2343\n\nand we could have an option for HIGHLIGHT:\n\n\tHIGHLIGHT: FROM tab1, tab2\n\tHIGHLIGHT: ^^^^\n\nWe could control this via GUC or via the client startup code, and\nclients could grab whatever they want to know about an error.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 2 Aug 2001 12:29:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch for Improved Syntax Error Reporting"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > My 2c:\n> > >\n> > > Why not do tom's suggestion for the POSITION: n thing, and modify psql to\n> > > strip out that header, and output the relevant part of the sql with a caret\n> > > highlighting the error position.\n> > >\n> > > This will make it so that writers of the guis and format errors how they\n> > > like, and users of the most popular text interface (psql) get human-readable\n> > > results...\n> > >\n> > > ie. best of both worlds...\n> >\n> > OK, I withdraw my objection.\n> >\n> > Also, I like the idea of adding Hints and Function/line numbers to the\n> > output too. The offset of the error would work into that system.\n> \n> I guess the thing that bothered me is that 90% of our interfaces are\n> just going to throw the carret under the error line and this patch\n> requires us to modify all the client interfaces to do that, just to\n> allow 10% to customize their display.\n> \n> Now, I know we are going to allow elog() to generate filename, line\n> number, and function name as optional output information. We could have\n> a SET paramter like:\n> \n> SET SYSOUTPUT TO \"message, function, offset\"\n> \n> and this displays:\n> \n> ERROR: lkjasdf\n> FUNCTION: lkjasdf\n> OFFSET: 2343\n> \n> and we could have an option for HIGHLIGHT:\n> \n> HIGHLIGHT: FROM tab1, tab2\n> HIGHLIGHT: ^^^^\n> \n> We could control this via GUC or via the client startup code, and\n> clients could grab whatever they want to know about an error.\n\nI think it seems that we all have a general idea of where we want to go\nwith this. How about the following as a plan to get this ready for 7.2:\n\n1. Leave elog() alone.\n2. For syntax error reporting, and syntax error reporting alone, output\nthe error message in the new, multi-line format from the backend.\n3. Add functionality to psql for parsing the multi-line format error\nmessages. (Probably this will form a reusable function library that\nother utilities can use.)\n4. 
Modify psql to use this new functionality, but only for processing\nparse errors -- all other messages will be handled as is.\n\nThoughts?\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n",
"msg_date": "Thu, 02 Aug 2001 13:07:10 -0400",
"msg_from": "Neil Padgett <npadgett@redhat.com>",
"msg_from_op": true,
"msg_subject": "Re: Patch for Improved Syntax Error Reporting"
},
{
"msg_contents": "Neil Padgett <npadgett@redhat.com> writes:\n> I think it seems that we all have a general idea of where we want to go\n> with this. How about the following as a plan to get this ready for 7.2:\n\n> 1. Leave elog() alone.\n> 2. For syntax error reporting, and syntax error reporting alone, output\n> the error message in the new, multi-line format from the backend.\n> 3. Add functionality to psql for parsing the multi-line format error\n> messages. (Probably this will form a reusable function library that\n> other utilities can use.)\n> 4. Modify psql to use this new functionality, but only for processing\n> parse errors -- all other messages will be handled as is.\n\nThat seems like a good plan --- forward progress, and doable within the\n7.2 time frame.\n\nI think the thing we need to nail down next is the changes in the wire\nprotocol --- specifically, how the \"multi line format\" of error messages\nwill be defined. We don't necessarily need to define all the field\nkeywords yet, but we do need to have a clear idea of the format rules\nand the parsing algorithm that clients will use. This might be trickier\nthan it seems at first glance, because we need both backwards and\nforwards compatibility if we are to avoid a protocol version bump: not\nonly must old clients accept the new syntax (which is a no-op), but\nnew clients should behave reasonably well when fed messages from an old\nbackend (which might not adhere perfectly to the new syntax definition).\n\nI'd suggest drawing up a straw-man definition and posting it on\npghackers and/or pginterfaces for comment.\n\nAnother thing to think about (orthogonally to the wire protocol) is\nwhere on the client side to do the parsing. IMHO it'd be a good idea to\nput as much of it as we can into libpq, where it'll be available\nautomatically to non-psql applications. Again, though, compatibility\nis an issue.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 16:28:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch for Improved Syntax Error Reporting "
},
{
"msg_contents": "> That seems like a good plan --- forward progress, and doable within the\n> 7.2 time frame.\n\nJust a quick question - when does stuff need to be in for 7.2? I've been\nsitting on this ADD UNIQUE patch for a while. There's just a few little\nthings in its behaviour that need to be changed. Like, letting ppl add\nunique(a,b) and unique (b,a) plus warn if there's an existing non-unique\nindex...\n\nI'm just finding it really hard to find time to fiddle with it - how long do\nI have before 7.2alpha?\n\nChris\n\n",
"msg_date": "Fri, 3 Aug 2001 09:30:57 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: Patch for Improved Syntax Error Reporting "
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Just a quick question - when does stuff need to be in for 7.2?\n\nThere's not an agreed schedule yet. Personally I'd like to see us\nrelease a beta before I go off to LinuxWorld (8/28) ... it'd be nice\nto be able to say \"7.2 is in beta\" at the show. But that's just my\ntwo cents. There's been no core discussion yet beyond agreeing a\ncouple months ago that \"end of the summer feels about right\".\n\nBut anyway, if that seems like a reasonable goal then it'd be nice\nto have all new-feature patches in by mid-month or so, so that we\ncould buckle down to alpha-testing and bug fixes. How's that fit\nwith your schedule?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 22:55:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Schedule (was Re: [PATCHES] Patch for Improved Syntax Error\n Reporting)"
},
{
"msg_contents": "Neil Padgett writes:\n\n> 1. Leave elog() alone.\n> 2. For syntax error reporting, and syntax error reporting alone, output\n> the error message in the new, multi-line format from the backend.\n> 3. Add functionality to psql for parsing the multi-line format error\n> messages. (Probably this will form a reusable function library that\n> other utilities can use.)\n> 4. Modify psql to use this new functionality, but only for processing\n> parse errors -- all other messages will be handled as is.\n\nWe've had a discussion a month or two ago about rearranging error and\nnotice packets into key/value pairs, which would contain error codes,\ncommand string positions, and what not. I opined that this would require\na wire protocol change. I do not like the alternative suggestion of\nseparating these pairs by newlines or some such. That puts a rather\npeculiar (and possibly not automatically enforcable) restriction on the\nallowable contents of the message texts. Note that message texts can\ncontain parse or plan trees, which can contain all kinds of funny line\nnoise. A protocol change isn't the end of the world, so please consider\nit.\n\nBtw., there's something in the SQL standard about all of this. I can tell\nyou more once I'm done reading back emails.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 5 Aug 2001 22:37:18 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Patch for Improved Syntax Error Reporting"
}
] |
[
{
"msg_contents": "1. Just changed\n\tTAS(lock) to pthread_mutex_trylock(lock)\n\tS_LOCK(lock) to pthread_mutex_lock(lock)\n\tS_UNLOCK(lock) to pthread_mutex_unlock(lock)\n(and S_INIT_LOCK to share mutex-es between processes).\n\n2. pgbench was initialized with scale 10.\n SUN WS 10 (512Mb), Solaris 2.6 (I'm unable to test on E4500 -:()\n -B 16384, wal_files 8, wal_buffers 256,\n checkpoint_segments 64, checkpoint_timeout 3600\n 50 clients x 100 transactions\n (after initialization DB dir was saved and before each test\n copyed back and vacuum-ed).\n\n3. No difference.\n Mutex version maybe 0.5-1 % faster (eg: 37.264238 tps vs 37.083339 tps).\n\nSo - no gain, but no performance loss \"from using pthread library\"\n(I've also run tests with 1 client), at least on Solaris.\n\nAnd so - looks like we can use POSIX mutex-es and conditional variables\n(not semaphores; man pthread_cond_wait) and should implement light lmgr,\nprobably with priority locking.\n\nVadim\n",
"msg_date": "Wed, 1 Aug 2001 12:04:24 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Using POSIX mutex-es"
},
{
"msg_contents": "\nAdded to TODO.detail/performance.\n\n> 1. Just changed\n> \tTAS(lock) to pthread_mutex_trylock(lock)\n> \tS_LOCK(lock) to pthread_mutex_lock(lock)\n> \tS_UNLOCK(lock) to pthread_mutex_unlock(lock)\n> (and S_INIT_LOCK to share mutex-es between processes).\n> \n> 2. pgbench was initialized with scale 10.\n> SUN WS 10 (512Mb), Solaris 2.6 (I'm unable to test on E4500 -:()\n> -B 16384, wal_files 8, wal_buffers 256,\n> checkpoint_segments 64, checkpoint_timeout 3600\n> 50 clients x 100 transactions\n> (after initialization DB dir was saved and before each test\n> copyed back and vacuum-ed).\n> \n> 3. No difference.\n> Mutex version maybe 0.5-1 % faster (eg: 37.264238 tps vs 37.083339 tps).\n> \n> So - no gain, but no performance loss \"from using pthread library\"\n> (I've also run tests with 1 client), at least on Solaris.\n> \n> And so - looks like we can use POSIX mutex-es and conditional variables\n> (not semaphores; man pthread_cond_wait) and should implement light lmgr,\n> probably with priority locking.\n> \n> Vadim\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 6 Sep 2001 12:51:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Using POSIX mutex-es"
}
] |
[
{
"msg_contents": "Hello,\n\nI read in comp.lang.java.databases that help is needed with\ndevelopment of the JDBC driver. Can someone please provide some\npointers to what needs to be done?\n\nWhat are the open issues? Is it JDBC 2.0 compliance? PostgreSQL\n7.1 support?\n\nI've seen a lot of postings about BLOB problems, and\nJDBC-standard BLOB support is on the overall todo list\n(http://www.postgresql.org/docs/todo.html). Is that still open\nfor development? Is there anyone who has already looked at\nJDBC-standard BLOB support? If so, what are the challenges and\ncomplications?\n\nI can't promise anything yet, but I'll certainly consider\nhelping with PostgreSQL/JDBC development. I'm fluent in Java and\nhave developed a database driver before (for Oracle in a\nproprietary product). I'm about to spend quite a lot of time on\ndeveloping a web application in Java on top of PostgreSQL, so I\ncertainly have an interest in good JDBC support.\n\nIf you're not a developer but a user of the driver, what are\nyour current complaints or wish list items?\n\nRegards,\nRen� Pijlman\n",
"msg_date": "Wed, 01 Aug 2001 21:52:18 +0200",
"msg_from": "Rene Pijlman <rpijlman@wanadoo.nl>",
"msg_from_op": true,
"msg_subject": "What needs to be done?"
},
{
"msg_contents": "Rene,\n\nCertainly the blob support needs to be done. That seems to be high on\nthe list\n\nDave\n\n-----Original Message-----\nFrom: pgsql-jdbc-owner@postgresql.org\n[mailto:pgsql-jdbc-owner@postgresql.org] On Behalf Of Rene Pijlman\nSent: August 1, 2001 3:52 PM\nTo: pgsql-jdbc@postgresql.org\nSubject: [JDBC] What needs to be done?\n\n\nHello,\n\nI read in comp.lang.java.databases that help is needed with development\nof the JDBC driver. Can someone please provide some pointers to what\nneeds to be done?\n\nWhat are the open issues? Is it JDBC 2.0 compliance? PostgreSQL 7.1\nsupport?\n\nI've seen a lot of postings about BLOB problems, and JDBC-standard BLOB\nsupport is on the overall todo list\n(http://www.postgresql.org/docs/todo.html). Is that still open for\ndevelopment? Is there anyone who has already looked at JDBC-standard\nBLOB support? If so, what are the challenges and complications?\n\nI can't promise anything yet, but I'll certainly consider helping with\nPostgreSQL/JDBC development. I'm fluent in Java and have developed a\ndatabase driver before (for Oracle in a proprietary product). I'm about\nto spend quite a lot of time on developing a web application in Java on\ntop of PostgreSQL, so I certainly have an interest in good JDBC support.\n\nIf you're not a developer but a user of the driver, what are your\ncurrent complaints or wish list items?\n\nRegards,\nRen� Pijlman\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\n",
"msg_date": "Wed, 1 Aug 2001 16:11:44 -0400",
"msg_from": "\"Dave Cramer\" <Dave@micro-automation.net>",
"msg_from_op": false,
"msg_subject": "RE: What needs to be done?"
},
{
"msg_contents": "On Wed, 1 Aug 2001, Rene Pijlman wrote:\n\n> Hello,\n>\n> I read in comp.lang.java.databases that help is needed with\n> development of the JDBC driver. Can someone please provide some\n> pointers to what needs to be done?\n>\n> What are the open issues? Is it JDBC 2.0 compliance? PostgreSQL\n> 7.1 support?\n>\n> I've seen a lot of postings about BLOB problems, and\n> JDBC-standard BLOB support is on the overall todo list\n> (http://www.postgresql.org/docs/todo.html). Is that still open\n> for development? Is there anyone who has already looked at\n> JDBC-standard BLOB support? If so, what are the challenges and\n> complications?\n\nThe broken BLOB support is a complete showstopper for PostgreSQL in some\nenvironments, so that feels like a high priority.\n\nAs for JDBC 2.0, has anyone tried some sort of test suite for compliance?\nI know that some differences from the SQL standards make it impossible for\nPostgreSQL to be truly JDBC 2.0 compliant at the time, but it would be\nnice to know if we are as close to compliance as we can be.\n\n/Anders\n_____________________________________________________________________\nA n d e r s B e n g t s s o n ndrsbngtssn@yahoo.se\nStockholm, Sweden\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 1 Aug 2001 22:19:01 +0200 (CEST)",
"msg_from": "Anders Bengtsson <ndrsbngtssn@yahoo.se>",
"msg_from_op": false,
"msg_subject": "Re: What needs to be done?"
},
{
"msg_contents": "On Wednesday 01 August 2001 20:52, Rene Pijlman wrote:\n> Hello,\n>\n> I read in comp.lang.java.databases that help is needed with\n> development of the JDBC driver. Can someone please provide some\n> pointers to what needs to be done?\n>\n> What are the open issues? Is it JDBC 2.0 compliance? PostgreSQL\n> 7.1 support?\n>\n> I've seen a lot of postings about BLOB problems, and\n> JDBC-standard BLOB support is on the overall todo list\n> (http://www.postgresql.org/docs/todo.html). Is that still open\n> for development? Is there anyone who has already looked at\n> JDBC-standard BLOB support? If so, what are the challenges and\n> complications?\n>\n> I can't promise anything yet, but I'll certainly consider\n> helping with PostgreSQL/JDBC development. I'm fluent in Java and\n> have developed a database driver before (for Oracle in a\n> proprietary product). I'm about to spend quite a lot of time on\n> developing a web application in Java on top of PostgreSQL, so I\n> certainly have an interest in good JDBC support.\n>\n> If you're not a developer but a user of the driver, what are\n> your current complaints or wish list items?\n\nHi,\n\nI am working in a client application that uses JDBC to access several \ndatabases. The problem is that, as the PostgreSQL JDBC driver doesn't follow \nJDBC Standard I had to write some specific code for use it with PostgreSQL DB.\n\nIt would be very interesting to have a JDBC 2.0 compliant driver.\nI would surely try it and give some feedback!!!\n\nRicardo Maia\n\n>\n> Regards,\n> Ren� Pijlman\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n",
"msg_date": "Wed, 1 Aug 2001 22:49:40 +0100",
"msg_from": "Ricardo Maia <rmaia@criticalsoftware.com>",
"msg_from_op": false,
"msg_subject": "Re: What needs to be done?"
},
{
"msg_contents": "On Wed, 1 Aug 2001 22:49:40 +0100, Ricardo Maia wrote:\n>The problem is that, as the PostgreSQL JDBC driver doesn't \n>follow JDBC Standard I had to write some specific code for \n>use it with PostgreSQL DB.\n\nSo what exactly are the deviations from the standard that you\nencountered?\n\nRegards,\nRen� Pijlman\n",
"msg_date": "Thu, 02 Aug 2001 00:29:21 +0200",
"msg_from": "Rene Pijlman <rpijlman@wanadoo.nl>",
"msg_from_op": true,
"msg_subject": "Re: What needs to be done?"
},
{
"msg_contents": "For example when I call the method:\n\nDatabaseMetaData.getTypeInfo()\n\nI whould expect to see the SQL Type BLOB mapped as an oid.\n\nsee attach\n\nRicardo Maia\n\n\nOn Wednesday 01 August 2001 23:29, Rene Pijlman wrote:\n> On Wed, 1 Aug 2001 22:49:40 +0100, Ricardo Maia wrote:\n> >The problem is that, as the PostgreSQL JDBC driver doesn't\n> >follow JDBC Standard I had to write some specific code for\n> >use it with PostgreSQL DB.\n>\n> So what exactly are the deviations from the standard that you\n> encountered?\n>\n> Regards,\n> Ren� Pijlman\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org",
"msg_date": "Thu, 2 Aug 2001 00:58:15 +0100",
"msg_from": "Ricardo Maia <rmaia@criticalsoftware.com>",
"msg_from_op": false,
"msg_subject": "Re: What needs to be done?"
},
{
"msg_contents": "\nExamples please? We need to know what is broken/missing in order for it \nto be fixed. I know there are people out there who would be glad to fix \nbugs in the JDBC driver if they know about them. Please post the \nproblems you encountered to the jdbc mail list.\n\nthanks,\n--Barry\n\n\n\nRicardo Maia wrote:\n\n> On Wednesday 01 August 2001 20:52, Rene Pijlman wrote:\n> \n>>Hello,\n>>\n>>I read in comp.lang.java.databases that help is needed with\n>>development of the JDBC driver. Can someone please provide some\n>>pointers to what needs to be done?\n>>\n>>What are the open issues? Is it JDBC 2.0 compliance? PostgreSQL\n>>7.1 support?\n>>\n>>I've seen a lot of postings about BLOB problems, and\n>>JDBC-standard BLOB support is on the overall todo list\n>>(http://www.postgresql.org/docs/todo.html). Is that still open\n>>for development? Is there anyone who has already looked at\n>>JDBC-standard BLOB support? If so, what are the challenges and\n>>complications?\n>>\n>>I can't promise anything yet, but I'll certainly consider\n>>helping with PostgreSQL/JDBC development. I'm fluent in Java and\n>>have developed a database driver before (for Oracle in a\n>>proprietary product). I'm about to spend quite a lot of time on\n>>developing a web application in Java on top of PostgreSQL, so I\n>>certainly have an interest in good JDBC support.\n>>\n>>If you're not a developer but a user of the driver, what are\n>>your current complaints or wish list items?\n>>\n> \n> Hi,\n> \n> I am working in a client application that uses JDBC to access several \n> databases. 
The problem is that, as the PostgreSQL JDBC driver doesn't follow \n> JDBC Standard I had to write some specific code for use it with PostgreSQL DB.\n> \n> It would be very interesting to have a JDBC 2.0 compliant driver.\n> I would surely try it and give some feedback!!!\n> \n> Ricardo Maia\n> \n> \n>>Regards,\n>>René Pijlman\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 2: you can get off all lists at once with the unregister command\n>> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>>\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n\n\n",
"msg_date": "Wed, 01 Aug 2001 18:32:32 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: What needs to be done?"
},
{
"msg_contents": "Please send us all of the issues you have or know about. Just providing \nexamples of some of the problems will only get fixes for some of the \nproblems. What would be really useful is a list of all the issues you \nknow about. That way they can end up on the TODO list and get addressed.\n\nthanks,\n--Barry\n\nRicardo Maia wrote:\n\n> For example when I call the method:\n> \n> DatabaseMetaData.getTypeInfo()\n> \n> I whould expect to see the SQL Type BLOB mapped as an oid.\n> \n> see attach\n> \n> Ricardo Maia\n> \n> \n> On Wednesday 01 August 2001 23:29, Rene Pijlman wrote:\n> \n>>On Wed, 1 Aug 2001 22:49:40 +0100, Ricardo Maia wrote:\n>>\n>>>The problem is that, as the PostgreSQL JDBC driver doesn't\n>>>follow JDBC Standard I had to write some specific code for\n>>>use it with PostgreSQL DB.\n>>>\n>>So what exactly are the deviations from the standard that you\n>>encountered?\n>>\n>>Regards,\n>>Ren� Pijlman\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>>\n>>\n>>------------------------------------------------------------------------\n>>\n>>package databasetest;\n>>\n>>import java.sql.*;\n>>\n>>public class GetTypesInfo {\n>>\n>> public static void main(String args[ ]) {\n>>\n>> String url = \"jdbc:postgresql://127.0.0.1/test\";\n>>\n>> Connection con;\n>>\n>> DatabaseMetaData dbmd;\n>>\n>> try {\n>> Class.forName(\"org.postgresql.Driver\");\n>> } catch(java.lang.ClassNotFoundException e) {\n>> System.err.print(\"ClassNotFoundException: \");\n>> System.err.println(e.getMessage());\n>> }\n>>\n>> try {\n>> con = DriverManager.getConnection(url,\"bobby\", \"tareco\");\n>>\n>> dbmd = con.getMetaData();\n>>\n>> ResultSet rs = dbmd.getTypeInfo();\n>>\n>> while (rs.next()) {\n>>\n>> String typeName = rs.getString(\"TYPE_NAME\");\n>>\n>> short dataType = rs.getShort(\"DATA_TYPE\");\n>>\n>> String createParams = 
rs.getString(\"CREATE_PARAMS\");\n>>\n>> int nullable = rs.getInt(\"NULLABLE\");\n>>\n>> boolean caseSensitive = rs.getBoolean(\"CASE_SENSITIVE\");\n>>\n>> if(dataType != java.sql.Types.OTHER)\n>> {\n>> System.out.println(\"DBMS type \" + typeName + \":\");\n>> System.out.println(\" java.sql.Types: \" + typeName(dataType));\n>> System.out.print(\" parameters used to create: \");\n>> System.out.println(createParams);\n>> System.out.println(\" nullable?: \" + nullable);\n>> System.out.print(\" case sensitive?: \");\n>> System.out.println(caseSensitive);\n>> System.out.println(\"\");\n>> }\n>> }\n>>\n>> con.close();\n>> } catch(SQLException ex) {\n>> System.err.println(\"SQLException: \" + ex.getMessage());\n>> }\n>> }\n>>\n>>\n>> public static String typeName(int i)\n>> {\n>> switch(i){\n>> case java.sql.Types.ARRAY: return \"ARRAY\";\n>> case java.sql.Types.BIGINT: return \"BIGINT\";\n>> case java.sql.Types.BINARY: return \"BINARY\";\n>> case java.sql.Types.BIT: return \"BIT\";\n>> case java.sql.Types.BLOB: return \"BLOB\";\n>> case java.sql.Types.CHAR: return \"CHAR\";\n>> case java.sql.Types.CLOB: return \"CLOB\";\n>> case java.sql.Types.DATE: return \"DATE\";\n>> case java.sql.Types.DECIMAL: return \"DECIMAL\";\n>> case java.sql.Types.DISTINCT: return \"DISTINCT\";\n>> case java.sql.Types.DOUBLE: return \"DOUBLE\";\n>> case java.sql.Types.FLOAT: return \"FLOAT\";\n>> case java.sql.Types.INTEGER: return \"INTEGER\";\n>> case java.sql.Types.JAVA_OBJECT: return \"JAVA_OBJECT\";\n>> case java.sql.Types.LONGVARBINARY: return \"LONGVARBINARY\";\n>> case java.sql.Types.LONGVARCHAR: return \"LONGVARCHAR\";\n>> case java.sql.Types.NULL: return \"NULL\";\n>> case java.sql.Types.NUMERIC: return \"NUMERIC\";\n>> case java.sql.Types.OTHER: return \"OTHER\";\n>> case java.sql.Types.REAL: return \"REAL\";\n>> case java.sql.Types.REF: return \"REF\";\n>> case java.sql.Types.SMALLINT: return \"SMALLINT\";\n>> case java.sql.Types.STRUCT: return \"STRUCT\";\n>> case 
java.sql.Types.TIME: return \"TIME\";\n>> case java.sql.Types.TIMESTAMP: return \"TIMESTAMP\";\n>> case java.sql.Types.TINYINT: return \"TINYINT\";\n>> case java.sql.Types.VARBINARY: return \"VARBINARY\";\n>> case java.sql.Types.VARCHAR: return \"VARCHAR\";\n>> default: return \"\";\n>> }\n>> }\n>>}\n>>\n>>\n>>------------------------------------------------------------------------\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 5: Have you checked our extensive FAQ?\n>>\n>>http://www.postgresql.org/users-lounge/docs/faq.html\n>>\n>> GetTypesInfo.java\n>>\n>> Content-Type:\n>>\n>> text/x-java\n>> Content-Encoding:\n>>\n>> base64\n>>\n>>\n>> ------------------------------------------------------------------------\n>> Part 1.3\n>>\n>> Content-Type:\n>>\n>> text/plain\n>> Content-Encoding:\n>>\n>> binary\n>>\n>>\n\n\n",
"msg_date": "Wed, 01 Aug 2001 19:02:54 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: What needs to be done?"
},
{
"msg_contents": "I actually think the response for 'oid' is correct. It reports the oid \nas java type integer (which is the real datatype of the value stored). \nA column of type oid can be used for may different things. It can be \nused for blobs, but not all columns of type oid are used for blobs. \nAnother use of a column of type oid is to store foreign keys from one \ntable to another. Since all tables have a builtin column named 'oid' of \ntype oid, it is very convenient to use this value in foreign keys on \nother tables. Assuming that oid = blob would break those applications.\n\nI hope everyone that uses postgresql and jdbc understands that BLOB \nsupport is one area with many problems, some of which can be fixed in \nthe JDBC code, but others that will require better support in the \nunderlying database.\n\nthanks,\n--Barry\n\nRicardo Maia wrote:\n\n> For example when I call the method:\n> \n> DatabaseMetaData.getTypeInfo()\n> \n> I whould expect to see the SQL Type BLOB mapped as an oid.\n> \n> see attach\n> \n> Ricardo Maia\n> \n> \n> On Wednesday 01 August 2001 23:29, Rene Pijlman wrote:\n> \n>>On Wed, 1 Aug 2001 22:49:40 +0100, Ricardo Maia wrote:\n>>\n>>>The problem is that, as the PostgreSQL JDBC driver doesn't\n>>>follow JDBC Standard I had to write some specific code for\n>>>use it with PostgreSQL DB.\n>>>\n>>So what exactly are the deviations from the standard that you\n>>encountered?\n>>\n>>Regards,\n>>Ren� Pijlman\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>>\n>>\n>>------------------------------------------------------------------------\n>>\n>>package databasetest;\n>>\n>>import java.sql.*;\n>>\n>>public class GetTypesInfo {\n>>\n>> public static void main(String args[ ]) {\n>>\n>> String url = \"jdbc:postgresql://127.0.0.1/test\";\n>>\n>> Connection con;\n>>\n>> DatabaseMetaData dbmd;\n>>\n>> try {\n>> 
Class.forName(\"org.postgresql.Driver\");\n>> } catch(java.lang.ClassNotFoundException e) {\n>> System.err.print(\"ClassNotFoundException: \");\n>> System.err.println(e.getMessage());\n>> }\n>>\n>> try {\n>> con = DriverManager.getConnection(url,\"bobby\", \"tareco\");\n>>\n>> dbmd = con.getMetaData();\n>>\n>> ResultSet rs = dbmd.getTypeInfo();\n>>\n>> while (rs.next()) {\n>>\n>> String typeName = rs.getString(\"TYPE_NAME\");\n>>\n>> short dataType = rs.getShort(\"DATA_TYPE\");\n>>\n>> String createParams = rs.getString(\"CREATE_PARAMS\");\n>>\n>> int nullable = rs.getInt(\"NULLABLE\");\n>>\n>> boolean caseSensitive = rs.getBoolean(\"CASE_SENSITIVE\");\n>>\n>> if(dataType != java.sql.Types.OTHER)\n>> {\n>> System.out.println(\"DBMS type \" + typeName + \":\");\n>> System.out.println(\" java.sql.Types: \" + typeName(dataType));\n>> System.out.print(\" parameters used to create: \");\n>> System.out.println(createParams);\n>> System.out.println(\" nullable?: \" + nullable);\n>> System.out.print(\" case sensitive?: \");\n>> System.out.println(caseSensitive);\n>> System.out.println(\"\");\n>> }\n>> }\n>>\n>> con.close();\n>> } catch(SQLException ex) {\n>> System.err.println(\"SQLException: \" + ex.getMessage());\n>> }\n>> }\n>>\n>>\n>> public static String typeName(int i)\n>> {\n>> switch(i){\n>> case java.sql.Types.ARRAY: return \"ARRAY\";\n>> case java.sql.Types.BIGINT: return \"BIGINT\";\n>> case java.sql.Types.BINARY: return \"BINARY\";\n>> case java.sql.Types.BIT: return \"BIT\";\n>> case java.sql.Types.BLOB: return \"BLOB\";\n>> case java.sql.Types.CHAR: return \"CHAR\";\n>> case java.sql.Types.CLOB: return \"CLOB\";\n>> case java.sql.Types.DATE: return \"DATE\";\n>> case java.sql.Types.DECIMAL: return \"DECIMAL\";\n>> case java.sql.Types.DISTINCT: return \"DISTINCT\";\n>> case java.sql.Types.DOUBLE: return \"DOUBLE\";\n>> case java.sql.Types.FLOAT: return \"FLOAT\";\n>> case java.sql.Types.INTEGER: return \"INTEGER\";\n>> case java.sql.Types.JAVA_OBJECT: return 
\"JAVA_OBJECT\";\n>> case java.sql.Types.LONGVARBINARY: return \"LONGVARBINARY\";\n>> case java.sql.Types.LONGVARCHAR: return \"LONGVARCHAR\";\n>> case java.sql.Types.NULL: return \"NULL\";\n>> case java.sql.Types.NUMERIC: return \"NUMERIC\";\n>> case java.sql.Types.OTHER: return \"OTHER\";\n>> case java.sql.Types.REAL: return \"REAL\";\n>> case java.sql.Types.REF: return \"REF\";\n>> case java.sql.Types.SMALLINT: return \"SMALLINT\";\n>> case java.sql.Types.STRUCT: return \"STRUCT\";\n>> case java.sql.Types.TIME: return \"TIME\";\n>> case java.sql.Types.TIMESTAMP: return \"TIMESTAMP\";\n>> case java.sql.Types.TINYINT: return \"TINYINT\";\n>> case java.sql.Types.VARBINARY: return \"VARBINARY\";\n>> case java.sql.Types.VARCHAR: return \"VARCHAR\";\n>> default: return \"\";\n>> }\n>> }\n>>}\n>>\n>>\n>>------------------------------------------------------------------------\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 5: Have you checked our extensive FAQ?\n>>\n>>http://www.postgresql.org/users-lounge/docs/faq.html\n>>\n>> GetTypesInfo.java\n>>\n>> Content-Type:\n>>\n>> text/x-java\n>> Content-Encoding:\n>>\n>> base64\n>>\n>>\n>> ------------------------------------------------------------------------\n>> Part 1.3\n>>\n>> Content-Type:\n>>\n>> text/plain\n>> Content-Encoding:\n>>\n>> binary\n>>\n>>\n\n\n",
"msg_date": "Wed, 01 Aug 2001 19:16:45 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: What needs to be done?"
},
{
"msg_contents": "Anders,\n\nWhat aspects of BLOB support do you consider broken? Are these aspects \nthat are broken in the JDBC layer or are 'broken' at the server layer?\n\nthanks,\n--Barry\n\nAnders Bengtsson wrote:\n\n> On Wed, 1 Aug 2001, Rene Pijlman wrote:\n> \n> \n>>Hello,\n>>\n>>I read in comp.lang.java.databases that help is needed with\n>>development of the JDBC driver. Can someone please provide some\n>>pointers to what needs to be done?\n>>\n>>What are the open issues? Is it JDBC 2.0 compliance? PostgreSQL\n>>7.1 support?\n>>\n>>I've seen a lot of postings about BLOB problems, and\n>>JDBC-standard BLOB support is on the overall todo list\n>>(http://www.postgresql.org/docs/todo.html). Is that still open\n>>for development? Is there anyone who has already looked at\n>>JDBC-standard BLOB support? If so, what are the challenges and\n>>complications?\n>>\n> \n> The broken BLOB support is a complete showstopper for PostgreSQL in some\n> environments, so that feels like a high priority.\n> \n> As for JDBC 2.0, has anyone tried some sort of test suite for compliance?\n> I know that some differences from the SQL standards make it impossible for\n> PostgreSQL to be truly JDBC 2.0 compliant at the time, but it would be\n> nice to know if we are as close to compliance as we can be.\n> \n> /Anders\n> _____________________________________________________________________\n> A n d e r s B e n g t s s o n ndrsbngtssn@yahoo.se\n> Stockholm, Sweden\n> \n> \n> _________________________________________________________\n> Do You Yahoo!?\n> Get your free @yahoo.com address at http://mail.yahoo.com\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n> \n\n\n",
"msg_date": "Wed, 01 Aug 2001 19:19:33 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: What needs to be done?"
},
{
"msg_contents": "I actually consider the biggest problem the fact that the 'official' \npostgres jdbc website is very much out of date \n(http://jdbc.postgresql.org). (it doesn't even have the 7.1 drivers). \nI feel that either someone needs to maintain this page; or someone needs \nto create a new website and get the jdbc.postgresql.org DNS entry to \npoint to the new site, or the page should just be decommissioned. At \nthis point I think it is doing more harm than good.\n\nthanks,\n--Barry\n\nRene Pijlman wrote:\n\n> Hello,\n> \n> I read in comp.lang.java.databases that help is needed with\n> development of the JDBC driver. Can someone please provide some\n> pointers to what needs to be done?\n> \n> What are the open issues? Is it JDBC 2.0 compliance? PostgreSQL\n> 7.1 support?\n> \n> I've seen a lot of postings about BLOB problems, and\n> JDBC-standard BLOB support is on the overall todo list\n> (http://www.postgresql.org/docs/todo.html). Is that still open\n> for development? Is there anyone who has already looked at\n> JDBC-standard BLOB support? If so, what are the challenges and\n> complications?\n> \n> I can't promise anything yet, but I'll certainly consider\n> helping with PostgreSQL/JDBC development. I'm fluent in Java and\n> have developed a database driver before (for Oracle in a\n> proprietary product). I'm about to spend quite a lot of time on\n> developing a web application in Java on top of PostgreSQL, so I\n> certainly have an interest in good JDBC support.\n> \n> If you're not a developer but a user of the driver, what are\n> your current complaints or wish list items?\n> \n> Regards,\n> René Pijlman\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n\n\n",
"msg_date": "Wed, 01 Aug 2001 19:32:58 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: What needs to be done?"
},
{
"msg_contents": "\nThis appeared on the JDBC list. Do we need to address this?\n\n> I actually consider the biggest problem the fact the the 'official' \n> postgres jdbc website is very much out of date \n> (http://jdbc.postgresql.org). (it doesn't even have the 7.1 drivers). \n> I feel that either someone needs to maintain this page; or someone needs \n> to create a new website and get the jdbc.postgresql.org DNS entry to \n> point to the new site, or the page should just be decommisioned. At \n> this point I think it is doing more harm than good.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Aug 2001 22:41:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What needs to be done?"
},
{
"msg_contents": "Bruce,\n\nI am willing to make my site the \"official\" site. For now we could just\nrepoint the dns to jdbc.fastcrypt.com, or I could build them on my site,\nand ftp them into the postgres site?\n\nI think we do need to address it. A lot of people go there for answers.\n\nDave\n\n-----Original Message-----\nFrom: pgsql-jdbc-owner@postgresql.org\n[mailto:pgsql-jdbc-owner@postgresql.org] On Behalf Of Bruce Momjian\nSent: August 1, 2001 10:41 PM\nTo: Barry Lind\nCc: Rene Pijlman; pgsql-jdbc@postgresql.org; PostgreSQL-development\nSubject: [JDBC] Re: What needs to be done?\n\n\n\nThis appeared on the JDBC list. Do we need to address this?\n\n> I actually consider the biggest problem the fact the the 'official'\n> postgres jdbc website is very much out of date \n> (http://jdbc.postgresql.org). (it doesn't even have the 7.1 drivers).\n\n> I feel that either someone needs to maintain this page; or someone\nneeds \n> to create a new website and get the jdbc.postgresql.org DNS entry to \n> point to the new site, or the page should just be decommisioned. At \n> this point I think it is doing more harm than good.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n\n",
"msg_date": "Wed, 1 Aug 2001 23:15:58 -0400",
"msg_from": "\"Dave Cramer\" <Dave@micro-automation.net>",
"msg_from_op": false,
"msg_subject": "RE: Re: What needs to be done?"
},
{
"msg_contents": "\nLet's see what people say on hackers.\n\n> Bruce,\n> \n> I am willing to make my site the \"official\" site. For now we could just\n> repoint the dns to jdbc.fastcrypt.com, or I could build them on my site,\n> and ftp them into the postgres site?\n> \n> I think we do need to address it. A lot of people go there for answers.\n> \n> Dave\n> \n> -----Original Message-----\n> From: pgsql-jdbc-owner@postgresql.org\n> [mailto:pgsql-jdbc-owner@postgresql.org] On Behalf Of Bruce Momjian\n> Sent: August 1, 2001 10:41 PM\n> To: Barry Lind\n> Cc: Rene Pijlman; pgsql-jdbc@postgresql.org; PostgreSQL-development\n> Subject: [JDBC] Re: What needs to be done?\n> \n> \n> \n> This appeared on the JDBC list. Do we need to address this?\n> \n> > I actually consider the biggest problem the fact the the 'official'\n> > postgres jdbc website is very much out of date \n> > (http://jdbc.postgresql.org). (it doesn't even have the 7.1 drivers).\n> \n> > I feel that either someone needs to maintain this page; or someone\n> needs \n> > to create a new website and get the jdbc.postgresql.org DNS entry to \n> > point to the new site, or the page should just be decommisioned. At \n> > this point I think it is doing more harm than good.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania\n> 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 2 Aug 2001 00:17:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: What needs to be done?"
},
{
"msg_contents": "On Wed, 1 Aug 2001, Bruce Momjian wrote:\n\n>\n> This appeared on the JDBC list. Do we need to address this?\n\nWhere's Peter Mount? Isn't he the maintainer?\n\nVince.\n\n>\n> > I actually consider the biggest problem the fact the the 'official'\n> > postgres jdbc website is very much out of date\n> > (http://jdbc.postgresql.org). (it doesn't even have the 7.1 drivers).\n> > I feel that either someone needs to maintain this page; or someone needs\n> > to create a new website and get the jdbc.postgresql.org DNS entry to\n> > point to the new site, or the page should just be decommisioned. At\n> > this point I think it is doing more harm than good.\n>\n>\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 2 Aug 2001 06:41:53 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: What needs to be done?"
},
{
"msg_contents": "\n\nSo how would I map the BLOB java type in the corresponding SQL type?\n\nI want to create a table with a BLOB attribute, but I want my code to \nrun for PostgreSQL, Oracle and other DBs that handle BLOBs.\n\nSo first I had to map the BLOB in the corresponding DB SQL type and then \ncreate the table with an attribute of that SQL type.\n\nRicardo Maia\n\nOn Thursday 02 August 2001 03:16, Barry Lind wrote:\n> I actually think the response for 'oid' is correct. It reports the oid\n> as java type integer (which is the real datatype of the value stored).\n> A column of type oid can be used for may different things. It can be\n> used for blobs, but not all columns of type oid are used for blobs.\n> Another use of a column of type oid is to store foreign keys from one\n> table to another. Since all tables have a builtin column named 'oid' of\n> type oid, it is very convenient to use this value in foreign keys on\n> other tables. Assuming that oid = blob would break those applications.\n>\n> I hope everyone that uses postgresql and jdbc understands that BLOB\n> support is one area with many problems, some of which can be fixed in\n> the JDBC code, but others that will require better support in the\n> underlying database.\n>\n> thanks,\n> --Barry\n>\n> Ricardo Maia wrote:\n> > For example when I call the method:\n> >\n> > DatabaseMetaData.getTypeInfo()\n> >\n> > I whould expect to see the SQL Type BLOB mapped as an oid.\n> >\n> > see attach\n> >\n> > Ricardo Maia\n> >\n> > On Wednesday 01 August 2001 23:29, Rene Pijlman wrote:\n> >>On Wed, 1 Aug 2001 22:49:40 +0100, Ricardo Maia wrote:\n> >>>The problem is that, as the PostgreSQL JDBC driver doesn't\n> >>>follow JDBC Standard I had to write some specific code for\n> >>>use it with PostgreSQL DB.\n> >>\n> >>So what exactly are the deviations from the standard that you\n> >>encountered?\n> >>\n> >>Regards,\n> >>René Pijlman\n> >>\n> >>---------------------------(end of 
broadcast)---------------------------\n> >>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >>\n> >>\n> >>------------------------------------------------------------------------\n> >>\n> >>package databasetest;\n> >>\n> >>import java.sql.*;\n> >>\n> >>public class GetTypesInfo {\n> >>\n> >> public static void main(String args[ ]) {\n> >>\n> >> String url = \"jdbc:postgresql://127.0.0.1/test\";\n> >>\n> >> Connection con;\n> >>\n> >> DatabaseMetaData dbmd;\n> >>\n> >> try {\n> >> Class.forName(\"org.postgresql.Driver\");\n> >> } catch(java.lang.ClassNotFoundException e) {\n> >> System.err.print(\"ClassNotFoundException: \");\n> >> System.err.println(e.getMessage());\n> >> }\n> >>\n> >> try {\n> >> con = DriverManager.getConnection(url,\"bobby\", \"tareco\");\n> >>\n> >> dbmd = con.getMetaData();\n> >>\n> >> ResultSet rs = dbmd.getTypeInfo();\n> >>\n> >> while (rs.next()) {\n> >>\n> >> String typeName = rs.getString(\"TYPE_NAME\");\n> >>\n> >> short dataType = rs.getShort(\"DATA_TYPE\");\n> >>\n> >> String createParams = rs.getString(\"CREATE_PARAMS\");\n> >>\n> >> int nullable = rs.getInt(\"NULLABLE\");\n> >>\n> >> boolean caseSensitive = rs.getBoolean(\"CASE_SENSITIVE\");\n> >>\n> >> if(dataType != java.sql.Types.OTHER)\n> >> {\n> >> System.out.println(\"DBMS type \" + typeName + \":\");\n> >> System.out.println(\" java.sql.Types: \" +\n> >> typeName(dataType)); System.out.print(\" parameters used to create:\n> >> \");\n> >> System.out.println(createParams);\n> >> System.out.println(\" nullable?: \" + nullable);\n> >> System.out.print(\" case sensitive?: \");\n> >> System.out.println(caseSensitive);\n> >> System.out.println(\"\");\n> >> }\n> >> }\n> >>\n> >> con.close();\n> >> } catch(SQLException ex) {\n> >> System.err.println(\"SQLException: \" + ex.getMessage());\n> >> }\n> >> }\n> >>\n> >>\n> >> public static String typeName(int i)\n> >> {\n> >> switch(i){\n> >> case java.sql.Types.ARRAY: return \"ARRAY\";\n> >> case 
java.sql.Types.BIGINT: return \"BIGINT\";\n> >> case java.sql.Types.BINARY: return \"BINARY\";\n> >> case java.sql.Types.BIT: return \"BIT\";\n> >> case java.sql.Types.BLOB: return \"BLOB\";\n> >> case java.sql.Types.CHAR: return \"CHAR\";\n> >> case java.sql.Types.CLOB: return \"CLOB\";\n> >> case java.sql.Types.DATE: return \"DATE\";\n> >> case java.sql.Types.DECIMAL: return \"DECIMAL\";\n> >> case java.sql.Types.DISTINCT: return \"DISTINCT\";\n> >> case java.sql.Types.DOUBLE: return \"DOUBLE\";\n> >> case java.sql.Types.FLOAT: return \"FLOAT\";\n> >> case java.sql.Types.INTEGER: return \"INTEGER\";\n> >> case java.sql.Types.JAVA_OBJECT: return \"JAVA_OBJECT\";\n> >> case java.sql.Types.LONGVARBINARY: return \"LONGVARBINARY\";\n> >> case java.sql.Types.LONGVARCHAR: return \"LONGVARCHAR\";\n> >> case java.sql.Types.NULL: return \"NULL\";\n> >> case java.sql.Types.NUMERIC: return \"NUMERIC\";\n> >> case java.sql.Types.OTHER: return \"OTHER\";\n> >> case java.sql.Types.REAL: return \"REAL\";\n> >> case java.sql.Types.REF: return \"REF\";\n> >> case java.sql.Types.SMALLINT: return \"SMALLINT\";\n> >> case java.sql.Types.STRUCT: return \"STRUCT\";\n> >> case java.sql.Types.TIME: return \"TIME\";\n> >> case java.sql.Types.TIMESTAMP: return \"TIMESTAMP\";\n> >> case java.sql.Types.TINYINT: return \"TINYINT\";\n> >> case java.sql.Types.VARBINARY: return \"VARBINARY\";\n> >> case java.sql.Types.VARCHAR: return \"VARCHAR\";\n> >> default: return \"\";\n> >> }\n> >> }\n> >>}\n> >>\n> >>\n> >>------------------------------------------------------------------------\n> >>\n> >>\n> >>---------------------------(end of broadcast)---------------------------\n> >>TIP 5: Have you checked our extensive FAQ?\n> >>\n> >>http://www.postgresql.org/users-lounge/docs/faq.html\n> >>\n> >> GetTypesInfo.java\n> >>\n> >> Content-Type:\n> >>\n> >> text/x-java\n> >> Content-Encoding:\n> >>\n> >> base64\n> >>\n> >>\n> >> ------------------------------------------------------------------------\n> 
>> Part 1.3\n> >>\n> >> Content-Type:\n> >>\n> >> text/plain\n> >> Content-Encoding:\n> >>\n> >> binary\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n",
"msg_date": "Thu, 2 Aug 2001 11:51:49 +0100",
"msg_from": "Ricardo Maia <rmaia@criticalsoftware.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: What needs to be done?"
},
{
"msg_contents": "\n[Answering as Anders' Norwegian brother :-]\n \n* Barry Lind <barry@xythos.com> wrote:\n|\n| Anders,\n| \n| What aspects of BLOB support do you consider broken? Are these\n| aspects that are broken in the JDBC layer or are 'broken' at the\n| server layer?\n\nWe should have support for the bytea datatype, so applications are not\nrequired to wrap blob operations into a transaction. This has been \na showstopper for using PostgreSQL with the Turbine framework at Apache\nfor a long time. If we get that to work with PostgreSQL we will attract\nmore users and be a step closer to world domination ;-)\n\n\n-- \nGunnar Rønning - gunnar@polygnosis.com\nSenior Consultant, Polygnosis AS, http://www.polygnosis.com/\n",
"msg_date": "02 Aug 2001 16:01:37 +0200",
"msg_from": "Gunnar Rønning <gunnar@polygnosis.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: What needs to be done?"
},
{
"msg_contents": "* Barry Lind <barry@xythos.com> wrote:\n|\n| I actually think the response for 'oid' is correct. It reports the\n\n\nWell, maybe one could check if the oid is a foreign key referring to \nthe lo table. \n\n-- \nGunnar Rønning - gunnar@polygnosis.com\nSenior Consultant, Polygnosis AS, http://www.polygnosis.com/\n",
"msg_date": "02 Aug 2001 16:06:05 +0200",
"msg_from": "Gunnar Rønning <gunnar@polygnosis.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: What needs to be done?"
},
{
"msg_contents": "On Wed, Aug 01, 2001 at 10:19:01PM +0200, Anders Bengtsson wrote:\n> \n> As for JDBC 2.0, has anyone tried some sort of test suite for compliance?\n> I know that some differences from the SQL standards make it impossible for\n> PostgreSQL to be truly JDBC 2.0 compliant at the time, but it would be\n> nice to know if we are as close to compliance as we can be.\n> \n\nI have run the JDBC test suite[1] against the jdbc driver that comes\nwith the postgresql 7.1.2 release and the driver that is in CVS. About\n17% of the tests fail in both cases. From a glance, it looks like most\nof those failures are from unimplemented methods, either because support\nhasn't been added to the driver or there isn't any backend support.\nThere are some weird failures as well (i.e.: a test fails once, but\nsucceeds on later runs). Once I have combed through the results (and\nthere are a lot of results!), I will post a report here.\n\nLiam\n\n[1] http://java.sun.com/products/jdbc/jdbctestsuite-1_2_1.html\n http://java.sun.com/products/jdbc/download.html#jdbctestsuite\n\nThe instructions are fairly straightforward, but application server\nstuff is a bit vague. Using j2ee as the application server is what they\nexpect even though it doesn't say so in the docs. I can provide\ninstructions.\n\n-- \nLiam Stewart :: Red Hat Canada, Ltd. :: liams@redhat.com\n",
"msg_date": "Thu, 2 Aug 2001 10:33:59 -0400",
"msg_from": "Liam Stewart <liams@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: What needs to be done?"
},
{
"msg_contents": "> I actually consider the biggest problem the fact the the 'official' \n> postgres jdbc website is very much out of date \n> (http://jdbc.postgresql.org). (it doesn't even have the 7.1 drivers). \n> I feel that either someone needs to maintain this page; or someone needs \n> to create a new website and get the jdbc.postgresql.org DNS entry to \n> point to the new site, or the page should just be decommisioned. At \n> this point I think it is doing more harm than good.\n\nJust a followup. Peter has replied to a few people stating he is very\nbusy and wants someone to take over the jdbc.postgresql.org website. \nMarc, Vince, and others are working on it now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 2 Aug 2001 11:35:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: What needs to be done?"
},
{
"msg_contents": "Ricardo,\n\nThere are many other issues with postgres blobs that will not allow you \nto acheive your goal easily. You are going to need different \nimplementations per database type to deal with the differences between \nblob implementations across different databases. The one big hurdle you \nwill have with postgres blobs is the fact that when you delete the row \ncontaining the blob, it doesn't delete the blob. You have to issue a \nseparate delete blob request. This is very different than what happens \nin Oracle for example. This can be automated by adding triggers to the \ntable to do this, but by now you are very far from having a single code \nbase (at least the code that creates the tables and triggers) that \nsupports all of the different databases.\n\nthanks,\n--Barry\n\nRicardo Maia wrote:\n\n> \n> So how whould I map the BLOB java type in the corresponding SQL type?\n> \n> I want to create a table with a BLOB attribute, but I want that my code can \n> run for PostgreSQL, Oracle and other BD that handles BLOBs.\n> \n> So first I had to map the BLOB in the corresponding BD SQL type and then \n> create the table with an attribute of that SQL type.\n> \n> Ricardo Maia\n> \n> On Thursday 02 August 2001 03:16, Barry Lind wrote:\n> \n>>I actually think the response for 'oid' is correct. It reports the oid\n>>as java type integer (which is the real datatype of the value stored).\n>>A column of type oid can be used for may different things. It can be\n>>used for blobs, but not all columns of type oid are used for blobs.\n>>Another use of a column of type oid is to store foreign keys from one\n>>table to another. Since all tables have a builtin column named 'oid' of\n>>type oid, it is very convenient to use this value in foreign keys on\n>>other tables. 
Assuming that oid = blob would break those applications.\n>>\n>>I hope everyone that uses postgresql and jdbc understands that BLOB\n>>support is one area with many problems, some of which can be fixed in\n>>the JDBC code, but others that will require better support in the\n>>underlying database.\n>>\n>>thanks,\n>>--Barry\n>>\n>>Ricardo Maia wrote:\n>>\n>>>For example when I call the method:\n>>>\n>>>DatabaseMetaData.getTypeInfo()\n>>>\n>>>I whould expect to see the SQL Type BLOB mapped as an oid.\n>>>\n>>>see attach\n>>>\n>>>Ricardo Maia\n>>>\n>>>On Wednesday 01 August 2001 23:29, Rene Pijlman wrote:\n>>>\n>>>>On Wed, 1 Aug 2001 22:49:40 +0100, Ricardo Maia wrote:\n>>>>\n>>>>>The problem is that, as the PostgreSQL JDBC driver doesn't\n>>>>>follow JDBC Standard I had to write some specific code for\n>>>>>use it with PostgreSQL DB.\n>>>>>\n>>>>So what exactly are the deviations from the standard that you\n>>>>encountered?\n>>>>\n>>>>Regards,\n>>>>Ren� Pijlman\n>>>>\n>>>>---------------------------(end of broadcast)---------------------------\n>>>>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>>>>\n>>>>\n>>>>------------------------------------------------------------------------\n>>>>\n>>>>package databasetest;\n>>>>\n>>>>import java.sql.*;\n>>>>\n>>>>public class GetTypesInfo {\n>>>>\n>>>> public static void main(String args[ ]) {\n>>>>\n>>>> String url = \"jdbc:postgresql://127.0.0.1/test\";\n>>>>\n>>>> Connection con;\n>>>>\n>>>> DatabaseMetaData dbmd;\n>>>>\n>>>> try {\n>>>> Class.forName(\"org.postgresql.Driver\");\n>>>> } catch(java.lang.ClassNotFoundException e) {\n>>>> System.err.print(\"ClassNotFoundException: \");\n>>>> System.err.println(e.getMessage());\n>>>> }\n>>>>\n>>>> try {\n>>>> con = DriverManager.getConnection(url,\"bobby\", \"tareco\");\n>>>>\n>>>> dbmd = con.getMetaData();\n>>>>\n>>>> ResultSet rs = dbmd.getTypeInfo();\n>>>>\n>>>> while (rs.next()) {\n>>>>\n>>>> String typeName = 
rs.getString(\"TYPE_NAME\");\n>>>>\n>>>> short dataType = rs.getShort(\"DATA_TYPE\");\n>>>>\n>>>> String createParams = rs.getString(\"CREATE_PARAMS\");\n>>>>\n>>>> int nullable = rs.getInt(\"NULLABLE\");\n>>>>\n>>>> boolean caseSensitive = rs.getBoolean(\"CASE_SENSITIVE\");\n>>>>\n>>>> if(dataType != java.sql.Types.OTHER)\n>>>> {\n>>>> System.out.println(\"DBMS type \" + typeName + \":\");\n>>>> System.out.println(\" java.sql.Types: \" +\n>>>>typeName(dataType)); System.out.print(\" parameters used to create:\n>>>>\");\n>>>> System.out.println(createParams);\n>>>> System.out.println(\" nullable?: \" + nullable);\n>>>> System.out.print(\" case sensitive?: \");\n>>>> System.out.println(caseSensitive);\n>>>> System.out.println(\"\");\n>>>> }\n>>>> }\n>>>>\n>>>> con.close();\n>>>> } catch(SQLException ex) {\n>>>> System.err.println(\"SQLException: \" + ex.getMessage());\n>>>> }\n>>>> }\n>>>>\n>>>>\n>>>> public static String typeName(int i)\n>>>> {\n>>>> switch(i){\n>>>> case java.sql.Types.ARRAY: return \"ARRAY\";\n>>>> case java.sql.Types.BIGINT: return \"BIGINT\";\n>>>> case java.sql.Types.BINARY: return \"BINARY\";\n>>>> case java.sql.Types.BIT: return \"BIT\";\n>>>> case java.sql.Types.BLOB: return \"BLOB\";\n>>>> case java.sql.Types.CHAR: return \"CHAR\";\n>>>> case java.sql.Types.CLOB: return \"CLOB\";\n>>>> case java.sql.Types.DATE: return \"DATE\";\n>>>> case java.sql.Types.DECIMAL: return \"DECIMAL\";\n>>>> case java.sql.Types.DISTINCT: return \"DISTINCT\";\n>>>> case java.sql.Types.DOUBLE: return \"DOUBLE\";\n>>>> case java.sql.Types.FLOAT: return \"FLOAT\";\n>>>> case java.sql.Types.INTEGER: return \"INTEGER\";\n>>>> case java.sql.Types.JAVA_OBJECT: return \"JAVA_OBJECT\";\n>>>> case java.sql.Types.LONGVARBINARY: return \"LONGVARBINARY\";\n>>>> case java.sql.Types.LONGVARCHAR: return \"LONGVARCHAR\";\n>>>> case java.sql.Types.NULL: return \"NULL\";\n>>>> case java.sql.Types.NUMERIC: return \"NUMERIC\";\n>>>> case java.sql.Types.OTHER: return 
\"OTHER\";\n>>>> case java.sql.Types.REAL: return \"REAL\";\n>>>> case java.sql.Types.REF: return \"REF\";\n>>>> case java.sql.Types.SMALLINT: return \"SMALLINT\";\n>>>> case java.sql.Types.STRUCT: return \"STRUCT\";\n>>>> case java.sql.Types.TIME: return \"TIME\";\n>>>> case java.sql.Types.TIMESTAMP: return \"TIMESTAMP\";\n>>>> case java.sql.Types.TINYINT: return \"TINYINT\";\n>>>> case java.sql.Types.VARBINARY: return \"VARBINARY\";\n>>>> case java.sql.Types.VARCHAR: return \"VARCHAR\";\n>>>> default: return \"\";\n>>>> }\n>>>> }\n>>>>}\n>>>>\n>>>>\n>>>>------------------------------------------------------------------------\n>>>>\n>>>>\n>>>>---------------------------(end of broadcast)---------------------------\n>>>>TIP 5: Have you checked our extensive FAQ?\n>>>>\n>>>>http://www.postgresql.org/users-lounge/docs/faq.html\n>>>>\n>>>>GetTypesInfo.java\n>>>>\n>>>>Content-Type:\n>>>>\n>>>>text/x-java\n>>>>Content-Encoding:\n>>>>\n>>>>base64\n>>>>\n>>>>\n>>>>------------------------------------------------------------------------\n>>>>Part 1.3\n>>>>\n>>>>Content-Type:\n>>>>\n>>>>text/plain\n>>>>Content-Encoding:\n>>>>\n>>>>binary\n>>>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n\n\n",
"msg_date": "Thu, 02 Aug 2001 09:37:36 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: What needs to be done?"
},
{
"msg_contents": "Bruce Momjian wrote:\n>Peter has replied to a few people stating he is very\n>busy and wants someone to take over the jdbc.postgresql.org website. \n>Marc, Vince, and others are working on it now.\n\nDo you need help?\n\nRegards,\nRené Pijlman\n",
"msg_date": "Thu, 02 Aug 2001 18:51:22 +0200",
"msg_from": "Rene Pijlman <rpijlman@wanadoo.nl>",
"msg_from_op": true,
"msg_subject": "Re: Re: What needs to be done?"
},
{
"msg_contents": "On Thu, 2 Aug 2001, Rene Pijlman wrote:\n\n> Bruce Momjian wrote:\n> >Peter has replied to a few people stating he is very\n> >busy and wants someone to take over the jdbc.postgresql.org website.\n> >Marc, Vince, and others are working on it now.\n>\n> Do you need help?\n\nWe will very soon. I'll hang onto your address and get back to you.\nIf for some reason you don't hear from me in the next couple of weeks,\ndrop me a note in case I forgot about you. You can either mail to this\naddress or to webmaster@postgresql.org.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 2 Aug 2001 12:57:47 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: What needs to be done?"
},
{
"msg_contents": "Ricardo,\n\nThere are actually a couple of reasons why the jdbc driver can't do this:\n\n1) The client doesn't know that the column being deleted is a blob. All \nit can know is that the data type of the column is oid. Oids can be \nused for many reasons, one of which is blobs. The code can't assume \nthat just because a column is of type oid that it represents a blob.\n\n2) The fact that the delete of the blob is separate from the delete of \nthe row is actually a useful feature. The postgres blob feature \nessentially treats the blob as an independent object from the table row \nthat holds a pointer to it. Thus you can have multiple rows of data in \nthe same or even different tables point to the same blob. Because of \nthis feature, you can't assume that when any one row is deleted that the \ncorresponding blob should be deleted (that decision requires an \nunderstanding of the application data model).\n\n\nPostgres as of 7.1 has 'toast' which provides a different mechanism for \nstoring large objects. 'toast' doesn't have the 'multiple rows can \nreference the same blob' feature, and therefore 'toast' does delete the \nlarge object when the row is deleted. However 'toast' has other \ndeficiencies that prevent it from being used in the JDBC driver for \nBLOBs. It is my hope that in the future with some additional \nfunctionality on the server that the JDBC driver can have a reasonable \nBLOB implementation that uses the new 'toast' functionality, and the \ncurrent blob implementation is deprecated.\n\nthanks,\n--Barry\n\nRicardo Maia wrote:\n\n> Why can't the JDBC Driver deal with the delete of the Blob? 
From the user \n> point of view the BLOB is an attribute of that row and should be \n> inserted/deleted with the rest of the row.\n> \n> The fact that postgres uses another entity to store the blob is an \n> implementation issue ...\n> \n> Regards,\n> \n> Ricardo\n> \n> On Thursday 02 August 2001 17:37, Barry Lind wrote:\n> \n>>Ricardo,\n>>\n>>There are many other issues with postgres blobs that will not allow you\n>>to acheive your goal easily. You are going to need different\n>>implementations per database type to deal with the differences between\n>>blob implementations across different databases. The one big hurdle you\n>>will have with postgres blobs is the fact that when you delete the row\n>>containing the blob, it doesn't delete the blob. You have to issue a\n>>separate delete blob request. This is very different than what happens\n>>in Oracle for example. This can be automated by adding triggers to the\n>>table to do this, but by now you are very far from having a single code\n>>base (at least the code that creates the tables and triggers) that\n>>supports all of the different databases.\n>>\n>>thanks,\n>>--Barry\n>>\n>>Ricardo Maia wrote:\n>>\n>>>So how whould I map the BLOB java type in the corresponding SQL type?\n>>>\n>>>I want to create a table with a BLOB attribute, but I want that my code\n>>>can run for PostgreSQL, Oracle and other BD that handles BLOBs.\n>>>\n>>>So first I had to map the BLOB in the corresponding BD SQL type and then\n>>>create the table with an attribute of that SQL type.\n>>>\n>>>Ricardo Maia\n>>>\n>>>On Thursday 02 August 2001 03:16, Barry Lind wrote:\n>>>\n>>>>I actually think the response for 'oid' is correct. It reports the oid\n>>>>as java type integer (which is the real datatype of the value stored).\n>>>>A column of type oid can be used for may different things. 
It can be\n>>>>used for blobs, but not all columns of type oid are used for blobs.\n>>>>Another use of a column of type oid is to store foreign keys from one\n>>>>table to another. Since all tables have a builtin column named 'oid' of\n>>>>type oid, it is very convenient to use this value in foreign keys on\n>>>>other tables. Assuming that oid = blob would break those applications.\n>>>>\n>>>>I hope everyone that uses postgresql and jdbc understands that BLOB\n>>>>support is one area with many problems, some of which can be fixed in\n>>>>the JDBC code, but others that will require better support in the\n>>>>underlying database.\n>>>>\n>>>>thanks,\n>>>>--Barry\n>>>>\n>>>>Ricardo Maia wrote:\n>>>>\n>>>>>For example when I call the method:\n>>>>>\n>>>>>DatabaseMetaData.getTypeInfo()\n>>>>>\n>>>>>I whould expect to see the SQL Type BLOB mapped as an oid.\n>>>>>\n>>>>>see attach\n>>>>>\n>>>>>Ricardo Maia\n>>>>>\n>>>>>On Wednesday 01 August 2001 23:29, Rene Pijlman wrote:\n>>>>>\n>>>>>>On Wed, 1 Aug 2001 22:49:40 +0100, Ricardo Maia wrote:\n>>>>>>\n>>>>>>>The problem is that, as the PostgreSQL JDBC driver doesn't\n>>>>>>>follow JDBC Standard I had to write some specific code for\n>>>>>>>use it with PostgreSQL DB.\n>>>>>>>\n>>>>>>So what exactly are the deviations from the standard that you\n>>>>>>encountered?\n>>>>>>\n>>>>>>Regards,\n>>>>>>Ren� Pijlman\n>>>>>>\n>>>>>>---------------------------(end of\n>>>>>>broadcast)--------------------------- TIP 1: subscribe and unsubscribe\n>>>>>>commands go to majordomo@postgresql.org\n>>>>>>\n>>>>>>\n>>>>>>-----------------------------------------------------------------------\n>>>>>>-\n>>>>>>\n>>>>>>package databasetest;\n>>>>>>\n>>>>>>import java.sql.*;\n>>>>>>\n>>>>>>public class GetTypesInfo {\n>>>>>>\n>>>>>>public static void main(String args[ ]) {\n>>>>>>\n>>>>>> String url = \"jdbc:postgresql://127.0.0.1/test\";\n>>>>>>\n>>>>>> Connection con;\n>>>>>>\n>>>>>> DatabaseMetaData dbmd;\n>>>>>>\n>>>>>> try {\n>>>>>> 
Class.forName(\"org.postgresql.Driver\");\n>>>>>> } catch(java.lang.ClassNotFoundException e) {\n>>>>>> System.err.print(\"ClassNotFoundException: \");\n>>>>>> System.err.println(e.getMessage());\n>>>>>> }\n>>>>>>\n>>>>>> try {\n>>>>>> con = DriverManager.getConnection(url,\"bobby\", \"tareco\");\n>>>>>>\n>>>>>> dbmd = con.getMetaData();\n>>>>>>\n>>>>>> ResultSet rs = dbmd.getTypeInfo();\n>>>>>>\n>>>>>> while (rs.next()) {\n>>>>>>\n>>>>>> String typeName = rs.getString(\"TYPE_NAME\");\n>>>>>>\n>>>>>> short dataType = rs.getShort(\"DATA_TYPE\");\n>>>>>>\n>>>>>> String createParams = rs.getString(\"CREATE_PARAMS\");\n>>>>>>\n>>>>>> int nullable = rs.getInt(\"NULLABLE\");\n>>>>>>\n>>>>>> boolean caseSensitive = rs.getBoolean(\"CASE_SENSITIVE\");\n>>>>>>\n>>>>>> if(dataType != java.sql.Types.OTHER)\n>>>>>> {\n>>>>>> System.out.println(\"DBMS type \" + typeName + \":\");\n>>>>>> System.out.println(\" java.sql.Types: \" +\n>>>>>>typeName(dataType)); System.out.print(\" parameters used to create:\n>>>>>>\");\n>>>>>> System.out.println(createParams);\n>>>>>> System.out.println(\" nullable?: \" + nullable);\n>>>>>> System.out.print(\" case sensitive?: \");\n>>>>>> System.out.println(caseSensitive);\n>>>>>> System.out.println(\"\");\n>>>>>> }\n>>>>>> }\n>>>>>>\n>>>>>> con.close();\n>>>>>> } catch(SQLException ex) {\n>>>>>> System.err.println(\"SQLException: \" + ex.getMessage());\n>>>>>> }\n>>>>>>}\n>>>>>>\n>>>>>>\n>>>>>>public static String typeName(int i)\n>>>>>>{\n>>>>>> switch(i){\n>>>>>> case java.sql.Types.ARRAY: return \"ARRAY\";\n>>>>>> case java.sql.Types.BIGINT: return \"BIGINT\";\n>>>>>> case java.sql.Types.BINARY: return \"BINARY\";\n>>>>>> case java.sql.Types.BIT: return \"BIT\";\n>>>>>> case java.sql.Types.BLOB: return \"BLOB\";\n>>>>>> case java.sql.Types.CHAR: return \"CHAR\";\n>>>>>> case java.sql.Types.CLOB: return \"CLOB\";\n>>>>>> case java.sql.Types.DATE: return \"DATE\";\n>>>>>> case java.sql.Types.DECIMAL: return \"DECIMAL\";\n>>>>>> case 
java.sql.Types.DISTINCT: return \"DISTINCT\";\n>>>>>> case java.sql.Types.DOUBLE: return \"DOUBLE\";\n>>>>>> case java.sql.Types.FLOAT: return \"FLOAT\";\n>>>>>> case java.sql.Types.INTEGER: return \"INTEGER\";\n>>>>>> case java.sql.Types.JAVA_OBJECT: return \"JAVA_OBJECT\";\n>>>>>> case java.sql.Types.LONGVARBINARY: return \"LONGVARBINARY\";\n>>>>>> case java.sql.Types.LONGVARCHAR: return \"LONGVARCHAR\";\n>>>>>> case java.sql.Types.NULL: return \"NULL\";\n>>>>>> case java.sql.Types.NUMERIC: return \"NUMERIC\";\n>>>>>> case java.sql.Types.OTHER: return \"OTHER\";\n>>>>>> case java.sql.Types.REAL: return \"REAL\";\n>>>>>> case java.sql.Types.REF: return \"REF\";\n>>>>>> case java.sql.Types.SMALLINT: return \"SMALLINT\";\n>>>>>> case java.sql.Types.STRUCT: return \"STRUCT\";\n>>>>>> case java.sql.Types.TIME: return \"TIME\";\n>>>>>> case java.sql.Types.TIMESTAMP: return \"TIMESTAMP\";\n>>>>>> case java.sql.Types.TINYINT: return \"TINYINT\";\n>>>>>> case java.sql.Types.VARBINARY: return \"VARBINARY\";\n>>>>>> case java.sql.Types.VARCHAR: return \"VARCHAR\";\n>>>>>> default: return \"\";\n>>>>>> }\n>>>>>>}\n>>>>>>}\n>>>>>>\n>>>>>>\n>>>>>>-----------------------------------------------------------------------\n>>>>>>-\n>>>>>>\n>>>>>>\n>>>>>>---------------------------(end of\n>>>>>>broadcast)--------------------------- TIP 5: Have you checked our\n>>>>>>extensive FAQ?\n>>>>>>\n>>>>>>http://www.postgresql.org/users-lounge/docs/faq.html\n>>>>>>\n>>>>>>GetTypesInfo.java\n>>>>>>\n>>>>>>Content-Type:\n>>>>>>\n>>>>>>text/x-java\n>>>>>>Content-Encoding:\n>>>>>>\n>>>>>>base64\n>>>>>>\n>>>>>>\n>>>>>>-----------------------------------------------------------------------\n>>>>>>- Part 1.3\n>>>>>>\n>>>>>>Content-Type:\n>>>>>>\n>>>>>>text/plain\n>>>>>>Content-Encoding:\n>>>>>>\n>>>>>>binary\n>>>>>>\n>>>>---------------------------(end of broadcast)---------------------------\n>>>>TIP 1: subscribe and unsubscribe commands go to 
majordomo@postgresql.org\n>>>>\n>>>---------------------------(end of broadcast)---------------------------\n>>>TIP 2: you can get off all lists at once with the unregister command\n>>> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>>>\n> \n\n\n",
"msg_date": "Thu, 02 Aug 2001 10:58:16 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: What needs to be done?"
},
{
"msg_contents": "\nWhy can't the JDBC Driver deal with the delete of the Blob? From the user \npoint of view the BLOB is an attribute of that row and should be \ninserted/deleted with the rest of the row.\n\nThe fact that postgres uses another entity to store the blob is an \nimplementation issue ...\n\nRegards,\n\nRicardo\n\nOn Thursday 02 August 2001 17:37, Barry Lind wrote:\n> Ricardo,\n>\n> There are many other issues with postgres blobs that will not allow you\n> to acheive your goal easily. You are going to need different\n> implementations per database type to deal with the differences between\n> blob implementations across different databases. The one big hurdle you\n> will have with postgres blobs is the fact that when you delete the row\n> containing the blob, it doesn't delete the blob. You have to issue a\n> separate delete blob request. This is very different than what happens\n> in Oracle for example. This can be automated by adding triggers to the\n> table to do this, but by now you are very far from having a single code\n> base (at least the code that creates the tables and triggers) that\n> supports all of the different databases.\n>\n> thanks,\n> --Barry\n>\n> Ricardo Maia wrote:\n> > So how whould I map the BLOB java type in the corresponding SQL type?\n> >\n> > I want to create a table with a BLOB attribute, but I want that my code\n> > can run for PostgreSQL, Oracle and other BD that handles BLOBs.\n> >\n> > So first I had to map the BLOB in the corresponding BD SQL type and then\n> > create the table with an attribute of that SQL type.\n> >\n> > Ricardo Maia\n> >\n> > On Thursday 02 August 2001 03:16, Barry Lind wrote:\n> >>I actually think the response for 'oid' is correct. It reports the oid\n> >>as java type integer (which is the real datatype of the value stored).\n> >>A column of type oid can be used for may different things. 
It can be\n> >>used for blobs, but not all columns of type oid are used for blobs.\n> >>Another use of a column of type oid is to store foreign keys from one\n> >>table to another. Since all tables have a builtin column named 'oid' of\n> >>type oid, it is very convenient to use this value in foreign keys on\n> >>other tables. Assuming that oid = blob would break those applications.\n> >>\n> >>I hope everyone that uses postgresql and jdbc understands that BLOB\n> >>support is one area with many problems, some of which can be fixed in\n> >>the JDBC code, but others that will require better support in the\n> >>underlying database.\n> >>\n> >>thanks,\n> >>--Barry\n> >>\n> >>Ricardo Maia wrote:\n> >>>For example when I call the method:\n> >>>\n> >>>DatabaseMetaData.getTypeInfo()\n> >>>\n> >>>I whould expect to see the SQL Type BLOB mapped as an oid.\n> >>>\n> >>>see attach\n> >>>\n> >>>Ricardo Maia\n> >>>\n> >>>On Wednesday 01 August 2001 23:29, Rene Pijlman wrote:\n> >>>>On Wed, 1 Aug 2001 22:49:40 +0100, Ricardo Maia wrote:\n> >>>>>The problem is that, as the PostgreSQL JDBC driver doesn't\n> >>>>>follow JDBC Standard I had to write some specific code for\n> >>>>>use it with PostgreSQL DB.\n> >>>>\n> >>>>So what exactly are the deviations from the standard that you\n> >>>>encountered?\n> >>>>\n> >>>>Regards,\n> >>>>Ren� Pijlman\n> >>>>\n> >>>>---------------------------(end of\n> >>>> broadcast)--------------------------- TIP 1: subscribe and unsubscribe\n> >>>> commands go to majordomo@postgresql.org\n> >>>>\n> >>>>\n> >>>>-----------------------------------------------------------------------\n> >>>>-\n> >>>>\n> >>>>package databasetest;\n> >>>>\n> >>>>import java.sql.*;\n> >>>>\n> >>>>public class GetTypesInfo {\n> >>>>\n> >>>> public static void main(String args[ ]) {\n> >>>>\n> >>>> String url = \"jdbc:postgresql://127.0.0.1/test\";\n> >>>>\n> >>>> Connection con;\n> >>>>\n> >>>> DatabaseMetaData dbmd;\n> >>>>\n> >>>> try {\n> >>>> 
Class.forName(\"org.postgresql.Driver\");\n> >>>> } catch(java.lang.ClassNotFoundException e) {\n> >>>> System.err.print(\"ClassNotFoundException: \");\n> >>>> System.err.println(e.getMessage());\n> >>>> }\n> >>>>\n> >>>> try {\n> >>>> con = DriverManager.getConnection(url,\"bobby\", \"tareco\");\n> >>>>\n> >>>> dbmd = con.getMetaData();\n> >>>>\n> >>>> ResultSet rs = dbmd.getTypeInfo();\n> >>>>\n> >>>> while (rs.next()) {\n> >>>>\n> >>>> String typeName = rs.getString(\"TYPE_NAME\");\n> >>>>\n> >>>> short dataType = rs.getShort(\"DATA_TYPE\");\n> >>>>\n> >>>> String createParams = rs.getString(\"CREATE_PARAMS\");\n> >>>>\n> >>>> int nullable = rs.getInt(\"NULLABLE\");\n> >>>>\n> >>>> boolean caseSensitive = rs.getBoolean(\"CASE_SENSITIVE\");\n> >>>>\n> >>>> if(dataType != java.sql.Types.OTHER)\n> >>>> {\n> >>>> System.out.println(\"DBMS type \" + typeName + \":\");\n> >>>> System.out.println(\" java.sql.Types: \" +\n> >>>>typeName(dataType)); System.out.print(\" parameters used to create:\n> >>>>\");\n> >>>> System.out.println(createParams);\n> >>>> System.out.println(\" nullable?: \" + nullable);\n> >>>> System.out.print(\" case sensitive?: \");\n> >>>> System.out.println(caseSensitive);\n> >>>> System.out.println(\"\");\n> >>>> }\n> >>>> }\n> >>>>\n> >>>> con.close();\n> >>>> } catch(SQLException ex) {\n> >>>> System.err.println(\"SQLException: \" + ex.getMessage());\n> >>>> }\n> >>>> }\n> >>>>\n> >>>>\n> >>>> public static String typeName(int i)\n> >>>> {\n> >>>> switch(i){\n> >>>> case java.sql.Types.ARRAY: return \"ARRAY\";\n> >>>> case java.sql.Types.BIGINT: return \"BIGINT\";\n> >>>> case java.sql.Types.BINARY: return \"BINARY\";\n> >>>> case java.sql.Types.BIT: return \"BIT\";\n> >>>> case java.sql.Types.BLOB: return \"BLOB\";\n> >>>> case java.sql.Types.CHAR: return \"CHAR\";\n> >>>> case java.sql.Types.CLOB: return \"CLOB\";\n> >>>> case java.sql.Types.DATE: return \"DATE\";\n> >>>> case java.sql.Types.DECIMAL: return \"DECIMAL\";\n> >>>> case 
java.sql.Types.DISTINCT: return \"DISTINCT\";\n> >>>> case java.sql.Types.DOUBLE: return \"DOUBLE\";\n> >>>> case java.sql.Types.FLOAT: return \"FLOAT\";\n> >>>> case java.sql.Types.INTEGER: return \"INTEGER\";\n> >>>> case java.sql.Types.JAVA_OBJECT: return \"JAVA_OBJECT\";\n> >>>> case java.sql.Types.LONGVARBINARY: return \"LONGVARBINARY\";\n> >>>> case java.sql.Types.LONGVARCHAR: return \"LONGVARCHAR\";\n> >>>> case java.sql.Types.NULL: return \"NULL\";\n> >>>> case java.sql.Types.NUMERIC: return \"NUMERIC\";\n> >>>> case java.sql.Types.OTHER: return \"OTHER\";\n> >>>> case java.sql.Types.REAL: return \"REAL\";\n> >>>> case java.sql.Types.REF: return \"REF\";\n> >>>> case java.sql.Types.SMALLINT: return \"SMALLINT\";\n> >>>> case java.sql.Types.STRUCT: return \"STRUCT\";\n> >>>> case java.sql.Types.TIME: return \"TIME\";\n> >>>> case java.sql.Types.TIMESTAMP: return \"TIMESTAMP\";\n> >>>> case java.sql.Types.TINYINT: return \"TINYINT\";\n> >>>> case java.sql.Types.VARBINARY: return \"VARBINARY\";\n> >>>> case java.sql.Types.VARCHAR: return \"VARCHAR\";\n> >>>> default: return \"\";\n> >>>> }\n> >>>> }\n> >>>>}\n> >>>>\n> >>>>\n> >>>>-----------------------------------------------------------------------\n> >>>>-\n> >>>>\n> >>>>\n> >>>>---------------------------(end of\n> >>>> broadcast)--------------------------- TIP 5: Have you checked our\n> >>>> extensive FAQ?\n> >>>>\n> >>>>http://www.postgresql.org/users-lounge/docs/faq.html\n> >>>>\n> >>>>GetTypesInfo.java\n> >>>>\n> >>>>Content-Type:\n> >>>>\n> >>>>text/x-java\n> >>>>Content-Encoding:\n> >>>>\n> >>>>base64\n> >>>>\n> >>>>\n> >>>>-----------------------------------------------------------------------\n> >>>>- Part 1.3\n> >>>>\n> >>>>Content-Type:\n> >>>>\n> >>>>text/plain\n> >>>>Content-Encoding:\n> >>>>\n> >>>>binary\n> >>\n> >>---------------------------(end of broadcast)---------------------------\n> >>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n> > 
---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n----------------------\nCritical Software, SA \nUrbanização Quinta da Fonte\nLote 15, TZ, r/c H\n3030 Coimbra\nTelef.: 239 708 520\nTelem.: 938 314 605\n----------------------\n",
"msg_date": "Thu, 2 Aug 2001 19:16:00 +0100",
"msg_from": "Ricardo Maia <rmaia@criticalsoftware.com>",
"msg_from_op": false,
"msg_subject": "Re: What needs to be done?"
},
{
"msg_contents": "On Wed, 1 Aug 2001, Barry Lind wrote:\n\n> Anders,\n>\n> What aspects of BLOB support do you consider broken? Are these aspects\n> that are broken in the JDBC layer or are 'broken' at the server layer?\n\nNow I've looked at the code and located the problem:\n\nThe method setBinaryStream(...) in PreparedStatement always assumes that\nit's a BLOB that we want to write, but it should really be able to write\nany kind of field. It should for instance be possible to write a VARCHAR\nfrom an InputStream, but currently you will end up with an integer (the\nOID) in the field instead of the data.\n\nI was first surprised to find that getBinaryStream(...) in ResultSet\n*does* support both BLOBs and ordinary values, but then realized that it\ncan do this because it knows the type of the field. In PreparedStatement\nnothing is known about the fields.\n\nI'm not sure where this problem belongs. It is not impossible for the JDBC\ndriver to find out about the field types, but it may be slow to do so.\n\n/Anders\n\n_____________________________________________________________________\nA n d e r s B e n g t s s o n ndrsbngtssn@yahoo.se\nStockholm, Sweden\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 2 Aug 2001 20:47:16 +0200 (CEST)",
"msg_from": "Anders Bengtsson <ndrsbngtssn@yahoo.se>",
"msg_from_op": false,
"msg_subject": "Re: Re: What needs to be done?"
},
{
"msg_contents": "On Thu, 2 Aug 2001, Barry Lind wrote:\n\n> There are actually a couple of reasons why the jdbc driver can't do this:\n>\n> 1) The client doesn't know that the column being deleted is a blob. All\n> it can know is that the data type of the column is oid. Oids can be\n> used for many reasons, one of which is blobs. The code can't assume\n> that just because a column is of type oid that it represents a blob.\n\nI'm thinking that it should be possible to create some kind of\ncompatability mode for the driver. If you knew for sure that you we're\nonly using OIDs for BLOBs, then that assumption would be safe (?). Would\nsomething like that be possible to create, or am I missing something\nhere?\nOf course, this could add too much complexity to the driver.\n\n/Anders\n_____________________________________________________________________\nA n d e r s B e n g t s s o n ndrsbngtssn@yahoo.se\nStockholm, Sweden\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 2 Aug 2001 21:03:55 +0200 (CEST)",
"msg_from": "Anders Bengtsson <ndrsbngtssn@yahoo.se>",
"msg_from_op": false,
"msg_subject": "Re: Re: What needs to be done?"
},
{
"msg_contents": "\nwe are currently evaluating several solutions, and, once we've fully\nfigured out what we are going to do, will announce such ... at the time, I\ncan imagine that help will be much appreciated :)\n\nOn Thu, 2 Aug 2001, Rene Pijlman wrote:\n\n> Bruce Momjian wrote:\n> >Peter has replied to a few people stating he is very\n> >busy and wants someone to take over the jdbc.postgresql.org website.\n> >Marc, Vince, and others are working on it now.\n>\n> Do you need help?\n>\n> Regards,\n> Ren� Pijlman\n>\n\n",
"msg_date": "Thu, 2 Aug 2001 15:26:31 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: What needs to be done?"
},
{
"msg_contents": "\nWould someone summarize what items need to be added to the TODO list.\n\n> Ricardo,\n> \n> There are many other issues with postgres blobs that will not allow you \n> to acheive your goal easily. You are going to need different \n> implementations per database type to deal with the differences between \n> blob implementations across different databases. The one big hurdle you \n> will have with postgres blobs is the fact that when you delete the row \n> containing the blob, it doesn't delete the blob. You have to issue a \n> separate delete blob request. This is very different than what happens \n> in Oracle for example. This can be automated by adding triggers to the \n> table to do this, but by now you are very far from having a single code \n> base (at least the code that creates the tables and triggers) that \n> supports all of the different databases.\n> \n> thanks,\n> --Barry\n> \n> Ricardo Maia wrote:\n> \n> > \n> > So how whould I map the BLOB java type in the corresponding SQL type?\n> > \n> > I want to create a table with a BLOB attribute, but I want that my code can \n> > run for PostgreSQL, Oracle and other BD that handles BLOBs.\n> > \n> > So first I had to map the BLOB in the corresponding BD SQL type and then \n> > create the table with an attribute of that SQL type.\n> > \n> > Ricardo Maia\n> > \n> > On Thursday 02 August 2001 03:16, Barry Lind wrote:\n> > \n> >>I actually think the response for 'oid' is correct. It reports the oid\n> >>as java type integer (which is the real datatype of the value stored).\n> >>A column of type oid can be used for may different things. It can be\n> >>used for blobs, but not all columns of type oid are used for blobs.\n> >>Another use of a column of type oid is to store foreign keys from one\n> >>table to another. Since all tables have a builtin column named 'oid' of\n> >>type oid, it is very convenient to use this value in foreign keys on\n> >>other tables. 
Assuming that oid = blob would break those applications.\n> >>\n> >>I hope everyone that uses postgresql and jdbc understands that BLOB\n> >>support is one area with many problems, some of which can be fixed in\n> >>the JDBC code, but others that will require better support in the\n> >>underlying database.\n> >>\n> >>thanks,\n> >>--Barry\n> >>\n> >>Ricardo Maia wrote:\n> >>\n> >>>For example when I call the method:\n> >>>\n> >>>DatabaseMetaData.getTypeInfo()\n> >>>\n> >>>I whould expect to see the SQL Type BLOB mapped as an oid.\n> >>>\n> >>>see attach\n> >>>\n> >>>Ricardo Maia\n> >>>\n> >>>On Wednesday 01 August 2001 23:29, Rene Pijlman wrote:\n> >>>\n> >>>>On Wed, 1 Aug 2001 22:49:40 +0100, Ricardo Maia wrote:\n> >>>>\n> >>>>>The problem is that, as the PostgreSQL JDBC driver doesn't\n> >>>>>follow JDBC Standard I had to write some specific code for\n> >>>>>use it with PostgreSQL DB.\n> >>>>>\n> >>>>So what exactly are the deviations from the standard that you\n> >>>>encountered?\n> >>>>\n> >>>>Regards,\n> >>>>Ren? 
Pijlman\n> >>>>\n> >>>>---------------------------(end of broadcast)---------------------------\n> >>>>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >>>>\n> >>>>\n> >>>>------------------------------------------------------------------------\n> >>>>\n> >>>>package databasetest;\n> >>>>\n> >>>>import java.sql.*;\n> >>>>\n> >>>>public class GetTypesInfo {\n> >>>>\n> >>>> public static void main(String args[ ]) {\n> >>>>\n> >>>> String url = \"jdbc:postgresql://127.0.0.1/test\";\n> >>>>\n> >>>> Connection con;\n> >>>>\n> >>>> DatabaseMetaData dbmd;\n> >>>>\n> >>>> try {\n> >>>> Class.forName(\"org.postgresql.Driver\");\n> >>>> } catch(java.lang.ClassNotFoundException e) {\n> >>>> System.err.print(\"ClassNotFoundException: \");\n> >>>> System.err.println(e.getMessage());\n> >>>> }\n> >>>>\n> >>>> try {\n> >>>> con = DriverManager.getConnection(url,\"bobby\", \"tareco\");\n> >>>>\n> >>>> dbmd = con.getMetaData();\n> >>>>\n> >>>> ResultSet rs = dbmd.getTypeInfo();\n> >>>>\n> >>>> while (rs.next()) {\n> >>>>\n> >>>> String typeName = rs.getString(\"TYPE_NAME\");\n> >>>>\n> >>>> short dataType = rs.getShort(\"DATA_TYPE\");\n> >>>>\n> >>>> String createParams = rs.getString(\"CREATE_PARAMS\");\n> >>>>\n> >>>> int nullable = rs.getInt(\"NULLABLE\");\n> >>>>\n> >>>> boolean caseSensitive = rs.getBoolean(\"CASE_SENSITIVE\");\n> >>>>\n> >>>> if(dataType != java.sql.Types.OTHER)\n> >>>> {\n> >>>> System.out.println(\"DBMS type \" + typeName + \":\");\n> >>>> System.out.println(\" java.sql.Types: \" +\n> >>>>typeName(dataType)); System.out.print(\" parameters used to create:\n> >>>>\");\n> >>>> System.out.println(createParams);\n> >>>> System.out.println(\" nullable?: \" + nullable);\n> >>>> System.out.print(\" case sensitive?: \");\n> >>>> System.out.println(caseSensitive);\n> >>>> System.out.println(\"\");\n> >>>> }\n> >>>> }\n> >>>>\n> >>>> con.close();\n> >>>> } catch(SQLException ex) {\n> >>>> System.err.println(\"SQLException: \" + 
ex.getMessage());\n> >>>> }\n> >>>> }\n> >>>>\n> >>>>\n> >>>> public static String typeName(int i)\n> >>>> {\n> >>>> switch(i){\n> >>>> case java.sql.Types.ARRAY: return \"ARRAY\";\n> >>>> case java.sql.Types.BIGINT: return \"BIGINT\";\n> >>>> case java.sql.Types.BINARY: return \"BINARY\";\n> >>>> case java.sql.Types.BIT: return \"BIT\";\n> >>>> case java.sql.Types.BLOB: return \"BLOB\";\n> >>>> case java.sql.Types.CHAR: return \"CHAR\";\n> >>>> case java.sql.Types.CLOB: return \"CLOB\";\n> >>>> case java.sql.Types.DATE: return \"DATE\";\n> >>>> case java.sql.Types.DECIMAL: return \"DECIMAL\";\n> >>>> case java.sql.Types.DISTINCT: return \"DISTINCT\";\n> >>>> case java.sql.Types.DOUBLE: return \"DOUBLE\";\n> >>>> case java.sql.Types.FLOAT: return \"FLOAT\";\n> >>>> case java.sql.Types.INTEGER: return \"INTEGER\";\n> >>>> case java.sql.Types.JAVA_OBJECT: return \"JAVA_OBJECT\";\n> >>>> case java.sql.Types.LONGVARBINARY: return \"LONGVARBINARY\";\n> >>>> case java.sql.Types.LONGVARCHAR: return \"LONGVARCHAR\";\n> >>>> case java.sql.Types.NULL: return \"NULL\";\n> >>>> case java.sql.Types.NUMERIC: return \"NUMERIC\";\n> >>>> case java.sql.Types.OTHER: return \"OTHER\";\n> >>>> case java.sql.Types.REAL: return \"REAL\";\n> >>>> case java.sql.Types.REF: return \"REF\";\n> >>>> case java.sql.Types.SMALLINT: return \"SMALLINT\";\n> >>>> case java.sql.Types.STRUCT: return \"STRUCT\";\n> >>>> case java.sql.Types.TIME: return \"TIME\";\n> >>>> case java.sql.Types.TIMESTAMP: return \"TIMESTAMP\";\n> >>>> case java.sql.Types.TINYINT: return \"TINYINT\";\n> >>>> case java.sql.Types.VARBINARY: return \"VARBINARY\";\n> >>>> case java.sql.Types.VARCHAR: return \"VARCHAR\";\n> >>>> default: return \"\";\n> >>>> }\n> >>>> }\n> >>>>}\n> >>>>\n> >>>>\n> >>>>------------------------------------------------------------------------\n> >>>>\n> >>>>\n> >>>>---------------------------(end of broadcast)---------------------------\n> >>>>TIP 5: Have you checked our extensive FAQ?\n> >>>>\n> 
>>>>http://www.postgresql.org/users-lounge/docs/faq.html\n> >>>>\n> >>>>GetTypesInfo.java\n> >>>>\n> >>>>Content-Type:\n> >>>>\n> >>>>text/x-java\n> >>>>Content-Encoding:\n> >>>>\n> >>>>base64\n> >>>>\n> >>>>\n> >>>>------------------------------------------------------------------------\n> >>>>Part 1.3\n> >>>>\n> >>>>Content-Type:\n> >>>>\n> >>>>text/plain\n> >>>>Content-Encoding:\n> >>>>\n> >>>>binary\n> >>>>\n> >>---------------------------(end of broadcast)---------------------------\n> >>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >>\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> > \n> > \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 2 Aug 2001 18:20:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: What needs to be done?"
},
{
"msg_contents": "This is what I think needs to be done wrt large objects and binary data \nsupport (and hopefully what I plan to do sometime before 7.2 beta, but \nif anyone else feels up to it, feel free to do any of these things \nyourself):\n\nAdd support for the postgresql binary datatype 'bytea'. This means \nadding the logic to encode/decode binary data into the ascii escape \nsequences used by postgresql. This also means that the \ngetBytes()/setBytes() methods will be changed to interact with the bytea \ndatatype instead of the current mapping to large objects. This is a non \nbackwardly compatable change in functionality that makes the driver more \ncompliant with the spec.\n\nSecond I plan to change the getBinaryStream()/setBinaryStream() methods \nto likewise work on the bytea datatype instead of large objects. Given \nthat toast allows bytea values to be upto 1G in size a stream interface \nmakes sense. This change also breaks backward compatibilty, but is more \nspec compliant. The spec implies that these methods are for accessing \nregular binary data (i.e. bytea), and that the \ngetBlob().getBinaryStream() is for binary large object access.\n\nThird, I plan to change the getCharacterStream()/setCharacterStream() \nmethods to work against text datatypes (text, char, varchar) instead of \nlarge objects. Same reason and same consequences as for the binary \nstream methods.\n\nThat will leave getBlob()/setBlob() and getClob()/setClob() as the \nsupported way of accessing large objects (along with the LargeObject \nclass itself). Which my reading of the spec says is correct.\n\nNow in the long run, I would even like to change \ngetBlob()/setBlob()/getClob()/setClob() methods to no longer support the \nold large object functionality of postgresql but to move these to \nsupport a 'toast' version of large objects (once the corresponding \naccess methods to toasted columns exist so that toasted columns can \nreally be treated as large objects). 
This would solve the problem with \ndeletes not deleting the large objects. At that time the only way to \naccess the old large object functionality would be through the \nfunctionality provided by the LargeObject class.\n\nAs you can probably guess I don't like the current implementation of \nlarge objects in postgresql (and I haven't even gotten into the security \nissues they have). I believe that 'toast' will provide the \nfunctionality of large objects in the future in a way that is compatible \nwith other databases and the JDBC Blob/Clob interface. Until the time \nthat toast is ready, I believe we need to make the above changes and \ndocument very clearly the issues with the current large object \nfunctionality.\n\nthanks,\n--Barry\n\n\n\nGunnar Rønning wrote:\n\n> [Answering as Anders' Norwegian brother :-]\n> \n> * Barry Lind <barry@xythos.com> wrote:\n> |\n> | Anders,\n> | \n> | What aspects of BLOB support do you consider broken? Are these\n> | aspects that are broken in the JDBC layer or are 'broken' at the\n> | server layer?\n> \n> We should have support for the bytea datatype, so applications are not\n> required to wrap blob operations into a transaction. This has been \n> a showstopper for using PostgreSQL with the Turbine framework at Apache\n> for a long time. If we get that to work with PostgreSQL we will attract\n> more users and be a step closer to world domination ;-)\n> \n> \n> \n\n\n",
"msg_date": "Thu, 02 Aug 2001 22:59:11 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: What needs to be done?"
},
{
"msg_contents": "Barry Lind <barry@xythos.com> writes:\n> This is what I think needs to be done wrt large objects and binary data \n> support ...\n> [ much snipped ]\n> As you can probably guess I don't like the current implementation of \n> large objects in postgresql\n\nYup, I got that ;-).\n\nWhile these seem like good changes in the long run, I'm concerned about\nbreaking existing client apps wholesale. Is it feasible to have a\nbackwards-compatibility mode? I wouldn't even insist that it be the\ndefault behavior --- but adding a one-line \"set backwards-compatible\nmode\" kind of call seems better than major rewrites, for apps that\ndepend on the old behavior.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Aug 2001 02:30:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: What needs to be done? "
},
{
"msg_contents": "If people feel that backwards compatibiliy is important I would suggest \nit be done in the following way:\n\nA new connection parameter named 'compatible' be defined whose default \nvalue is 7.2 (i.e new functionality). But you could set compatible=7.1 \nto revert back to the old functionality. (This is how Oracle deals with \nsimilar issues in its code base). This parameter could then be set \neither in the JDBC URL (i.e. \njdbc:postgresql://localhost:5432:template1?compatible=7.1) or passed \nexplicily in the connect() method.\n\nthanks,\n--Barry\n\nTom Lane wrote:\n\n> Barry Lind <barry@xythos.com> writes:\n> \n>>This is what I think needs to be done wrt large objects and binary data \n>>support ...\n>>[ much snipped ]\n>>As you can probably guess I don't like the current implementation of \n>>large objects in postgresql\n>>\n> \n> Yup, I got that ;-).\n> \n> While these seem like good changes in the long run, I'm concerned about\n> breaking existing client apps wholesale. Is it feasible to have a\n> backwards-compatibility mode? I wouldn't even insist that it be the\n> default behavior --- but adding a one-line \"set backwards-compatible\n> mode\" kind of call seems better than major rewrites, for apps that\n> depend on the old behavior.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n\n\n",
"msg_date": "Fri, 03 Aug 2001 00:01:26 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: What needs to be done?"
},
{
"msg_contents": "\nIndex: Connection.java\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/interfaces/jdbc/org/postgresql/Co\nnnection.java,v\nretrieving revision 1.21\ndiff -f -r1.21 Connection.java\nc1039 1040\n info.put(\"user\", PG_USER);\n info.put(\"password\", PG_PASSWORD);\n\n\n",
"msg_date": "Fri, 3 Aug 2001 11:17:19 -0400",
"msg_from": "\"Dave Cramer\" <Dave@micro-automation.net>",
"msg_from_op": false,
"msg_subject": "Patch for jdbc1 compile"
},
{
"msg_contents": "> If people feel that backwards compatibiliy is important I would suggest \n> it be done in the following way:\n> \n> A new connection parameter named 'compatible' be defined whose default \n> value is 7.2 (i.e new functionality). But you could set compatible=7.1 \n> to revert back to the old functionality. (This is how Oracle deals with \n> similar issues in its code base). This parameter could then be set \n> either in the JDBC URL (i.e. \n> jdbc:postgresql://localhost:5432:template1?compatible=7.1) or passed \n> explicily in the connect() method.\n\nGUC seems to be the way to control these things. It can be set in\npostgresql.conf and via a SET command.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 3 Aug 2001 11:49:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: What needs to be done?"
},
{
"msg_contents": "GUC is how this type of stuff is controlled on the server, but I don't \nknow of any examples where it controlls client only functionality. Why \nwould you want parameters on the server that the server doesn't use?\n\nthanks,\n--Barry\n\nBruce Momjian wrote:\n\n>>If people feel that backwards compatibiliy is important I would suggest \n>>it be done in the following way:\n>>\n>>A new connection parameter named 'compatible' be defined whose default \n>>value is 7.2 (i.e new functionality). But you could set compatible=7.1 \n>>to revert back to the old functionality. (This is how Oracle deals with \n>>similar issues in its code base). This parameter could then be set \n>>either in the JDBC URL (i.e. \n>>jdbc:postgresql://localhost:5432:template1?compatible=7.1) or passed \n>>explicily in the connect() method.\n>>\n> \n> GUC seems to be the way to control these things. It can be set in\n> postgresql.conf and via a SET command.\n> \n> \n\n\n",
"msg_date": "Fri, 03 Aug 2001 10:15:25 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: What needs to be done?"
},
{
"msg_contents": "> GUC is how this type of stuff is controlled on the server, but I don't \n> know of any examples where it controlls client only functionality. Why \n> would you want parameters on the server that the server doesn't use?\n\n\nOh, I didn't realize this was client side too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 3 Aug 2001 13:50:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: What needs to be done?"
},
{
"msg_contents": "Barry,\n\nOn Thu, 02 Aug 2001 22:59:11 -0700, you wrote:\n>Now in the long run, I would even like to change \n>getBlob()/setBlob()/getClob()/setClob() methods to no longer support the \n>old large object functionality of postgresql but to move these to \n>support a 'toast' version of large objects (once the corresponding \n>access methods to toasted columns exist so that toasted columns can \n>really be treated as large objects). \n\nCould you elaborate on that please? What new access methods are\nneeded on toasted columns? Does this require backend support?\nFE/BE protocol changes? \n\nWould it be conceivable to implement the Lob JDBC interface on\nthe current implementation of toasted columns (in both the\nbackend and the protocol), e.g. using a OID/column name pair as\nthe \"logical pointer\" needed by JDBC?\n\nAlso, I'm wondering if it would be wise to re-architect Lob\nsupport in the JDBC interface only? Someone creating a Lob\nthrough JDBC may have a hard time accessing his data using\nanother interface that not yet supports efficient access methods\non huge toasted data. I definitely agree Blob->toast is the most\ndesirable mapping from a JDBC point of view, but I'm not sure if\nthis should be changed only in JDBC.\n\nRegards,\nRen� Pijlman\n",
"msg_date": "Fri, 10 Aug 2001 12:07:59 +0200",
"msg_from": "Rene Pijlman <rpijlman@wanadoo.nl>",
"msg_from_op": true,
"msg_subject": "Re: Re: What needs to be done?"
},
{
"msg_contents": "\nVince, has this been addressed?\n\n\n> On Wed, 1 Aug 2001, Bruce Momjian wrote:\n> \n> >\n> > This appeared on the JDBC list. Do we need to address this?\n> \n> Where's Peter Mount? Isn't he the maintainer?\n> \n> Vince.\n> \n> >\n> > > I actually consider the biggest problem the fact the the 'official'\n> > > postgres jdbc website is very much out of date\n> > > (http://jdbc.postgresql.org). (it doesn't even have the 7.1 drivers).\n> > > I feel that either someone needs to maintain this page; or someone needs\n> > > to create a new website and get the jdbc.postgresql.org DNS entry to\n> > > point to the new site, or the page should just be decommisioned. At\n> > > this point I think it is doing more harm than good.\n> >\n> >\n> \n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 6 Sep 2001 12:54:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: What needs to be done?"
},
{
"msg_contents": "\nAdded to TODO:\n\n* -Make binary interface for TOAST columns (base64)\n* Make file in/out interface for TOAST columns, similar to large object\n interface (force out-of-line storage and no compression) \n \n\n> This is what I think needs to be done wrt large objects and binary data \n> support (and hopefully what I plan to do sometime before 7.2 beta, but \n> if anyone else feels up to it, feel free to do any of these things \n> yourself):\n> \n> Add support for the postgresql binary datatype 'bytea'. This means \n> adding the logic to encode/decode binary data into the ascii escape \n> sequences used by postgresql. This also means that the \n> getBytes()/setBytes() methods will be changed to interact with the bytea \n> datatype instead of the current mapping to large objects. This is a non \n> backwardly compatable change in functionality that makes the driver more \n> compliant with the spec.\n> \n> Second I plan to change the getBinaryStream()/setBinaryStream() methods \n> to likewise work on the bytea datatype instead of large objects. Given \n> that toast allows bytea values to be upto 1G in size a stream interface \n> makes sense. This change also breaks backward compatibilty, but is more \n> spec compliant. The spec implies that these methods are for accessing \n> regular binary data (i.e. bytea), and that the \n> getBlob().getBinaryStream() is for binary large object access.\n> \n> Third, I plan to change the getCharacterStream()/setCharacterStream() \n> methods to work against text datatypes (text, char, varchar) instead of \n> large objects. Same reason and same consequences as for the binary \n> stream methods.\n> \n> That will leave getBlob()/setBlob() and getClob()/setClob() as the \n> supported way of accessing large objects (along with the LargeObject \n> class itself). 
Which my reading of the spec says is correct.\n> \n> Now in the long run, I would even like to change \n> getBlob()/setBlob()/getClob()/setClob() methods to no longer support the \n> old large object functionality of postgresql but to move these to \n> support a 'toast' version of large objects (once the corresponding \n> access methods to toasted columns exist so that toasted columns can \n> really be treated as large objects). This would solve the problem with \n> deletes not deleting the large objects. At that time the only way to \n> access the old large object functionality would be through the \n> functionality provided by the LargeObject class.\n> \n> As you can probably guess I don't like the current implementation of \n> large objects in postgresql (and I haven't even gotten into the security \n> issues they have). I believe that 'toast' will provide the \n> functionality of large objects in the future in a way that is compatable \n> with other databases and the JDBC Blob/Clob interface. Until the time \n> that toast is ready, I believe we need to make the above changes and \n> document very clearly the issues with the current large object \n> functionality.\n> \n> thanks,\n> --Barry\n> \n> \n> \n> Gunnar R?nning wrote:\n> \n> > [Answering as Anders Norwegian brother :-]\n> > \n> > * Barry Lind <barry@xythos.com> wrote:\n> > |\n> > | Anders,\n> > | \n> > | What aspects of BLOB support do you consider broken? Are these\n> > | aspects that are broken in the JDBC layer or are 'broken' at the\n> > | server layer?\n> > \n> > We should have support for the bytea datatype, so applications are not\n> > required to wrap blob operations into a transaction. This has been \n> > a showstopper for using PostgreSQL with the Turbine framework at Apache\n> > for a long time. 
If we get that to work with PostgreSQL we will attract\n> > more users and be a step closer to world domination ;-)\n> > \n> > \n> > \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 6 Sep 2001 12:59:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: What needs to be done?"
},
{
"msg_contents": "On Thu, 6 Sep 2001, Bruce Momjian wrote:\n\n>\n> Vince, has this been addressed?\n\nYes, Barry Lind is handling the website. I expect a few days to\na week for him to be ready to go live. Sorry Bruce, I meant to\nCC you on it.\n\nVince.\n\n>\n>\n> > On Wed, 1 Aug 2001, Bruce Momjian wrote:\n> >\n> > >\n> > > This appeared on the JDBC list. Do we need to address this?\n> >\n> > Where's Peter Mount? Isn't he the maintainer?\n> >\n> > Vince.\n> >\n> > >\n> > > > I actually consider the biggest problem the fact the the 'official'\n> > > > postgres jdbc website is very much out of date\n> > > > (http://jdbc.postgresql.org). (it doesn't even have the 7.1 drivers).\n> > > > I feel that either someone needs to maintain this page; or someone needs\n> > > > to create a new website and get the jdbc.postgresql.org DNS entry to\n> > > > point to the new site, or the page should just be decommisioned. At\n> > > > this point I think it is doing more harm than good.\n> > >\n> > >\n> >\n> > --\n> > ==========================================================================\n> > Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> > 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> > Online Campground Directory http://www.camping-usa.com\n> > Online Giftshop Superstore http://www.cloudninegifts.com\n> > ==========================================================================\n> >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://www.postgresql.org/search.mpl\n> >\n>\n>\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore 
http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 6 Sep 2001 13:01:08 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: What needs to be done?"
},
{
"msg_contents": "\nBarry just got some info back to me to create his account, so he should\nalso be online later tonight ...\n\nOn Thu, 6 Sep 2001, Vince Vielhaber wrote:\n\n> On Thu, 6 Sep 2001, Bruce Momjian wrote:\n>\n> >\n> > Vince, has this been addressed?\n>\n> Yes, Barry Lind is handling the website. I expect a few days to\n> a week for him to be ready to go live. Sorry Bruce, I meant to\n> CC you on it.\n>\n> Vince.\n>\n> >\n> >\n> > > On Wed, 1 Aug 2001, Bruce Momjian wrote:\n> > >\n> > > >\n> > > > This appeared on the JDBC list. Do we need to address this?\n> > >\n> > > Where's Peter Mount? Isn't he the maintainer?\n> > >\n> > > Vince.\n> > >\n> > > >\n> > > > > I actually consider the biggest problem the fact the the 'official'\n> > > > > postgres jdbc website is very much out of date\n> > > > > (http://jdbc.postgresql.org). (it doesn't even have the 7.1 drivers).\n> > > > > I feel that either someone needs to maintain this page; or someone needs\n> > > > > to create a new website and get the jdbc.postgresql.org DNS entry to\n> > > > > point to the new site, or the page should just be decommisioned. 
At\n> > > > > this point I think it is doing more harm than good.\n> > > >\n> > > >\n> > >\n> > > --\n> > > ==========================================================================\n> > > Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> > > 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> > > Online Campground Directory http://www.camping-usa.com\n> > > Online Giftshop Superstore http://www.cloudninegifts.com\n> > > ==========================================================================\n> > >\n> > >\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 6: Have you searched our list archives?\n> > >\n> > > http://www.postgresql.org/search.mpl\n> > >\n> >\n> >\n>\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n",
"msg_date": "Thu, 13 Sep 2001 15:45:36 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] Re: What needs to be done?"
}
] |
[
{
"msg_contents": "I recall seeing a message by Tom Lane stating that dropping and\nre-creating a primary index may speed up db performance. Is there a\nSQL command that will do this?\n\nMy current method is to use pg_dump -s to dump out the schema. Then I\ngo through and cut out everything but the CREATE INDEX lines. Then, I\nhave to add a DROP INDEX line before that. I run this through with the\npsql command line program.\n\nIs there a better way?\n\nThanks.\n-Tony\n",
"msg_date": "1 Aug 2001 16:33:39 -0700",
"msg_from": "reina@nsi.edu (Tony Reina)",
"msg_from_op": true,
"msg_subject": "Is there a way to drop and restore an index?"
},
{
"msg_contents": "Just off the top of my head,\n\nCouldn't you write a little PL/PGSQL procedure which queries the system\ntables and builds statements to execute with the new EXECUTE command for\neach record returned that would drop and recreate the indexes? It would\ntake a little work but would be generic enough to automatically reindex\nyour entire DB.\n\nJust a thought,\n\nMike Mascari\nmascarm@mascari.com\n\nTony Reina wrote:\n> \n> I recall seeing a message by Tom Lane stating that dropping and\n> re-creating a primary index may speed up db performance. Is there a\n> SQL command that will do this?\n> \n> My current method is to use pg_dump -s to dump out the schema. Then I\n> go through and cut out everything but the CREATE INDEX lines. Then, I\n> have to add a DROP INDEX line before that. I run this through with the\n> psql command line program.\n> \n> Is there a better way?\n> \n> Thanks.\n> -Tony\n",
"msg_date": "Wed, 01 Aug 2001 20:49:26 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: Is there a way to drop and restore an index?"
},
{
"msg_contents": "See REINDEX.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Aug 2001 21:35:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is there a way to drop and restore an index? "
},
{
"msg_contents": "Tom Lane wrote:\n\n> See REINDEX.\n>\n\nThanks.\n-Tony\n\n\n",
"msg_date": "Wed, 01 Aug 2001 18:52:54 -0700",
"msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>",
"msg_from_op": false,
"msg_subject": "Re: Is there a way to drop and restore an index?"
}
] |
[
{
"msg_contents": "\n> Strangely enough, I've seen no objection to optional OIDs\n> other than mine. Probably it was my mistake to have formulated\n> a plan on the flimsy assumption. \n\nI for one am more concerned about adding additional per\ntuple overhead (moving from 32 -> 64bit) than loosing OID's\non some large tables. Imho optional OID's is the best way to combine \nboth worlds. OID's only where you absolutely need them, and thus\na good chance that wraparound does not happen during the lifetime of \none application. (And all this by reducing overhead, and not adding \noverhead :-)\n\nAndreas \n",
"msg_date": "Thu, 2 Aug 2001 09:28:18 +0200 ",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: OID wraparound: summary and proposal"
},
{
"msg_contents": "> \n> > Strangely enough, I've seen no objection to optional OIDs\n> > other than mine. Probably it was my mistake to have formulated\n> > a plan on the flimsy assumption. \n> \n> I for one am more concerned about adding additional per\n> tuple overhead (moving from 32 -> 64bit) than loosing OID's\n> on some large tables. Imho optional OID's is the best way to combine \n> both worlds. OID's only where you absolutely need them, and thus\n> a good chance that wraparound does not happen during the lifetime of \n> one application. (And all this by reducing overhead, and not adding \n> overhead :-)\n\nAgreed, the big selling point for me and optional oid's was removing\ntheir overhead from the tuple header. We need to trim that baby down! \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 2 Aug 2001 06:24:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: OID wraparound: summary and proposal"
},
{
"msg_contents": "Nathan Myers wrote:\n> \n> On Thu, Aug 02, 2001 at 09:28:18AM +0200, Zeugswetter Andreas SB wrote:\n> >\n> > > Strangely enough, I've seen no objection to optional OIDs\n> > > other than mine. Probably it was my mistake to have formulated\n> > > a plan on the flimsy assumption.\n> >\n> > I for one am more concerned about adding additional per\n> > tuple overhead (moving from 32 -> 64bit) than loosing OID's\n> > on some large tables. Imho optional OID's is the best way to combine\n> > both worlds.\n> \n> At the same time that we announce support for optional OIDs,\n> we should announce that, in future releases, OIDs will only be\n> guaranteed unique (modulo wraparounds) within a single table.\n\nWhat would the purpose of such an announcement be ???\n\nOID is \"Object IDentifier\", meant to uniquely identify ANY object in an \nObject-Relational Database ,which PostgreSQL sometimes claims itself to\nbe.\n\nIf they are unique only within a single table then they are just \nsystem-supplied primary key fields without a default index - quite \nuseless IMHO\n\nI hope someone takes up the task of putting back some of the \nniftier features of original Postgres/postgres95 and adding more OO \nfeatures. Deprecating OIDs won't help there .\n\n--------------------\nHannu\n",
"msg_date": "Fri, 03 Aug 2001 01:20:29 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "On Thu, Aug 02, 2001 at 09:28:18AM +0200, Zeugswetter Andreas SB wrote:\n> \n> > Strangely enough, I've seen no objection to optional OIDs\n> > other than mine. Probably it was my mistake to have formulated\n> > a plan on the flimsy assumption. \n> \n> I for one am more concerned about adding additional per\n> tuple overhead (moving from 32 -> 64bit) than loosing OID's\n> on some large tables. Imho optional OID's is the best way to combine \n> both worlds. \n\nAt the same time that we announce support for optional OIDs,\nwe should announce that, in future releases, OIDs will only be \nguaranteed unique (modulo wraparounds) within a single table.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Thu, 2 Aug 2001 14:54:28 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> At the same time that we announce support for optional OIDs,\n> we should announce that, in future releases, OIDs will only be \n> guaranteed unique (modulo wraparounds) within a single table.\n\nSeems reasonable --- that will give people notice that we're thinking\nabout separate-OID-generator-per-table ideas.\n\nRight now we don't really document any of these considerations,\nbut I plan to write something as part of the work I'm about to do.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 19:15:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal "
},
{
"msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> > Strangely enough, I've seen no objection to optional OIDs\n> > other than mine. Probably it was my mistake to have formulated\n> > a plan on the flimsy assumption.\n> \n> I for one am more concerned about adding additional per\n> tuple overhead (moving from 32 -> 64bit) than loosing OID's\n> on some large tables. Imho optional OID's is the best way to combine\n> both worlds. OID's only where you absolutely need them, and thus\n> a good chance that wraparound does not happen during the lifetime of\n> one application. (And all this by reducing overhead, and not adding\n> overhead :-)\n> \n\nHmm there seems to be an assumption that people could\nknow whether they need OID or not for each table.\nI've had a plan in ODBC using OID and TID.\nFew ODBC users know about ODBC spec. They rarely use\nODBC directly and use middlewares like Access etc.\nCould they take care of the necessity of OIDs with\nmy plan ? Could they know when/how the middlewares \nuse my new feature effectively ? To tell the truth,\nI don't know it precisely.\nOK, a user decided to create tables with OIDs unco\nnditionally for ODBC but he may encounter the OID\nwraparound problem instead....\nI don't think that people use the feature with such\nsilly restrictions.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 03 Aug 2001 10:07:40 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: AW: OID wraparound: summary and proposal"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Hmm there seems to be an assumption that people could\n> know whether they need OID or not for each table.\n\nA good point, and one reason not to make no-OIDs the default. I'm\nenvisioning that people will turn off OIDs only for tables that they\nknow will be very large and that they know they don't need OIDs for.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 21:09:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: OID wraparound: summary and proposal "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Hmm there seems to be an assumption that people could\n> > know whether they need OID or not for each table.\n> \n> A good point, and one reason not to make no-OIDs the default. I'm\n> envisioning that people will turn off OIDs only for tables that they\n> know will be very large and that they know they don't need OIDs for.\n>\n\nAFAIK few people have voted *OIDs by default* in the\nfirst place. It seems to mean that *default* would\nnaturally(essentially) be changed to *WITH NO OIDS*.\nThe followings are the result of vote which I remember\nwell.\n\nregards,\nHiroshi Inoue\n\n\"Mikheev, Vadim\" wrote:\n> \n> > OK, we need to vote on whether Oid's are optional,\n> > and whether we can have them not created by default.\n> \n> Optional OIDs: YES\n> No OIDs by default: YES\n\nLamar Owen wrote:\n> \n> [trimmed cc:list]\n> On Wednesday 18 July 2001 17:09, Bruce Momjian wrote:\n> > OK, we need to vote on whether Oid's are optional, and whether we can\n> > have them not created by default.\n> \n> [All the below IMHO]\n> \n> OID's should be optional.\n> \n> System tables that absolutely have to have OIDs may keep them.\n> \n> No new OID usage, period. Use some other unique primary key.\n> \n> Default user tables to no OIDs.\n",
"msg_date": "Fri, 03 Aug 2001 10:45:42 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: AW: OID wraparound: summary and proposal"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> The followings are the result of vote which I remember\n> well.\n\nFWIW, I changed my vote ;-). I'm not sure what Vadim and Lamar think\nat the moment, but I thought you made good arguments.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 21:57:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: OID wraparound: summary and proposal "
},
{
"msg_contents": "> ncm@zembu.com (Nathan Myers) writes:\n> > At the same time that we announce support for optional OIDs,\n> > we should announce that, in future releases, OIDs will only be \n> > guaranteed unique (modulo wraparounds) within a single table.\n> \n> Seems reasonable --- that will give people notice that we're thinking\n> about separate-OID-generator-per-table ideas.\n> \n> Right now we don't really document any of these considerations,\n> but I plan to write something as part of the work I'm about to do.\n\nBut why do that if we have sequences?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 3 Aug 2001 09:21:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > The followings are the result of vote which I remember\n> > well.\n> \n> FWIW, I changed my vote ;-). I'm not sure what Vadim and Lamar think\n> at the moment, but I thought you made good arguments.\n\nI think Vadim was clearly NOOID. I vote OID.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 3 Aug 2001 09:30:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: OID wraparound: summary and proposal"
},
{
"msg_contents": "\nI'm not sure this is related to the OID discussion, however I have seen\ndesigns where a unique id is required for all the objects in the\ndatabase. \n\nThis (IMO) this implies an int8 (or larger) sequence number. \n\nIt would be nice if we could have different size sequences. Just thought\nI'd throw that in.\n\nDave\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Bruce Momjian\nSent: August 3, 2001 9:22 AM\nTo: Tom Lane\nCc: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] OID wraparound: summary and proposal\n\n\n> ncm@zembu.com (Nathan Myers) writes:\n> > At the same time that we announce support for optional OIDs, we \n> > should announce that, in future releases, OIDs will only be \n> > guaranteed unique (modulo wraparounds) within a single table.\n> \n> Seems reasonable --- that will give people notice that we're thinking \n> about separate-OID-generator-per-table ideas.\n> \n> Right now we don't really document any of these considerations, but I \n> plan to write something as part of the work I'm about to do.\n\nBut why do that if we have sequences?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n\n",
"msg_date": "Fri, 3 Aug 2001 09:46:09 -0400",
"msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "RE: OID wraparound: summary and proposal"
}
] |
[
{
"msg_contents": "Helge Bahmann <bahmann@math.tu-freiberg.de> writes:\n> Most certainly they do not, or at least it is called differently; I\n> grepped includes of: FreeBSD 4.2, Solaris 8, Irix 6.5 and AIX (4.3?) and\n> did not find SO_PEERCRED.\n\n> On FreeBSD (and I guess Solaris as well) it is possible to pass\n> credentials using ancillary messages (Linux works as well, so this\n> approach would be significantly more portable). However this requires the\n> cooperation of the client who has to actively *send* his credentials, so\n> this would require changes to both the backend and libpq.\n\nAh, now I understand: those references I saw mention the existence of\nthe underlying SCM_CREDENTIALS (or whatever it's called) message type,\nnot the SO_PEERCRED getsockopt facility.\n\nI agree that it's not worth pursuing at the moment. A localized change\nin the backend is one thing, but an OS-specific addition to our client-\nvisible authentication protocol would be a lot bigger change, and a lot\nmore debatable. If we get a larger/more active Solaris user community,\nmaybe someone will be motivated to do it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 09:17:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Allow IDENT authentication on local connections (Linux\n\tonly)"
},
{
"msg_contents": "> Ah, now I understand: those references I saw mention the existence of\n> the underlying SCM_CREDENTIALS (or whatever it's called) message type,\n> not the SO_PEERCRED getsockopt facility.\n\nYes! That was it the Solaris patch I remember, SCM_CREDENTIALS.\n\n> I agree that it's not worth pursuing at the moment. A localized change\n> in the backend is one thing, but an OS-specific addition to our client-\n> visible authentication protocol would be a lot bigger change, and a lot\n> more debatable. If we get a larger/more active Solaris user community,\n> maybe someone will be motivated to do it.\n\nYes. It is part of that whole SvR4 API that allowed you to push file\ndescriptors to other processes and stuff like that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 2 Aug 2001 11:22:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Allow IDENT authentication on local connections (Linux\n\tonly)"
}
] |
[
{
"msg_contents": "I was thinking about our new version of vacuum. I think it should be\ncalled VACUUM NOLOCK to make it clear when you should use it, and we can\nkeep our ordinary VACUUM the same.\n\nIf you want to get fancy, we can call our traditional vacuum VACUUM LOCK\nand have a GUC parameter that controls what VACUUM without\nLOCK/NOLOCK does.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 2 Aug 2001 11:52:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Name for new VACUUM"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I was thinking about our new version of vacuum. I think it should be\n> called VACUUM NOLOCK to make it clear when you should use it, and we can\n> keep our ordinary VACUUM the same.\n\nI really don't understand why you're so hot to avoid changing the\ndefault behavior of VACUUM. Name me even one user who *likes* the\ncurrent behavior (ie, VACUUM grabs exclusive lock)? IMHO the default\nbehavior *should* change. Otherwise you're just forcing people to\nupdate their cron scripts, which they wouldn't need to touch if we\ndo it the way I want.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 16:40:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Name for new VACUUM "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I was thinking about our new version of vacuum. I think it should be\n> > called VACUUM NOLOCK to make it clear when you should use it, and we can\n> > keep our ordinary VACUUM the same.\n> \n> I really don't understand why you're so hot to avoid changing the\n> default behavior of VACUUM. Name me even one user who *likes* the\n> current behavior (ie, VACUUM grabs exclusive lock)? IMHO the default\n> behavior *should* change. Otherwise you're just forcing people to\n> update their cron scripts, which they wouldn't need to touch if we\n> do it the way I want.\n\nI am concerned because UPDATE consumes disk space that never gets\nreturned to the OS until a traditional vacuum is run. It is true that\nafter nolock vacuum, the future UPDATE's can use the extra space.\n\nMaybe just call the traditional vacuum VACUUM LOCK. It was the\nLOCK/NOLOCK idea that I think was important.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 2 Aug 2001 18:20:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Name for new VACUUM"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> I really don't understand why you're so hot to avoid changing the\n>> default behavior of VACUUM.\n\n> I am concerned because UPDATE consumes disk space that never gets\n> returned to the OS until a traditional vacuum is run.\n\nNot necessarily. Concurrent VACUUM does truncate the relation if it can\ndo so conveniently --- for example, it will successfully reclaim space\nif you do \"DELETE FROM foo; VACUUM foo;\". It just doesn't try as hard\nas the older VACUUM code does.\n\nIMHO, average disk space usage for a real-world database may well be\n*lower* with the new style of VACUUM than with the old style, simply\nbecause you can afford to do new-style VACUUM more often. The old-style\nVACUUM might give you a lower space usage just after a VACUUM, but if\nyou can only afford to do that on nights or weekends, it's cold comfort.\nYour disk hardware needs are going to be determined by peak space usage,\nnot minimum or even average usage, and time between VACUUMs is what\ndrives that. On a peak-usage basis I have no doubt that frequent\nnew-style VACUUMs will win hands down over infrequent old-style.\n\n> Maybe just call the traditional vacuum VACUUM LOCK. It was the\n> LOCK/NOLOCK idea that I think was important.\n\nRight now it's called VACUUM FULL, but I'm not particularly wedded to\nthat name. Does anyone else like VACUUM LOCK? Or have an even better\nidea?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 18:40:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Name for new VACUUM "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> > Maybe just call the traditional vacuum VACUUM LOCK. It was the\n> > LOCK/NOLOCK idea that I think was important.\n> \n> Right now it's called VACUUM FULL, but I'm not particularly wedded to\n> that name. Does anyone else like VACUUM LOCK? Or have an even better\n> idea?\n\nWhy rename VACUUM, why not create a new command RECLAIM, or something like\nthat. RECLAIM does the VACUUM NOLOCK, while vacuum does the locking. The term\nRECLAIM will make more sense to new comers than VACUUM, and old postgres users\nalready know about VACUUM.\n\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Thu, 02 Aug 2001 19:48:21 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Name for new VACUUM"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> Why rename VACUUM, why not create a new command RECLAIM, or something like\n> that. RECLAIM does the VACUUM NOLOCK, while vacuum does the locking.\n\nUm, that gets the default backwards IMHO, where \"default\" = \"what\nexisting scripts will do\".\n\n> The term RECLAIM will make more sense to new comers than VACUUM,\n\nWhat's your basis for claiming that?\n\nIn any case, VACUUM is the term already used in all our documentation.\nI have no appetite for trying to teach people and documents that\ncurrently know \"you must do VACUUM periodically\" that the new truth is\n\"you must do VACUUM or RECLAIM periodically\". All these discussions\nabout which should be default aside, the bottom line is that the two\npieces of code do more-or-less the same thing from a high level\nperspective. Calling them completely different names isn't going to\nmake things easier for novices. Calling them different options of the\nsame statement seems like the right thing to me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 19:51:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Name for new VACUUM "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > Why rename VACUUM, why not create a new command RECLAIM, or something like\n> > that. RECLAIM does the VACUUM NOLOCK, while vacuum does the locking.\n> \n> Um, that gets the default backwards IMHO, where \"default\" = \"what\n> existing scripts will do\".\n\nChanging how the default works is always tricky. Even if you improve something\ndramatically, someone will still gripe about the change.\n\n> \n> > The term RECLAIM will make more sense to new comers,\n> \n> What's your basis for claiming that?\n\nI am so used to \"vacuum\" and postgresql, it makes perfect sense to me. Yet, I\ngave a brief discussion at work a week ago about PostgreSQL and how we can use\nit to offload SQL queries from Oracle. In the pros and cons part of the\ndiscussion, people looked at me like I had two heads when I told them about\n\"vacuum.\" It wasn't obvious to them what it did.\n\nThe term \"reclaim\" may be a little more obvious, but I could be wrong. It is\njust that the name vacuum, from the perspective of someone new to PostgreSQL,\nis a bit obscure.\n> \n> In any case, VACUUM is the term already used in all our documentation.\n> I have no appetite for trying to teach people and documents that\n> currently know \"you must do VACUUM periodically\" that the new truth is\n> \"you must do VACUUM or RECLAIM periodically\". All these discussions\n> about which should be default aside, the bottom line is that the two\n> pieces of code do more-or-less the same thing from a high level\n> perspective. Calling them completely different names isn't going to\n> make things easier for novices. Calling them different options of the\n> same statement seems like the right thing to me.\n\nI understand the documentation issue completely, and it is a very strong point.\nHowever, saying that VACUUM NOLOCK and VACUUM LOCK do \"more-or-less the same\nthing\" really isn't so. Think about it, the VACUUM LOCK, practically rebuilds a\ntables representation, in older versions of Postgres didn't it actually rewrite\nthe table? The new behavior of vacuum doesn't do that at all. \n\nPerhaps VACUUM gets changed to the new behavior, and the old behavior gets\nrenamed to DEFRAG or COMPRESS? Win/DOS users will find those names completely\nobvious.\n\nVACUUM DEFRAG?\nVACUUM COMPRESS?\n\n\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Fri, 03 Aug 2001 07:42:40 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Name for new VACUUM"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> ... people looked at me like I had two heads when I told them about\n> \"vacuum.\" It wasn't obvious to them what it did.\n\nI won't dispute that, but changing a command name that's been around for\nten or fifteen years strikes me as a recipe for more confusion, not\nless.\n\n> However, saying that VACUUM NOLOCK and VACUUM LOCK do \"more-or-less\n> the same thing\" really isn't so. Think about it, the VACUUM LOCK,\n> practically rebuilds a tables representation,\n\nIt does no such thing. The only difference is that it's willing to move\na few tuples around if it can thereby free up (and truncate) whole pages\nat the end of the table. (In a live system you'd better hope it's only\na few tuples, anyway ;-) ... or you'll be waiting a long time.) It\ndoesn't even do a complete defrag; it stops moving tuples as soon as it\nfinds that it won't be able to truncate the table any further. So\nthere's *not* that much difference.\n\n> VACUUM DEFRAG?\n> VACUUM COMPRESS?\n\nWhile these look kinda ugly to me, I can find no stronger objection than\nthat. (Well, maybe I could complain that these overstate what old-style\nvacuum actually does, but that's even weaker.) What do other people\nthink?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Aug 2001 10:25:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Name for new VACUUM "
},
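[Editor's illustration] Tom's description of old-style VACUUM above -- it moves tuples off the tail only while that lets it truncate whole pages, and stops as soon as further truncation is impossible -- can be pictured with a toy model. This is an invented sketch of the described behavior, not PostgreSQL's actual code; the page/slot representation is made up for the example.

```python
# Toy model of old-style ("full") VACUUM's compaction phase: move live
# tuples off the tail page into earlier free slots, truncate the tail,
# and stop as soon as the current tail page cannot be fully emptied.
# Pages are lists of slots, each slot either "live" or "free".

def full_vacuum(pages):
    def first_free_before(limit):
        # Find a free slot on any page earlier than page `limit`.
        for p in range(limit):
            if "free" in pages[p]:
                return p, pages[p].index("free")
        return None

    while pages:
        tail = len(pages) - 1
        movers = [s for s, st in enumerate(pages[tail]) if st == "live"]
        # If the tail page's live tuples cannot all fit on earlier pages,
        # no further truncation is possible -- stop, as Tom describes.
        if len(movers) > sum(pg.count("free") for pg in pages[:tail]):
            break
        for s in movers:
            p, q = first_free_before(tail)
            pages[p][q] = "live"        # tuple moved toward the front
            pages[tail][s] = "free"
        pages.pop()                      # tail page now empty: truncate
```

With three sparsely filled pages, the three live tuples end up packed onto a single page and the other two pages are truncated; it is not a complete defrag of every page, only enough movement to shorten the file.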
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> I really don't understand why you're so hot to avoid changing the\n> >> default behavior of VACUUM.\n> \n> > I am concerned because UPDATE consumes disk space that never gets\n> > returned to the OS until a traditional vacuum is run.\n> \n> Not necessarily. Concurrent VACUUM does truncate the relation if it can\n> do so conveniently --- for example, it will successfully reclaim space\n> if you do \"DELETE FROM foo; VACUUM foo;\". It just doesn't try as hard\n> as the older VACUUM code does.\n\nBut it will not reclaim from UPDATE. You also will have to VACUUM\nNOLOCK right after your delete or the next INSERT is going to go on the\nend and VACUUM NOLOCK is not going to compact the table, right?\n\n> IMHO, average disk space usage for a real-world database may well be\n> *lower* with the new style of VACUUM than with the old style, simply\n> because you can afford to do new-style VACUUM more often. The old-style\n> VACUUM might give you a lower space usage just after a VACUUM, but if\n> you can only afford to do that on nights or weekends, it's cold comfort.\n> Your disk hardware needs are going to be determined by peak space usage,\n> not minimum or even average usage, and time between VACUUMs is what\n> drives that. On a peak-usage basis I have no doubt that frequent\n> new-style VACUUMs will win hands down over infrequent old-style.\n\nMy contention is that we are causing more problems for administrators by\nchanging VACUUM's default behavior. Most people vacuum only at night\nwhen no one is using the system, and they should get the LOCK version of\nvacuum. (No change to scripts.) What will change is that people can\nadd VACUUM NOLOCK during the day to their cron scripts.\n\n> > Maybe just call the traditional vacuum VACUUM LOCK. It was the\n> > LOCK/NOLOCK idea that I think was important.\n> \n> Right now it's called VACUUM FULL, but I'm not particularly wedded to\n> that name. Does anyone else like VACUUM LOCK? Or have an even better\n> idea?\n\nFULL seems overloaded to me. Maybe LOCK or FORCE.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 3 Aug 2001 10:52:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Name for new VACUUM"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Not necessarily. Concurrent VACUUM does truncate the relation if it can\n>> do so conveniently --- for example, it will successfully reclaim space\n>> if you do \"DELETE FROM foo; VACUUM foo;\". It just doesn't try as hard\n>> as the older VACUUM code does.\n\n> But it will not reclaim from UPDATE.\n\nWhat? I have no idea what you mean by that.\n\n> You also will have to VACUUM\n> NOLOCK right after your delete or the next INSERT is going to go on the\n> end and VACUUM NOLOCK is not going to compact the table, right?\n\nINSERTs don't go on the end in the first place, at least not under\nsteady-state conditions. That's what the free space map is all about.\n\n> My contention is that we are causing more problems for administrators by\n> changeing VACUUM's default behavior.\n\nThis is a curious definition of causing problems: making it work better\nis causing a problem? I didn't think we'd elevated backwards\ncompatibility to quite that much of a holy grail. To me, a backwards\ncompatibility problem is something that actually breaks an existing app.\nI do not see how changing vacuum's default behavior will break anything.\n\n>> Right now it's called VACUUM FULL, but I'm not particularly wedded to\n>> that name. Does anyone else like VACUUM LOCK? Or have an even better\n>> idea?\n\n> FULL seems overloaded to me. Maybe LOCK or FORCE.\n\nLOCK is pretty overloaded too, but I don't have any other objection to\nit. \"FORCE\" is meaningless; what are you forcing, and just how much\nforce are you applying?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Aug 2001 11:02:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Name for new VACUUM "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Not necessarily. Concurrent VACUUM does truncate the relation if it can\n> >> do so conveniently --- for example, it will successfully reclaim space\n> >> if you do \"DELETE FROM foo; VACUUM foo;\". It just doesn't try as hard\n> >> as the older VACUUM code does.\n> \n> > But it will not reclaim from UPDATE.\n> \n> What? I have no idea what you mean by that.\n\nI meant that UPDATE of all rows in a table puts the new rows at the end.\n\n> > You also will have to VACUUM\n> > NOLOCK right after your delete or the next INSERT is going to go on the\n> > end and VACUUM NOLOCK is not going to compact the table, right?\n> \n> INSERTs don't go on the end in the first place, at least not under\n> steady-state conditions. That's what the free space map is all about.\n\nBut you are assuming you have stuff in the free space map for the table\nalready, right? I was not assuming that.\n\n> > My contention is that we are causing more problems for administrators by\n> > changing VACUUM's default behavior.\n> \n> This is a curious definition of causing problems: making it work better\n> is causing a problem? I didn't think we'd elevated backwards\n> compatibility to quite that much of a holy grail. To me, a backwards\n> compatibility problem is something that actually breaks an existing app.\n> I do not see how changing vacuum's default behavior will break anything.\n\nIt will not break. It is just that you were saying making VACUUM NOLOCK the\ndefault is less work for administrators because they don't have to\nupdate their scripts. I am saying that there is more updating required\nfor making NOLOCK the default. However, maybe more typing if they do\nNOLOCK more frequently.\n\n> >> Right now it's called VACUUM FULL, but I'm not particularly wedded to\n> >> that name. Does anyone else like VACUUM LOCK? Or have an even better\n> >> idea?\n> \n> > FULL seems overloaded to me. Maybe LOCK or FORCE.\n> \n> LOCK is pretty overloaded too, but I don't have any other objection to\n> it. \"FORCE\" is meaningless; what are you forcing, and just how much\n> force are you applying?\n\nNo idea. ANALYZE isn't the greatest word either, but it was mine.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 3 Aug 2001 11:58:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Name for new VACUUM"
},
{
"msg_contents": "> It does no such thing. The only difference is that it's willing to move\n> a few tuples around if it can thereby free up (and truncate) whole pages\n> at the end of the table. (In a live system you'd better hope it's only\n> a few tuples, anyway ;-) ... or you'll be waiting a long time.) It\n> doesn't even do a complete defrag; it stops moving tuples as soon as it\n> finds that it won't be able to truncate the table any further. So\n> there's *not* that much difference.\n> \n> > VACUUM DEFRAG?\n> > VACUUM COMPRESS?\n> \n> While these look kinda ugly to me, I can find no stronger objection than\n> that. (Well, maybe I could complain that these overstate what old-style\n> vacuum actually does, but that's even weaker.) What do other people\n> think?\n\nI kind of like COMPRESS, though VACUUM NOLOCK can do compress sometimes\ntoo. That gets confusing. That's why I hit on LOCK. I couldn't think\nof another _unique_ thing old vacuum did.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 3 Aug 2001 12:04:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Name for new VACUUM"
},
{
"msg_contents": "Tom Lane wrote:\n>\n> > VACUUM DEFRAG?\n> > VACUUM COMPRESS?\n>\n> While these look kinda ugly to me, I can find no stronger objection than\n> that. (Well, maybe I could complain that these overstate what old-style\n> vacuum actually does, but that's even weaker.) What do other people\n> think?\n\n What I think? That this entire discussion wasted far too much\n time already. These commands live usually in crontabs and\n aren't typed in that often. So give the baby \"a name\" and\n done. VACUUM CLASSIC and VACUUM LIGHT or whatever.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Fri, 3 Aug 2001 12:15:11 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Name for new VACUUM"
},
{
"msg_contents": "Jan Wieck <JanWieck@yahoo.com> writes:\n> What I think? That this entire discussion wasted far too much\n> time already.\n\nI agree. VACUUM FULL is what's in the code and docs today, and I\nhaven't heard any good reason to expend the effort to change it...\nnone of the other proposals are visibly better, merely different.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Aug 2001 12:20:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Name for new VACUUM "
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> INSERTs don't go on the end in the first place, at least not under\n>> steady-state conditions. That's what the free space map is all about.\n\n> But you are assuming you have stuff in the free space map for the table\n> already, right? I as not assuming that.\n\nBut that is going to be the normal state of affairs, at least for people\nwho don't reboot their postmasters every few minutes as we developers\ntend to do.\n\nSure, you can point to situations where lazy VACUUM doesn't do as well\nas full VACUUM. That's the point of having two implementations isn't it?\nIf we didn't need full VACUUM at all any more, we'd have removed it.\nThe existence of such situations is not justification for claiming that\nlazy VACUUM isn't an appropriate default behavior. The question is\nwhich one is more appropriate for typical installations under typical\noperating conditions --- and in that sort of scenario there *will* be\ninfo in the free space map.\n\nEven more to the point, those typical installations do not want\nexclusive-locked VACUUM. Haven't you paid any attention to the user\ncomplaints we've been hearing for the last N years? People want a\nnonexclusive VACUUM (or no VACUUM at all, but that's not a choice we can\noffer them now.) That *is* what the typical dbadmin will want to run,\nand that's why I say it should be the default. If you think that most\npeople will want to stick with exclusive VACUUM, I'd like to see some\nevidence for that position (so that I know why the time I spent on that\nproject was wasted ;-)).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Aug 2001 18:48:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Name for new VACUUM "
},
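[Editor's illustration] Tom's point above -- "INSERTs don't go on the end in the first place... That's what the free space map is all about" -- can be pictured with a toy free-space-map model. This is an invented sketch, not PostgreSQL's FSM implementation; the class, method names, and byte accounting are made up for the example.

```python
# Toy free space map (FSM): per-table bookkeeping of pages that have
# reusable space, so inserts can land on recycled pages instead of
# extending the file.

class Table:
    def __init__(self, page_size=8192):
        self.pages = []      # free bytes remaining on each existing page
        self.fsm = {}        # page number -> free bytes known to the FSM
        self.page_size = page_size

    def insert(self, tuple_size):
        # Consult the FSM first; extend the file only as a last resort,
        # so under steady state inserts do not simply go on the end.
        for pageno, free in self.fsm.items():
            if free >= tuple_size:
                self.pages[pageno] -= tuple_size
                self.fsm[pageno] = self.pages[pageno]
                return pageno
        self.pages.append(self.page_size - tuple_size)
        return len(self.pages) - 1

    def lazy_vacuum(self, freed):
        # Lazy VACUUM records reclaimed space in the FSM rather than
        # shortening the file (beyond conveniently empty tail pages).
        for pageno, nbytes in freed.items():
            self.pages[pageno] += nbytes
            self.fsm[pageno] = self.pages[pageno]
```

After a delete plus a lazy VACUUM has populated the FSM, the next insert reuses the recycled page and the file does not grow -- which is the "normal state of affairs" Tom is assuming.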
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > >> Not necessarily. Concurrent VACUUM does truncate the relation if it can\n> > >> do so conveniently --- for example, it will successfully reclaim space\n> > >> if you do \"DELETE FROM foo; VACUUM foo;\". It just doesn't try as hard\n> > >> as the older VACUUM code does.\n> >\n> > > But it will not reclaim from UPDATE.\n> >\n> > What? I have no idea what you mean by that.\n> \n> I meant that UPDATE of all rows in a table put the new rows at the end.\n\nOTOH if you do it twice it will reclaim ;)\n\nUPDATE everything;\nVACUUM;\nUPDATE everything;\nVACUUM;\n\n---------------\nHannu\n",
"msg_date": "Sun, 05 Aug 2001 19:38:31 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Name for new VACUUM"
},
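[Editor's illustration] Hannu's two-pass observation can be simulated with a toy MVCC model (invented page/slot bookkeeping, not PostgreSQL internals): the first UPDATE pushes the live versions to the end of the file, so lazy VACUUM can only record the freed front pages in the free space map; the second UPDATE moves the live versions back to the front, and the next VACUUM can truncate the now-empty tail.

```python
# Slots are "live" (current version), "dead" (superseded, awaiting
# vacuum) or "free" (reusable). UPDATE must keep the old version until
# VACUUM, so it cannot reuse the slot it just vacated.

SLOTS_PER_PAGE = 4

def update_all(pages):
    """Write a new version of every live tuple."""
    live = [(p, s) for p, page in enumerate(pages)
                   for s, st in enumerate(page) if st == "live"]
    for p, s in live:
        pages[p][s] = "dead"
        for page in pages:                  # lowest free slot first
            if "free" in page:
                page[page.index("free")] = "live"
                break
        else:                               # no free space: extend file
            pages.append(["live"] + ["free"] * (SLOTS_PER_PAGE - 1))

def lazy_vacuum(pages):
    """Recycle dead tuples; truncate only conveniently empty tail pages."""
    for page in pages:
        for s, st in enumerate(page):
            if st == "dead":
                page[s] = "free"
    while pages and all(st == "free" for st in pages[-1]):
        pages.pop()
```

Running UPDATE/VACUUM twice on a full two-page table doubles the file, fails to truncate after the first VACUUM (the live rows sit at the tail), then shrinks back to two pages after the second pass -- exactly Hannu's "do it twice" recipe.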
{
"msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > ... people looked at me like I had two heads when I told them about\n> > \"vacuum.\" It wasn't obvious to them what it did.\n> \n> I won't dispute that, but changing a command name that's been around for\n> ten or fifteen years strikes me as a recipe for more confusion, not\n> less.\n> \n> > However, saying that VACUUM NOLOCK and VACUUM LOCK do \"more-or-less\n> > the same thing\" really isn't so. Think about it, the VACUUM LOCK,\n> > practically rebuilds a tables representation,\n> \n> It does no such thing. \n\nJust out of curiosity - does CLUSTER currently \"practically rebuild\na tables representation\" ?\n\n--------------\nHannu\n",
"msg_date": "Sun, 05 Aug 2001 19:45:47 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Re: Name for new VACUUM"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <hannu@tm.ee> writes:\n> > Just out of curiosity - does CLUSTER currently \"practically rebuild\n> > a tables representation\" ?\n> \n> CLUSTER currently *loses* most of a table's representation :-(.\n> It needs work.\n\nat least \\h CLUSTER in psql seems to imply that it is OK to use CLUSTER\n?\n\nDo we have some indication of last CLUSTER command (like an OID column\nof \ncluster index field) in pg_relation so that VACUUM could make better \ndecisions when moving tuples ?\n\n> But since the whole point of CLUSTER is to physically rearrange the\n> tuples of a table, it seems to me that it's in a different category\n> from VACUUM anyway.\n\nAnother way to look at it is as \"VACUUM LOCK AND PERFORM HEAVY\nREARRANGEMENTS\"\n\nOr does the current implementation actually do the rearrangement by \nappending all out-of-index-order tuples to the end and _not_ clean up \nunused space requiring an additional vacuum after CLUSTER ?\n\n--------------\nHannu\n",
"msg_date": "Sun, 05 Aug 2001 23:00:38 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Re: Name for new VACUUM"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <hannu@tm.ee> writes:\n> > Just out of curiosity - does CLUSTER currently \"practically rebuild\n> > a tables representation\" ?\n> \n> CLUSTER currently *loses* most of a table's representation :-(.\n> It needs work.\n\nThe easiest implementation of CLUSTER seems to be something along the\nlines of\n\n-- lock original table for WRITE in all backends\n\nCREATE TABLE CLUSTERED_T \nAS \nSELECT * FROM ORIGINAL_T ORDER BY INDEX_COLUMNS;\n\n-- now do a move of CLUSTERED_T -> ORIGINAL_T\n-- and then REINDEX\n-- flush cache in all backends\n\n-- unlock the altered table\n\nThis would need an actual 2xSIZE + size of dead tuples of space but\nwould \nprobably be fastest in situations where heavy rearrangement is needed.\n\n> But since the whole point of CLUSTER is to physically rearrange the\n> tuples of a table, it seems to me that it's in a different category\n> from VACUUM anyway.\n\nOTOH it is the closest thing to VACUUM among \"standard\" SQL commands ;)\n\n-----------------\nHannu\n",
"msg_date": "Sun, 05 Aug 2001 23:10:58 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Re: Name for new VACUUM"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Just out of curiosity - does CLUSTER currently \"practically rebuild\n> a tables representation\" ?\n\nCLUSTER currently *loses* most of a table's representation :-(.\nIt needs work.\n\nBut since the whole point of CLUSTER is to physically rearrange the\ntuples of a table, it seems to me that it's in a different category\nfrom VACUUM anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 05 Aug 2001 14:38:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Name for new VACUUM "
}
]
[
{
"msg_contents": "Tom,\n\nplease apply attached patch to current CVS.\n\n1. Fixed error with empty array ( '{}' ),\n test data changed to include such data\n2. Test a dimension of an array ( we support only one-dimension)\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83",
"msg_date": "Thu, 2 Aug 2001 19:00:56 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "patch for contrib/intarray (current CVS)"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Tom,\n> \n> please apply attached patch to current CVS.\n> \n> 1. Fixed error with empty array ( '{}' ),\n> test data changed to include such data\n> 2. Test a dimension of an array ( we support only one-dimension)\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 2 Aug 2001 18:17:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: patch for contrib/intarray (current CVS)"
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> please apply attached patch to current CVS.\n> 1. Fixed error with empty array ( '{}' ),\n> test data changed to include such data\n> 2. Test a dimension of an array ( we support only one-dimension)\n\nLooks okay in a quick glance, except error message spelling is poor:\n\n! #define ARRISNULL(x) ( (x) ? ( ( ARR_NDIM(x) == NDIM ) ? ( ( ARRNELEMS( x ) ) ? 0 : 1 ) : ( ( ARR_NDIM(x) ) ? (elog(ERROR,\"Array is not one-dimentional: %d dimentions\", ARR_NDIM(x)),1) : 1 ) ) : 1 )\n\nShould be \"one-dimensional\" and \"dimensions\". Bruce, would you fix that\nwhen you apply it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 21:47:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: patch for contrib/intarray (current CVS) "
},
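[Editor's illustration] The one-line ARRISNULL macro quoted above is easier to audit with its nested conditionals unrolled. The following Python sketch mirrors only its control flow: `arr` is either None (a NULL ArrayType pointer) or a hypothetical (ndim, nelems) pair standing in for the ARR_NDIM / ARRNELEMS accessors, and NDIM is 1 since contrib/intarray supports only one-dimensional arrays.

```python
NDIM = 1  # contrib/intarray handles only one-dimensional arrays

def arr_is_null(arr):
    """Unrolled logic of the ARRISNULL macro (with the spelling fixed)."""
    if arr is None:
        return True                    # NULL pointer counts as null
    ndim, nelems = arr
    if ndim == NDIM:
        return nelems == 0             # one-dimensional: null iff empty
    if ndim != 0:
        # 2-D or higher: the dimension check Oleg's patch adds
        raise ValueError(
            "Array is not one-dimensional: %d dimensions" % ndim)
    return True                        # ndim == 0: the empty '{}' case
```

So '{}' (zero dimensions) and a one-dimensional array with no elements both read as null, a normal int array does not, and anything multidimensional raises the error instead of being silently accepted.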
{
"msg_contents": "On Thu, 2 Aug 2001, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > please apply attached patch to current CVS.\n> > 1. Fixed error with empty array ( '{}' ),\n> > test data changed to include such data\n> > 2. Test a dimension of an array ( we support only one-dimension)\n>\n> Looks okay in a quick glance, except error message spelling is poor:\n>\n> ! #define ARRISNULL(x) ( (x) ? ( ( ARR_NDIM(x) == NDIM ) ? ( ( ARRNELEMS( x ) ) ? 0 : 1 ) : ( ( ARR_NDIM(x) ) ? (elog(ERROR,\"Array is not one-dimentional: %d dimentions\", ARR_NDIM(x)),1) : 1 ) ) : 1 )\n>\n> Should be \"one-dimensional\" and \"dimensions\". Bruce, would you fix that\n> when you apply it?\n\noops :-%\n\n>\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 3 Aug 2001 07:01:59 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: patch for contrib/intarray (current CVS) "
},
{
"msg_contents": "\nSure.\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > please apply attached patch to current CVS.\n> > 1. Fixed error with empty array ( '{}' ),\n> > test data changed to include such data\n> > 2. Test a dimension of an array ( we support only one-dimension)\n> \n> Looks okay in a quick glance, except error message spelling is poor:\n> \n> ! #define ARRISNULL(x) ( (x) ? ( ( ARR_NDIM(x) == NDIM ) ? ( ( ARRNELEMS( x ) ) ? 0 : 1 ) : ( ( ARR_NDIM(x) ) ? (elog(ERROR,\"Array is not one-dimentional: %d dimentions\", ARR_NDIM(x)),1) : 1 ) ) : 1 )\n> \n> Should be \"one-dimensional\" and \"dimensions\". Bruce, would you fix that\n> when you apply it?\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 3 Aug 2001 09:36:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: patch for contrib/intarray (current CVS)"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n> Tom,\n> \n> please apply attached patch to current CVS.\n> \n> 1. Fixed error with empty array ( '{}' ),\n> test data changed to include such data\n> 2. Test a dimension of an array ( we support only one-dimension)\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Aug 2001 15:35:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: patch for contrib/intarray (current CVS)"
},
{
"msg_contents": "\nChange made.\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > please apply attached patch to current CVS.\n> > 1. Fixed error with empty array ( '{}' ),\n> > test data changed to include such data\n> > 2. Test a dimension of an array ( we support only one-dimension)\n> \n> Looks okay in a quick glance, except error message spelling is poor:\n> \n> ! #define ARRISNULL(x) ( (x) ? ( ( ARR_NDIM(x) == NDIM ) ? ( ( ARRNELEMS( x ) ) ? 0 : 1 ) : ( ( ARR_NDIM(x) ) ? (elog(ERROR,\"Array is not one-dimentional: %d dimentions\", ARR_NDIM(x)),1) : 1 ) ) : 1 )\n> \n> Should be \"one-dimensional\" and \"dimensions\". Bruce, would you fix that\n> when you apply it?\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Aug 2001 15:36:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: patch for contrib/intarray (current CVS)"
}
] |
[
{
"msg_contents": "\n Hello All,\n \n I want to know if there's a way to\n create a FUNCTION that uses a Perl script file?\n \n thanks...\n\n",
"msg_date": "2 Aug 2001 18:28:39 -0000",
"msg_from": "\"gabriel\" <gabriel@workingnetsp.com.br>",
"msg_from_op": true,
"msg_subject": "FUNCTION Question..."
},
{
"msg_contents": "Read the docs:\n\nhttp://www.postgresql.org/users-lounge/docs/7.1/postgres/plperl.html\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of gabriel\n> Sent: Friday, 3 August 2001 2:29 AM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] FUNCTION Question...\n> \n> \n> \n> Hello All,\n> \n> I want to know if there's a way to\n> create a FUNCTION that use a Perl script file?\n> \n> thanks...\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n",
"msg_date": "Fri, 3 Aug 2001 09:26:13 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: FUNCTION Question..."
}
] |
[
{
"msg_contents": "> On Thu, 2 Aug 2001, Bruce Momjian wrote:\n> \n> > Yes! That was it the Solaris patch I remember, SCM_CREDENTIALS.\n> \n> Can you provide a pointer to this patch? I just grepped Solaris includes\n> in vain for SCM_CRED.\n> \n> The keyword \"SCM_CREDENTIALS\" is actually used by Linux, whereas FreeBSD\n> uses \"SCM_CREDS\", so perhaps you are mistaken and the patch was for either \n> Linux or BSD instead of Solaris?\n\nFound it:\n\n\thttp://fts.postgresql.org/db/mw/msg.html?mid=115140\n\nSee the entire thread for the comments about it.\n\nHe says Linux and BSD support it, and that it was invented by Solaris. \nI see SCM_CREDS on BSD/OS. I wonder if this is what we should use\ninstead of the PEER define we just added?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 2 Aug 2001 18:05:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Allow IDENT authentication on local connections (Linux\n\tonly)"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Found it:\n> \thttp://fts.postgresql.org/db/mw/msg.html?mid=115140\n> See the entire thread for the comments about it.\n\nThat patch uses SO_PEERCRED, and is the direct ancestor of the\npresent Debian patches. I haven't seen any code go by that uses\nthe SCM_CREDS message directly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 18:32:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Allow IDENT authentication on local connections (Linux\n\tonly)"
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Found it:\n> > \thttp://fts.postgresql.org/db/mw/msg.html?mid=115140\n> > See the entire thread for the comments about it.\n> \n> That patch uses SO_PEERCRED, and is the direct ancestor of the\n> present Debian patches. I haven't seen any code go by that uses\n> the SCM_CREDS message directly.\n\nBummer.\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 3 Aug 2001 09:25:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Re: [PATCHES] Allow IDENT authentication on local connections\n\t(Linux only)"
}
] |
[
{
"msg_contents": "If there is a comment on a view, pg_dumpall can put them in the wrong order:\n\n--\n-- pg_dumpall (7.2devel)\n--\n...\n--\n-- TOC Entry ID 363 (OID 31291)\n--\n-- Name: VIEW \"all_persons\" Type: COMMENT Owner:\n--\n\nCOMMENT ON VIEW \"all_persons\" IS 'All persons - individuals or not';\n\n--\n-- TOC Entry ID 362 (OID 31308)\n--\n-- Name: all_persons Type: VIEW Owner: olly\n--\n\nCREATE VIEW \"all_persons\" as SELECT person.ptype, person.id, person.name, \nperson.address, person.salutation, person.envelope, person.email, person.www \nFROM person;\n\n\n\nThis seems to have happened with every view in this dump. I haven't managed\nto work out why it happens.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"All scripture is given by inspiration of God, and is \n profitable for doctrine, for reproof, for correction, \n for instruction in righteousness;\" \n II Timothy 3:16 \n\n\n",
"msg_date": "Thu, 02 Aug 2001 23:29:34 +0100",
"msg_from": "\"Oliver Elphick\" <olly@lfix.co.uk>",
"msg_from_op": true,
"msg_subject": "pg_dumpall problem in 7.1 and cvs"
},
{
"msg_contents": "\"Oliver Elphick\" <olly@lfix.co.uk> writes:\n> If there is a comment on a view, pg_dumpall can put them in the wrong order:\n\nDrat. I fixed the identical problem for permissions a little while ago,\nbut didn't realize that it extended to comments too. Thanks for the\nreport!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 19:18:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dumpall problem in 7.1 and cvs "
},
{
"msg_contents": "\"Oliver Elphick\" <olly@lfix.co.uk> writes:\n> If there is a comment on a view, pg_dumpall can put them in the wrong order:\n\nI've committed a fix for this in both CVS tip and REL7_1_STABLE.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Aug 2001 16:48:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dumpall problem in 7.1 and cvs "
}
] |
[
{
"msg_contents": "\nHello!\n\nI am away on vacation - I will reply to your\nmessage after I return.\n\nFor matters concerning the SKY-NET company,\nplease write directly to info@sky.pl\n",
"msg_date": "Fri, 3 Aug 2001 00:33:00 +0200 (CEST)",
"msg_from": "rychu@sky.pl",
"msg_from_op": true,
"msg_subject": "Automatic reply"
}
] |
[
{
"msg_contents": "Is TRUNCATE supposed to be equivalent to DELETE FROM blah?\n\nBecause I notice that DELETE triggers are not called when you truncate a\ntable... Isn't that a bad thing?\n\nChris\n\n",
"msg_date": "Fri, 3 Aug 2001 09:40:09 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "TRUNCATE question"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> \n> Is TRUNCATE supposed to be equivalent to DELETE FROM blah?\n> \n> Because I notice that DELETE triggers are not called when you truncate a\n> table... Isn't that a bad thing?\n\nIt's supposed to work that way - same as Oracle. \n\nMike Mascari \nmascarm@mascari.com\n",
"msg_date": "Thu, 02 Aug 2001 21:56:49 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE question"
},
{
"msg_contents": "Makes for a real pain when the nice and safe foreign keys aren't\nreally nice and safe anymore.\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: \"Mike Mascari\" <mascarm@mascari.com>\nTo: \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>\nCc: \"Hackers\" <pgsql-hackers@postgresql.org>\nSent: Thursday, August 02, 2001 9:56 PM\nSubject: Re: [HACKERS] TRUNCATE question\n\n\n> Christopher Kings-Lynne wrote:\n> >\n> > Is TRUNCATE supposed to be equivalent to DELETE FROM blah?\n> >\n> > Because I notice that DELETE triggers are not called when you\ntruncate a\n> > table... Isn't that a bad thing?\n>\n> It's supposed to work that way - same as Oracle.\n>\n> Mike Mascari\n> mascarm@mascari.com\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Thu, 2 Aug 2001 22:11:20 -0400",
"msg_from": "\"Rod Taylor\" <rbt@barchord.com>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE question"
},
{
"msg_contents": "Rod Taylor wrote:\n> \n> Makes for a real pain when the nice and safe foreign keys aren't\n> really nice and safe anymore.\n> >\n> > It's supposed to work that way - same as Oracle.\n\nTRUNCATE TABLE is essentially short-hand for DROP/CREATE, but preserves\nGRANT permissions, associations from its oid in functions, views, etc.\nOracle disallows TRUNCATE on a table involved in a referential integrity\nrelationship, but doesn't disallow the behavior for a normal ON DELETE\ntrigger. According to previous discussions, PostgreSQL should behave\nsimilarly. If it does not, its a bug. I haven't checked the status since\n7.1.0, so I don't know.\n\nAccordingly, as of 7.1.0, nothing stops you in PostgreSQL from\nperforming a DROP/CREATE on a table involved in a referential integrity\nrelationship. Now your foreign keys are completely gone. I haven't\nchecked that behavior in later versions, however. Oracle requires DROP\nTABLE <table> CASCADE CONSTRAINTS to force a DROP of a table involved in\na primary/foreign key relationship.\n\nMike Mascari\nmascarm@mascari.com\n",
"msg_date": "Thu, 02 Aug 2001 22:40:20 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE question"
},
{
"msg_contents": "I agree it matches the description. That said, it rather surprised me\nwhen Triggers and things didn't go off. Primarily due to the 'Works\nlike a Delete *'. The description has changed since I first\ndiscovered it though.\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: \"Mike Mascari\" <mascarm@mascari.com>\nTo: \"Rod Taylor\" <rbt@barchord.com>\nCc: \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>; \"Hackers\"\n<pgsql-hackers@postgresql.org>\nSent: Thursday, August 02, 2001 10:40 PM\nSubject: Re: [HACKERS] TRUNCATE question\n\n\n> Rod Taylor wrote:\n> >\n> > Makes for a real pain when the nice and safe foreign keys aren't\n> > really nice and safe anymore.\n> > >\n> > > It's supposed to work that way - same as Oracle.\n>\n> TRUNCATE TABLE is essentially short-hand for DROP/CREATE, but\npreserves\n> GRANT permissions, associations from its oid in functions, views,\netc.\n> Oracle disallows TRUNCATE on a table involved in a referential\nintegrity\n> relationship, but doesn't disallow the behavior for a normal ON\nDELETE\n> trigger. According to previous discussions, PostgreSQL should behave\n> similarly. If it does not, its a bug. I haven't checked the status\nsince\n> 7.1.0, so I don't know.\n>\n> Accordingly, as of 7.1.0, nothing stops you in PostgreSQL from\n> performing a DROP/CREATE on a table involved in a referential\nintegrity\n> relationship. Now your foreign keys are completely gone. I haven't\n> checked that behavior in later versions, however. Oracle requires\nDROP\n> TABLE <table> CASCADE CONSTRAINTS to force a DROP of a table\ninvolved in\n> a primary/foreign key relationship.\n>\n> Mike Mascari\n> mascarm@mascari.com\n>\n\n",
"msg_date": "Thu, 2 Aug 2001 22:43:47 -0400",
"msg_from": "\"Rod Taylor\" <rbt@barchord.com>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE question"
},
{
"msg_contents": "Mike Mascari <mascarm@mascari.com> writes:\n> Christopher Kings-Lynne wrote:\n>> Is TRUNCATE supposed to be equivalent to DELETE FROM blah?\n>> \n>> Because I notice that DELETE triggers are not called when you truncate a\n>> table... Isn't that a bad thing?\n\n> It's supposed to work that way - same as Oracle. \n\nAFAICT, the whole point of TRUNCATE is to skip all the fancy stuff like\ndelete triggers and just zero the table. Irreversibly. If you don't\nlike it, don't use it...\n\nPerhaps TRUNCATE should require superuser privilege, just to protect\npeople from themselves? Not that the DBAs are necessarily any smarter\nthan anyone else, but at least they're supposed to know what they're\ndoing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 23:04:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE question "
},
{
"msg_contents": "I wrote:\n> Perhaps TRUNCATE should require superuser privilege, just to protect\n> people from themselves?\n\nAlternative possibilities came to mind just after I hit \"send\" ...\n\n1. Refuse TRUNCATE if the table has any DELETE triggers. (Are there\nany other conditions to check for?)\n\n2. If the table has DELETE triggers, allow TRUNCATE only to the\nsuperuser.\n\nOur current behavior is to allow TRUNCATE only to the table owner,\nwhich seems to miss the point from a purely semantic point of view.\nAnyone with DELETE privileges can do a universal DELETE, so why\nshouldn't the faster alternative be available to them?\n\nDoes Oracle have any special permission checks for TRUNCATE?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 23:12:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE question "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> I wrote:\n> > Perhaps TRUNCATE should require superuser privilege, just to protect\n> > people from themselves?\n> \n> Alternative possibilities came to mind just after I hit \"send\" ...\n> \n> 1. Refuse TRUNCATE if the table has any DELETE triggers. (Are there\n> any other conditions to check for?)\n> \n> 2. If the table has DELETE triggers, allow TRUNCATE only to the\n> superuser.\n> \n> Our current behavior is to allow TRUNCATE only to the table owner,\n> which seems to miss the point from a purely semantic point of view.\n> Anyone with DELETE privileges can do a universal DELETE, so why\n> shouldn't the faster alternative be available to them?\n> \n> Does Oracle have any special permission checks for TRUNCATE?\n\nHere are the rules for Oracle:\n\n1. The table must be in your schema (i.e., you're the table owner)\nor you have been granted the DELETE ANY TABLE System Privilege. We\nneed System Privileges, BTW.\n\n2. The table cannot be truncated if it is the parent of a\nreferential integrity constraint. The exception is that if the\nintegrity constraint is entirely self-referencing.\n\n3. If the table has ON DELETE triggers, the TRUNCATE does not fire\nthose triggers nor does Oracle prohibit you from TRUNCATE-ing a\ntable with ON DELETE triggers.\n\n4. The TRUNCATE command generates no rollback information.\n\n5. Like all Oracle DDL statements, TRUNCATE implicitly commits and\nbegins a new transaction.\n\nI'd like to see PostgreSQL do all but #5; its been two years, but\nnow I'm a believer ;-).\n\nMike Mascari\nmascarm@mascari.com\n\n> \n> regards, tom lane\n",
"msg_date": "Fri, 03 Aug 2001 00:30:08 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE question"
},
{
"msg_contents": "Tom Lane wrote:\n\n> I wrote:\n> > Perhaps TRUNCATE should require superuser privilege, just to protect\n> > people from themselves?\n>\n> Alternative possibilities came to mind just after I hit \"send\" ...\n>\n> 1. Refuse TRUNCATE if the table has any DELETE triggers. (Are there\n> any other conditions to check for?)\n>\n> 2. If the table has DELETE triggers, allow TRUNCATE only to the\n> superuser.\n>\n> Our current behavior is to allow TRUNCATE only to the table owner,\n> which seems to miss the point from a purely semantic point of view.\n> Anyone with DELETE privileges can do a universal DELETE, so why\n> shouldn't the faster alternative be available to them?\n>\n> Does Oracle have any special permission checks for TRUNCATE?\n\nTRUNCATE is a scary ass command. It is the 800 pound gorilla of delete.\nHere's what Oracle docs has to say about it:\n\n\"To remove all rows from a table or cluster and reset the STORAGE\nparameters to the values when the table or cluster was created.\n\nDeleting rows with the TRUNCATE statement can be more efficient than\ndropping and re-creating a table. Dropping and re-creating a table\ninvalidates the table's dependent objects, requires you to regrant object\nprivileges on the table, and requires you to re-create the table's indexes,\nintegrity constraint, and triggers and respecify its storage parameters.\nTruncating has none of these effects. \"\n\n\nThe \"Oracle 8 Complete Reference\" says:\n\n\"The TRUNCATE command is faster than a delete command because it generates\nno rollback information, does not fire any DELETE triggers (and therefore\nmust be used with caution), and does not record any information in the\nsnapshot log. In addition, using TRUNCATE does not invalidate the objects\ndepending on the deleted rows or the privileges on the table. You cannot\nroll back a TRUNCATE statement.\"\n\nNeither reference any special privileges or conditions for the statement,\nbut they are littered with sentences like: \"You should be SURE you really\nwant to TRUNCATE before doing it.\"\n\n\n\n\n\n\n\n",
"msg_date": "Fri, 03 Aug 2001 16:29:44 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE question"
},
{
"msg_contents": "\nIs there a TODO item here?\n\n\n> Tom Lane wrote:\n> > \n> > I wrote:\n> > > Perhaps TRUNCATE should require superuser privilege, just to protect\n> > > people from themselves?\n> > \n> > Alternative possibilities came to mind just after I hit \"send\" ...\n> > \n> > 1. Refuse TRUNCATE if the table has any DELETE triggers. (Are there\n> > any other conditions to check for?)\n> > \n> > 2. If the table has DELETE triggers, allow TRUNCATE only to the\n> > superuser.\n> > \n> > Our current behavior is to allow TRUNCATE only to the table owner,\n> > which seems to miss the point from a purely semantic point of view.\n> > Anyone with DELETE privileges can do a universal DELETE, so why\n> > shouldn't the faster alternative be available to them?\n> > \n> > Does Oracle have any special permission checks for TRUNCATE?\n> \n> Here are the rules for Oracle:\n> \n> 1. The table must be in your schema (i.e., you're the table owner)\n> or you have been granted the DELETE ANY TABLE System Privilege. We\n> need System Privileges, BTW.\n> \n> 2. The table cannot be truncated if it is the parent of a\n> referential integrity constraint. The exception is that if the\n> integrity constraint is entirely self-referencing.\n> \n> 3. If the table has ON DELETE triggers, the TRUNCATE does not fire\n> those triggers nor does Oracle prohibit you from TRUNCATE-ing a\n> table with ON DELETE triggers.\n> \n> 4. The TRUNCATE command generates no rollback information.\n> \n> 5. Like all Oracle DDL statements, TRUNCATE implicitly commits and\n> begins a new transaction.\n> \n> I'd like to see PostgreSQL do all but #5; its been two years, but\n> now I'm a believer ;-).\n> \n> Mike Mascari\n> mascarm@mascari.com\n> \n> > \n> > regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 6 Sep 2001 13:13:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE question"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Is there a TODO item here?\n\nYes. It should read:\n\n\"Disallow TRUNCATE TABLE on tables that are parents of a referential\nintegrity constraint\"\n\nIn PostgreSQL current sources I can do:\n\nCREATE TABLE employees (\n employeeid INTEGER PRIMARY KEY NOT NULL\n);\n\nCREATE TABLE salaries ( \n employeeid INTEGER NOT NULL REFERENCES employees(employeeid),\n salary FLOAT NOT NULL\n);\n\nINSERT INTO employees VALUES (1);\n\nINSERT INTO salaries VALUES (1, 45000);\n\nTRUNCATE TABLE employees;\n\nSELECT * FROM salaries;\n\n employeeid | salary \n------------+--------\n 1 | 45000\n(1 row)\n\nIn Oracle, the following occurs:\n\nCREATE TABLE employees (\n employeeid INTEGER NOT NULL PRIMARY KEY\n);\n\nCREATE TABLE salaries (\n employeeid INTEGER NOT NULL REFERENCES employees(employeeid),\n salary FLOAT NOT NULL\n);\n\nINSERT INTO employees VALUES (1);\n\nINSERT INTO salaries VALUES (1, 40000);\n\nTRUNCATE TABLE employees;\n\nTRUNCATE TABLE employees;\n *\nERROR at line 1:\nORA-02266: unique/primary keys in table referenced by enabled\nforeign keys\n\nMike Mascari\nmascarm@mascari.com\n",
"msg_date": "Thu, 06 Sep 2001 16:01:16 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE question"
},
{
"msg_contents": "\nAdded to TODO. Thanks.\n\n\n> Bruce Momjian wrote:\n> > \n> > Is there a TODO item here?\n> \n> Yes. It should read:\n> \n> \"Disallow TRUNCATE TABLE on tables that are parents of a referential\n> integrity constraint\"\n> \n> In PostgreSQL current sources I can do:\n> \n> CREATE TABLE employees (\n> employeeid INTEGER PRIMARY KEY NOT NULL\n> );\n> \n> CREATE TABLE salaries ( \n> employeeid INTEGER NOT NULL REFERENCES employees(employeeid),\n> salary FLOAT NOT NULL\n> );\n> \n> INSERT INTO employees VALUES (1);\n> \n> INSERT INTO salaries VALUES (1, 45000);\n> \n> TRUNCATE TABLE employees;\n> \n> SELECT * FROM salaries;\n> \n> employeeid | salary \n> ------------+--------\n> 1 | 45000\n> (1 row)\n> \n> In Oracle, the following occurs:\n> \n> CREATE TABLE employees (\n> employeeid INTEGER NOT NULL PRIMARY KEY\n> );\n> \n> CREATE TABLE salaries (\n> employeeid INTEGER NOT NULL REFERENCES employees(employeeid),\n> salary FLOAT NOT NULL\n> );\n> \n> INSERT INTO employees VALUES (1);\n> \n> INSERT INTO salaries VALUES (1, 40000);\n> \n> TRUNCATE TABLE employees;\n> \n> TRUNCATE TABLE employees;\n> *\n> ERROR at line 1:\n> ORA-02266: unique/primary keys in table referenced by enabled\n> foreign keys\n> \n> Mike Mascari\n> mascarm@mascari.com\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 6 Sep 2001 16:11:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE question"
}
] |
[
{
"msg_contents": "I have an embedded application where we use PostgreSQL to store\nconfiguration data. For the past few years, we were on a 6.x system,\nand are finally trying to update to 7.1.2. One of the issues I face is\nthat the WAL files occupy a pretty significant amount of disk space. We\nhave figured out how to reduce the size of the files (we are using 500K,\nand it seems to be OK), but what's not clear is how we can limit the\nnumber of files to some maximum value (say 3 or 4). The configuration\nvariables seem to provide a guideline for this, but I have seen the\nactual number of files exceed these many times. We don't do large\nupdates/inserts as a rule, and for this application it would be better\nto wait while the files are committed rather than overrun the maximum\nnumber. The filesystems are all memory based, and we have a hard limit.\n\nAnyone have any pointers? I've done some cursory examination of the\ncode, but was hoping I might get some pointers to speed my progress.\n\nTIA,\n\nBob Crowe\n\nRCrowe@stbernard.com\n\n\n\n",
"msg_date": "Thu, 2 Aug 2001 21:26:43 -0700 (PDT)",
"msg_from": "RCrowe@stbernard.com",
"msg_from_op": true,
"msg_subject": "Any hints on how to limit WAL file disk usage?"
},
{
"msg_contents": "RCrowe@stbernard.com writes:\n> ... One of the issues I face is\n> that the WAL files occupy a pretty significant amount of disk space.\n> Anyone have any pointers?\n\nFirst off, install the patch depicted at \nhttp://www.ca.postgresql.org/mhonarc/pgsql-patches/2001-06/msg00061.html\n\nCVS tip includes some further hacking that limits the number of WAL\nsegment files to 2*CHECKPOINT_SEGMENTS + WAL_FILES + 1 --- ie, 112\nmegabytes with default settings. If that's still in the \"whoa, no way\"\nrange for you, I think the most appropriate attack would be to reduce\nthe WAL segment size to something less than the normal 16Mb. See\nXLogSegSize in src/include/access/xlog.h. For a low-traffic\ninstallation I suspect you could get away with 1Mb or so. (It wasn't\nentirely clear from your message whether you'd already discovered this\nsetting.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Aug 2001 01:24:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Any hints on how to limit WAL file disk usage? "
},
{
"msg_contents": "Thanks, that will help a lot. I searched through the July and August\narchives, but should have gone back to June too :(\n\nI'd like to keep the total space consumed by the WAL files to under\n3 MB or so. Not sure if that's practical or not. I'll experiment with\nthe provided patch. We did figure out how to reduce the WAL segment\nsize, and that helped a lot.\n\nThanks again,\n\nBob Crowe.\n\n\nOn 3 Aug, Tom Lane wrote:\n> RCrowe@stbernard.com writes:\n>> ... One of the issues I face is\n>> that the WAL files occupy a pretty significant amount of disk space.\n>> Anyone have any pointers?\n> \n> First off, install the patch depicted at \n> http://www.ca.postgresql.org/mhonarc/pgsql-patches/2001-06/msg00061.html\n> \n> CVS tip includes some further hacking that limits the number of WAL\n> segment files to 2*CHECKPOINT_SEGMENTS + WAL_FILES + 1 --- ie, 112\n> megabytes with default settings. If that's still in the \"whoa, no way\"\n> range for you, I think the most appropriate attack would be to reduce\n> the WAL segment size to something less than the normal 16Mb. See\n> XLogSegSize in src/include/access/xlog.h. For a low-traffic\n> installation I suspect you could get away with 1Mb or so. (It wasn't\n> entirely clear from your message whether you'd already discovered this\n> setting.)\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\n",
"msg_date": "Thu, 2 Aug 2001 22:41:39 -0700 (PDT)",
"msg_from": "RCrowe@stbernard.com",
"msg_from_op": true,
"msg_subject": "Re: Any hints on how to limit WAL file disk usage? "
}
] |
[
{
"msg_contents": "\n Postgresql can easily act as a temporal database with\n some changes. Using this, applications can get all changes performed on\n a particular tuple during its lifetime, provided vacuum is not performed\n on that particular table.\n\n Requirements:\n 1) Every tuple is uniquely identified by a unique OID\n 2) The tuple should never be overwritten or deleted\n\n The above two conditions are easily satisfied by the existing\n postgresql database.\n\n Summary of changes:\n 1) A new type of select statement needs to be added to the existing\n parser to get all the records of a particular tuple with a\n specific oid. By using xmin,xmax we can get the sequence of\n changes performed on that tuple.\n 2) While scanning the heap for the new select statement, the tuples\n which are marked as deleted (or invalid) are also picked up.\n 3) Need to add one more attribute for every table to indicate\n whether the tuple is dead (invalid) or not, along with the\n existing attributes oid,xmin,xmax.\n 4) We should make sure the OID is unique, taking care of the\n wraparound problem.\n\n\nregards\njana\n",
"msg_date": "Fri, 03 Aug 2001 13:37:12 +0800",
"msg_from": "Janardhana Reddy <jana-reddy@mediaring.com.sg>",
"msg_from_op": true,
"msg_subject": "Postgresql as Temporal Database: Summary and proposal"
}
] |
[
{
"msg_contents": "Hello all,\n\n I've noted that the MOVE command won't send you an error message if\nyou try to position the cursor AFTER the actual end of the records. Is this\nok ??? I think it should return a proper error code and message, but\ninstead, it will return only an empty recordset.\n\nThe following code will demonstrate what I mean:\n\nrollback;begin;declare Test cursor for select typname from pg_type;move\n10000 in Test;fetch 10 in Test;close Test;end;\n\nI think 'move 10000' should raise an error...\nAny comments ?\nDoes Oracle or other DBMSs act like this ?\n\nI have other questions, since I can't find answers anywhere in the\ndocumentation...\n1) Can DECLARE be modified to return the number of rows in the resultset,\nmaybe by adding some new parameter ?... Or would it require executing\nthe whole query before returning this value ?...\n2) How do cursor queries really work: do they execute the query, store all\nrows in memory, and then pass small resultsets retrieved by FETCH commands\nto the client, or does it produce and retrieve each recordset as each FETCH\ncommand is issued ??\n\nBest Regards,\nSteve Howe\n\n\n",
"msg_date": "Fri, 3 Aug 2001 03:51:32 -0300",
"msg_from": "\"Steve Howe\" <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Cursor queries & fetches"
}
] |
[
{
"msg_contents": "It seems like this isn't possible. Could anyone give me a hint how to get\naround this? Been thinking about triggers etc, but can't quite figure out\nHOWTO do it...\n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@tripnet.se\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Stockholm/Sweden\n\ntoluene Honduras colonel FBI radar Ft. Meade 767 munitions Treasury\ngenetic Saddam Hussein supercomputer Uzi [Hello to all my fans in\ndomestic surveillance] Soviet\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n",
"msg_date": "03 Aug 2001 09:44:16 +0200",
"msg_from": "Turbo Fredriksson <turbo@bayour.com>",
"msg_from_op": true,
"msg_subject": "PL/pgSQL: Return multiple rows"
},
{
"msg_contents": "I'm working on this (not plpgsql specific, though).\n\nI have most of this done, just need to merge it to the -current and send\nin the patch, but I was bogged down by RL :(\n\n\nOn 3 Aug 2001, Turbo Fredriksson wrote:\n\n> It seems like this isn't possible. Could anyone give me a hint how to get\n> around this? Been thinking about triggers etc, but can't quite figure out\n> HOWTO do it...\n> \n> \n\n",
"msg_date": "Fri, 3 Aug 2001 10:54:19 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL: Return multiple rows"
},
{
"msg_contents": "Quoting Alex Pilosov <alex@pilosoft.com>:\n\n> I'm working on this (not plpgsql specific, though).\n> \n> I have most of this done, just need to merge it to the -current and send\n> in the patch, but I was bogged down by RL :(\n\nProblem is that I'd REALLY would like a workaround in 7.1... I have 7.2 from\nCVS, but is that 'production quality' yet?\n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@tripnet.se\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Stockholm/Sweden\n\nsmuggle ammunition congress Uzi munitions Saddam Hussein\ncounter-intelligence quiche Rule Psix South Africa subway [Hello to\nall my fans in domestic surveillance] spy radar Semtex\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n",
"msg_date": "06 Aug 2001 09:29:34 +0200",
"msg_from": "Turbo Fredriksson <turbo@bayour.com>",
"msg_from_op": true,
"msg_subject": "Re: PL/pgSQL: Return multiple rows"
}
] |
[
{
"msg_contents": "Hi all,\n\nI frequently see posts asking, \"What does Oracle do in this case?\" when\ndiscussing, for example, how TRUNCATE should behave.\n\nThe Oracle documentation is available on-line\nvia the Oracle Technology Network (OTN) http://otn.oracle.com\nFree registration is required.\n\nThe documentation URL for the latest version is at\nhttp://download.oracle.com/otndoc/oracle9i/901_doc/index.htm\n\n\nCould someone check the legal implications of PostgreSQL developers reading\nthis?\n\n\nCheers,\n\nColin\n\n\n\n\n",
"msg_date": "Fri, 3 Aug 2001 10:07:01 +0200",
"msg_from": "\"Colin 't Hart\" <cthart@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Oracle documentation"
}
] |
[
{
"msg_contents": "\n> > At the same time that we announce support for optional OIDs,\n> > we should announce that, in future releases, OIDs will only be \n> > guaranteed unique (modulo wraparounds) within a single table.\n\n... if an appropriate unique constraint is explicitly created.\n\n> \n> Seems reasonable --- that will give people notice that we're thinking\n> about separate-OID-generator-per-table ideas.\n\nImho we should think about adding other parts to the external representation\nof OID before we start thinking about moving from 4 to 8 bytes in the heap.\nEssentially the oid would then be a concatenated e.g. 16 byte number,\nthat is constructed with:\n\n\toid128 = installation oid<<96 + class oid<<64 + for_future_use<<32 + tuple oid\n\nImho walking that direction would serve the \"OID\" idea a lot better,\nand could actually guarantee a globally unique oid, if the \"installation\noid\" was centrally managed.\n\nIt has the additional advantage of knowing the class by only looking at the oid.\n\nThe btree code could be specially tuned to only consider the lower 4(or 8) bytes\non insert and make an early exit for select where oid = wrong class id.\n\nAndreas\n",
"msg_date": "Fri, 3 Aug 2001 10:17:11 +0200 ",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "Proposal: OID wraparound: summary and proposal "
}
] |
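The concatenated OID layout Andreas proposes above (installation oid<<96 + class oid<<64 + for_future_use<<32 + tuple oid) can be pictured in C as two 64-bit words, since plain C has no native 128-bit integer type. This is only a sketch of the proposed packing; the type and function names are invented here and are not PostgreSQL code:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical 128-bit OID, split into two 64-bit words:
 *   hi = installation oid (upper 32 bits) | class oid (lower 32 bits)
 *   lo = for_future_use   (upper 32 bits) | tuple oid (lower 32 bits)
 */
typedef struct
{
	uint64_t	hi;
	uint64_t	lo;
} Oid128;

static Oid128
oid128_make(uint32_t installation, uint32_t classoid,
			uint32_t future, uint32_t tuple)
{
	Oid128		result;

	result.hi = ((uint64_t) installation << 32) | classoid;
	result.lo = ((uint64_t) future << 32) | tuple;
	return result;
}

/*
 * The class oid sits in the low half of the high word, so the btree
 * early-exit described above only has to inspect one word.
 */
static uint32_t
oid128_class(Oid128 oid)
{
	return (uint32_t) (oid.hi & 0xFFFFFFFFu);
}

static uint32_t
oid128_tuple(Oid128 oid)
{
	return (uint32_t) (oid.lo & 0xFFFFFFFFu);
}
```

With this packing, "knowing the class by only looking at the oid" is a single mask on the high word, and insert comparisons can short-circuit on the low (tuple) half as the message suggests.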
[
{
"msg_contents": "\nHello all\n\nAnyone knows if there's a way to point a trigger \nto a external program file?\n\nHow??\n\nthanks...\n",
"msg_date": "3 Aug 2001 11:59:21 -0000",
"msg_from": "\"gabriel\" <gabriel@workingnetsp.com.br>",
"msg_from_op": true,
"msg_subject": "TRIGGER Question"
}
] |
[
{
"msg_contents": "I would just like to comment that for our project, GNU Enterprise, we \nuse our own 128 bit object ID that is unique (UUID) for every row in \nall tables.\n\nIt seems to me, without having looked into it, that having both a \nPostgreSQL UID and our own 128 bit objectid (UUID) is redundant and \nslows the whole process down. But we are storing object data in the \ndatabase and require and absolutely unique objectid. We are planning \nfor enterprise usage and expect to need 128 bits to uniquely define \nour objects.\n\nSo I would request strongly that we have an option for a 128 bit \nunique id for all rows in the database and/or that it is configurable \nso we can best decide how to use it. We would like to use our own \nand have the postgreSQL uid fast and small or have it larger and \nslower but remove the need to generate our own uid.\n\nNeil\nneilt@gnue.org\nGNU Enterprise\nhttp://www.gnuenterprise.org/\nhttp://www.gnuenterprise.org/~neilt/sc.html\n\n\nAt 10:17 AM +0200 8/3/01, Zeugswetter Andreas SB wrote:\n> > > At the same time that we announce support for optional OIDs,\n>> > we should announce that, in future releases, OIDs will only be\n>> > guaranteed unique (modulo wraparounds) within a single table.\n>\n>... if an appropriate unique constraint is explicitly created.\n>\n>>\n>> Seems reasonable --- that will give people notice that we're thinking\n>> about separate-OID-generator-per-table ideas.\n>\n>Imho we should think about adding other parts to the external representation\n>of OID before we start thinking about moving from 4 to 8 bytes in the heap.\n>Essentially the oid would then be a concatenated e.g. 
16 byte number,\n>that is constructed with:\n>\n>\toid128 = installation oid<<96 + class oid<<64 + \n>for_future_use<<32 + tuple oid\n>\n>Imho walking that direction would serve the \"OID\" idea a lot better,\n>and could actually guarantee a globally unique oid, if the \"installation\n>oid\" was centrally managed.\n>\n>It has the additional advantage of knowing the class by only looking \n>at the oid.\n>\n>The btree code could be specially tuned to only consider the lower \n>4(or 8) bytes\n>on insert and make an early exit for select where oid = wrong class id.\n\n",
"msg_date": "Fri, 3 Aug 2001 08:17:10 -0500",
"msg_from": "Neil Tiffin <ntiffin@earthlink.net>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: OID wraparound: summary and proposal"
},
{
"msg_contents": "Neil Tiffin wrote:\n> \n> I would just like to comment that for our project, GNU Enterprise, we\n> use our own 128 bit object ID that is unique (UUID) for every row in\n> all tables.\n> \n> It seems to me, without having looked into it, that having both a\n> PostgreSQL UID and our own 128 bit objectid (UUID) is redundant and\n> slows the whole process down. But we are storing object data in the\n> database and require and absolutely unique objectid. We are planning\n> for enterprise usage and expect to need 128 bits to uniquely define\n> our objects.\n\nIs it just an 128-bit int from a sequence or does it have some internal \nstructure ?\n\nWhat kind of enterprise do you expect to have more than \n18 446 744 073 709 551 615 of objects that can uniquely be identified \nby 64 bits ?\n\n-------------\nHannu\n",
"msg_date": "Tue, 07 Aug 2001 10:09:24 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: OID wraparound: summary and proposal"
},
{
"msg_contents": "At 10:09 AM +0500 8/7/01, Hannu Krosing wrote:\n>Neil Tiffin wrote:\n>>\n>> I would just like to comment that for our project, GNU Enterprise, we\n>> use our own 128 bit object ID that is unique (UUID) for every row in\n>> all tables.\n>>\n>> It seems to me, without having looked into it, that having both a\n>> PostgreSQL UID and our own 128 bit objectid (UUID) is redundant and\n>> slows the whole process down. But we are storing object data in the\n>> database and require and absolutely unique objectid. We are planning\n>> for enterprise usage and expect to need 128 bits to uniquely define\n>> our objects.\n>\n>Is it just an 128-bit int from a sequence or does it have some internal\n>structure ?\n>\n>What kind of enterprise do you expect to have more than\n>18 446 744 073 709 551 615 of objects that can uniquely be identified\n>by 64 bits ?\n\nOur objectid is a UUID from libuuid (provided by e2fsprogs, requires \ndevelopment files. debian package uuid-dev provides all necessary \nfiles.) We use the text representation which IIRC is 33 characters \n(38 minus the \"-\") to store it in the database. (And I dont think \nthis is the best way to do it.) As for 64 bits being enough, you may \njust be right. Our developer that did this part of the code has left \n(and we are taking the opportunity to examine this).\n\nWe will eventually compete with SAP, Peoplesoft etc. and consider \nthat SAP has about 20,000 tables to represent an enterprise plus the \nlife of the system at 10 years and you start to knock down the number \nvery fast.\n\nI think in the short term we could be happy with a 64 bit id. As we \ndon't even have our first application working (but we are within a \ncouple of months) and it will be years before we have a system that \nwill perform in large scale environments.\n\nIn either case the perfect solution, for us, would be to be able to \nconfigure the PostgreSQL uid as none, 64 bit or 128 bit uid at \ncompile time. 
A default of 64 bits would be just fine. But we need \nto have the uid unique for the database or we will still have to \ncreate and use our own uid (and that will slow everything down).\n\nI have not even considered multiple database servers running \ndifferent databases, which is our design goal. In this case we would \nlike to have a slimmed down (and blazingly fast) PostgreSQL server in \nwhich we manage the uid in our middleware. This is because the uid \nmust be unique across all servers and database vendors. (I don't \nclaim to be a database guru, so if we are all wet here please feel \nfree to help correct our misunderstandings.)\n\n-- \nNeil\nneilt@gnue.org\nGNU Enterprise\nhttp://www.gnuenterprise.org/\nhttp://www.gnuenterprise.org/~neilt/sc.html\n",
"msg_date": "Tue, 7 Aug 2001 09:34:53 -0500",
"msg_from": "Neil Tiffin <ntiffin@earthlink.net>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: OID wraparound: summary and proposal"
},
{
"msg_contents": "Neil Tiffin <ntiffin@earthlink.net> writes:\n> I have not even considered multiple database servers running \n> different database, which is our design goal. In this case we would \n> like to have a slimmed down (and blazingly fast) PostgreSQL server in \n> which we manage the uid in our middleware. This is because the uid \n> must be unique accross all servers and database vendors.\n\nGiven those requirements, it seems like your UID *must* be an\napplication-defined column; there's no way you'll get a bunch of\ndifferent database vendors to all sign on to your approach to UIDs.\n\nSo in reality, I think the feature you want is precisely to be able\nto suppress Postgres' automatic OID generation on your table(s), since\nit's of no value to you. The number of cycles saved per insert isn't\ngoing to be all that large, but they'll add up...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Aug 2001 11:22:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: OID wraparound: summary and proposal "
},
{
"msg_contents": "At 11:22 AM -0400 8/7/01, Tom Lane wrote:\n>Neil Tiffin <ntiffin@earthlink.net> writes:\n>> I have not even considered multiple database servers running\n>> different database, which is our design goal. In this case we would\n>> like to have a slimmed down (and blazingly fast) PostgreSQL server in\n>> which we manage the uid in our middleware. This is because the uid\n>> must be unique accross all servers and database vendors.\n>\n>Given those requirements, it seems like your UID *must* be an\n>application-defined column; there's no way you'll get a bunch of\n>different database vendors to all sign on to your approach to UIDs.\n>\n>So in reality, I think the feature you want is precisely to be able\n>to suppress Postgres' automatic OID generation on your table(s), since\n>it's of no value to you. The number of cycles saved per insert isn't\n>going to be all that large, but they'll add up...\n\nThat sounds about right. Its amazing how having to write this stuff \ndown clarifies ones thoughts.\n\n-- \nNeil\nneilt@gnue.org\nGNU Enterprise\nhttp://www.gnuenterprise.org/\nhttp://www.gnuenterprise.org/~neilt/sc.html\n",
"msg_date": "Tue, 7 Aug 2001 13:19:42 -0500",
"msg_from": "Neil Tiffin <ntiffin@earthlink.net>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: OID wraparound: summary and proposal"
},
{
"msg_contents": "Neil Tiffin wrote:\n\n> I have not even considered multiple database servers running different \n> database, which is our design goal. In this case we would like to have \n> a slimmed down (and blazingly fast) PostgreSQL server in which we manage \n> the uid in our middleware. This is because the uid must be unique \n> accross all servers and database vendors. (I don't claim to be a \n> database guru, so if we are all wet here please feel free to help \n> correct our misunderstandings.)\n\nI am not 100% sure, but I would believe that the \noid/uid/whatever_we_call_it only has to be unique within the table.\n\nAt least as long as you don't exchange data between different databases. \nAs soon as you transfer data from db a to db b, it's good to have an \nobject id that is unique in the world.\n-- \nReinhard Mueller\nGNU Enterprise project\nhttp://www.gnue.org\n\n",
"msg_date": "Sat, 11 Aug 2001 22:54:55 +0200",
"msg_from": "Reinhard Mueller <reinhard.mueller@bytewise.at>",
"msg_from_op": false,
"msg_subject": "Re: [gnue-geas] Re: Proposal: OID wraparound: summary and proposal"
}
] |
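Hannu's 64-bit headroom figure in the thread above is easy to sanity-check with a back-of-envelope calculation. A minimal sketch (the function name is invented, the insert rates are deliberately generous assumptions, and leap days are ignored):

```c
#include <assert.h>
#include <stdint.h>

/*
 * How many years of sustained inserts (one id consumed per insert)
 * before a monotonically increasing 64-bit id wraps around?
 */
static uint64_t
years_until_wraparound(uint64_t inserts_per_second)
{
	const uint64_t seconds_per_year = UINT64_C(60) * 60 * 24 * 365;

	return UINT64_MAX / inserts_per_second / seconds_per_year;
}
```

Even an implausible sustained rate of a billion inserts per second across all 20,000 tables takes roughly 584 years to exhaust 64 bits, which supports the view that 64 bits is enough for the short term.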
[
{
"msg_contents": "\n> > FWIW, I changed my vote ;-). I'm not sure what Vadim and Lamar think\n> > at the moment, but I thought you made good arguments.\n> \n> I think Vadim was clearly NOOID. I vote OID.\n\nNOOID, but I can see the arguments on the other side :-)\nI would also vote GUC despite the pg_dump issue.\n\nAndreas\n",
"msg_date": "Fri, 3 Aug 2001 15:40:47 +0200 ",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: OID wraparound: summary and proposal"
}
] |
[
{
"msg_contents": "Hi,\n\nwe're getting back to GiST development and would like to discuss\nour plans for 7.2. This discussion doesn't touch any changes in system tables\nto solve index_formtuple problem.\nWe want to discuss implementation of null-safe interface to GiST\n(for reference see thread http://fts.postgresql.org/db/mw/msg.html?mid=1025848 )\n\nThere are seven user-defined functions one should write to create GiST opclass:\n\n1. equal - it's already null-safe. GiST core will not call 'equal'\n function if any parameter is NULL\n\n2,3. compress/decompress - always return NULL for NULL, so we could also\n handle them inside GiST core as for 'equal'\n\n4. penalty - accepts 2 parameters and never returns NULL. In existed\n implementations we've seen penalty returns 0 (zero) if\n any of the parameters is NULL. For this case we propose\n not to call 'penalty' if it's marked as isstrict.\n\n5. consistent - returns 'false' if any of the parameters is NULL.\n it never returns NULL.\n\n6. union - accepts array of keys and returns their union. If all keys are NULL\n then returns NULL, so we don't need to call this function.\n If not all keys are NULL we could:\n a) Clean up NULLs from array\n b) don't touch this array and require 'union' function to\n handle NULL's. In this case we need an additional array\n to point NULLs if we want to support parameters pass-by-value\n\n7. picksplit - accepts structure GIST_SPLITVEC and array of keys. It never\n returns NULL.\n there are 2 variants:\n a) Clean up NULLs from array\n don't need to change existed opclasses\n b) Require 'picksplit' to handle NULL'\n To support arguments passed by value we need additional\n array as in case 6.b\n\nSummary:\n\n 1. Only for one function - penalty, 'isstrict' mark could be required.\n in that case 'penalty' will be not called for NULL keys,\n otherwise, it's users responsibility to write null-safe code.\n Other functions could be handled inside GiST core without\n bothering of user. 
This is quite an easy task.\n\n 2. For union and picksplit we propose to clean up NULLs from the array of\n keys, so support of 'passed-by-value' arguments will not require\n changes to the user interface. It would require some modification of the\n current splitting algorithm, but this wouldn't be a complex task for us.\n\n\nThe proposed solution solves the problem with the 'pass-by-value' interface,\nthough we don't see where it could be used, because in GiST\nthe key for an index of type int4 is 8 bytes.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 3 Aug 2001 17:00:14 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Null-safe GiST interface (proposal)"
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> 2. For union and picksplit we propose to clean up NULLs from array of\n> keys, so support of arguments 'passed-by-value' will not require\n> changes of user interface. It would require some modification of current\n> algorithm of splitting, but this wouldn't be a complex task for us.\n\nSeems reasonable. Would there ever be a union or picksplit method that\nwould want to do anything with nulls except ignore them? I can't think\nof a reason to do differently, so you're just centralizing the logic to\nignore nulls in these methods.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Aug 2001 11:48:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Null-safe GiST interface (proposal) "
},
{
"msg_contents": "Hi,\n\nplease apply patch to current CVS which implements:\n\n1. null-safe interface to GiST\n (as proposed in http://fts.postgresql.org/db/mw/msg.html?mid=1028327)\n\n2. support for 'pass-by-value' arguments - to test this\n we used special opclass for int4 with values in range [0-2^15]\n More testing will be done after resolving problem with\n index_formtuple and implementation of B-tree using GiST\n\n3. small patch to contrib modules (seg,cube,rtree_gist,intarray) -\n mark functions as 'isstrict' where needed.\n\nPatch was intensively tested (attached test.tgz contains test suite):\n\nThis is a generic test suite for GiST (7.2):\n\n1. tests GiST multi-key indexes\n2. tests null-safe interface to GiST\n (see proposal http://fts.postgresql.org/db/mw/msg.html?mid=1028327)\n\nUSAGE:\n\nCreate db and install contrib modules: intarray, rtree_gist, seg, cube.\nEdit gen.pl for $pgsqlsrc\n% perl gen.pl > /tmp/data\n% psql TESTDB < test.sql\n\n\n\tRegards,\n\t\tOleg\n\n\n\nOn Fri, 3 Aug 2001, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > 2. For union and picksplit we propose to clean up NULLs from array of\n> > keys, so support of arguments 'passed-by-value' will not require\n> > changes of user interface. It would require some modification of current\n> > algorithm of splitting, but this wouldn't be a complex task for us.\n>\n> Seems reasonable. Would there ever be a union or picksplit method that\n> would want to do anything with nulls except ignore them? 
I can't think\n> of a reason to do differently, so you're just centralizing the logic to\n> ignore nulls in these methods.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83",
"msg_date": "Tue, 7 Aug 2001 22:33:31 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: Re: Null-safe GiST interface (proposal) "
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Hi,\n> \n> please apply patch to current CVS which implements:\n> \n> 1. null-safe interface to GiST\n> (as proposed in http://fts.postgresql.org/db/mw/msg.html?mid=1028327)\n> \n> 2. support for 'pass-by-value' arguments - to test this\n> we used special opclass for int4 with values in range [0-2^15]\n> More testing will be done after resolving problem with\n> index_formtuple and implementation of B-tree using GiST\n> \n> 3. small patch to contrib modules (seg,cube,rtree_gist,intarray) -\n> mark functions as 'isstrict' where needed.\n> \n> Patch was intensively tested (attached test.tgz contains test suite):\n> \n> This is a generic test suite for GiST (7.2):\n> \n> 1. tests GiST multi-key indexes\n> 2. tests null-safe interface to GiST\n> (see proposal http://fts.postgresql.org/db/mw/msg.html?mid=1028327)\n> \n> USAGE:\n> \n> Create db and install contrib modules: intarray, rtree_gist, seg, cube.\n> Edit gen.pl for $pgsqlsrc\n> % perl gen.pl > /tmp/data\n> % psql TESTDB < test.sql\n> \n> \n> \tRegards,\n> \t\tOleg\n> \n> \n> \n> On Fri, 3 Aug 2001, Tom Lane wrote:\n> \n> > Oleg Bartunov <oleg@sai.msu.su> writes:\n> > > 2. For union and picksplit we propose to clean up NULLs from array of\n> > > keys, so support of arguments 'passed-by-value' will not require\n> > > changes of user interface. It would require some modification of current\n> > > algorithm of splitting, but this wouldn't be a complex task for us.\n> >\n> > Seems reasonable. Would there ever be a union or picksplit method that\n> > would want to do anything with nulls except ignore them? 
I can't think\n> > of a reason to do differently, so you're just centralizing the logic to\n> > ignore nulls in these methods.\n> >\n> > \t\t\tregards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 8 Aug 2001 11:19:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Null-safe GiST interface (proposal)"
},
{
"msg_contents": "\nMain patch applied. I did not apply the test code patch. Thanks.\n\n\n> Hi,\n> \n> please apply patch to current CVS which implements:\n> \n> 1. null-safe interface to GiST\n> (as proposed in http://fts.postgresql.org/db/mw/msg.html?mid=1028327)\n> \n> 2. support for 'pass-by-value' arguments - to test this\n> we used special opclass for int4 with values in range [0-2^15]\n> More testing will be done after resolving problem with\n> index_formtuple and implementation of B-tree using GiST\n> \n> 3. small patch to contrib modules (seg,cube,rtree_gist,intarray) -\n> mark functions as 'isstrict' where needed.\n> \n> Patch was intensively tested (attached test.tgz contains test suite):\n> \n> This is a generic test suite for GiST (7.2):\n> \n> 1. tests GiST multi-key indexes\n> 2. tests null-safe interface to GiST\n> (see proposal http://fts.postgresql.org/db/mw/msg.html?mid=1028327)\n> \n> USAGE:\n> \n> Create db and install contrib modules: intarray, rtree_gist, seg, cube.\n> Edit gen.pl for $pgsqlsrc\n> % perl gen.pl > /tmp/data\n> % psql TESTDB < test.sql\n> \n> \n> \tRegards,\n> \t\tOleg\n> \n> \n> \n> On Fri, 3 Aug 2001, Tom Lane wrote:\n> \n> > Oleg Bartunov <oleg@sai.msu.su> writes:\n> > > 2. For union and picksplit we propose to clean up NULLs from array of\n> > > keys, so support of arguments 'passed-by-value' will not require\n> > > changes of user interface. It would require some modification of current\n> > > algorithm of splitting, but this wouldn't be a complex task for us.\n> >\n> > Seems reasonable. Would there ever be a union or picksplit method that\n> > would want to do anything with nulls except ignore them? 
I can't think\n> > of a reason to do differently, so you're just centralizing the logic to\n> > ignore nulls in these methods.\n> >\n> > \t\t\tregards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 10 Aug 2001 10:34:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Null-safe GiST interface (proposal)"
}
] |
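The "clean up NULLs from the array" step proposed for union and picksplit in the thread above amounts to compacting the key vector before handing it to the user function. A minimal sketch, with Datum simplified to a bare pointer (the real GiST core passes its own key representation, so the names and types here are illustrative only):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for the real key representation. */
typedef void *Datum;

/*
 * Compact the non-NULL keys to the front of the array and return the
 * number of keys that remain; entries whose isnull flag is set are
 * simply dropped.
 */
static size_t
compact_non_null_keys(Datum *keys, const bool *isnull, size_t nkeys)
{
	size_t		nkept = 0;

	for (size_t i = 0; i < nkeys; i++)
	{
		if (!isnull[i])
			keys[nkept++] = keys[i];	/* shift key down over the gap */
	}
	return nkept;
}
```

With the NULLs stripped centrally like this, existing opclasses need no change, and the pass-by-value question reduces to how the compacted array (plus a parallel null-flag array, if kept) is represented.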
[
{
"msg_contents": "This patch is because Hurd does not support NOFILE. It is against current\ncvs.\n\nThe Debian bug report says, \"The upstream source makes use of NOFILE\nunconditionalized. As the Hurd doesn't have an arbitrary limit on the\nnumber of open files, this is not defined. But _SC_OPEN_MAX works fine\nand returns 1024 (applications can increase this as they want), so I\nsuggest the below diff. Please forward this upstream, too.\"\n\n\n*** pgsql.orig/src/backend/storage/file/fd.c\tFri Aug 3 16:28:46 2001\n--- pgsql/src/backend/storage/file/fd.c\tFri Aug 3 16:29:19 2001\n***************\n*** 290,297 ****\n--- 290,302 ----\n \t\tno_files = sysconf(_SC_OPEN_MAX);\n \t\tif (no_files == -1)\n \t\t{\n+ # tweak for Hurd, which does not support NOFILE\n+ #ifdef NOFILE\n \t\t\telog(DEBUG, \"pg_nofile: Unable to get _SC_OPEN_MAX using sysconf(); using \n%d\", NOFILE);\n \t\t\tno_files = (long) NOFILE;\n+ #else\n+ \t\t\telog(FATAL, \"pg_nofile: Unable to get _SC_OPEN_MAX using sysconf() and \nNOFILE is undefined\");\n+ #endif\n \t\t}\n #endif\n \n\n\nThis report was from Marcus Brinkmann <Marcus.Brinkmann@ruhr-uni-boch.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Love is patient, love is kind. It does not envy, it\n does not boast, it is not proud. It is not rude, it is\n not self seeking, it is not easily angered, it keeps\n no record of wrongs. Love does not delight in evil but\n rejoices with the truth. It always protects, always\n trusts, always hopes, always perseveres.\" \n I Corinthians 13:4-7 \n\n\n",
"msg_date": "Fri, 03 Aug 2001 16:37:05 +0100",
"msg_from": "\"Oliver Elphick\" <olly@lfix.co.uk>",
"msg_from_op": true,
"msg_subject": "Small patch for Hurd"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> This patch is because Hurd does not support NOFILE. It is against current\n> cvs.\n> \n> The Debian bug report says, \"The upstream source makes use of NOFILE\n> unconditionalized. As the Hurd doesn't have an arbitrary limit on the\n> number of open files, this is not defined. But _SC_OPEN_MAX works fine\n> and returns 1024 (applications can increase this as they want), so I\n> suggest the below diff. Please forward this upstream, too.\"\n> \n> \n> *** pgsql.orig/src/backend/storage/file/fd.c\tFri Aug 3 16:28:46 2001\n> --- pgsql/src/backend/storage/file/fd.c\tFri Aug 3 16:29:19 2001\n> ***************\n> *** 290,297 ****\n> --- 290,302 ----\n> \t\tno_files = sysconf(_SC_OPEN_MAX);\n> \t\tif (no_files == -1)\n> \t\t{\n> + # tweak for Hurd, which does not support NOFILE\n> + #ifdef NOFILE\n> \t\t\telog(DEBUG, \"pg_nofile: Unable to get _SC_OPEN_MAX using sysconf(); using \n> %d\", NOFILE);\n> \t\t\tno_files = (long) NOFILE;\n> + #else\n> + \t\t\telog(FATAL, \"pg_nofile: Unable to get _SC_OPEN_MAX using sysconf() and \n> NOFILE is undefined\");\n> + #endif\n> \t\t}\n> #endif\n> \n> \n> \n> This report was from Marcus Brinkmann <Marcus.Brinkmann@ruhr-uni-boch.\n> \n> -- \n> Oliver Elphick Oliver.Elphick@lfix.co.uk\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"Love is patient, love is kind. It does not envy, it\n> does not boast, it is not proud. It is not rude, it is\n> not self seeking, it is not easily angered, it keeps\n> no record of wrongs. Love does not delight in evil but\n> rejoices with the truth. 
It always protects, always\n> trusts, always hopes, always perseveres.\" \n> I Corinthians 13:4-7 \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 3 Aug 2001 15:58:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Small patch for Hurd"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n> This patch is because Hurd does not support NOFILE. It is against current\n> cvs.\n> \n> The Debian bug report says, \"The upstream source makes use of NOFILE\n> unconditionalized. As the Hurd doesn't have an arbitrary limit on the\n> number of open files, this is not defined. But _SC_OPEN_MAX works fine\n> and returns 1024 (applications can increase this as they want), so I\n> suggest the below diff. Please forward this upstream, too.\"\n> \n> \n> *** pgsql.orig/src/backend/storage/file/fd.c\tFri Aug 3 16:28:46 2001\n> --- pgsql/src/backend/storage/file/fd.c\tFri Aug 3 16:29:19 2001\n> ***************\n> *** 290,297 ****\n> --- 290,302 ----\n> \t\tno_files = sysconf(_SC_OPEN_MAX);\n> \t\tif (no_files == -1)\n> \t\t{\n> + # tweak for Hurd, which does not support NOFILE\n> + #ifdef NOFILE\n> \t\t\telog(DEBUG, \"pg_nofile: Unable to get _SC_OPEN_MAX using sysconf(); using \n> %d\", NOFILE);\n> \t\t\tno_files = (long) NOFILE;\n> + #else\n> + \t\t\telog(FATAL, \"pg_nofile: Unable to get _SC_OPEN_MAX using sysconf() and \n> NOFILE is undefined\");\n> + #endif\n> \t\t}\n> #endif\n> \n> \n> \n> This report was from Marcus Brinkmann <Marcus.Brinkmann@ruhr-uni-boch.\n> \n> -- \n> Oliver Elphick Oliver.Elphick@lfix.co.uk\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"Love is patient, love is kind. It does not envy, it\n> does not boast, it is not proud. It is not rude, it is\n> not self seeking, it is not easily angered, it keeps\n> no record of wrongs. Love does not delight in evil but\n> rejoices with the truth. 
It always protects, always\n> trusts, always hopes, always perseveres.\" \n> I Corinthians 13:4-7 \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Aug 2001 15:42:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Small patch for Hurd"
}
] |
[
{
"msg_contents": "There is some code in gram.y that detects whether you are in a RULE so\nNEW/OLD can be detected. Seems the value is reset on parser start and\nset on RULE start, but not reset on rule and. A multi-query string\ncould use NEW/OLD in the queries after the RULE even though they are\ninvalid. The following patch fixes this by resetting the flag when the\nrule action happens.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/parser/gram.y\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.238\ndiff -c -r2.238 gram.y\n*** src/backend/parser/gram.y\t2001/07/16 19:07:40\t2.238\n--- src/backend/parser/gram.y\t2001/08/03 14:45:37\n***************\n*** 2720,2725 ****\n--- 2720,2726 ----\n \t\t\t\t\tn->instead = $12;\n \t\t\t\t\tn->actions = $13;\n \t\t\t\t\t$$ = (Node *)n;\n+ \t\t\t\t\tQueryIsRule=FALSE;\n \t\t\t\t}\n \t\t;",
"msg_date": "Fri, 3 Aug 2001 11:47:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Rule flag in gram.y"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> There is some code in gram.y that detects whether you are in a RULE so\n> NEW/OLD can be detected. Seems the value is reset on parser start and\n> set on RULE start, but not reset on rule and. A multi-query string\n> could use NEW/OLD in the queries after the RULE even though they are\n> invalid. The following patch fixes this by resetting the flag when the\n> rule action happens.\n\nI was about to say \"fix ecpg's grammar too\", but a quick look shows that\nMichael was way ahead of the rest of us on this one ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Aug 2001 18:55:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rule flag in gram.y "
}
] |
[
{
"msg_contents": "How feasible is it to use Postgres' debug log output to create a 'redo' log?\n\nWe need this functionality urgently, but the documentation regarding the \ndebug options / switches is rather thin and playing with the different \nsettings / debug levels for the whole day did not make me any wiser (and \nneither did studying the source code).\n\nIf there is really no documentation written yet, I volunteer if somebody \nprovides me with the neccessary background information.\n\nHorst\n======================\nDr. Horst Herb\nCohuna, Vic 3568 Australia\n======================\n",
"msg_date": "Sat, 4 Aug 2001 02:37:32 +1000",
"msg_from": "Horst Herb <hherb@malleenet.net.au>",
"msg_from_op": true,
"msg_subject": "redo log"
}
] |
[
{
"msg_contents": "Good day,\n\nI'm one of the authors of the new forthcoming PostgreSQL book, and I was\nwondering if someone directly affiliated with the project would like to\nmake a statement on Postgres's standards compliancy. Specifically, as it\ncorresponds to SQL89/1, SQL92/2, and SQL93/3, etc.\n\nI've been able to find a number of somewhat vague, informal statements\nthat basically amount to \"Postgres supports most of SQL92\", and a few\noffhand references to SQL99 data types, but I've also seen it written that\nsome PostgreSQL developers have claimed that it is the closest of any\nRDBMS to conforming to SQL92. I've also seen it written that there are\nsome areas of SQL92 which are considered to be \"ill-considered\" that will\nnever be implemented supposedly, though no references to which aspects of\nthe standard this might have been.\n\nSo, I suppose I'm looking for something more specific for print. ;) Any\ninformation would be appreciated.\n\n\n\nThanks for your time,\nJw.\n--\nJohn Worsley - Lead Programmer / Web Developer, Command Prompt, Inc.\nLinuxPorts - Resources for Linux Users | http://www.linuxports.com/\n(503) 736-4609 | jlx@commandprompt.com | webmaster@linuxports.com\n--\nBy way of pgsql-hackers@commandprompt.com\n\n",
"msg_date": "Fri, 3 Aug 2001 12:41:55 -0700 (PDT)",
"msg_from": "<pgsql-hackers@commandprompt.com>",
"msg_from_op": true,
"msg_subject": "PostgreSQL Book, standards inquiry."
}
] |
[
{
"msg_contents": "I was going through the Todo list looking at the items that are planned \nfor 7.2 (i.e. those starting with a '-'). I was doing this to see if \nany might impact the jdbc driver. The only one that I thought might \nhave an impact on the jdbc code is the item:\n\n* -Make binary/file in/out interface for TOAST columns (base64)\n\nI looked through the 7.2 docs and I couldn't find any reference to this \nnew functionality, so I am assuming that it isn't completed yet. If this \nis going to be done for 7.2, I would like to get a better understanding \nof what functionality is going to be provided. That way I can decide \nhow best to expose that functionality through the jdbc interface.\n\nthanks,\n--Barry\n\n",
"msg_date": "Sat, 04 Aug 2001 16:41:59 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": true,
"msg_subject": "Question about todo item"
},
{
"msg_contents": "> I was going through the Todo list looking at the items that are planned \n> for 7.2 (i.e. those starting with a '-'). I was doing this to see if \n> any might impact the jdbc driver. The only one that I thought might \n> have an impact on the jdbc code is the item:\n> \n> * -Make binary/file in/out interface for TOAST columns (base64)\n\nMarked items are done, not planned for 7.2.\n\n> I looked through the 7.2 docs and I couldn't find any reference to this \n> new functionality, so I am assuming that it isn't completed yet. If this \n> is going to be done for 7.2, I would like to get a better understanding \n> of what functionality is going to be provided. That way I can decide \n> how best to expose that functionality through the jdbc interface.\n\nNot sure on the docs issue, but it is a set of function uuencode,\nuudecode, etc that allow binary data to be uuencoded, then loaded into a\nbytea field as binary.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Aug 2001 21:14:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about todo item"
},
{
"msg_contents": "OK. Those functions are in the docs. I didn't relate those functions \nand this todo item together.\n\nBy 'in/out interface for TOAST columns' I thought this item dealt with \nadding large object like functions to read/write/append to TOAST column \ndata. I know that has been talked about in the past on hackers. But I \ndon't see it on the todo list. Has that been done?\n\nthanks,\n--Barry\n\nBruce Momjian wrote:\n>>I was going through the Todo list looking at the items that are planned \n>>for 7.2 (i.e. those starting with a '-'). I was doing this to see if \n>>any might impact the jdbc driver. The only one that I thought might \n>>have an impact on the jdbc code is the item:\n>>\n>>* -Make binary/file in/out interface for TOAST columns (base64)\n>>\n> \n> Marked items are done, not planned for 7.2.\n> \n> \n>>I looked through the 7.2 docs and I couldn't find any reference to this \n>>new functionality, so I am assuming that it isn't completed yet. If this \n>>is going to be done for 7.2, I would like to get a better understanding \n>>of what functionality is going to be provided. That way I can decide \n>>how best to expose that functionality through the jdbc interface.\n>>\n> \n> Not sure on the docs issue, but it is a set of function uuencode,\n> uudecode, etc that allow binary data to be uuencoded, then loaded into a\n> bytea field as binary.\n> \n> \n\n\n",
"msg_date": "Sat, 04 Aug 2001 18:35:15 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": true,
"msg_subject": "Re: Question about todo item"
},
{
"msg_contents": "> OK. Those functions are in the docs. I didn't relate those functions \n> and this todo item together.\n> \n> By 'in/out interface for TOAST columns' I thought this item dealt with \n> adding large object like functions to read/write/append to TOAST column \n> data. I know that has been talked about in the past on hackers. But I \n> don't see it on the todo list. Has that been done?\n\nOnly large objects allow that kind of access. I don't think we will do\nthat for TOAST columns.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Aug 2001 21:51:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about todo item"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> By 'in/out interface for TOAST columns' I thought this item dealt with \n>> adding large object like functions to read/write/append to TOAST column \n>> data. I know that has been talked about in the past on hackers. But I \n>> don't see it on the todo list. Has that been done?\n\n> Only large objects allow that kind of access. I don't think we will do\n> that for TOAST columns.\n\nBarry's right --- that *has* been talked about, and I thought the\nconsensus was that we needed such functions. You don't necessarily\nwant to read or write a multi-megabyte TOASTed value all in one go.\nIf it's not on TODO then it should be. (But I suspect if you check\nthe archives, you'll discover that this is exactly what the TODO\nitem was really about.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Aug 2001 22:33:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about todo item "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> By 'in/out interface for TOAST columns' I thought this item dealt with \n> >> adding large object like functions to read/write/append to TOAST column \n> >> data. I know that has been talked about in the past on hackers. But I \n> >> don't see it on the todo list. Has that been done?\n> \n> > Only large objects allow that kind of access. I don't think we will do\n> > that for TOAST columns.\n> \n> Barry's right --- that *has* been talked about, and I thought the\n> consensus was that we needed such functions. You don't necessarily\n> want to read or write a multi-megabyte TOASTed value all in one go.\n> If it's not on TODO then it should be. (But I suspect if you check\n> the archives, you'll discover that this is exactly what the TODO\n> item was really about.)\n\nYes, I kept talking about it, but no one was interested, saying large\nobjects are better for that kind of access. When the uuencode idea came\naround, I though the read/write binary toast idea was dead.\n\nI agree we should have it, but I thought the problem was that we\ncouldn't come up with an API that worked.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Aug 2001 22:35:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about todo item"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I agree we should have it, but I thought the problem was that we\n> couldn't come up with an API that worked.\n\nAFAIR, no one's really tried yet. I do not recall any proposals\ngetting shot down ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Aug 2001 22:38:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about todo item "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I agree we should have it, but I thought the problem was that we\n> > couldn't come up with an API that worked.\n> \n> AFAIR, no one's really tried yet. I do not recall any proposals\n> getting shot down ...\n\nI keep bugging Jan about it, since pre-7.1 and no one has come up with\nan idea. I think the lack of any proposal or anyone even mentioning\nthey liked the idea made me give up, especially when uuencode at least\ngave us binary in/out.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Aug 2001 22:41:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about todo item"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I keep bugging Jan about it, since pre-7.1 and no one has come up with\n> an idea.\n\nWell, if you want an idea:\n\n\tBEGIN;\n\n\tSELECT open_toast_object(toastable_column) FROM tab WHERE ...;\n\n\t-- app checks that it got exactly one result back\n\n\t-- app lo_reads and/or lo_writes using ID returned by SELECT\n\n\tEND;\n\nImplementation is left as an exercise for the reader ;-).\n\nOffhand this seems like it would be doable for a column-value that\nwas actually moved out-of-line by TOAST, since the open_toast_object\nfunction could see and return the TOAST pointer, and then the read/\nwrite operations just hack on rows in pg_largeobject. The hard part\nis how to provide equivalent functionality (transparent to the client\nof course) when the particular value you select has *not* been moved\nout-of-line. Ideas anyone?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Aug 2001 23:11:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about todo item "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I keep bugging Jan about it, since pre-7.1 and no one has come up with\n> > an idea.\n> \n> Well, if you want an idea:\n> \n> \tBEGIN;\n> \n> \tSELECT open_toast_object(toastable_column) FROM tab WHERE ...;\n> \n> \t-- app checks that it got exactly one result back\n> \n> \t-- app lo_reads and/or lo_writes using ID returned by SELECT\n> \n> \tEND;\n> \n> Implementation is left as an exercise for the reader ;-).\n> \n> Offhand this seems like it would be doable for a column-value that\n> was actually moved out-of-line by TOAST, since the open_toast_object\n> function could see and return the TOAST pointer, and then the read/\n> write operations just hack on rows in pg_largeobject. The hard part\n\nI am confused how pg_largeobject is involved?\n\n> is how to provide equivalent functionality (transparent to the client\n> of course) when the particular value you select has *not* been moved\n> out-of-line. Ideas anyone?\n\nDon't forget compression of TOAST columns. How do you fseek/read/write\nin there?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 4 Aug 2001 23:34:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about todo item"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Offhand this seems like it would be doable for a column-value that\n>> was actually moved out-of-line by TOAST, since the open_toast_object\n>> function could see and return the TOAST pointer, and then the read/\n>> write operations just hack on rows in pg_largeobject. The hard part\n\n> I am confused how pg_largeobject is involved?\n\ns/pg_largeobject/toast_table_for_relation/ ... sorry about that ...\n\n> Don't forget compression of TOAST columns. How do you fseek/read/write\n> in there?\n\nWell, you can *do* it, just don't expect it to be fast. The\nimplementation would have to read or write most of the value, not just\nthe segment you wanted. A person who actually expected to use this\nstuff would likely want to disable compression on a column he wanted\nrandom access within.\n\nHmm ... that provides an idea. We could easily add some additional\n'attstorage' settings that say *all* values of a column must be forced\nout-of-line (with or without allowing compression), regardless of size.\nThen, open_toast_object would work reliably on such a column. One\npossible user API to such an infrastructure is to invent BLOB and CLOB\ndatatypes, which are just like bytea and text except that they force the\nappropriate attstorage value. Ugly as sin, ain't it ... but I bet it\ncould be made to work.\n\nOkay, there's your idea. Now, who can do better?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Aug 2001 23:55:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about todo item "
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > I agree we should have it, but I thought the problem was that we\n> > > couldn't come up with an API that worked.\n> >\n> > AFAIR, no one's really tried yet. I do not recall any proposals\n> > getting shot down ...\n> \n> I keep bugging Jan about it, since pre-7.1 and no one has come up with\n> an idea. I think the lack of any proposal or anyone even mentioning\n> they liked the idea made me give up, especially when uuencode at least\n> gave us binary in/out.\n\nCan anyone recall, why was uuencode chosen over base64 encoding ?\n\n-----------------\nHannu\n",
"msg_date": "Sun, 05 Aug 2001 10:48:44 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Question about todo item"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I keep bugging Jan about it, since pre-7.1 and no one has come up with\n> > an idea.\n> \n> Well, if you want an idea:\n> \n> BEGIN;\n> \n> SELECT open_toast_object(toastable_column) FROM tab WHERE ...;\n> \n> -- app checks that it got exactly one result back\n> \n> -- app lo_reads and/or lo_writes using ID returned by SELECT\n> \n> END;\n> \n> Implementation is left as an exercise for the reader ;-).\n> \n> Offhand this seems like it would be doable for a column-value that\n> was actually moved out-of-line by TOAST, since the open_toast_object\n> function could see and return the TOAST pointer, and then the read/\n> write operations just hack on rows in pg_largeobject. The hard part\n> is how to provide equivalent functionality (transparent to the client\n> of course) when the particular value you select has *not* been moved\n> out-of-line. Ideas anyone?\n\nI'd propose the folllowing - \n\n BEGIN;\n\n DECLARE toastaccesscursor \n CURSOR FOR \n SELECT open_toast_object_handle(toastable_column) as\ntoast_object_handle FROM tab WHERE ...;\n\n -- while you get any rows\n\n FETCH 1 IN toastaccesscursor;\n \n -- app lo_reads and/or lo_writes using toast_object_handle\nreturned by SELECT\n \n END;\n\n\nIf we really wanted to have lo_xxx functionality on any toastable column\nit should be doable by \ncreating a fake toast-handle and manipulating the column value directly,\npreferrably automatically \nmoving the lo_written column to toast. 
Faking the handle should be easy\nas it has to live only while \ncursor is positioned on affected row .\n\nBut your another idea of creating special [B|C]LOB types that are\nallways saved to toast seems nicer\n\nCREATE TABLE breakfast (\n main eggs_and_bacon WITH TOAST = 'always,nocompress'\n);\n\nand just raise an error or do a silent conversion if a section is\nlo_written in a compressed \nor non-toasted column.\n\nAs TOAST is a general purpose feature of postgres I think that providing\nthe WITH options is more \ndesirable than special types for only a few of them. \n\nCLOB and BLOB could still be provided as shorthand names similar to\nSERIAL.\n\n---------------\nHannu\n",
"msg_date": "Sun, 05 Aug 2001 11:13:36 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Question about todo item"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I agree we should have it, but I thought the problem was that we\n> > couldn't come up with an API that worked.\n>\n> AFAIR, no one's really tried yet. I do not recall any proposals\n> getting shot down ...\n\n One of the problems I saw, and that's probably why we don't\n have a proposal yet, is, that the size of the data is\n recorded in the toast reference held in the main tuple. If\n you later open the toast value for writing, you'll change the\n size, but you'd need to change it in the main tuple too,\n what'd require a regular update on the main tuple, what I\n don't think we want to have here.\n\n The other problem is, if you insert a tuple containing a\n small value (e.g. empty string), it'll not get toasted and\n you can't force it to get. Later you open it for writing and\n pump a CD-image into. How do we convert the existing empty\n text datum into a toast reference in the main tuple?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 6 Aug 2001 10:35:00 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Question about todo item"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I keep bugging Jan about it, since pre-7.1 and no one has come up with\n> > an idea.\n>\n> Well, if you want an idea:\n>\n> BEGIN;\n>\n> SELECT open_toast_object(toastable_column) FROM tab WHERE ...;\n>\n> -- app checks that it got exactly one result back\n>\n> -- app lo_reads and/or lo_writes using ID returned by SELECT\n>\n> END;\n>\n> Implementation is left as an exercise for the reader ;-).\n>\n> Offhand this seems like it would be doable for a column-value that\n> was actually moved out-of-line by TOAST, since the open_toast_object\n> function could see and return the TOAST pointer, and then the read/\n> write operations just hack on rows in pg_largeobject. The hard part\n> is how to provide equivalent functionality (transparent to the client\n> of course) when the particular value you select has *not* been moved\n> out-of-line. Ideas anyone?\n\n TOAST values aren't stored in pg_largeobject. And how do you\n seek to a position in a compressed and then sliced object? We\n need a way to force the object over a streaming interface\n into uncompressed toast slices first. Let me think about it\n for two days, Okay?\n\n The interface lacks imho a mode (r/w/rw/a) argument. Other\n than that I'd like this part.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 6 Aug 2001 10:40:18 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Question about todo item"
},
{
"msg_contents": "Jan Wieck <JanWieck@yahoo.com> writes:\n> One of the problems I saw, and that's probably why we don't\n> have a proposal yet, is, that the size of the data is\n> recorded in the toast reference held in the main tuple. If\n> you later open the toast value for writing, you'll change the\n> size, but you'd need to change it in the main tuple too,\n> what'd require a regular update on the main tuple, what I\n> don't think we want to have here.\n\nWell, in fact, maybe we *should*. I was thinking last night about\nthe fact that large objects as they stand are broken from a\npermissions-checking point of view: anyone who knows an LO's OID\ncan read or write it. A LO-style interface for toasted columns must\nnot be so brain-dead. This says that a SELECT open_toast_object()\nshould deliver a read-only object reference, and that if you want\nto update, you should have to do an UPDATE.\n\nNow a read-only TOAST LO reference strikes me as no problem. If the\nopen() function finds that it's been handed a not-toasted value, it\ncan just save the value verbatim in the open-LO-reference table.\nThe value is not large, by definition, so this will work fine.\n\nAs for the update side of things, the best idea I can come up with\nis a multi-phase operation: open the value with a select, read/write\nthe reference, store the updated reference with UPDATE. Something\nlike:\n\n1. SELECT writable_toast_reference(column) FROM table WHERE ...;\n\n(Actually, SELECT FOR UPDATE would be the more common idiom.)\n\n2. Read and/or write the LO reference returned by SELECT. Note that\nthis must be defined to read/write a temporary work area --- if the\ntransaction aborts in this part, or commits without doing UPDATE,\nnothing has happened to the stored value referenced by the main table\nrow. (I think this happens automatically if we are hacking rows in\na toast table. 
If we are hacking an in-line value stored in the\nLO-reference table, we might at some point decide we need to shove it\nout to disk.)\n\n3. UPDATE table SET column = write_toast_reference(objectref) WHERE ...;\n\nwrite_toast_reference extracts the toastable column's data or reference\nfrom the LO table, closes the open LO reference (so you can't continue\nhacking the data afterwards), and proceeds with a normal UPDATE.\n\nIt would also be pretty straightforward to extend this to the INSERT\ncase: we just need an \"open\" function that creates a new, empty object\nof a TOASTable type in the LO reference table. Write on this, and\nfinally invoke write_toast_reference() in the INSERT.\n\n\nKinda grotty, but implementable, and it doesn't require a whole new set\nof permissions concepts. Can anyone improve on this?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Aug 2001 10:53:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about todo item "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Jan Wieck <JanWieck@yahoo.com> writes:\n> > One of the problems I saw, and that's probably why we don't\n> > have a proposal yet, is, that the size of the data is\n> > recorded in the toast reference held in the main tuple. If\n> > you later open the toast value for writing, you'll change the\n> > size, but you'd need to change it in the main tuple too,\n> > what'd require a regular update on the main tuple, what I\n> > don't think we want to have here.\n> \n> Well, in fact, maybe we *should*. \n\nI think so too, as we shouldnt do in-place modification in the toast \ntable anyway but give changed pages new trx ids, i.e UPDATE them.\n\nit could be somewhat tricky to change just a few pages if there are \nsome inter page pointers in toast-table. If its all done with regular\nindex only then this should pose no problem.\n\n> I was thinking last night about\n> the fact that large objects as they stand are broken from a\n> permissions-checking point of view: anyone who knows an LO's OID\n> can read or write it. A LO-style interface for toasted columns must\n> not be so brain-dead. This says that a SELECT open_toast_object()\n> should deliver a read-only object reference, and that if you want\n> to update, you should have to do an UPDATE.\n> \n> Now a read-only TOAST LO reference strikes me as no problem. If the\n> open() function finds that it's been handed a not-toasted value, it\n> can just save the value verbatim in the open-LO-reference table.\n> The value is not large, by definition, so this will work fine.\n> \n> As for the update side of things, the best idea I can come up with\n> is a multi-phase operation: open the value with a select, read/write\n> the reference, store the updated reference with UPDATE. Something\n> like:\n> \n> 1. SELECT writable_toast_reference(column) FROM table WHERE ...;\n> \n> (Actually, SELECT FOR UPDATE would be the more common idiom.)\n> \n> 2. 
Read and/or write the LO reference returned by SELECT. Note that\n> this must be defined to read/write a temporary work area --- if the\n> transaction aborts in this part, or commits without doing UPDATE,\n> nothing has happened to the stored value referenced by the main table\n> row. (I think this happens automatically if we are hacking rows in\n> a toast table. If we are hacking an in-line value stored in the\n> LO-reference table, we might at some point decide we need to shove it\n> out to disk.)\n\nbut in both inline and toast-table modified pages should have new \ntransaction id's like regular tuples and thus be handled by regular \ntransaction commit/abort mechanics, at least this seema as a postgres \nway to do it .\n\n> 3. UPDATE table SET column = write_toast_reference(objectref) WHERE ...;\n> \n> write_toast_reference extracts the toastable column's data or reference\n> from the LO table, closes the open LO reference (so you can't continue\n> hacking the data afterwards), and proceeds with a normal UPDATE.\n> \n> It would also be pretty straightforward to extend this to the INSERT\n> case: we just need an \"open\" function that creates a new, empty object\n> of a TOASTable type in the LO reference table. Write on this, and\n> finally invoke write_toast_reference() in the INSERT.\n> \n> Kinda grotty, but implementable, and it doesn't require a whole new set\n> of permissions concepts. Can anyone improve on this?\n\nIf toast table has the same permissions as the main table and lo_write \nhonours these then we should be ok.\n\n---------------\nHannu\n",
"msg_date": "Mon, 06 Aug 2001 18:00:05 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Question about todo item"
},
{
"msg_contents": "Can this be added to the TODO list? (actually put back on the TODO list) \nAlong with this email thread?\n\nI feel that it is very important to have BLOB support in postgres that \nis similar to what the commercial databases provide. This could either \nmean fixing the current implementation or adding additional capabilities \nto toasted columns.\n\nThe major problem with the current LargeObject implementation is that \nwhen the row containing the LargeObject is deleted the LargeObject \nisn't. This can be a useful feature under some circumstances, but it \nisn't how other databases handle BLOBs. Thus porting code from other \ndatabases is a challenge. While it is true that this can be worked \naround through triggers, I don't like the manual nature of the workarounds.\n\nthanks,\n--Barry\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n>>>Offhand this seems like it would be doable for a column-value that\n>>>was actually moved out-of-line by TOAST, since the open_toast_object\n>>>function could see and return the TOAST pointer, and then the read/\n>>>write operations just hack on rows in pg_largeobject. The hard part\n>>>\n> \n>>I am confused how pg_largeobject is involved?\n>>\n> \n> s/pg_largeobject/toast_table_for_relation/ ... sorry about that ...\n> \n> \n>>Don't forget compression of TOAST columns. How do you fseek/read/write\n>>in there?\n>>\n> \n> Well, you can *do* it, just don't expect it to be fast. The\n> implementation would have to read or write most of the value, not just\n> the segment you wanted. A person who actually expected to use this\n> stuff would likely want to disable compression on a column he wanted\n> random access within.\n> \n> Hmm ... that provides an idea. 
We could easily add some additional\n> 'attstorage' settings that say *all* values of a column must be forced\n> out-of-line (with or without allowing compression), regardless of size.\n> Then, open_toast_object would work reliably on such a column. One\n> possible user API to such an infrastructure is to invent BLOB and CLOB\n> datatypes, which are just like bytea and text except that they force the\n> appropriate attstorage value. Ugly as sin, ain't it ... but I bet it\n> could be made to work.\n> \n> Okay, there's your idea. Now, who can do better?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n> \n\n\n",
"msg_date": "Thu, 16 Aug 2001 09:18:50 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": true,
"msg_subject": "Re: Question about todo item"
},
{
"msg_contents": "I have added to TODO:\n\nBINARY DATA\n o -Add non-large-object binary field (already exists -- bytea)\n o -Make binary interface for TOAST columns (base64)\n o Improve vacuum of large objects (/contrib/vacuumlo)\n o Add security checking for large objects\n o Make file in/out interface for TOAST columns, similar to large object\n interface (force out-of-line storage and no compression)\n o Auto-delete large objects when referencing row is deleted\n\n> Can this be added to the TODO list? (actually put back on the TODO list) \n> Along with this email thread?\n> \n> I feel that it is very important to have BLOB support in postgres that \n> is similar to what the commercial databases provide. This could either \n> mean fixing the current implementation or adding additional capabilities \n> to toasted columns.\n> \n> The major problem with the current LargeObject implementation is that \n> when the row containing the LargeObject is deleted the LargeObject \n> isn't. This can be a useful feature under some circumstances, but it \n> isn't how other databases handle BLOBs. Thus porting code from other \n> databases is a challenge. While it is true that this can be worked \n> around through triggers, I don't like the manual nature of the workarounds.\n> \n> thanks,\n> --Barry\n> \n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > \n> >>>Offhand this seems like it would be doable for a column-value that\n> >>>was actually moved out-of-line by TOAST, since the open_toast_object\n> >>>function could see and return the TOAST pointer, and then the read/\n> >>>write operations just hack on rows in pg_largeobject. The hard part\n> >>>\n> > \n> >>I am confused how pg_largeobject is involved?\n> >>\n> > \n> > s/pg_largeobject/toast_table_for_relation/ ... sorry about that ...\n> > \n> > \n> >>Don't forget compression of TOAST columns. 
How do you fseek/read/write\n> >>in there?\n> >>\n> > \n> > Well, you can *do* it, just don't expect it to be fast. The\n> > implementation would have to read or write most of the value, not just\n> > the segment you wanted. A person who actually expected to use this\n> > stuff would likely want to disable compression on a column he wanted\n> > random access within.\n> > \n> > Hmm ... that provides an idea. We could easily add some additional\n> > 'attstorage' settings that say *all* values of a column must be forced\n> > out-of-line (with or without allowing compression), regardless of size.\n> > Then, open_toast_object would work reliably on such a column. One\n> > possible user API to such an infrastructure is to invent BLOB and CLOB\n> > datatypes, which are just like bytea and text except that they force the\n> > appropriate attstorage value. Ugly as sin, ain't it ... but I bet it\n> > could be made to work.\n> > \n> > Okay, there's your idea. Now, who can do better?\n> > \n> > \t\t\tregards, tom lane\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> > \n> > http://www.postgresql.org/search.mpl\n> > \n> > \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 6 Sep 2001 16:36:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about todo item"
}
] |
[
{
"msg_contents": "I have been thinking about how to implement nested transactions /\nsavepoints. As you may remember, Vadim wants to add UNDO to WAL and\nthus enable this feature.\n\nSome objected because of the added WAL complexity and the problem with\nlong running transactions requiring lots of WAL segments.\n\nI have not been able to come up with any solution that doesn't have some\nUNDO capability to mark aborted tuples of the current transaction.\n\nMy idea is that we not put UNDO information into WAL but keep a List of\nrel ids / tuple ids in the memory of each backend and do the undo inside\nthe backend. We could go around and clear our transaction id from\ntuples that need to be undone.\n\nBasically, I am suggesting a per-backend UNDO segment. This seems to\nenable nested transactions without the disadvantages of putting it in\nWAL.\n\nAm I missing something about why UNDO should be in WAL? \n\nI realize UNDO in WAL would allow UNDO of any transaction, but we don't\nneed that in our current non-overwriting system. It is only nested\ntransactions we need to undo, and I don't think we need WAL writing for\nthat because we are always undoing something before we commit the main\ntransaction. In a crash recover, the entire transaction is aborted\nanyway.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 5 Aug 2001 00:32:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Idea for nested transactions / savepoints"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> My idea is that we not put UNDO information into WAL but keep a List of\n> rel ids / tuple ids in the memory of each backend and do the undo inside\n> the backend.\n\nThe complaints about WAL size amount to \"we don't have the disk space\nto keep track of this, for long-running transactions\". If it doesn't\nfit on disk, how likely is it that it will fit in memory?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 05 Aug 2001 10:34:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Idea for nested transactions / savepoints "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> The complaints about WAL size amount to \"we don't have the disk space\n> >> to keep track of this, for long-running transactions\". If it doesn't\n> >> fit on disk, how likely is it that it will fit in memory?\n> \n> > Sure, we can put on the disk if that is better.\n> \n> I think you missed my point. Unless something can be done to make the\n> log info a lot smaller than it is now, keeping it all around until\n> transaction end is just not pleasant. Waving your hands and saying\n> that we'll keep it in a different place doesn't affect the fundamental\n> problem: if the transaction runs a long time, the log is too darn big.\n\nKeeping it in a different place does have other benefits - you can\ndiscard \neach subtransaction after it is committed/aborted regardless of what WAL \nlog does, so the chap who did a \"begin transaction\" 8 hours ago does not\nget \nsubtransactions kept as well, thus postponing the problem a lot.\n\n> There probably are things we can do --- for example, I bet an UNDO\n> log kept in this way wouldn't need to include page images.\n\nNot keeping something that does not need to be kept is always a good\nidea \nwhen preserving space is important.\n\n> But it's that sort of consideration that will make or break UNDO, \n> not where we store the info.\n\nBut \"how long do we need to keep the info\" _is_ an important\nconsideration.\n\n--------------\nHannu\n",
"msg_date": "Sun, 05 Aug 2001 22:52:00 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Idea for nested transactions / savepoints"
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > My idea is that we not put UNDO information into WAL but keep a List of\n> > rel ids / tuple ids in the memory of each backend and do the undo inside\n> > the backend.\n> \n> The complaints about WAL size amount to \"we don't have the disk space\n> to keep track of this, for long-running transactions\". If it doesn't\n> fit on disk, how likely is it that it will fit in memory?\n\nSure, we can put on the disk if that is better. I thought the problem\nwith WAL undo is that you have to keep UNDO info around for all\ntransactions that are older than the earliest transaction. So, if I\nstart a nested transaction, and then sit at a prompt for 8 hours, all\nWAL logs are kept for 8 hours.\n\nWe can create a WAL file for every backend, and record just the nested\ntransaction information. In fact, once a nested transaction finishes,\nwe don't need the info anymore. Certainly we don't need to flush these\nto disk.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 5 Aug 2001 14:30:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Idea for nested transactions / savepoints"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> The complaints about WAL size amount to \"we don't have the disk space\n>> to keep track of this, for long-running transactions\". If it doesn't\n>> fit on disk, how likely is it that it will fit in memory?\n\n> Sure, we can put on the disk if that is better.\n\nI think you missed my point. Unless something can be done to make the\nlog info a lot smaller than it is now, keeping it all around until\ntransaction end is just not pleasant. Waving your hands and saying\nthat we'll keep it in a different place doesn't affect the fundamental\nproblem: if the transaction runs a long time, the log is too darn big.\n\nThere probably are things we can do --- for example, I bet an UNDO\nlog kept in this way wouldn't need to include page images. But it's\nthat sort of consideration that will make or break UNDO, not where\nwe store the info.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 05 Aug 2001 14:44:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Idea for nested transactions / savepoints "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> The complaints about WAL size amount to \"we don't have the disk space\n> >> to keep track of this, for long-running transactions\". If it doesn't\n> >> fit on disk, how likely is it that it will fit in memory?\n> \n> > Sure, we can put on the disk if that is better.\n> \n> I think you missed my point. Unless something can be done to make the\n> log info a lot smaller than it is now, keeping it all around until\n> transaction end is just not pleasant. Waving your hands and saying\n> that we'll keep it in a different place doesn't affect the fundamental\n> problem: if the transaction runs a long time, the log is too darn big.\n\nWhen you said long running, I thought you were concerned about long\nrunning in duration, not large transaction. Long duration in one-WAL\nsetup would cause all transaction logs to be kept. Large transactions\nare another issue.\n\nOne solution may be to store just the relid if many tuples are modified\nin the same table. If you stored the command counter for start/end of\nthe nested transaction, it would be possible to sequential scan the\ntable and undo all the affected tuples. Does that help? Again, I am\njust throwing out ideas here, hoping something will catch.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 5 Aug 2001 15:38:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Idea for nested transactions / savepoints"
},
{
"msg_contents": "> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > >> The complaints about WAL size amount to \"we don't have the disk space\n> > >> to keep track of this, for long-running transactions\". If it doesn't\n> > >> fit on disk, how likely is it that it will fit in memory?\n> > \n> > > Sure, we can put on the disk if that is better.\n> > \n> > I think you missed my point. Unless something can be done to make the\n> > log info a lot smaller than it is now, keeping it all around until\n> > transaction end is just not pleasant. Waving your hands and saying\n> > that we'll keep it in a different place doesn't affect the fundamental\n> > problem: if the transaction runs a long time, the log is too darn big.\n> \n> When you said long running, I thought you were concerned about long\n> running in duration, not large transaction. Long duration in one-WAL\n> setup would cause all transaction logs to be kept. Large transactions\n> are another issue.\n> \n> One solution may be to store just the relid if many tuples are modified\n> in the same table. If you stored the command counter for start/end of\n> the nested transaction, it would be possible to sequential scan the\n> table and undo all the affected tuples. Does that help? Again, I am\n> just throwing out ideas here, hoping something will catch.\n\nActually, we need to keep around nested transaction UNDO information\nonly until the nested transaction exits to the main transaction:\n\n\tBEGIN WORK;\n\t\tBEGIN WORK;\n\t\tCOMMIT;\n\t\t-- we can throw away the UNDO here\n\t\tBEGIN WORK;\n\t\t\tBEGIN WORK;\n\t\t\t...\n\t\t\tCOMMIT\n\t\tCOMMIT;\n\t\t-- we can throw away the UNDO here\n\tCOMMIT;\n\nWe are using the outside transaction for our ACID capabilities, and just\nusing UNDO for nested transaction capability.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 5 Aug 2001 21:16:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Idea for nested transactions / savepoints"
},
{
"msg_contents": "\nAdded to TODO.detail/transactions as a nested transaction idea.\n\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > >> The complaints about WAL size amount to \"we don't have the disk space\n> > >> to keep track of this, for long-running transactions\". If it doesn't\n> > >> fit on disk, how likely is it that it will fit in memory?\n> > \n> > > Sure, we can put on the disk if that is better.\n> > \n> > I think you missed my point. Unless something can be done to make the\n> > log info a lot smaller than it is now, keeping it all around until\n> > transaction end is just not pleasant. Waving your hands and saying\n> > that we'll keep it in a different place doesn't affect the fundamental\n> > problem: if the transaction runs a long time, the log is too darn big.\n> \n> When you said long running, I thought you were concerned about long\n> running in duration, not large transaction. Long duration in one-WAL\n> setup would cause all transaction logs to be kept. Large transactions\n> are another issue.\n> \n> One solution may be to store just the relid if many tuples are modified\n> in the same table. If you stored the command counter for start/end of\n> the nested transaction, it would be possible to sequential scan the\n> table and undo all the affected tuples. Does that help? Again, I am\n> just throwing out ideas here, hoping something will catch.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 6 Sep 2001 16:41:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Idea for nested transactions / savepoints"
}
] |
[
{
"msg_contents": "Some of you got bounces over the last two weeks. I did't get any email.\nIs there a way to have the messages of mailing list L between dates X and\nY have sent to me with a majordomo command?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 5 Aug 2001 22:04:06 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Mailbox messed up"
},
{
"msg_contents": "> Some of you got bounces over the last two weeks. I did't get any email.\n> Is there a way to have the messages of mailing list L between dates X and\n> Y have sent to me with a majordomo command?\n\necho \"help\" | mail majordomo@postgresql.org\n\n-- \nSean Chittenden",
"msg_date": "Sun, 5 Aug 2001 13:17:03 -0700",
"msg_from": "Sean Chittenden <sean-pgsql-hackers@chittenden.org>",
"msg_from_op": false,
"msg_subject": "Re: Mailbox messed up"
}
] |
[
{
"msg_contents": "I know. I forgot to put the -request in the e-mail address.\n",
"msg_date": "Sun, 05 Aug 2001 18:44:19 -0400",
"msg_from": "Digital Wokan <wokan@home.com>",
"msg_from_op": true,
"msg_subject": "Sorry about that unsubscribe"
}
] |
[
{
"msg_contents": "I have created a test data using pgbench, played with the partial\nindex.\n\ntest=# create index myindex on accounts(aid) where bid <> 0;\nCREATE\ntest=# explain select * from accounts where aid < 10 and bid <> 0;\n\nand I got a log message:\n\nDEBUG: clause_pred_clause_test: unknown pred_op\n\nIs this normal?\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 06 Aug 2001 10:27:59 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "partial index"
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> test=# create index myindex on accounts(aid) where bid <> 0;\n\n> test=# explain select * from accounts where aid < 10 and bid <> 0;\n\n> and I got a log message:\n\n> DEBUG: clause_pred_clause_test: unknown pred_op\n\n> Is this normal?\n\nYes. We might want to suppress those DEBUG messages before release.\nThe original implementation would have refused to let you create a\npartial index with such a WHERE clause, since <> isn't a btree-indexable\noperator. We agreed to let people create such indexes --- but\nindxpath.c's little theorem-prover can't do anything with such a\npredicate, and it complains about it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 05 Aug 2001 21:59:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: partial index "
}
] |
[
{
"msg_contents": "Is it at all possible to use vim to interact with psql to provide\ninput? That would be cool!\n\n",
"msg_date": "Mon, 6 Aug 2001 13:09:32 +1000 (EST)",
"msg_from": "Grant <grant@conprojan.com.au>",
"msg_from_op": true,
"msg_subject": "Vim!"
}
] |
[
{
"msg_contents": "\n> Some other databases have the notion of a ROWID which uniquely\nidentifies a row\n> within a table. OID can be used for that, but it means if you use it,\nyou must\n> limit the size of your whole database system.\n\nImho that is getting it all wrong. OID is *not* a suitable substitute\nfor other \ndb's ROWID.\n\nIf you take a few extra precautions then you can use XTID in PostgreSQL\ninstead of other's ROWID.\n\nWe often hear, that it is safer to use ROWID in Oracle and Informix than\nin \nPostgreSQL. It is only true that the risc of getting at the wrong record\nis \nlower. Are you going to take chances when manipulating rows ? NO !\nThus any sensible program working on ROWID's will have builtin\nprecautions,\nlike locking the table, or using additional where quals.\n\nI am still of the opinion, that we should invent an alias ROWID at the\nSQL level\nfor the current XTID. I do not think that it matters what datatype this\nROWID is,\nan arbitrary string like xtid is sufficient, it does not need to be an\ninteger.\n\nAndreas\n",
"msg_date": "Mon, 6 Aug 2001 08:38:03 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "I think you are focusing too much on \"ROWID\" and not enough on OID. The issue\nat hand is OID. It is a PostgreSQL cluster wide limitation. As data storage\ndecreases in price, the likelihood of people running into this limitation\nincreases. I have run into OID problems in my curent project. Geez, 40G 7200\nRPM drives are $120, amazing.\n\nTom has proposed being able to remove the OID from tables, to preserve this\nresource. I originally thought this was a good idea, but there are tools and\nutilities others may want to use in the future that require OIDs, thus they\nwould have to be re-written or abandoned altogether.\n\nIt seems to me, I guess and others too, that the OID mechanism should be on a\nper table basis. That way OIDs are much more likely to be unique, and TRUNCATE\non a table should reset it's OID counter to zero.\n\n\nZeugswetter Andreas SB SD wrote:\n> \n> > Some other databases have the notion of a ROWID which uniquely\n> identifies a row\n> > within a table. OID can be used for that, but it means if you use it,\n> you must\n> > limit the size of your whole database system.\n> \n> Imho that is getting it all wrong. OID is *not* a suitable substitute\n> for other\n> db's ROWID.\n> \n> If you take a few extra precautions then you can use XTID in PostgreSQL\n> instead of other's ROWID.\n> \n> We often hear, that it is safer to use ROWID in Oracle and Informix than\n> in\n> PostgreSQL. It is only true that the risc of getting at the wrong record\n> is\n> lower. Are you going to take chances when manipulating rows ? NO !\n> Thus any sensible program working on ROWID's will have builtin\n> precautions,\n> like locking the table, or using additional where quals.\n> \n> I am still of the opinion, that we should invent an alias ROWID at the\n> SQL level\n> for the current XTID. 
I do not think that it matters what datatype this\n> ROWID is,\n> an arbitrary string like xtid is sufficient, it does not need to be an\n> integer.\n> \n> Andreas\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Mon, 06 Aug 2001 07:17:17 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "On Mon, 6 Aug 2001, mlw wrote:\n\n> I think you are focusing too much on \"ROWID\" and not enough on OID. The issue\n> at hand is OID. It is a PostgreSQL cluster wide limitation. As data storage\n> decreases in price, the likelihood of people running into this limitation\n> increases. I have run into OID problems in my curent project. Geez, 40G 7200\n> RPM drives are $120, amazing.\nPossibly you were using OIDs for what they weren't intended ;)\n\n> Tom has proposed being able to remove the OID from tables, to preserve\n> this resource. I originally thought this was a good idea, but there\n> are tools and utilities others may want to use in the future that\n> require OIDs, thus they would have to be re-written or abandoned\n> altogether.\nWhat are these tools?\n\n> It seems to me, I guess and others too, that the OID mechanism should be on a\n> per table basis. That way OIDs are much more likely to be unique, and TRUNCATE\n> on a table should reset it's OID counter to zero.\nI disagree. OID as it is now is a mandatory SERIAL that is added to every\ntable. Most tables don't need such a field, those which do, well, they can\nkeep it as it is now (global per-database), or, if you want per-table\nsequence, just create a SERIAL field explicitly.\n\n",
"msg_date": "Mon, 6 Aug 2001 08:29:05 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "\nSomehow I guess I created a misunderstanding. I don't really care about\nROWID. I care that OID is a 32 bit number. The notion that each table could\nhave its own \"OID\" similar to a ROWID could be an intermediate solution. I\nhave flip-flopped a couple times about whether or not the OID being able to\nbe eliminated from some tables is a good idea. Some code depends on the\nOID.\n\nI have hit OID problems personally. To be honest I think it can be a huge\nproblem. As I have said, 40G disks are under $100. Just a few years ago a\n40G storage system would have cost $20K-$30K. BIG databases are being\ncreated today, which wouldn't have been funded just a few years ago. At my\ncompany we have an aggregated database of 3 distinctly large databases, and\nhit a bug in large OID numbers in 7.0.3.\n\nThe way I see it there are 4 options for the OID:\n(1) Keep OID handling as it is. I think everyone agrees that this is not an\noption.\n(2) Allow the ability to have tables without OIDs. This is a source of\ndebate.\n(3) Allow tables to have their own notion of an OID. This is harder to do,\nand also a source of debate.\n(4) Make OIDs 64 or 128 bit. (there are platform issues.)\n\n\n\n> > Some other databases have the notion of a ROWID which uniquely\n> identifies a row\n> > within a table. OID can be used for that, but it means if you use it,\n> you must\n> > limit the size of your whole database system.\n>\n> Imho that is getting it all wrong. OID is *not* a suitable substitute\n> for other\n> db's ROWID.\n>\n> If you take a few extra precautions then you can use XTID in PostgreSQL\n> instead of other's ROWID.\n>\n> We often hear, that it is safer to use ROWID in Oracle and Informix than\n> in\n> PostgreSQL. It is only true that the risc of getting at the wrong record\n> is\n> lower. Are you going to take chances when manipulating rows ? 
NO !\n> Thus any sensible program working on ROWID's will have builtin\n> precautions,\n> like locking the table, or using additional where quals.\n>\n> I am still of the opinion, that we should invent an alias ROWID at the\n> SQL level\n> for the current XTID. I do not think that it matters what datatype this\n> ROWID is,\n> an arbitrary string like xtid is sufficient, it does not need to be an\n> integer.\n>\n> Andreas\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n\n",
"msg_date": "Mon, 13 Aug 2001 10:17:25 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Hello, I'm having a problem vacuum a table and I didn't see an answer using\nthe fts engine.\n\nI have two questions:\n\n1) Is this a big problem, can it be fixed, do I have to dump / restore this\ntable?\n2) I found this problem from my nightly cron driven vacuum -a -z. When it\nhits this error the entire vacuumdb process stops immediately thus skipping\nany remaining databases. Should it do this? Or should it continue on and\nvacuum the other databases?\n\nHere is the error:\n\ncms_beau=# vacuum hits; (It works without the analyze phase of backup.)\nVACUUM\ncms_beau=# VACUUM verbose analyze hits;\nNOTICE: --Relation hits--\nNOTICE: Pages 8389: Changed 0, reaped 2, Empty 0, New 0; Tup 834575: Vac 0,\nKeep/VTL 4/4, Crash 0, UnUsed 6, MinLen 52, MaxLen 121; Re-using:\nFree/Avail. Space 376/64; EndEmpty/Avail. Pages 0/1. CPU 0.34s/0.05u sec.\nNOTICE: Index hits_id_key: Pages 1831; Tuples 834575: Deleted 0. CPU\n0.11s/0.56u sec.\nNOTICE: Rel hits: Pages: 8389 --> 8389; Tuple(s) moved: 0. CPU 0.00s/0.00u\nsec.\nNOTICE: --Relation pg_toast_6742393--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0,\nKeep/VTL 0/0, Crash 0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail.\nSpace 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_toast_6742393_idx: Pages 1; Tuples 0. 
CPU 0.00s/0.00u sec.\nNOTICE: Analyzing...\nERROR: MemoryContextAlloc: invalid request size 4294079565\ncms_beau=#\n\nAdditional information:\n\nsort_mem = 16384\nshared_buffers = 8192\n\ncms_beau=# select version();\n version\n-------------------------------------------------------------\n PostgreSQL 7.1.2 on i686-pc-linux-gnu, compiled by GCC 2.96\n(1 row)\n\ncms_beau=# \\d hits\n Table \"hits\"\n Attribute | Type | Modifier\n-------------+--------------------------+-----------------------------------\n------------\n id | integer | not null default\nnextval('hits_id_seq'::text)\n operator_id | integer |\n connected | timestamp with time zone | default 'now'\n page | text |\nIndex: hits_id_key\n\ncms_beau=# select count(*) from hits;\n count\n--------\n 834539\n(1 row)\n\n\nPlease let me know if there is any other information you need.\n\nThank you much,\n\nMatt O'Connor\n\n",
"msg_date": "Mon, 13 Aug 2001 11:08:09 -0500",
"msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>",
"msg_from_op": false,
"msg_subject": "Help with Vacuum Failure"
},
{
"msg_contents": "mlw wrote:\n\n> The way I see it there are 4 options for the OID:\n[snip]\n> (2) Allow the ability to have tables without OIDs. This is a source of\n> debate.\n\nI think Tom Lane has already committed some patches to allow for this.\nSo, I think you should be able to try this from the latest CVS. (Tom?)\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300, \nToronto, ON M4P 2C9\n",
"msg_date": "Mon, 13 Aug 2001 14:04:57 -0400",
"msg_from": "Neil Padgett <npadgett@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: AW: Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n\n> Somehow I guess I created a misunderstanding. I don't really care about\n> ROWID. I care that OID is a 32 bit number. The notion that each table could\n> have its own \"OID\" similar to a ROWID could be an intermediate solution. I\n> have flip-flopped a couple times about whether or not the OID being able to\n> be eliminated from some tables is a good idea. Some code depends on the\n> OID.\n\nSee below...\n\n> The way I see it there are 4 options for the OID:\n\n> (2) Allow the ability to have tables without OIDs. This is a source of\n> debate.\n\nIf we do this, and default OIDs to \"on\", honestly, where's the\nproblem? If the DBA does nothing, things work as before (with\npotential OID wraparound issues). If you want to avoid/minimize the\nissues, turn off OIDs on your large tables, and write/fix your code to\ncope.\n\n> (3) Allow tables to have their own notion of an OID. This is harder to do,\n> and also a source of debate.\n> (4) Make OIDs 64 or 128 bit. (there are platform issues.)\n\n(5) [this was suggested earlier] Create separate spaces for \"system\"\nand \"user\" OIDs. This requires a similar mechanism to (3), but may be \nsomewhat easier.\n\n-Doug\n-- \nFree Dmitry Sklyarov! \nhttp://www.freesklyarov.org/ \n\nWe will return to our regularly scheduled signature shortly.\n",
"msg_date": "13 Aug 2001 14:07:58 -0400",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: AW: Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Neil Padgett <npadgett@redhat.com> writes:\n> mlw wrote:\n>> The way I see it there are 4 options for the OID:\n> [snip]\n>> (2) Allow the ability to have tables without OIDs. This is a source of\n>> debate.\n\n> I think Tom Lane has already committed some patches to allow for this.\n> So, I think you should be able to try this from the latest CVS. (Tom?)\n\nYes, it's done and in CVS. I think this is orthogonal to the other\nproposals: whatever we want to do with OID, it's a useful feature to\nbe able to suppress them for tables that you're sure don't need one.\n\nI thought the discussion had more or less concluded that separate-OID-\ngenerator-per-table was the next step to take. That won't get done in\ntime for 7.2, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Aug 2001 14:56:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: AW: Re: OID wraparound: summary and proposal "
},
{
"msg_contents": "\"Matthew T. O'Connor\" <matthew@zeut.net> writes:\n> cms_beau=# vacuum hits; (It works without the analyze phase of backup.)\n> VACUUM\n> cms_beau=# VACUUM verbose analyze hits;\n> NOTICE: --Relation hits--\n> NOTICE: Pages 8389: Changed 0, reaped 2, Empty 0, New 0; Tup 834575: Vac 0,\n> Keep/VTL 4/4, Crash 0, UnUsed 6, MinLen 52, MaxLen 121; Re-using:\n> Free/Avail. Space 376/64; EndEmpty/Avail. Pages 0/1. CPU 0.34s/0.05u sec.\n> NOTICE: Index hits_id_key: Pages 1831; Tuples 834575: Deleted 0. CPU\n> 0.11s/0.56u sec.\n> NOTICE: Rel hits: Pages: 8389 --> 8389; Tuple(s) moved: 0. CPU 0.00s/0.00u\n> sec.\n> NOTICE: --Relation pg_toast_6742393--\n> NOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0,\n> Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail.\n> Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.00s/0.00u sec.\n> NOTICE: Index pg_toast_6742393_idx: Pages 1; Tuples 0. CPU 0.00s/0.00u sec.\n> NOTICE: Analyzing...\n> ERROR: MemoryContextAlloc: invalid request size 4294079565\n> cms_beau=#\n\nThis looks like you have corrupted data in your table --- specifically,\na variable-length value with a bogus length word. If so, you'll get a\nsimilar error during any attempt to access the particular value or row\nthat's corrupted. A quick check of this theory is to try to pg_dump\nthe table --- if it fails with the same sort of error, then you have\na problem.\n\n> PostgreSQL 7.1.2 on i686-pc-linux-gnu, compiled by GCC 2.96\n\n2.96? AFAICT 2.95.3 is the latest official release of GCC.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Aug 2001 11:12:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Help with Vacuum Failure "
},
{
"msg_contents": ">\n> > The way I see it there are 4 options for the OID:\n>\n\nWhat about a vacuum analyze for the database that renumbers the OIDs\nback at some baseline? There is still a limitation on the total number\nof active rows in the database (0.5 * 2^32), but at least we wouldn't\nhave this timebomb.\n\nDale Johnson\n\n\n\n",
"msg_date": "Tue, 21 Aug 2001 18:35:18 -0700",
"msg_from": "Dale Johnson <dale@zembu.com>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: OID wraparound: summary and proposal"
}
] |
[
{
"msg_contents": "\n> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > test=# create index myindex on accounts(aid) where bid <> 0;\n\nHmm ? Am I reading correctly ? a restriction that is on a field, that \nis not in the index ? Does that make sense ? (aid --> bid)\n\n> The original implementation would have refused to let you create a\n> partial index with such a WHERE clause, since <> isn't a\nbtree-indexable\n> operator.\n\nBut that is sad, since it would be a rather important use. Couldn't it\nbe \nrewritten to: (aid < 0 or aid > 0) ? (I assume you meant aid) \n\nAndreas\n",
"msg_date": "Mon, 6 Aug 2001 09:29:10 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: partial index "
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n>> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> test=# create index myindex on accounts(aid) where bid <> 0;\n\n> Hmm ? Am I reading correctly ? a restriction that is on a field, that \n> is not in the index ? Does that make sense ?\n\nYes it does, and in fact it's one of the more important applications of\npartial indexes. It's the only way that a partial index can be cheaper\nto scan than a full index. Consider:\n\n\tcreate index foofull on foo (f1);\n\n\tcreate index foopartial on foo (f1) where f1 < 100;\n\n\tcreate index foopartial2 on foo (f1) where f2 > 100;\n\nNow\n\n\tselect * from foo where f1 < 200;\n\ncannot use either of the partial indexes, it will have to use foofull\nor a seqscan.\n\n\tselect * from foo where f1 < 50;\n\ncan use foopartial, but the number of rows retrieved using the index\nwill be just the same as if it used foofull. Cost savings will be\nmarginal at best.\n\n\tselect * from foo where f1 < 50 and f2 > 200;\n\ncan use foopartial2, and since some of the rows have already been\nfiltered from the index on the basis of f2, this will be cheaper than\nusing either of the other indexes.\n\nWhen I was testing the partial-index additions awhile back, at first\nI thought it was a bug that the planner didn't show a preference for the\npartial index in a case like #2. But it was right; the indexscan will\ncover the same number of rows and indexentries with either index. If\nthe partial index is much smaller than the full index, you might save\none or two disk reads during the initial btree descent --- but that's\nall. So a partial index constructed along the lines of foopartial might\nsave work at insert/update time (if it's much smaller than a full index)\nbut it's no better for selecting. The only way that having both full\nand partial indexes on a column could make sense is if the partial\nindex's predicate mentions another column.\n\nSee also the previous discussion about using predicates with UNIQUE\nindexes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Aug 2001 10:08:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: partial index "
}
] |
[
{
"msg_contents": "\n> Even more to the point, those typical installations do not want\n> exclusive-locked VACUUM. Haven't you paid any attention to the user\n> complaints we've been hearing for the last N years? People want a\n> nonexclusive VACUUM (or no VACUUM at all, but that's not a choice we\ncan\n> offer them now.) That *is* what the typical dbadmin will want to run,\n> and that's why I say it should be the default.\n\nI agree.\n\nAndreas\n",
"msg_date": "Mon, 6 Aug 2001 09:55:33 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: Name for new VACUUM "
}
] |
[
{
"msg_contents": "hello,\n\nThis weekend I read the User guide front to back. I hadn't done that\nsince 6.2 or thereabouts...\n\nI have one request for improvement:\n\nIn the CREATE DATABASE chapter it may be useful to include the options\nfor LATIN1 etc. I was a bit surprised that they weren't there seeing the\ninternational following of the software. For a newcomer this could save\na certain number of hours of headscratching when the accentuated\ncharacters don't show up...\n\nIf I was to rate the manual I would say \"better than many manuals for\nshrinkwrap software but loses steam towards the end\".\n\nCheers\n\nTony Grant\n\n\n-- \nRedHat Linux on Sony Vaio C1XD/S\nhttp://www.animaproductions.com/linux2.html\nMacromedia UltraDev with PostgreSQL\nhttp://www.animaproductions.com/ultra.html\n\n",
"msg_date": "06 Aug 2001 13:40:20 +0200",
"msg_from": "Tony Grant <tony@animaproductions.com>",
"msg_from_op": true,
"msg_subject": "user guide"
},
{
"msg_contents": "On 6 Aug 2001, Tony Grant wrote:\n\n> hello,\n> \n> This weekend I read the User guide front to back. I hadn't done that\n> since 6.2 or thereabouts...\n> \n> I have one request for improvement:\n\n<snip>\n\nI have a suggestion, as well: I think the PostgreSQL documentation could\nuse a section that discusses all the maintenance issues in one place.\nDocumentation for VACUUM is stuck in the Reference section, when I think\nit makes sense to also cover it in the Admin Section. Here's what I'd like\nto see in Admin:\n\n Backup and Restore (already there)\n Database Maintenance\n VACUUM\n VACUUM ANALYZE\n Rebuilding Indexes (Does this even apply to PG?)\n\nThanks,\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\nDavid@Wheeler.net ICQ: 15726394\n Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n",
"msg_date": "Mon, 6 Aug 2001 08:35:54 -0700 (PDT)",
"msg_from": "David Wheeler <David@Wheeler.net>",
"msg_from_op": false,
"msg_subject": "Re: user guide"
},
{
"msg_contents": "David Wheeler <David@Wheeler.net> writes:\n> I have a suggestion, as well: I think the PostgreSQL documentation could\n> use a section that discusses all the maintenance issues in one place.\n\nI agree. Do I hear a volunteer to write it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Aug 2001 12:19:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: user guide "
},
{
"msg_contents": "David Wheeler writes:\n\n> I have a suggestion, as well: I think the PostgreSQL documentation could\n> use a section that discusses all the maintenance issues in one place.\n\nThis is already on the TODO list.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 6 Aug 2001 18:27:16 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: user guide"
},
{
"msg_contents": "Hi,\n\n> > I have a suggestion, as well: I think the PostgreSQL documentation could\n> > use a section that discusses all the maintenance issues in one place.\n>\n>I agree. Do I hear a volunteer to write it?\n\nI'd be interested in looking at it.\nIs it just this todo item : Add mention of VACUUM, log rotation to \nAdministrator's Guide ?\n\n\n-----------------\n Chris Smith\nhttp://www.squiz.net/\n",
"msg_date": "Tue, 07 Aug 2001 09:05:13 +1000",
"msg_from": "Chris <csmith@squiz.net>",
"msg_from_op": false,
"msg_subject": "Re: user guide "
},
{
"msg_contents": "> > I have a suggestion, as well: I think the PostgreSQL documentatio could\n> > use a section that discusses all the maintenance issues in one place.\n> This is already on the TODO list.\n\nI've started expanding the Admin chapter on database recovery (expanding\nwas easy; it has been \"Any volunteers to write this?\" for two or three\nyears now). At the moment, I've split the info into two chapters on\n\"Failure\" (discussing various failure scenarios) and \"Recovery\".\n\nComments and organizational suggestions are welcome. Will commit\nsometime soon so others can contribute if they would like.\n\n - Thomas\n",
"msg_date": "Tue, 07 Aug 2001 00:07:39 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: user guide"
},
{
"msg_contents": "Chris <csmith@squiz.net> writes:\n>>> I have a suggestion, as well: I think the PostgreSQL documentation could\n>>> use a section that discusses all the maintenance issues in one place.\n>> \n>> I agree. Do I hear a volunteer to write it?\n\n> I'd be interested in looking at it.\n> Is it just this todo item : Add mention of VACUUM, log rotation to \n> Administrator's Guide ?\n\nThat was added as a result of the thread starting at\nhttp://fts.postgresql.org/db/mw/msg.html?mid=115811\n\nThere are two things I think we need: one is a discussion of each of\nthese topics (which AFAIR are presently discussed nowhere); the other\nis a short \"checklist of maintenance concerns\" that links to longer\ndiscussions of each relevant topic. For example, there's already a\ndiscussion of backup methods, but it'd be good if that were listed in a\nchecklist of things for the budding admin to think about.\n\nOr you could reorganize the Admin Guide so that backup and these other\nthings are sections in a Maintenance chapter. Not sure if that'd be\nbetter.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Aug 2001 20:14:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re: user guide "
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> I've started expanding the Admin chapter on database recovery (expanding\n> was easy; it has been \"Any volunteers to write this?\" for two or three\n> years now). At the moment, I've split the info into two chapters on\n> \"Failure\" (discussing various failure scenarios) and \"Recovery\".\n\nSeems like that should be only one chapter.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 8 Aug 2001 01:11:27 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] user guide"
},
{
"msg_contents": "> > I've started expanding the Admin chapter on database recovery (expanding\n> > was easy; it has been \"Any volunteers to write this?\" for two or three\n> > years now). At the moment, I've split the info into two chapters on\n> > \"Failure\" (discussing various failure scenarios) and \"Recovery\".\n> Seems like that should be only one chapter.\n\nYes, maybe. Easy enough to merge back together. I split them because\nthere is not a one to one correspondence between potential failures and\nrecovery procedures.\n\n - Thomas\n",
"msg_date": "Wed, 08 Aug 2001 01:19:16 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] user guide"
}
] |
[
{
"msg_contents": "Grant wrote:\n>Is it at all possible to use vim to interact with psql to \n>provide input?\n\nWhy are you asking this on hackers? Please read\nhttp://www.postgresql.org/devel-corner/ (\"YOU MUST TRY ELSEWHERE\nFIRST\")\n\nYes, psql can call vim. It's in the user documentation. You may\nwant to read that too.\n\nRegards,\nRené Pijlman\n",
"msg_date": "Mon, 06 Aug 2001 13:44:29 +0200",
"msg_from": "Rene Pijlman <rpijlman@wanadoo.nl>",
"msg_from_op": true,
"msg_subject": "Re: Vim!"
}
] |
[
{
"msg_contents": "\n> It seems to me, I guess and others too, that the OID mechanism should\nbe on a\n> per table basis. That way OIDs are much more likely to be unique, and\nTRUNCATE\n> on a table should reset it's OID counter to zero.\n\nSeems to me, that this would be no different than a performance improved\nversion\nof SERIAL.\nIf you really need OID, you imho want the systemid tableid tupleid\ncombo.\nA lot of people seem to use OID, when they really could use XTID. That\nis\nwhat I wanted to say.\n\nAndreas\n",
"msg_date": "Mon, 6 Aug 2001 14:17:21 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "RE: AW: Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > It seems to me, I guess and others too, that the OID mechanism should\n> be on a\n> > per table basis. That way OIDs are much more likely to be unique, and\n> TRUNCATE\n> > on a table should reset it's OID counter to zero.\n> \n> Seems to me, that this would be no different than a performance improved\n> version\n> of SERIAL.\n> If you really need OID, you imho want the systemid tableid tupleid\n> combo.\n> A lot of people seem to use OID, when they really could use XTID. That\n> is\n> what I wanted to say.\n> \n\nI don't care about having an OID or ROWID, I care that there is a 2^32 limit to\nthe current OID strategy and that a quick fix of allowing tables to exist\nwithout OIDs may break some existing software. I was suggesting the OIDs be\nmanaged on a \"per table\" basis as a better solution.\n\nIn reality, a 32 bit OID, even isolated per table, may be too small. Databases\nare getting HUGE. 40G disk drives are less than $100 bucks, in a few months 80G\ndrives will be less than $200, one can put together 200G RAID systems for about\n$1000, a terabyte for about $5000. A database that would have needed an\nenterprise level system, just 7 years ago, can be run on a $500 desktop today.\n\n\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Mon, 06 Aug 2001 08:42:32 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "On Mon, 6 Aug 2001, mlw wrote:\n\n> Zeugswetter Andreas SB SD wrote:\n> > \n> > > It seems to me, I guess and others too, that the OID mechanism should\n> > be on a\n> > > per table basis. That way OIDs are much more likely to be unique, and\n> > TRUNCATE\n> > > on a table should reset it's OID counter to zero.\n> > \n> > Seems to me, that this would be no different than a performance improved\n> > version\n> > of SERIAL.\n> > If you really need OID, you imho want the systemid tableid tupleid\n> > combo.\n> > A lot of people seem to use OID, when they really could use XTID. That\n> > is\n> > what I wanted to say.\n> > \n> \n> I don't care about having an OID or ROWID, I care that there is a 2^32 limit to\n> the current OID strategy and that a quick fix of allowing tables to exist\n> without OIDs may break some existing software. I was suggesting the OIDs be\n> managed on a \"per table\" basis as a better solution.\nAgain, what existing software demands per-table OID field? Isn't it what\nprimary keys are for?\n\n> In reality, a 32 bit OID, even isolated per table, may be too small.\n> Databases are getting HUGE. 40G disk drives are less than $100 bucks,\n> in a few months 80G drives will be less than $200, one can put\n> together 200G RAID systems for about $1000, a terabyte for about\n> $5000. A database that would have needed an enterprise level system,\n> just 7 years ago, can be run on a $500 desktop today.\nIf its too small for you, make a serial8 datatype (or something like\nthis), and use it for your tables. For me, I have tables which have very\nfew fields, and I don't want to waste 4 bytes/row (much less 8) for OID.\n\n",
"msg_date": "Mon, 6 Aug 2001 08:50:09 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: AW: Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "mlw wrote:\n> \n> Zeugswetter Andreas SB SD wrote:\n> >\n> > > It seems to me, I guess and others too, that the OID mechanism should\n> > be on a\n> > > per table basis. That way OIDs are much more likely to be unique, and\n> > TRUNCATE\n> > > on a table should reset it's OID counter to zero.\n> >\n> > Seems to me, that this would be no different than a performance improved\n> > version of SERIAL.\n> > If you really need OID, you imho want the systemid tableid tupleid\n> > combo.\n\nHaving such a global_oid fits nicely with having table-unique oids.\n\njust do \n\nselect 'mysite.'||text(tableoid)||'.'||text(oid) as global_oid from\nmytable;\n\nto get it\n\n> I don't care about having an OID or ROWID, I care that there is a 2^32 limit to\n> the current OID strategy and that a quick fix of allowing tables to exist\n> without OIDs may break some existing software. I was suggesting the OIDs be\n> managed on a \"per table\" basis as a better solution.\n\nNow that we have tableoid the need of globally unique oid is much\ndiminished.\n \n> In reality, a 32 bit OID, even isolated per table, may be too small. Databases\n> are getting HUGE. 40G disk drives are less than $100 bucks, in a few months 80G\n> drives will be less than $200, one can put together 200G RAID systems for about\n> $1000, a terabyte for about $5000. A database that would have needed an\n> enterprise level system, just 7 years ago, can be run on a $500 desktop today.\n\nAnd my PalmPilot has more memory and storage and processor power than\nPDP-11 \nwhere UNIX was developed ;)\n\nSo the real solution will be going to 64-bit OID's and XIDS, just that\nsome \nplatforms (I'd like to know which) don't have a good \"long long\"\nimplementation yet;\n\n\n\n------------------\nHannu\n",
"msg_date": "Mon, 06 Aug 2001 16:48:15 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Re: AW: Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > It seems to me, I guess and others too, that the OID mechanism should\n> be on a\n> > per table basis. That way OIDs are much more likely to be unique, and\n> TRUNCATE\n> > on a table should reset it's OID counter to zero.\n> \n> Seems to me, that this would be no different than a performance improved\n> version\n> of SERIAL.\n> If you really need OID, you imho want the systemid tableid tupleid\n> combo.\n\nor (systemid.tableid.tupleid.versioninterval) if you want to be able to\ntime-travel\n\n---------------\nHannu\n",
"msg_date": "Mon, 06 Aug 2001 16:50:47 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "> -----Original Message-----\n> From: Alex Pilosov\n>\n> On Mon, 6 Aug 2001, mlw wrote:\n>\n> > Zeugswetter Andreas SB SD wrote:\n> > >\n> > > > It seems to me, I guess and others too, that the OID\n> mechanism should\n> > > be on a\n> > > > per table basis. That way OIDs are much more likely to be\n> unique, and\n> > > TRUNCATE\n> > > > on a table should reset it's OID counter to zero.\n> > >\n> > > Seems to me, that this would be no different than a\n> performance improved\n> > > version\n> > > of SERIAL.\n> > > If you really need OID, you imho want the systemid tableid tupleid\n> > > combo.\n> > > A lot of people seem to use OID, when they really could use XTID. That\n> > > is\n> > > what I wanted to say.\n> > >\n> >\n> > I don't care about having an OID or ROWID, I care that there is\n> a 2^32 limit to\n> > the current OID strategy and that a quick fix of allowing\n> tables to exist\n> > without OIDs may break some existing software. I was suggesting\n> the OIDs be\n> > managed on a \"per table\" basis as a better solution.\n> Again, what existing software demands per-table OID field? Isn't it what\n> primary keys are for?\n>\n\nI was just about to implement updatable cursors in psqlODBC using\nTID and OID. I've half done it but the rest is pending now. I've had\nthe plan since I introduced Tid scan in 7.0.\n\nregards,\nHiroshi Inoue\n\n",
"msg_date": "Tue, 7 Aug 2001 02:46:58 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: Re: AW: Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Hmm, this has proven more contentious than I expected ;-). It seems the\none thing that absolutely everybody agrees on is that 4-byte OIDs are no\nlonger workable as global identifiers.\n\nMy feeling after reading the discussions is that the best way to go\nin the long run is to change from a database-wide OID generator to\nper-table OID generators, and to say that if you want a database-wide\nunique identifier then you should use <table oid, row oid> as that\nidentifier. If you want cluster-wide or universe-wide uniqueness then\nyou stick additional fields on the front of that. Unique IDs formed\nin this way are a lot more useful than IDs taken from a simple global\nsequence, because you can use the subfields to determine where to look\nfor the object.\n\nIf OID remains at 4 bytes then this still isn't very satisfactory for\ntables that are likely to have more than 4 billion INSERTs in their\nlifetime. However, rather than imposing the cost of 8-byte OIDs\neverywhere, I'd be inclined to say that people who need unique\nidentifiers in such tables should use user-defined columns generated\nfrom int8 sequences. (Obviously it would help if we created an\nint8-based sequence type... but that's certainly doable.) Perhaps in\nanother few years, when all portability and performance issues with int8\nare history, we could think about changing OID to 8 bytes everywhere;\nbut I don't think that's a good idea just yet.\n\nI do not think it is feasible to try to implement per-table OID\ngeneration for 7.2. What I'd like to do for 7.2 is continue with\nmy previous proposal of making OID generation optional on a per-table\nbasis (but the default is still to generate them). This seems to fit\nwell with an eventual migration to per-table OIDs, since it still seems\nto me that some tables don't need them at all --- particularly, tables\nthat are using an int8 column as key because wraparound is expected.\nAlso, I will change pg_description as previously discussed, since this\nis clearly necessary in a per-table-OID world.\n\nComments, objections?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Aug 2001 15:08:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal "
},
{
"msg_contents": "Could we modify the Relation structure to hold an Oid counter? So every where\nPostgres calls \"newoid(void)\" it gets changed to pass the relation structure it\nwill be associated with, i.e. newoid(Relation *). That way, every relation\ncould have its own counter, AND perhaps its own spinlock. Relations are shared\namongst the various processes, correct? If you pass NULL as the relation, you\nget an OID out of the ShmemVariableCache->nextXid.\n\nAm I being overly simplistic?\n\n\n",
"msg_date": "Mon, 06 Aug 2001 15:29:48 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: AW: Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> Am I being overly simplistic?\n\nYes. For one thing, Relation structs are *not* shared, nor even\npersistent (the relcache will happily discard them). For another, you\nhaven't mentioned how we keep the counter up-to-date across system\nrestarts.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Aug 2001 15:45:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: AW: Re: OID wraparound: summary and proposal "
},
{
"msg_contents": "Tom Lane wrote:\n>\n> If OID remains at 4 bytes then this still isn't very satisfactory for\n> tables that are likely to have more than 4 billion INSERTs in their\n> lifetime. However, rather than imposing the cost of 8-byte OIDs\n> everywhere, I'd be inclined to say that people who need unique\n> identifiers in such tables should use user-defined columns generated\n> from int8 sequences. (Obviously it would help if we created an\n> int8-based sequence type... but that's certainly doable.) Perhaps in\n> another few years, when all portability and performance issues with int8\n> are history, we could think about changing OID to 8 bytes everywhere;\n> but I don't think that's a good idea just yet.\n\nWhich are those platforms that currently lack 8-byte ints or whose \n8-byte ints are limited to values below 2^31 ?\n\nManaging huge tables on such platforms seems to be quite hard anyway .\n\nI guess that the change of OID from 4 to 8 bytes could be carried out as\na \ncompile time option ?\n\n> I do not think it is feasible to try to implement per-table OID\n> generation for 7.2. What I'd like to do for 7.2 is continue with\n> my previous proposal of making OID generation optional on a per-table\n> basis (but the default is still to generate them). This seems to fit\n> well with an eventual migration to per-table OIDs, since it still seems\n> to me that some tables don't need them at all --- particularly, tables\n> that are using an int8 column as key because wraparound is expected.\n> Also, I will change pg_description as previously discussed, since this\n> is clearly necessary in a per-table-OID world.\n\nChanging pg_description to (table_oid,row_oid) seems reasonable for\nother \nreasons too, like going from description to the describee. I don't think \nthat pg_attribute is such a heavy OID-eater, except perhaps in case\nwhere \neach transaction creates and destroys temporary tables with very high \nnumber of columns.\n\n-----------------\nHannu\n",
"msg_date": "Tue, 07 Aug 2001 09:09:33 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > Am I being overly simplistic?\n> \n> Yes. For one thing, Relation structs are *not* shared, nor even\n> persistent (the relcache will happily discard them). \n\nWill it be easier to make Relation shared and persistent or creating \na new shared structure that has just a counter+lock for each \nrelation oid ?\n\n> For another, you\n> haven't mentioned how we keep the counter up-to-date across system\n> restarts.\n\nPerhaps write it to database at checkpoints and get the last INSERTED\nrecord \nfrom WAL at restart ? \n\nProbably too simplistic as well ;)\n",
"msg_date": "Tue, 07 Aug 2001 09:14:13 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Re: AW: Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> I guess that the change of OID from 4 to 8 bytes could be carried out\n> as a compile time option ?\n\nNot unless you like the notion that the wire protocol depends on a\ncompile time option.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Aug 2001 09:35:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal "
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Will it be easier to make Relation shared and persistent or creating \n> a new shared structure that has just a counter+lock for each \n> relation oid ?\n\nThe latter. Relation (by which I mean a whole relcache entry with all\nits subsidiary structure, not only struct RelationData) is too large,\ncomplex and heavyweight a structure to be a good candidate for moving\ninto shared memory. It also contains a lot of backend-local status\ndata in its current incarnation.\n\nSome kind of shared cache for sequence generators (essentially,\ngeneralizing the existing shared OID counter into N counters) is\nprobably the answer. But it would have to be a cache, not the whole\ntruth, so there'd need to be an underlying table that holds counters not\ncurrently swapped into cache. That part we don't have a good model for\nin the existing OID-generator code, nor in the existing sequence code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Aug 2001 09:48:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: AW: Re: OID wraparound: summary and proposal "
},
{
"msg_contents": "Tom,\n\nIf we have WITH NOOID, why not having a WITH OID32 and WITH OID64 (or\nsomething of a sort)\nas well (being OID32 the default and OID an alias to it)? \nThe last would not be available on some systems \n(who will use a system that does not support long long as a database\nserver anyway?)\n\nThe wire protocol will always handle the (tableoid) long form,\nreferences will always store\nthe long form... The OID32 would exist only to allow people to save\nspace in tables that need\nOIDs but not the 64 bit version.\n\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Tue, 07 Aug 2001 11:17:55 -0400",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Fernando Nasser <fnasser@redhat.com> writes:\n> The wire protocol will always handle the (tableoid) long form,\n\nI think you are handwaving away what is precisely the most painful\naspect. To allow 64-bit type OIDs in the wire protocol, we must\n(a) have a protocol version jump, and (b) force all servers and all\nclient libraries to be 64-bit-capable. While I'm prepared to think\nthat \"int8 is really only 32 bits wide\" is tolerable within a single\nserver installation, I really don't want to deal with such headaches\nbetween clients and servers. Can you imagine how hard it will be\nto track down a bug that arises because one old client is dropping\nthe high-order bits of type OIDs? Only installations that had been\nup for years would ever see a problem; how likely is it that anyone\nwould even remember that some of their clients were not 64-bit-ready?\n\nWhen we're ready to make that jump, I think we should just move to\n64 bit OIDs, full stop, no exceptions, no turning back, no \"configure\ntime option\", no backwards compatibility with old clients. Anything\nelse is a time bomb. I'd even be inclined to start running the OID\ncounter at 4-billion-plus-1, to help flush out anyplace that drops the\nhigh half.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Aug 2001 11:36:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <hannu@tm.ee> writes:\n> > I guess that the change of OID from 4 to 8 bytes could be carried out\n> > as a compile time option ?\n> \n> Not unless you like the notion that the wire protocol depends on a\n> compile time option.\n\nThat could be a separate option, perhaps even a runtime one.\n\nAnd yet another flag for determining whether to raise an error on\nwire-oid \noverflow or just to masquerade it as rollover ;)\n\n--------------\nHannu\n",
"msg_date": "Tue, 07 Aug 2001 18:10:06 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Fernando Nasser <fnasser@redhat.com> writes:\n> > The wire protocol will always handle the (tableoid) long form,\n> \n> I think you are handwaving away what is precisely the most painful\n> aspect. To allow 64-bit type OIDs in the wire protocol, we must\n> (a) have a protocol version jump, and (b) force all servers and all\n> client libraries to be 64-bit-capable. While I'm prepared to think\n> that \"int8 is really only 32 bits wide\" is tolerable within a single\n> server installation, I really don't want to deal with such headaches\n> between clients and servers. Can you imagine how hard it will be\n> to track down a bug that arises because one old client is dropping\n> the high-order bits of type OIDs? Only installations that had been\n> up for years would ever see a problem; how likely is it that anyone\n> would even remember that some of their clients were not 64-bit-ready?\n> \n\nA protocol bump is inevitable if we ever want to deal with 64 bit OIDs,\nso the sooner we do it the better. \n\nSomeone pointed out that even with optional OIDs and per table OIDs,\nwe would still need to allow per table OIDs to be more than 32 bits \n(I am taking his word for it). If that is the case, the scenario you\ndescribed above is inevitable.\n\n\n> When we're ready to make that jump, I think we should just move to\n> 64 bit OIDs, full stop, no exceptions, no turning back, no \"configure\n> time option\", no backwards compatibility with old clients. Anything\n> else is a time bomb. I'd even be inclined to start running the OID\n> counter at 4-billion-plus-1, to help flush out anyplace that drops the\n> high half.\n> \n\nThat would be the way to go. We are just trying to buy some time with\nthe other measures.\n\nBut some folks are complaining of having to use 64 bit OIDs when they\ndon't really need them, so that is why I proposed the OID32/OID64 option.\n\n\n-- \nFernando Nasser\nRed Hat - Toronto E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Tue, 07 Aug 2001 16:24:08 -0400",
"msg_from": "Fernando Nasser <fnasser@cygnus.com>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Fernando Nasser <fnasser@redhat.com> writes:\n> > The wire protocol will always handle the (tableoid) long form,\n> \n> I think you are handwaving away what is precisely the most painful\n> aspect. To allow 64-bit type OIDs in the wire protocol, we must\n> (a) have a protocol version jump, and (b) force all servers and all\n> client libraries to be 64-bit-capable. While I'm prepared to think\n> that \"int8 is really only 32 bits wide\" is tolerable within a single\n> server installation, I really don't want to deal with such headaches\n> between clients and servers. Can you imagine how hard it will be\n> to track down a bug that arises because one old client is dropping\n> the high-order bits of type OIDs? \n\nWhen I thought of it, my solution was to issue a NOTICE on each and \nvery OID truncation - they should be visible enough to force upgrade ;)\n\n> Only installations that had been\n> up for years would ever see a problem; how likely is it that anyone\n> would even remember that some of their clients were not 64-bit-ready?\n> \n> When we're ready to make that jump, I think we should just move to\n> 64 bit OIDs, full stop, no exceptions, no turning back, no \"configure\n> time option\", no backwards compatibility with old clients. Anything\n> else is a time bomb. I'd even be inclined to start running the OID\n> counter at 4-billion-plus-1, to help flush out anyplace that drops the\n> high half.\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n",
"msg_date": "Thu, 09 Aug 2001 01:19:11 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound: summary and proposal"
}
] |
[
{
"msg_contents": "\n Hi,\n\n the NLS is failed for:\n\n$ make install prefix=/home/PG_DEVEL/X/\n .\n .\n [cut]\n .\n . \nfor lang in de; do \\\n /bin/sh ../../../config/install-sh -c -m 644 $lang.mo\n/usr/lib/postgresql/share/locale/$lang/LC_MESSAGES/postgres.mo || exit 1; \\\ndone\ncp: cannot create regular file\n/usr/lib/postgresql/share/locale/de/LC_MESSAGES/#\n^^^^^^^^^^^^^^^^^^^\n use directly prefix from ./configure and ignore the prefix option\nfor 'make'. All other PG stuff are correct for this.\n\n\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Mon, 6 Aug 2001 15:24:15 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "failed: make install prefix=/foo/bar"
},
{
"msg_contents": "Karel Zak writes:\n\n> $ make install prefix=/home/PG_DEVEL/X/\n\n> cp: cannot create regular file\n> /usr/lib/postgresql/share/locale/de/LC_MESSAGES/#\n\nI have fixed this. Note, however, that this probably won't do what you\nwant anyway, because the compiled-in path references will still use the\nprefix you specified to configure. The only uses for \"make install\nprefix=elsewhere\" are some rather specific circumstances where the\nlocation specified for installation will be mapped the location used at\nuse time, e.g. with symlinks (e.g., using GNU Stow), or with an AFS file\nsystem. If you simply change your mind about the installation prefix you\nneed to make distclean first and rebuild everything.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 6 Aug 2001 18:00:40 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: failed: make install prefix=/foo/bar"
},
{
"msg_contents": "On Mon, Aug 06, 2001 at 06:00:40PM +0200, Peter Eisentraut wrote:\n> Karel Zak writes:\n> \n> > $ make install prefix=/home/PG_DEVEL/X/\n> \n> > cp: cannot create regular file\n> > /usr/lib/postgresql/share/locale/de/LC_MESSAGES/#\n> \n> I have fixed this. Note, however, that this probably won't do what you\n> want anyway, because the compiled-in path references will still use the\n> prefix you specified to configure. The only uses for \"make install\n> prefix=elsewhere\" are some rather specific circumstances where the\n> location specified for installation will be mapped the location used at\n> use time, e.g. with symlinks (e.g., using GNU Stow), or with an AFS file\n> system. If you simply change your mind about the installation prefix you\n> need to make distclean first and rebuild everything.\n\n Yes, you are right. But \"make install prefix=...\" can be used for\npackage building if you want install all to some temp directory and\ncompress it.\n\n\t\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Tue, 7 Aug 2001 08:52:39 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: failed: make install prefix=/foo/bar"
},
{
"msg_contents": "Karel Zak writes:\n\n> Yes, you are right. But \"make install prefix=...\" can be used for\n> package building if you want install all to some temp directory and\n> compress it.\n\nIn that case it's much better to use \"make install DESTDIR=...\".\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 7 Aug 2001 12:13:29 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: failed: make install prefix=/foo/bar"
}
] |
[
{
"msg_contents": "config.guess now supports OpenUNIX 8, AIX 5, HPUX on IA64, and Linux on\nPPC64. Enjoy.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 6 Aug 2001 16:07:28 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "config.guess updated in CVS"
}
] |
[
{
"msg_contents": "\n> > test=# create index myindex on accounts(aid) where bid <> 0;\n> \n> > Hmm ? Am I reading correctly ? a restriction that is on a field,\nthat \n> > is not in the index ? Does that make sense ?\n> \n> Yes it does, and in fact it's one of the more important applications\nof\n> partial indexes. It's the only way that a partial index can be\ncheaper\n> to scan than a full index.\n\nOk, yes, sounds great, but then back to Tatsuo's question:\nWhy is the index atestpartial not used (instead DEBUG) ?\n\n\tcreate table atest (aid int, bid int);\n\tcreate index atestpartial on atest (aid) where bid <> 0;\n\tselect * from atest where aid=1 and bid <> 0;\n\nand instead seq scan for 1 mio rows 2 rows where bid <> 0\n\nSince bid is not in an index the evaluation of usability obviously \nshould not be based on index ops ?\n\nAndreas\n",
"msg_date": "Mon, 6 Aug 2001 17:05:13 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "RE: AW: partial index "
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Since bid is not in an index the evaluation of usability obviously \n> should not be based on index ops ?\n\nFeel free to reimplement the theorem-prover, taking special care to\nbe able to prove things about operators that you have zero information\nabout the semantics of.\n\nThe reason the prover works with btree-indexable operators is that\nit can infer a lot of semantics from the index opclass relationships.\nThis has nothing to do with whether the index itself is btree or not,\nlet alone whether the variables used in the predicate are in the index.\nIt's just a way to do something useful within a reasonable amount of\ncode.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Aug 2001 11:25:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: partial index "
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n>> Since bid is not in an index the evaluation of usability obviously \n>> should not be based on index ops ?\n\nActually, now that I think about it, there's no reason that the prover\ncouldn't try a simple equal() on a WHERE clause and predicate clause\nbefore moving on to the btree-semantics-based tests. If the clauses\nare statically identical then one implies the other, no? This would\nwork nicely for clauses like IS [NOT] NULL, and would give us at least a\nlittle bit of ability to deal with non-btree operator clauses.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Aug 2001 12:16:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: partial index "
}
] |
[
{
"msg_contents": "We have five different expected files for the int2 and int4 tests because\nevery system has a different idea on what to print for ERANGE. I'm about\nto add another version. Would it make more sense to hard code one wording\nand not use strerror here?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 6 Aug 2001 21:40:04 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Not representable result out of too large range"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> We have five different expected files for the int2 and int4 tests because\n> every system has a different idea on what to print for ERANGE. I'm about\n> to add another version. Would it make more sense to hard code one wording\n> and not use strerror here?\n\nKinda sounds like the path of least resistance, doesn't it? I assume\nyou'd do the substitution inside elog(), so it's consistent for all\nplaces that might report ERANGE via %m ?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Aug 2001 16:02:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Not representable result out of too large range "
}
] |
[
{
"msg_contents": "I have been thinking about implementing int8-based sequences to go along\nwith the existing int4-based ones. The amount of code involved doesn't\nseem very large, but there are some interesting questions about the API.\nSome points for discussion:\n\n* On machines that don't offer an 8-byte-int C datatype, the int8\nsequence type would still exist, but it couldn't actually count higher\nthan 2^31. This is the same as the behavior of our int8 datatype on\nsuch machines.\n\n* What should be the CREATE syntax for such sequences? I lean towards\nadding an optional clause to CREATE SEQUENCE, which might be spelled\nlike \"TYPE INT8\", \"TYPE BIGINT\", or just \"INT8\" or \"BIGINT\".\n\n* How should one invoke nextval() and friends on such a sequence?\nDirectly applying the existing convention, eg, nextval('sequencename'),\nwon't work because those functions are declared to return int4. One\npossible answer is to require people to write nextval8('sequencename')\nand so forth. This is ugly; it would be nice to allow automatic\noverloading of the function name the way we can do for most datatypes.\nWe have had discussions to the effect that this method of referencing\nsequences is ugly and broken, anyway.\n\nPerhaps we could allow people to write nextval(sequencename) and/or\nsequencename.nextval, which would expose the sequence object to the\nparser so that datatype overloading could occur. I am envisioning\nhaving two archetype sequence objects, one int4 and the other int8,\nand making every other sequence object be an inheritance child of one of\nthese. Then, declaring nextval functions that operate on the two parent\ndatatypes would work --- at least to the extent that we could do type\nresolution to choose which function to apply. I'm not sure yet how to\nkeep the parser from adding the sequence to the query's join set when\nyou do something like that :-(. It would be easier to make it work for\nthe sequencename.nextval notation, I think, but do we want to encourage\npeople to use that syntax? It's a PostQuel-ism that we may have to\ndiscard in order to support SQL92 schemas.\n\nIn any case, can anyone think of cases where it's a good idea to allow\nthe sequence name to be specified as a string --- for example, because\nyou want to compute the sequence name at runtime? To support that,\nI think we'd have little choice but to accept nextval8('sequencename').\nI'd rather move away from the string-based approach, but I don't know\nif we can get away with that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Aug 2001 16:27:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Notes about int8 sequences"
},
{
"msg_contents": "Tom Lane wrote:\n\n> * How should one invoke nextval() and friends on such a sequence?\n> Directly applying the existing convention, eg, nextval('sequencename'),\n> won't work because those functions are declared to return int4. One\n\nI'm not really a hacker, but why couldn't you simply change nextval to return int8 in all cases? Presumably there is an automatic (and silent) conversion from int8 to int4 where the range fits? The overhead of creating an int8 return value for an old-style int4 sequence (and converting it back to int4 for the INSERT/UPDATE) seems very small.\n\nI'm missing something obvious again?\n\n\nAllan.\n\n",
"msg_date": "Mon, 06 Aug 2001 22:24:02 +0100",
"msg_from": "Allan Engelhardt <allane@cybaea.com>",
"msg_from_op": false,
"msg_subject": "Re: Notes about int8 sequences"
},
{
"msg_contents": "Allan Engelhardt <allane@cybaea.com> writes:\n> I'm not really a hacker, but why couldn't you simply change nextval to\n> return int8 in all cases?\n\nHmm. That's a possibility. There's some potential for trouble if an\napplication is expecting an int4 result from \"SELECT nextval()\" and\ngets int8 instead, but if we think we could live with that...\n\nActually, if we thought we could live with that, my inclination would be\nto blow off int4-based sequences altogether, and just redefine SEQUENCE\nobjects as operating on INT8. Interesting thought, eh?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Aug 2001 17:33:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Notes about int8 sequences "
},
{
"msg_contents": "On Mon, 6 Aug 2001, Tom Lane wrote:\n\n> Hmm. That's a possibility. There's some potential for trouble if an\n> application is expecting an int4 result from \"SELECT nextval()\" and\n> gets int8 instead, but if we think we could live with that...\n\nI assume there will be the same limitations as you mentioned in your\noriginal message. Ie. some systems don't have an 8-byte-int C datatype\nso would still have the 2^31 limit.\n\n> Actually, if we thought we could live with that, my inclination would be\n> to blow off int4-based sequences altogether, and just redefine SEQUENCE\n> objects as operating on INT8. Interesting thought, eh?\n\nMore than interesting ... excellent. Bigger is better, right?\n\n\nCheers,\nRod\n-- \n Remove the word 'try' from your vocabulary ... \n Don't try. Do it or don't do it ...\n Steers try!\n\n Don Aslett\n\n\n\n",
"msg_date": "Mon, 6 Aug 2001 15:34:21 -0700 (PDT)",
"msg_from": "\"Roderick A. Anderson\" <raanders@tincan.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: Notes about int8 sequences "
},
{
"msg_contents": "\"Roderick A. Anderson\" <raanders@tincan.org> writes:\n> On Mon, 6 Aug 2001, Tom Lane wrote:\n>> Hmm. That's a possibility. There's some potential for trouble if an\n>> application is expecting an int4 result from \"SELECT nextval()\" and\n>> gets int8 instead, but if we think we could live with that...\n\n> I assume there will be the same limitations as you mentioned in your\n> original message. Ie. some systems don't have an 8-byte-int C datatype\n> so would still have the 2^31 limit.\n\nCheck.\n\n>> Actually, if we thought we could live with that, my inclination would be\n>> to blow off int4-based sequences altogether, and just redefine SEQUENCE\n>> objects as operating on INT8. Interesting thought, eh?\n\n> More than interesting ... excellant. Bigger is better, right?\n\nUntil it breaks your app, yes ;-)\n\nOne thing that would have to be thought about is whether the SERIAL\npseudo-type should generate an int8 instead of int4 column. On\ncompatibility grounds, it might be better to leave it generating int4,\nand invent a second pseudo-type SERIAL8 that is just the same except\nfor making an int8 column. I'm more worried about changing the datatype\nof a user column than I am about changing the output type of nextval(),\nso I'd be sort of inclined to have two SERIAL types even if we change\nnextval() to int8. Thoughts?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Aug 2001 19:02:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Re: Notes about int8 sequences "
},
{
"msg_contents": "> One thing that would have to be thought about is whether the SERIAL\n> pseudo-type should generate an int8 instead of int4 column. On\n> compatibility grounds, it might be better to leave it generating int4,\n> and invent a second pseudo-type SERIAL8 that is just the same except\n> for making an int8 column. I'm more worried about changing the datatype\n> of a user column than I am about changing the output type of nextval(),\n> so I'd be sort of inclined to have two SERIAL types even if we change\n> nextval() to int8. Thoughts?\n\nHmm. How far away are we from doing SERIAL in a way that you find more\nacceptable than the current technique of mucking around internally with\nsequences and default values? Changes there may not be impacted by\ndecisions we make now on an int8 type, but it might be good to think\nabout it beforehand.\n\nIf we do blast ahead with a SERIAL8, then we should consider\nimplementing a SERIAL4 and then aliasing SERIAL to one or the other (can\nbe done in the parser as you know).\n\n - Thomas\n",
"msg_date": "Tue, 07 Aug 2001 00:12:28 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Notes about int8 sequences"
},
{
"msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> Hmm. How far away are we from doing SERIAL in a way that you find more\n> acceptable than the current technique of mucking around internally with\n> sequences and default values?\n\nI'd say \"won't happen for 7.2\", whereas it seems like changing sequences\nto use int8 is something that could get done this month.\n\nA true SERIAL type is something that we should think about along with\nper-table OID generation, since they have essentially the same\nrequirement: a lightweight sequence generator. Our present approach of\na one-row table to represent a sequence is not sufficiently lightweight,\nIMHO, either from an implementation or a conceptual viewpoint. (In\nparticular, it requires each sequence to have a unique name taken from\nthe table namespace, whereas for both table OIDs and serial columns\nI think we'd much prefer the sequences to be anonymous ... or at least\nin a different namespace. But how do we change that without breaking a\nlot of existing application code?)\n\nOffhand I don't see that adding a SERIAL8 type to the mix (or just\nchanging SERIAL to be int8) would make this any harder or easier.\nThe underlying implementation is exposed just as much as before,\nbut not any more so.\n\n> If we do blast ahead with a SERIAL8, then we should consider\n> implementing a SERIAL4 and then aliasing SERIAL to one or the other (can\n> be done in the parser as you know).\n\nSure, that'd be a reasonable way to set it up, if we decide to have two\nSERIAL types.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Aug 2001 20:32:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Notes about int8 sequences "
},
{
"msg_contents": "At 07:02 PM 06-08-2001 -0400, Tom Lane wrote:\n>pseudo-type should generate an int8 instead of int4 column. On\n>compatibility grounds, it might be better to leave it generating int4,\n>and invent a second pseudo-type SERIAL8 that is just the same except\n>for making an int8 column. I'm more worried about changing the datatype\n>of a user column than I am about changing the output type of nextval(),\n>so I'd be sort of inclined to have two SERIAL types even if we change\n>nextval() to int8. Thoughts?\n\nserial8 sounds ok to me.\n\nI use currval.\n\nCheerio,\nLink.\n\n",
"msg_date": "Tue, 07 Aug 2001 14:20:25 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: Re: Notes about int8 sequences "
},
{
"msg_contents": "On Mon, 6 Aug 2001, Tom Lane wrote:\n\n> * How should one invoke nextval() and friends on such a sequence?\n\n> Perhaps we could allow people to write nextval(sequencename) and/or\n> sequencename.nextval, which would expose the sequence object to the\n> parser so that datatype overloading could occur.\n\nI'm not worried about the size of the return type of\na sequence, but I like the idea of Oracle-compatible\n\"seq.nextval\" syntax.\n\nMatthew.\n\n",
"msg_date": "Tue, 7 Aug 2001 08:17:56 +0100 (BST)",
"msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>",
"msg_from_op": false,
"msg_subject": "Re: Notes about int8 sequences"
},
{
"msg_contents": "Matthew Kirkwood <matthew@hairy.beasts.org> writes:\n> I'm not worried about the size of the return type of\n> a sequence, but I like the idea of Oracle-compatible\n> \"seq.nextval\" syntax.\n\nI didn't realize we had any Oracle-compatibility issues here. What\nexactly does Oracle's sequence facility look like?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Aug 2001 09:51:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Notes about int8 sequences "
},
{
"msg_contents": "On Tue, 7 Aug 2001, Tom Lane wrote:\n\n> > I'm not worried about the size of the return type of\n> > a sequence, but I like the idea of Oracle-compatible\n> > \"seq.nextval\" syntax.\n>\n> I didn't realize we had any Oracle-compatibility issues here. What\n> exactly does Oracle's sequence facility look like?\n\nIt's exactly \"seqname.nextval\". It seems that it\ncan be used in exactly the places where PG allows\nnextval(\"seqname\") (subject to the usual sprinkling\nof \"from dual\"s, of course).\n\nMatthew.\n\n",
"msg_date": "Tue, 7 Aug 2001 15:53:27 +0100 (BST)",
"msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>",
"msg_from_op": false,
"msg_subject": "Re: Notes about int8 sequences "
}
] |
[
{
"msg_contents": "This was discussed on pgsql-general a little bit on 21-July, but the\ndiscussion died off without reaching a conclusion. I'd like to\nput out a concrete proposal and see if anyone has objections.\n\n1. SUM() and AVG() for int2 and int4 inputs should accumulate the\nrunning sum as an INT8, not a NUMERIC, for speed reasons. INT8 seems\nlarge enough to avoid overflow in practical situations. The final\noutput datatype of AVG() will still be NUMERIC, but the final output\nof SUM() will become INT8 for these two input types.\n\n2. STDDEV() and VARIANCE() for int2 and int4 inputs will continue to\nuse NUMERIC for accuracy and overflow reasons (accumulating sum(x^2)\nis much more prone to overflow than sum(x)). So will all these\naggregates for INT8.\n\n3. As a separate proposal, we could change COUNT()'s running counter\nand output datatype from INT4 to INT8. This would make it a little\nslower but effectively overflow-proof.\n\n\nAll of these changes are within the latitude that the SQL92 spec\naffords (it just says that the output values are exact numeric with\nimplementation-defined precision and scale). Issues to consider are:\n\n* On machines with no 8-byte-int C datatype, the accumulator would\neffectively be int4. This would make the behavior no worse than\ncurrently for COUNT(), and no worse than it was in 7.0 for SUM() and\nAVG(), so that doesn't bother me a whole lot. But it would be a\nnew source of cross-platform behavioral differences.\n\n* Changing the output datatype of these operations --- especially COUNT\n--- might affect or even break applications. We got a few complaints,\nnot many, about changing SUM() and AVG() from integer to NUMERIC output\nin 7.1. Changing SUM() to INT8 isn't likely to hurt anyone who survived\nthat transition. But COUNT() is much more widely used and is more\nlikely to affect people. Should we keep it at INT4 output to avoid\ncompatibility problems?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Aug 2001 17:27:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Use int8 for int4/int2 aggregate accumulators?"
},
{
"msg_contents": "I wrote:\n> 3. As a separate proposal, we could change COUNT()'s running counter\n> and output datatype from INT4 to INT8. This would make it a little\n> slower but effectively overflow-proof.\n\n> * Changing the output datatype of these operations --- especially COUNT\n> --- might affect or even break applications. We got a few complaints,\n> not many, about changing SUM() and AVG() from integer to NUMERIC output\n> in 7.1. Changing SUM() to INT8 isn't likely to hurt anyone who survived\n> that transition. But COUNT() is much more widely used and is more\n> likely to affect people. Should we keep it at INT4 output to avoid\n> compatibility problems?\n\nI started working on this, and immediately got a pile of regression test\nfailures arising from:\n\n create function rtest_viewfunc1(int4) returns int4 as\n 'select count(*) from rtest_view2 where a = $1'\n language 'sql';\n+ ERROR: return type mismatch in function: declared to return integer, returns bigint\n\nWhile it'd be easy enough to change this regression test, this does\nhighlight my concern about changing the output type of COUNT().\n\nI'm currently thinking that leaving the output type of COUNT() alone\nmight be the better part of valor. Possibly we could invent a separate\naggregate COUNT8() that returns int8, for use by them that need it.\n\nComments anyone? There wasn't a lot of discussion before...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Aug 2001 18:05:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Use int8 for int4/int2 aggregate accumulators? "
},
{
"msg_contents": "Tom Lane writes:\n\n> I started working on this, and immediately got a pile of regression test\n> failures arising from:\n>\n> create function rtest_viewfunc1(int4) returns int4 as\n> 'select count(*) from rtest_view2 where a = $1'\n> language 'sql';\n> + ERROR: return type mismatch in function: declared to return integer, returns bigint\n\nMaybe instead of testing for strict equality of the types, test for\ncompatibility.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 14 Aug 2001 16:57:52 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: Use int8 for int4/int2 aggregate accumulators? "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> create function rtest_viewfunc1(int4) returns int4 as\n>> 'select count(*) from rtest_view2 where a = $1'\n>> language 'sql';\n>> + ERROR: return type mismatch in function: declared to return integer, returns bigint\n\n> Maybe instead of testing for strict equality of the types, test for\n> compatibility.\n\nWe could try to force-convert the result of an SQL function to the right\nthing, I suppose, but I'm worried that that might mask programmer errors\nmore than it helps.\n\nOn the other hand, the equivalent forced conversion happens already in\nplpgsql functions; it's only SQL-language functions that are so picky.\nMaybe your idea is good. Anyone else have an opinion?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Aug 2001 11:00:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Re: Use int8 for int4/int2 aggregate accumulators? "
},
{
"msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> >> create function rtest_viewfunc1(int4) returns int4 as\n> >> 'select count(*) from rtest_view2 where a = $1'\n> >> language 'sql';\n> >> + ERROR: return type mismatch in function: declared to return integer, returns bigint\n> \n> > Maybe instead of testing for strict equality of the types, test for\n> > compatibility.\n> \n> We could try to force-convert the result of an SQL function to the right\n> thing, I suppose, but I'm worried that that might mask programmer errors\n> more than it helps.\n> \n> On the other hand, the equivalent forced conversion happens already in\n> plpgsql functions; it's only SQL-language functions that are so picky.\n> Maybe your idea is good. Anyone else have an opinion?\n\nI don't know. Doing a force for SQL functions and not for others seems\nkind of confusing.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 14 Aug 2001 12:01:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Use int8 for int4/int2 aggregate accumulators?"
},
{
"msg_contents": "Tom Lane writes:\n\n> We could try to force-convert the result of an SQL function to the right\n> thing, I suppose, but I'm worried that that might mask programmer errors\n> more than it helps.\n\nWhat I had in mind was to allow type conversion between the same\nTypeCategory(). The SQL function analyzer is extraordinarily picky:\n\ncreate function test(int) returns varchar as '\n select substring(''PostgreSQL'' from $1);\n' language sql;\nERROR: return type mismatch in function: declared to return character\nvarying, returns text\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 14 Aug 2001 22:48:44 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: Use int8 for int4/int2 aggregate accumulators? "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> What I had in mind was to allow type conversion between the same\n> TypeCategory(). The SQL function analyzer is extraordinarily picky:\n\nI just finished looking at that. It'd be possible (and probably\nreasonable) to accept a binary-compatible datatype rather than requiring\nexact equality; this would fix your varchar-vs-text example. However,\ninserting any runtime type conversion would require a significant amount\nof code added. We couldn't do it just by inserting the conversion\nfunction call into the SELECT querytree, because that'd alter the SELECT\nsemantics, if not actively crash it --- an example of a likely crash is\n\n\tcreate function mymax() returns int4 as '\n\t\tselect int8col from tab order by int8col desc limit 1'\n\tlanguage sql;\n\nHere, the prepared parsetree is already set up to apply int8 sorting to\nthe first column of its result. If we try to insert a cast-to-int4,\nwe will end up sorting int4 data with int8 operators -- instant\ncoredump.\n\nSo the conversion function application would have to be done at runtime\nin the SQL function manager, which is more code than I care to take on\nat the moment.\n\nNote also that there is code in there to figure out whether a targetlist\nsatisfies a tuple return datatype; should we also apply automatic type\nconversion to elements of such a list? It's getting to be more of a\nstretch to say that this is being helpful rather than masking programmer\nerror.\n\nBut binary compatibility is easy. Shall we do that?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Aug 2001 17:05:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Re: Use int8 for int4/int2 aggregate accumulators? "
},
{
"msg_contents": "> Note also that there is code in there to figure out whether a targetlist\n> satisfies a tuple return datatype; should we also apply automatic type\n> conversion to elements of such a list? It's getting to be more of a\n> stretch to say that this is being helpful rather than masking programmer\n> error.\n> \n> But binary compatibility is easy. Shall we do that?\n\nIf we don't do binary compatible already, we certainly should.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 14 Aug 2001 18:23:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Use int8 for int4/int2 aggregate accumulators?"
}
] |
[
{
"msg_contents": "Presently, we have hand-assigned OIDs running up to about 1950\n(according to the unused_oids script). The range up to 16K is reserved\nfor hand-assigned OIDs, and the OID counter starts at 16384 at initdb.\nA peek in pg_database shows datlastsysoid = 18931 in current sources, so\na total of about 2550 OIDs are machine-assigned during initdb. The bulk\nof these last are in pg_attribute (827 rows) and pg_description (1221\nrows).\n\nAll the hand-assigned OIDs are established by lines like\n\nDATA(insert OID = 23 (\tint4\t PGUID 4 10 t b t \\054 0 0 int4in int4out int4in int4out i p _null_ ));\n\nin catalog include files. We also have lines like\n\nDATA(insert OID = 0 ( avg\tPGUID int4_accum\tnumeric_avg\t\t23\t 1231 1700 \"{0,0,0}\" ));\n\nwhich do not assign a specific OID to the row --- instead the row will\nreceive a machine-generated OID (at or above 16k) when it is loaded.\n\nWhat bothers me about this scheme is that genbki.sh can only create\npg_description entries for objects with hand-assigned OIDs. It\nprocesses the DESCR() macro by emitting the OID of the last DATA macro,\nalong with the description text, into a data file that's eventually\ncopied into pg_description. But if there's no hand-assigned OID it has\nto punt --- it doesn't know what OID the object will have. This means\nwe can't assign initdb-time descriptions to aggregate functions (for\nexample), since we don't give them hand-assigned OIDs.\n\nThere are a couple of possible ways to attack this, but the one that\nseems best to me is to allow genbki.sh itself to assign OIDs to DATA\nlines that don't have one. It could start at, say, 10000, staying well\nclear of both the hand-assigned OIDs and the ones that will be generated\non-the-fly by the backend. Then it would know the correct OID to\nassociate with any DESCR macro.\n\nComments, objections?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Aug 2001 21:41:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Planned change in initdb-time OID allocation"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Presently, we have hand-assigned OIDs running up to about 1950\n> (according to the unused_oids script). The range up to 16K is reserved\n> for hand-assigned OIDs, and the OID counter starts at 16384 at initdb.\n> A peek in pg_database shows datlastsysoid = 18931 in current sources, so\n> a total of about 2550 OIDs are machine-assigned during initdb. \n...\n> \n> There are a couple of possible ways to attack this, but the one that\n> seems best to me is to allow genbki.sh itself to assign OIDs to DATA\n> lines that don't have one. It could start at, say, 10000, staying well\n> clear of both the hand-assigned OIDs and the ones that will be generated\n> on-the-fly by the backend. Then it would know the correct OID to\n> associate with any DESCR macro.\n> \n> Comments, objections?\n\nI was wondering in the past if it would simply be better to have an\n.SQL script which is submitted to the template1 database at\npost-initdb time with COMMENT ON statements to document built-in\ntypes, functions, system relations, etc. I should, after all, be\nable to issue a \"\\d+ pg_class\" in psql and get a description of the\ncolumns. The .SQL script could potentially contain COMMENT ON\nstatements localized to the language in which the database is\ninstalled, but it wouldn't care what OIDs were assigned (if any) to\nthe various objects being documented.\n\nMike Mascari\nmascarm@mascari.com\n",
"msg_date": "Mon, 06 Aug 2001 22:50:55 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: Planned change in initdb-time OID allocation"
},
{
"msg_contents": "> What bothers me about this scheme is that genbki.sh can only create\n> pg_description entries for objects with hand-assigned OIDs. It\n> processes the DESCR() macro by emitting the OID of the last DATA macro,\n> along with the description text, into a data file that's eventually\n> copied into pg_description. But if there's no hand-assigned OID it has\n> to punt --- it doesn't know what OID the object will have. This means\n> we can't assign initdb-time descriptions to aggregate functions (for\n> example), since we don't give them hand-assigned OIDs.\n\nThis was a known problem when I implemented pg_description. Your\nsolution sounds good.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 6 Aug 2001 22:51:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Planned change in initdb-time OID allocation"
},
{
"msg_contents": "Mike Mascari <mascarm@mascari.com> writes:\n> I was wondering in the past if it would simply be better to have an\n> .SQL script which is submitted to the template1 database at\n> post-initdb time with COMMENT ON statements to document built-in\n> types, functions, system relations, etc.\n\nThe nice thing about the way it's done now is that the descriptions\nsit right next to the defining commands in the catalog/*.h files.\nHaving to maintain a separate script file doesn't seem like a win.\nAlmost certainly, the descriptions would be poorly maintained ---\nye olde out-of-sight, out-of-mind problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Aug 2001 23:24:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Planned change in initdb-time OID allocation "
}
] |
[
{
"msg_contents": "Hi,\n\nThe latest patch we submitted to the fulltextindex module improved lots of\nthings but something we could not get to work was the apparently correct use\nof the PG_GETARG* macros, etc.\n\nWhenever we used these macros, we always got 0 or NULL as our values. So,\nwe reverted to the trigger->tgargs array. Looking at the macro, it's\naccessing fcinfo->arg[n] array. For us, the fcinfo->arg array was always\nempty.\n\nWhat's going on? Although it works, someone with more experience might want\nto have a quick look at it. The problem is that we suspect it will fail if\nsomeone tries to FTI a TOAST-ed column.\n\nChris\n\n",
"msg_date": "Tue, 7 Aug 2001 10:01:12 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "FTI contrib"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> The latest patch we submitted to the fulltextindex module improved lots of\n> things but something we could not get to work was the apparently correct use\n> of the PG_GETARG* macros, etc.\n\n> Whenever we used these macros, we always got 0 or NULL as our values. So,\n> we reverted to the trigger->tgargs array.\n\nTrigger functions don't get their arguments the normal way. The GETARG\nmacros don't know anything about trigger arguments... so the original\ncode was correct as it was. I haven't had time to look at your patch,\nbut maybe I should go do that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Aug 2001 22:43:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FTI contrib "
},
{
"msg_contents": "\nHas this been addressed?\n\n\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > The latest patch we submitted to the fulltextindex module improved lots of\n> > things but something we could not get to work was the apparently correct use\n> > of the PG_GETARG* macros, etc.\n> \n> > Whenever we used these macros, we always got 0 or NULL as our values. So,\n> > we reverted to the trigger->tgargs array.\n> \n> Trigger functions don't get their arguments the normal way. The GETARG\n> macros don't know anything about trigger arguments... so the original\n> code was correct as it was. I haven't had time to look at your patch,\n> but maybe I should go do that.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 6 Sep 2001 16:47:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FTI contrib"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Has this been addressed?\n\nIIRC, I looked at the patch and decided it was okay.\n\n\t\t\tregards, tom lane\n\n>> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> The latest patch we submitted to the fulltextindex module improved lots of\n> things but something we could not get to work was the apparently correct use\n> of the PG_GETARG* macros, etc.\n>> \n> Whenever we used these macros, we always got 0 or NULL as our values. So,\n> we reverted to the trigger->tgargs array.\n>> \n>> Trigger functions don't get their arguments the normal way. The GETARG\n>> macros don't know anything about trigger arguments... so the original\n>> code was correct as it was. I haven't had time to look at your patch,\n>> but maybe I should go do that.\n>> \n>> regards, tom lane\n",
"msg_date": "Thu, 06 Sep 2001 17:31:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FTI contrib "
},
{
"msg_contents": "Well, the FTI code that was committed works perfectly - it compiles fine\nagainst 7.0.3 and 7.1.2 and is in use indexing 2 columns in 20000 row tables\nin two production and one test servers.\n\nThe updated fti.pl we submitted still uses the PGConnect style functions,\nrather than the PG::Connect style functions. However, I don't know why\nthere is this different in Pg.pm???\n\nMy issue with accessing args was that the docs on writing functions and\ntriggers were a bit confusing. I got the impression that one had to access\nthe trigger args via GETARG macros - but it turns out that is not the case.\n\nStill, someone may wish to review the fti code, and check our optimizations.\nPlus, since it's is 100% backwards compatible with the version in 7.1.2, you\nmight want to back port it to the 7.1.* branch?\n\nCheers,\n\nChris\n\n> Has this been addressed?\n>\n>\n> > \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > > The latest patch we submitted to the fulltextindex module\n> improved lots of\n> > > things but something we could not get to work was the\n> apparently correct use\n> > > of the PG_GETARG* macros, etc.\n> >\n> > > Whenever we used these macros, we always got 0 or NULL as our\n> values. So,\n> > > we reverted to the trigger->tgargs array.\n> >\n> > Trigger functions don't get their arguments the normal way. The GETARG\n> > macros don't know anything about trigger arguments... so the original\n> > code was correct as it was. I haven't had time to look at your patch,\n> > but maybe I should go do that.\n> >\n> > \t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 7 Sep 2001 10:16:56 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: FTI contrib"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Plus, since it's is 100% backwards compatible with the version in 7.1.2, you\n> might want to back port it to the 7.1.* branch?\n\nSince we're hoping to go beta with 7.2 next week, I doubt there will be\nany further releases in the 7.1.* branch.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 06 Sep 2001 22:40:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FTI contrib "
}
] |
[
{
"msg_contents": "Hi guys,\n\nJust wondering if we are going to release a version 7.1.3 or not?\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Tue, 07 Aug 2001 14:03:44 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "\nI'm missing an email here somewhere, and apologize ... I'm just getting my\nmailboxes back in order now after moving to a dial-up link vs high speed\n(moved to a non-high-speed neighboorhood *sigh*) ...\n\nTom, can you resend that list of changes you sent to me earlier?\n\nOn Tue, 7 Aug 2001, Justin Clift wrote:\n\n> Hi guys,\n>\n> Just wondering if we are going to release a version 7.1.3 or not?\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n> --\n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Tue, 7 Aug 2001 09:54:49 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> (moved to a non-high-speed neighboorhood *sigh*) ...\n\nUgh :-(\n\n> Tom, can you resend that list of changes you sent to me earlier?\n\nAttached is the updated list. Note there are a couple of changes listed\nthat aren't actually in REL7_1_STABLE yet, but if we are going to make\na release it would be easy and profitable to back-patch them. I will\nbe happy to take care of that gruntwork if we decide on a release.\n\n\t\t\tregards, tom lane\n\n\n2001-08-03 16:14 tgl\n\n\t* src/bin/pg_dump/: pg_dump.c, pg_dump.h (REL7_1_STABLE):\n\tBack-patch fixes for dumping user-defined types and dumping\n\tcomments on views.\n\n2001-07-31 14:39 tgl\n\n\t* src/: backend/optimizer/path/allpaths.c,\n\tbackend/optimizer/util/clauses.c, backend/utils/adt/ruleutils.c,\n\tinclude/optimizer/clauses.h (REL7_1_STABLE): Fix optimizer to\n\tnot try to push WHERE\n\tclauses down into a sub-SELECT that has a DISTINCT ON clause, per\n\tbug report from Anthony Wood. While at it, improve the\n\tDISTINCT-ON-clause recognizer routine to not be fooled by out-\n\tof-order DISTINCT lists. Also, back-patch earlier fix to not push\n\tdown into sub-SELECT with LIMIT.\n\n2001-07-29 18:12 tgl\n\n\t* src/bin/pg_dump/: pg_dump.c (REL7_1_STABLE), pg_dump.c: Arrange\n\tfor GRANT/REVOKE on a view to be dumped at the right time, namely\n\tafter the view definition rather than before it. Bug introduced in\n\t7.1 by changes to dump stuff in OID ordering.\n\n2001-07-16 13:57 tgl\n\n\t* src/backend/optimizer/path/allpaths.c: Do not push down quals\n\tinto subqueries that have LIMIT/OFFSET clauses, since the added\n\tqual could change the set of rows that get past the LIMIT. Per\n\tdiscussion on pgsql-sql 7/15/01.\n\n2001-07-11 17:53 momjian\n\n\t* src/backend/commands/copy.c: Disable COPY TO/FROM on views.\n\n2001-07-05 22:13 ishii\n\n\t* doc/src/sgml/backup.sgml (REL7_1_STABLE): Fix typo. createdb -t\n\t--> createdb -T\n\n2001-07-03 12:49 tgl\n\n\t* src/backend/utils/init/miscinit.c: Don't go into infinite loop if\n\t/home/postgres/testversion/data directory is not writable.\n\n2001-07-02 15:31 tgl\n\n\t* src/test/regress/expected/: abstime-solaris-1947.out,\n\tabstime.out: Update abstime expected results to match\n\tpost-30-June-2001 reality. Probably the right fix is to remove\n\t'current' special value entirely, but I don't want to see\n\tregression test failures until that happens.\n\n2001-06-29 12:34 tgl\n\n\t* src/backend/commands/: vacuum.c (REL7_1_STABLE), vacuum.c: Fix\n\tlongstanding error in VACUUM: sometimes would examine a buffer page\n\tafter writing/unpinning it. An actual failure is unlikely, unless\n\tthe system is tremendously short of buffers ... but a bug is a bug.\n\n2001-06-12 21:02 tgl\n\n\t* src/pl/plpgsql/src/pl_exec.c (REL7_1_STABLE): Back-patch fix for\n\tattempt to pfree a value that's not palloc'd (it's a field of a\n\ttuple). I see Jan has already fixed this in current sources, but\n\t7.1.* is pretty badly broken here.\n\n2001-06-12 14:54 tgl\n\n\t* src/backend/rewrite/: rewriteHandler.c (REL7_1_STABLE),\n\trewriteHandler.c: Repair problem with multi-action rules in\n\tcombination with any nontrivial manipulation of rtable/jointree by\n\tplanner. Rewriter was generating actions that shared\n\trtable/jointree substructure, which caused havoc when planner got\n\tto the later actions that it'd already mucked up.\n\n2001-06-06 14:54 wieck\n\n\t* src/pl/plpgsql/src/gram.y: Patch from Ian Lance Taylor fixing\n\tmultiple cursor arguments and buffer zero termination.\n\t\n\tJan\n\n2001-06-06 13:18 tgl\n\n\t* src/backend/access/transam/xlog.c (REL7_1_STABLE): Back-patch\n\tchange to not keep WAL segments just for UNDO information.\n\n2001-05-31 17:49 momjian\n\n\t* doc/src/sgml/: release.sgml (REL7_1_STABLE), release.sgml: Forgot\n\tSGML section section id tag for 7.1.\n\n2001-05-31 13:32 tgl\n\n\t* src/backend/utils/adt/: ri_triggers.c (REL7_1_STABLE),\n\tri_triggers.c: RI triggers would fail for datatypes using old-style\n\tequal function, because cached fmgr info contained reference to a\n\tshorter-lived data structure. Also guard against possibility that\n\tfmgr_info could fail, leaving an incomplete entry present in the\n\thash table.\n\n2001-05-27 21:00 ishii\n\n\t* src/backend/utils/mb/: conv.c (REL7_1_STABLE), conv.c: Fix a\n\tmessage error in utf_to_local\n",
"msg_date": "Tue, 07 Aug 2001 10:47:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3? "
},
{
"msg_contents": "If we decide to release 7.1.3 I'd like to see our patch for\ncontrib/intarray too.\n\n\tOleg\nOn Tue, 7 Aug 2001, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > (moved to a non-high-speed neighboorhood *sigh*) ...\n>\n> Ugh :-(\n>\n> > Tom, can you resend that list of changes you sent to me earlier?\n>\n> Attached is the updated list. Note there are a couple of changes listed\n> that aren't actually in REL7_1_STABLE yet, but if we are going to make\n> a release it would be easy and profitable to back-patch them. I will\n> be happy to take care of that gruntwork if we decide on a release.\n>\n> \t\t\tregards, tom lane\n>\n>\n> 2001-08-03 16:14 tgl\n>\n> \t* src/bin/pg_dump/: pg_dump.c, pg_dump.h (REL7_1_STABLE):\n> \tBack-patch fixes for dumping user-defined types and dumping\n> \tcomments on views.\n>\n> 2001-07-31 14:39 tgl\n>\n> \t* src/: backend/optimizer/path/allpaths.c,\n> \tbackend/optimizer/util/clauses.c, backend/utils/adt/ruleutils.c,\n> \tinclude/optimizer/clauses.h (REL7_1_STABLE): Fix optimizer to\n> \tnot try to push WHERE\n> \tclauses down into a sub-SELECT that has a DISTINCT ON clause, per\n> \tbug report from Anthony Wood. While at it, improve the\n> \tDISTINCT-ON-clause recognizer routine to not be fooled by out-\n> \tof-order DISTINCT lists. Also, back-patch earlier fix to not push\n> \tdown into sub-SELECT with LIMIT.\n>\n> 2001-07-29 18:12 tgl\n>\n> \t* src/bin/pg_dump/: pg_dump.c (REL7_1_STABLE), pg_dump.c: Arrange\n> \tfor GRANT/REVOKE on a view to be dumped at the right time, namely\n> \tafter the view definition rather than before it. Bug introduced in\n> \t7.1 by changes to dump stuff in OID ordering.\n>\n> 2001-07-16 13:57 tgl\n>\n> \t* src/backend/optimizer/path/allpaths.c: Do not push down quals\n> \tinto subqueries that have LIMIT/OFFSET clauses, since the added\n> \tqual could change the set of rows that get past the LIMIT. Per\n> \tdiscussion on pgsql-sql 7/15/01.\n>\n> 2001-07-11 17:53 momjian\n>\n> \t* src/backend/commands/copy.c: Disable COPY TO/FROM on views.\n>\n> 2001-07-05 22:13 ishii\n>\n> \t* doc/src/sgml/backup.sgml (REL7_1_STABLE): Fix typo. createdb -t\n> \t--> createdb -T\n>\n> 2001-07-03 12:49 tgl\n>\n> \t* src/backend/utils/init/miscinit.c: Don't go into infinite loop if\n> \t/home/postgres/testversion/data directory is not writable.\n>\n> 2001-07-02 15:31 tgl\n>\n> \t* src/test/regress/expected/: abstime-solaris-1947.out,\n> \tabstime.out: Update abstime expected results to match\n> \tpost-30-June-2001 reality. Probably the right fix is to remove\n> \t'current' special value entirely, but I don't want to see\n> \tregression test failures until that happens.\n>\n> 2001-06-29 12:34 tgl\n>\n> \t* src/backend/commands/: vacuum.c (REL7_1_STABLE), vacuum.c: Fix\n> \tlongstanding error in VACUUM: sometimes would examine a buffer page\n> \tafter writing/unpinning it. An actual failure is unlikely, unless\n> \tthe system is tremendously short of buffers ... but a bug is a bug.\n>\n> 2001-06-12 21:02 tgl\n>\n> \t* src/pl/plpgsql/src/pl_exec.c (REL7_1_STABLE): Back-patch fix for\n> \tattempt to pfree a value that's not palloc'd (it's a field of a\n> \ttuple). I see Jan has already fixed this in current sources, but\n> \t7.1.* is pretty badly broken here.\n>\n> 2001-06-12 14:54 tgl\n>\n> \t* src/backend/rewrite/: rewriteHandler.c (REL7_1_STABLE),\n> \trewriteHandler.c: Repair problem with multi-action rules in\n> \tcombination with any nontrivial manipulation of rtable/jointree by\n> \tplanner. Rewriter was generating actions that shared\n> \trtable/jointree substructure, which caused havoc when planner got\n> \tto the later actions that it'd already mucked up.\n>\n> 2001-06-06 14:54 wieck\n>\n> \t* src/pl/plpgsql/src/gram.y: Patch from Ian Lance Taylor fixing\n> \tmultiple cursor arguments and buffer zero termination.\n>\n> \tJan\n>\n> 2001-06-06 13:18 tgl\n>\n> \t* src/backend/access/transam/xlog.c (REL7_1_STABLE): Back-patch\n> \tchange to not keep WAL segments just for UNDO information.\n>\n> 2001-05-31 17:49 momjian\n>\n> \t* doc/src/sgml/: release.sgml (REL7_1_STABLE), release.sgml: Forgot\n> \tSGML section section id tag for 7.1.\n>\n> 2001-05-31 13:32 tgl\n>\n> \t* src/backend/utils/adt/: ri_triggers.c (REL7_1_STABLE),\n> \tri_triggers.c: RI triggers would fail for datatypes using old-style\n> \tequal function, because cached fmgr info contained reference to a\n> \tshorter-lived data structure. Also guard against possibility that\n> \tfmgr_info could fail, leaving an incomplete entry present in the\n> \thash table.\n>\n> 2001-05-27 21:00 ishii\n>\n> \t* src/backend/utils/mb/: conv.c (REL7_1_STABLE), conv.c: Fix a\n> \tmessage error in utf_to_local\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 7 Aug 2001 21:25:52 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3? "
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> If we decide to release 7.1.3 I'd like to see our patch for\n> contrib/intarray too.\n\nWhich one?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Aug 2001 14:56:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3? "
},
{
"msg_contents": "On Tue, 7 Aug 2001, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > If we decide to release 7.1.3 I'd like to see our patch for\n> > contrib/intarray too.\n>\n> Which one?\n\nPatch I've submitted last week. It's in current CVS\n\nhttp://fts.postgresql.org/db/mw/msg.html?mid=1028099\n\nom,\n\nplease apply attached patch to current CVS.\n\n1. Fixed error with empty array ( '{}' ),\n test data changed to include such data\n2. Test a dimension of an array ( we support only one-dimension)\n\nRegards,\nOleg\n\n\n>\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 7 Aug 2001 22:10:57 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3? "
},
{
"msg_contents": "\n\nThe list looks good to me as far as doing a v7.1.3 ... anyone object to\nit?\n\nOn Tue, 7 Aug 2001, Oleg Bartunov wrote:\n\n> If we decide to release 7.1.3 I'd like to see our patch for\n> contrib/intarray too.\n>\n> \tOleg\n> On Tue, 7 Aug 2001, Tom Lane wrote:\n>\n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > (moved to a non-high-speed neighboorhood *sigh*) ...\n> >\n> > Ugh :-(\n> >\n> > > Tom, can you resend that list of changes you sent to me earlier?\n> >\n> > Attached is the updated list. Note there are a couple of changes listed\n> > that aren't actually in REL7_1_STABLE yet, but if we are going to make\n> > a release it would be easy and profitable to back-patch them. I will\n> > be happy to take care of that gruntwork if we decide on a release.\n> >\n> > \t\t\tregards, tom lane\n> >\n> >\n> > 2001-08-03 16:14 tgl\n> >\n> > \t* src/bin/pg_dump/: pg_dump.c, pg_dump.h (REL7_1_STABLE):\n> > \tBack-patch fixes for dumping user-defined types and dumping\n> > \tcomments on views.\n> >\n> > 2001-07-31 14:39 tgl\n> >\n> > \t* src/: backend/optimizer/path/allpaths.c,\n> > \tbackend/optimizer/util/clauses.c, backend/utils/adt/ruleutils.c,\n> > \tinclude/optimizer/clauses.h (REL7_1_STABLE): Fix optimizer to\n> > \tnot try to push WHERE\n> > \tclauses down into a sub-SELECT that has a DISTINCT ON clause, per\n> > \tbug report from Anthony Wood. While at it, improve the\n> > \tDISTINCT-ON-clause recognizer routine to not be fooled by out-\n> > \tof-order DISTINCT lists. Also, back-patch earlier fix to not push\n> > \tdown into sub-SELECT with LIMIT.\n> >\n> > 2001-07-29 18:12 tgl\n> >\n> > \t* src/bin/pg_dump/: pg_dump.c (REL7_1_STABLE), pg_dump.c: Arrange\n> > \tfor GRANT/REVOKE on a view to be dumped at the right time, namely\n> > \tafter the view definition rather than before it. Bug introduced in\n> > \t7.1 by changes to dump stuff in OID ordering.\n> >\n> > 2001-07-16 13:57 tgl\n> >\n> > \t* src/backend/optimizer/path/allpaths.c: Do not push down quals\n> > \tinto subqueries that have LIMIT/OFFSET clauses, since the added\n> > \tqual could change the set of rows that get past the LIMIT. Per\n> > \tdiscussion on pgsql-sql 7/15/01.\n> >\n> > 2001-07-11 17:53 momjian\n> >\n> > \t* src/backend/commands/copy.c: Disable COPY TO/FROM on views.\n> >\n> > 2001-07-05 22:13 ishii\n> >\n> > \t* doc/src/sgml/backup.sgml (REL7_1_STABLE): Fix typo. createdb -t\n> > \t--> createdb -T\n> >\n> > 2001-07-03 12:49 tgl\n> >\n> > \t* src/backend/utils/init/miscinit.c: Don't go into infinite loop if\n> > \t/home/postgres/testversion/data directory is not writable.\n> >\n> > 2001-07-02 15:31 tgl\n> >\n> > \t* src/test/regress/expected/: abstime-solaris-1947.out,\n> > \tabstime.out: Update abstime expected results to match\n> > \tpost-30-June-2001 reality. Probably the right fix is to remove\n> > \t'current' special value entirely, but I don't want to see\n> > \tregression test failures until that happens.\n> >\n> > 2001-06-29 12:34 tgl\n> >\n> > \t* src/backend/commands/: vacuum.c (REL7_1_STABLE), vacuum.c: Fix\n> > \tlongstanding error in VACUUM: sometimes would examine a buffer page\n> > \tafter writing/unpinning it. An actual failure is unlikely, unless\n> > \tthe system is tremendously short of buffers ... but a bug is a bug.\n> >\n> > 2001-06-12 21:02 tgl\n> >\n> > \t* src/pl/plpgsql/src/pl_exec.c (REL7_1_STABLE): Back-patch fix for\n> > \tattempt to pfree a value that's not palloc'd (it's a field of a\n> > \ttuple). I see Jan has already fixed this in current sources, but\n> > \t7.1.* is pretty badly broken here.\n> >\n> > 2001-06-12 14:54 tgl\n> >\n> > \t* src/backend/rewrite/: rewriteHandler.c (REL7_1_STABLE),\n> > \trewriteHandler.c: Repair problem with multi-action rules in\n> > \tcombination with any nontrivial manipulation of rtable/jointree by\n> > \tplanner. Rewriter was generating actions that shared\n> > \trtable/jointree substructure, which caused havoc when planner got\n> > \tto the later actions that it'd already mucked up.\n> >\n> > 2001-06-06 14:54 wieck\n> >\n> > \t* src/pl/plpgsql/src/gram.y: Patch from Ian Lance Taylor fixing\n> > \tmultiple cursor arguments and buffer zero termination.\n> >\n> > \tJan\n> >\n> > 2001-06-06 13:18 tgl\n> >\n> > \t* src/backend/access/transam/xlog.c (REL7_1_STABLE): Back-patch\n> > \tchange to not keep WAL segments just for UNDO information.\n> >\n> > 2001-05-31 17:49 momjian\n> >\n> > \t* doc/src/sgml/: release.sgml (REL7_1_STABLE), release.sgml: Forgot\n> > \tSGML section section id tag for 7.1.\n> >\n> > 2001-05-31 13:32 tgl\n> >\n> > \t* src/backend/utils/adt/: ri_triggers.c (REL7_1_STABLE),\n> > \tri_triggers.c: RI triggers would fail for datatypes using old-style\n> > \tequal function, because cached fmgr info contained reference to a\n> > \tshorter-lived data structure. 
Also guard against possibility that\n> > \tfmgr_info could fail, leaving an incomplete entry present in the\n> > \thash table.\n> >\n> > 2001-05-27 21:00 ishii\n> >\n> > \t* src/backend/utils/mb/: conv.c (REL7_1_STABLE), conv.c: Fix a\n> > \tmessage error in utf_to_local\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://www.postgresql.org/search.mpl\n> >\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Wed, 8 Aug 2001 14:07:50 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3? "
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> The list looks good to me as far as doing a v7.1.3 ... anyone object to\n> it?\n\nOkay, I'll wrap up the last couple of back-patches this evening...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Aug 2001 17:29:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3? "
},
{
"msg_contents": "On Wed, 8 Aug 2001, Marc G. Fournier wrote:\n\n>\n>\n> The list looks good to me as far as doing a v7.1.3 ... anyone object to\n> it?\n\nNot me, but I'm curious as to how far away 7.2 is?\n\nVince.\n\n>\n> On Tue, 7 Aug 2001, Oleg Bartunov wrote:\n>\n> > If we decide to release 7.1.3 I'd like to see our patch for\n> > contrib/intarray too.\n> >\n> > \tOleg\n> > On Tue, 7 Aug 2001, Tom Lane wrote:\n> >\n> > > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > > (moved to a non-high-speed neighboorhood *sigh*) ...\n> > >\n> > > Ugh :-(\n> > >\n> > > > Tom, can you resend that list of changes you sent to me earlier?\n> > >\n> > > Attached is the updated list. Note there are a couple of changes listed\n> > > that aren't actually in REL7_1_STABLE yet, but if we are going to make\n> > > a release it would be easy and profitable to back-patch them. I will\n> > > be happy to take care of that gruntwork if we decide on a release.\n> > >\n> > > \t\t\tregards, tom lane\n> > >\n> > >\n> > > 2001-08-03 16:14 tgl\n> > >\n> > > \t* src/bin/pg_dump/: pg_dump.c, pg_dump.h (REL7_1_STABLE):\n> > > \tBack-patch fixes for dumping user-defined types and dumping\n> > > \tcomments on views.\n> > >\n> > > 2001-07-31 14:39 tgl\n> > >\n> > > \t* src/: backend/optimizer/path/allpaths.c,\n> > > \tbackend/optimizer/util/clauses.c, backend/utils/adt/ruleutils.c,\n> > > \tinclude/optimizer/clauses.h (REL7_1_STABLE): Fix optimizer to\n> > > \tnot try to push WHERE\n> > > \tclauses down into a sub-SELECT that has a DISTINCT ON clause, per\n> > > \tbug report from Anthony Wood. While at it, improve the\n> > > \tDISTINCT-ON-clause recognizer routine to not be fooled by out-\n> > > \tof-order DISTINCT lists. 
Also, back-patch earlier fix to not push\n> > > \tdown into sub-SELECT with LIMIT.\n> > >\n> > > 2001-07-29 18:12 tgl\n> > >\n> > > \t* src/bin/pg_dump/: pg_dump.c (REL7_1_STABLE), pg_dump.c: Arrange\n> > > \tfor GRANT/REVOKE on a view to be dumped at the right time, namely\n> > > \tafter the view definition rather than before it. Bug introduced in\n> > > \t7.1 by changes to dump stuff in OID ordering.\n> > >\n> > > 2001-07-16 13:57 tgl\n> > >\n> > > \t* src/backend/optimizer/path/allpaths.c: Do not push down quals\n> > > \tinto subqueries that have LIMIT/OFFSET clauses, since the added\n> > > \tqual could change the set of rows that get past the LIMIT. Per\n> > > \tdiscussion on pgsql-sql 7/15/01.\n> > >\n> > > 2001-07-11 17:53 momjian\n> > >\n> > > \t* src/backend/commands/copy.c: Disable COPY TO/FROM on views.\n> > >\n> > > 2001-07-05 22:13 ishii\n> > >\n> > > \t* doc/src/sgml/backup.sgml (REL7_1_STABLE): Fix typo. createdb -t\n> > > \t--> createdb -T\n> > >\n> > > 2001-07-03 12:49 tgl\n> > >\n> > > \t* src/backend/utils/init/miscinit.c: Don't go into infinite loop if\n> > > \t/home/postgres/testversion/data directory is not writable.\n> > >\n> > > 2001-07-02 15:31 tgl\n> > >\n> > > \t* src/test/regress/expected/: abstime-solaris-1947.out,\n> > > \tabstime.out: Update abstime expected results to match\n> > > \tpost-30-June-2001 reality. Probably the right fix is to remove\n> > > \t'current' special value entirely, but I don't want to see\n> > > \tregression test failures until that happens.\n> > >\n> > > 2001-06-29 12:34 tgl\n> > >\n> > > \t* src/backend/commands/: vacuum.c (REL7_1_STABLE), vacuum.c: Fix\n> > > \tlongstanding error in VACUUM: sometimes would examine a buffer page\n> > > \tafter writing/unpinning it. An actual failure is unlikely, unless\n> > > \tthe system is tremendously short of buffers ... 
but a bug is a bug.\n> > >\n> > > 2001-06-12 21:02 tgl\n> > >\n> > > \t* src/pl/plpgsql/src/pl_exec.c (REL7_1_STABLE): Back-patch fix for\n> > > \tattempt to pfree a value that's not palloc'd (it's a field of a\n> > > \ttuple). I see Jan has already fixed this in current sources, but\n> > > \t7.1.* is pretty badly broken here.\n> > >\n> > > 2001-06-12 14:54 tgl\n> > >\n> > > \t* src/backend/rewrite/: rewriteHandler.c (REL7_1_STABLE),\n> > > \trewriteHandler.c: Repair problem with multi-action rules in\n> > > \tcombination with any nontrivial manipulation of rtable/jointree by\n> > > \tplanner. Rewriter was generating actions that shared\n> > > \trtable/jointree substructure, which caused havoc when planner got\n> > > \tto the later actions that it'd already mucked up.\n> > >\n> > > 2001-06-06 14:54 wieck\n> > >\n> > > \t* src/pl/plpgsql/src/gram.y: Patch from Ian Lance Taylor fixing\n> > > \tmultiple cursor arguments and buffer zero termination.\n> > >\n> > > \tJan\n> > >\n> > > 2001-06-06 13:18 tgl\n> > >\n> > > \t* src/backend/access/transam/xlog.c (REL7_1_STABLE): Back-patch\n> > > \tchange to not keep WAL segments just for UNDO information.\n> > >\n> > > 2001-05-31 17:49 momjian\n> > >\n> > > \t* doc/src/sgml/: release.sgml (REL7_1_STABLE), release.sgml: Forgot\n> > > \tSGML section section id tag for 7.1.\n> > >\n> > > 2001-05-31 13:32 tgl\n> > >\n> > > \t* src/backend/utils/adt/: ri_triggers.c (REL7_1_STABLE),\n> > > \tri_triggers.c: RI triggers would fail for datatypes using old-style\n> > > \tequal function, because cached fmgr info contained reference to a\n> > > \tshorter-lived data structure. 
Also guard against possibility that\n> > > \tfmgr_info could fail, leaving an incomplete entry present in the\n> > > \thash table.\n> > >\n> > > 2001-05-27 21:00 ishii\n> > >\n> > > \t* src/backend/utils/mb/: conv.c (REL7_1_STABLE), conv.c: Fix a\n> > > \tmessage error in utf_to_local\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 6: Have you searched our list archives?\n> > >\n> > > http://www.postgresql.org/search.mpl\n> > >\n> >\n> > \tRegards,\n> > \t\tOleg\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 8 Aug 2001 18:34:22 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3? "
},
{
"msg_contents": "On Mi� 08 Ago 2001 18:29, Tom Lane wrote:\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > The list looks good to me as far as doing a v7.1.3 ... anyone object to\n> > it?\n>\n> Okay, I'll wrap up the last couple of back-patches this evening...\n\nWhen could 7.1.3 be out? I'm very interested in the patch to the \nrewriteHandler.c file. I have just finished recompiling postgres wth the \npatched version of the rewriteHandler.c, but would like to have a stable \n7.1.3 version comiled on the main SQL server.\n\nSaludos... ;-)\n\n-- \nCualquiera administra un NT.\nEse es el problema, que cualquiera administre.\n-----------------------------------------------------------------\nMartin Marques | mmarques@unl.edu.ar\nProgramador, Administrador | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n",
"msg_date": "Wed, 8 Aug 2001 19:38:25 -0300",
"msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> If we decide to release 7.1.3 I'd like to see our patch for\n> contrib/intarray too.\n>> \n>> Which one?\n\n> Patch I've submitted last week. It's in current CVS\n> http://fts.postgresql.org/db/mw/msg.html?mid=1028099\n\nThat patch doesn't apply cleanly at all to REL7_1_STABLE. While I can\nfind the corresponding code, I'm not sure that making the changes is\na good idea; perhaps the patch depends on some of the previous (quite\nextensive) fixes to work correctly? I'm inclined to leave 7.1 alone.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Aug 2001 18:39:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3? "
},
{
"msg_contents": "I said:\n> Okay, I'll wrap up the last couple of back-patches this evening...\n\nDone. A couple of the patches that I had my eye on turned out not to be\nrelevant to 7.1 (they were fixes in new code). So attached is the\ncurrent list of 7.1.2 -> 7.1.3 changes. Bruce, are you going to handle\nthe documentation updates and release branding?\n\n\t\t\tregards, tom lane\n\n\n2001-08-08 18:32 tgl\n\n\t* src/backend/commands/copy.c (REL7_1_STABLE): Back-patch fix to\n\tdisallow COPY TO/FROM a view (or anything else that's not a plain\n\trelation).\n\n2001-08-08 18:25 tgl\n\n\t* src/backend/utils/init/miscinit.c (REL7_1_STABLE): Back-patch fix\n\tto prevent infinite loop when $PGDATA is not writable.\n\n2001-08-07 14:36 momjian\n\n\t* src/: backend/port/beos/support.c, backend/port/dynloader/beos.c,\n\tinclude/port/beos.h (REL7_1_STABLE): Commit BEOS patch to 7.1.X.\n\n2001-08-03 16:14 tgl\n\n\t* src/bin/pg_dump/: pg_dump.c, pg_dump.h (REL7_1_STABLE):\n\tBack-patch fixes for dumping user-defined types and dumping\n\tcomments on views.\n\n2001-07-31 14:39 tgl\n\n\t* src/: backend/optimizer/path/allpaths.c,\n\tbackend/optimizer/util/clauses.c, backend/utils/adt/ruleutils.c,\n\tinclude/optimizer/clauses.h (REL7_1_STABLE): Fix optimizer to not\n\ttry to push WHERE clauses down into a sub-SELECT that has a\n\tDISTINCT ON clause, per bug report from Anthony Wood. While at it,\n\timprove the DISTINCT-ON-clause recognizer routine to not be fooled\n\tby out- of-order DISTINCT lists. Also, back-patch earlier fix to\n\tnot push down into sub-SELECT with LIMIT.\n\n2001-07-29 18:12 tgl\n\n\t* src/bin/pg_dump/pg_dump.c (REL7_1_STABLE): Arrange for\n\tGRANT/REVOKE on a view to be dumped at the right time, namely after\n\tthe view definition rather than before it. Bug introduced in 7.1\n\tby changes to dump stuff in OID ordering.\n\n2001-07-05 22:13 ishii\n\n\t* doc/src/sgml/backup.sgml (REL7_1_STABLE): Fix typo. 
createdb -t\n\t--> createdb -T\n\n2001-07-02 15:34 tgl\n\n\t* src/test/regress/expected/: abstime-solaris-1947.out, abstime.out\n\t(REL7_1_STABLE): In any case, it seems the REL7_1 branch needs the\n\tupdate too...\n\n2001-06-29 12:34 tgl\n\n\t* src/backend/commands/vacuum.c (REL7_1_STABLE): Fix longstanding\n\terror in VACUUM: sometimes would examine a buffer page after\n\twriting/unpinning it. An actual failure is unlikely, unless the\n\tsystem is tremendously short of buffers ... but a bug is a bug.\n\n2001-06-12 21:02 tgl\n\n\t* src/pl/plpgsql/src/pl_exec.c (REL7_1_STABLE): Back-patch fix for\n\tattempt to pfree a value that's not palloc'd (it's a field of a\n\ttuple). I see Jan has already fixed this in current sources, but\n\t7.1.* is pretty badly broken here.\n\n2001-06-12 14:54 tgl\n\n\t* src/backend/rewrite/rewriteHandler.c (REL7_1_STABLE): Repair\n\tproblem with multi-action rules in combination with any nontrivial\n\tmanipulation of rtable/jointree by planner. Rewriter was\n\tgenerating actions that shared rtable/jointree substructure, which\n\tcaused havoc when planner got to the later actions that it'd\n\talready mucked up.\n\n2001-06-06 13:18 tgl\n\n\t* src/backend/access/transam/xlog.c (REL7_1_STABLE): Back-patch\n\tchange to not keep WAL segments just for UNDO information.\n\n2001-05-31 17:50 momjian\n\n\t* doc/src/sgml/release.sgml (REL7_1_STABLE): Forgot SGML section\n\tsection id tag for 7.1.\n\n2001-05-31 13:33 tgl\n\n\t* src/backend/utils/adt/ri_triggers.c (REL7_1_STABLE): RI triggers\n\twould fail for datatypes using old-style equal function, because\n\tcached fmgr info contained reference to a shorter-lived data\n\tstructure. Also guard against possibility that fmgr_info could\n\tfail, leaving an incomplete entry present in the hash table.\n\n2001-05-27 21:01 ishii\n\n\t* src/backend/utils/mb/conv.c (REL7_1_STABLE): Fix a message error\n\tin utf_to_local\n",
"msg_date": "Wed, 08 Aug 2001 19:35:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3? "
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> Not me, but I'm curious as to how far away 7.2 is?\n\nI'd like to go beta by the end of the month.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Aug 2001 22:20:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3? "
},
{
"msg_contents": "> I said:\n> > Okay, I'll wrap up the last couple of back-patches this evening...\n> \n> Done. A couple of the patches that I had my eye on turned out not to be\n> relevant to 7.1 (they were fixes in new code). So attached is the\n> current list of 7.1.2 -> 7.1.3 changes. Bruce, are you going to handle\n> the documentation updates and release branding?\n\nYes, just tell me when to start.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 8 Aug 2001 22:36:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> current list of 7.1.2 -> 7.1.3 changes. Bruce, are you going to handle\n>> the documentation updates and release branding?\n\n> Yes, just tell me when to start.\n\nI don't know of any reason to wait... anyone else?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Aug 2001 23:01:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> current list of 7.1.2 -> 7.1.3 changes. Bruce, are you going to handle\n> >> the documentation updates and release branding?\n> \n> > Yes, just tell me when to start.\n> \n> I don't know of any reason to wait... anyone else?\n> \n\nI have to fix my old fault in TID handling.\nI am able to have a cvs access now and would\ncommit the fix to 7.1 branch.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 09 Aug 2001 14:06:56 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "On Wed, 8 Aug 2001, Vince Vielhaber wrote:\n\n> On Wed, 8 Aug 2001, Marc G. Fournier wrote:\n>\n> >\n> >\n> > The list looks good to me as far as doing a v7.1.3 ... anyone object to\n> > it?\n>\n> Not me, but I'm curious as to how far away 7.2 is?\n\nOct-ish sometime ...\n\n>\n> Vince.\n>\n> >\n> > On Tue, 7 Aug 2001, Oleg Bartunov wrote:\n> >\n> > > If we decide to release 7.1.3 I'd like to see our patch for\n> > > contrib/intarray too.\n> > >\n> > > \tOleg\n> > > On Tue, 7 Aug 2001, Tom Lane wrote:\n> > >\n> > > > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > > > (moved to a non-high-speed neighboorhood *sigh*) ...\n> > > >\n> > > > Ugh :-(\n> > > >\n> > > > > Tom, can you resend that list of changes you sent to me earlier?\n> > > >\n> > > > Attached is the updated list. Note there are a couple of changes listed\n> > > > that aren't actually in REL7_1_STABLE yet, but if we are going to make\n> > > > a release it would be easy and profitable to back-patch them. I will\n> > > > be happy to take care of that gruntwork if we decide on a release.\n> > > >\n> > > > \t\t\tregards, tom lane\n> > > >\n> > > >\n> > > > 2001-08-03 16:14 tgl\n> > > >\n> > > > \t* src/bin/pg_dump/: pg_dump.c, pg_dump.h (REL7_1_STABLE):\n> > > > \tBack-patch fixes for dumping user-defined types and dumping\n> > > > \tcomments on views.\n> > > >\n> > > > 2001-07-31 14:39 tgl\n> > > >\n> > > > \t* src/: backend/optimizer/path/allpaths.c,\n> > > > \tbackend/optimizer/util/clauses.c, backend/utils/adt/ruleutils.c,\n> > > > \tinclude/optimizer/clauses.h (REL7_1_STABLE): Fix optimizer to\n> > > > \tnot try to push WHERE\n> > > > \tclauses down into a sub-SELECT that has a DISTINCT ON clause, per\n> > > > \tbug report from Anthony Wood. While at it, improve the\n> > > > \tDISTINCT-ON-clause recognizer routine to not be fooled by out-\n> > > > \tof-order DISTINCT lists. 
Also, back-patch earlier fix to not push\n> > > > \tdown into sub-SELECT with LIMIT.\n> > > >\n> > > > 2001-07-29 18:12 tgl\n> > > >\n> > > > \t* src/bin/pg_dump/: pg_dump.c (REL7_1_STABLE), pg_dump.c: Arrange\n> > > > \tfor GRANT/REVOKE on a view to be dumped at the right time, namely\n> > > > \tafter the view definition rather than before it. Bug introduced in\n> > > > \t7.1 by changes to dump stuff in OID ordering.\n> > > >\n> > > > 2001-07-16 13:57 tgl\n> > > >\n> > > > \t* src/backend/optimizer/path/allpaths.c: Do not push down quals\n> > > > \tinto subqueries that have LIMIT/OFFSET clauses, since the added\n> > > > \tqual could change the set of rows that get past the LIMIT. Per\n> > > > \tdiscussion on pgsql-sql 7/15/01.\n> > > >\n> > > > 2001-07-11 17:53 momjian\n> > > >\n> > > > \t* src/backend/commands/copy.c: Disable COPY TO/FROM on views.\n> > > >\n> > > > 2001-07-05 22:13 ishii\n> > > >\n> > > > \t* doc/src/sgml/backup.sgml (REL7_1_STABLE): Fix typo. createdb -t\n> > > > \t--> createdb -T\n> > > >\n> > > > 2001-07-03 12:49 tgl\n> > > >\n> > > > \t* src/backend/utils/init/miscinit.c: Don't go into infinite loop if\n> > > > \t/home/postgres/testversion/data directory is not writable.\n> > > >\n> > > > 2001-07-02 15:31 tgl\n> > > >\n> > > > \t* src/test/regress/expected/: abstime-solaris-1947.out,\n> > > > \tabstime.out: Update abstime expected results to match\n> > > > \tpost-30-June-2001 reality. Probably the right fix is to remove\n> > > > \t'current' special value entirely, but I don't want to see\n> > > > \tregression test failures until that happens.\n> > > >\n> > > > 2001-06-29 12:34 tgl\n> > > >\n> > > > \t* src/backend/commands/: vacuum.c (REL7_1_STABLE), vacuum.c: Fix\n> > > > \tlongstanding error in VACUUM: sometimes would examine a buffer page\n> > > > \tafter writing/unpinning it. An actual failure is unlikely, unless\n> > > > \tthe system is tremendously short of buffers ... 
but a bug is a bug.\n> > > >\n> > > > 2001-06-12 21:02 tgl\n> > > >\n> > > > \t* src/pl/plpgsql/src/pl_exec.c (REL7_1_STABLE): Back-patch fix for\n> > > > \tattempt to pfree a value that's not palloc'd (it's a field of a\n> > > > \ttuple). I see Jan has already fixed this in current sources, but\n> > > > \t7.1.* is pretty badly broken here.\n> > > >\n> > > > 2001-06-12 14:54 tgl\n> > > >\n> > > > \t* src/backend/rewrite/: rewriteHandler.c (REL7_1_STABLE),\n> > > > \trewriteHandler.c: Repair problem with multi-action rules in\n> > > > \tcombination with any nontrivial manipulation of rtable/jointree by\n> > > > \tplanner. Rewriter was generating actions that shared\n> > > > \trtable/jointree substructure, which caused havoc when planner got\n> > > > \tto the later actions that it'd already mucked up.\n> > > >\n> > > > 2001-06-06 14:54 wieck\n> > > >\n> > > > \t* src/pl/plpgsql/src/gram.y: Patch from Ian Lance Taylor fixing\n> > > > \tmultiple cursor arguments and buffer zero termination.\n> > > >\n> > > > \tJan\n> > > >\n> > > > 2001-06-06 13:18 tgl\n> > > >\n> > > > \t* src/backend/access/transam/xlog.c (REL7_1_STABLE): Back-patch\n> > > > \tchange to not keep WAL segments just for UNDO information.\n> > > >\n> > > > 2001-05-31 17:49 momjian\n> > > >\n> > > > \t* doc/src/sgml/: release.sgml (REL7_1_STABLE), release.sgml: Forgot\n> > > > \tSGML section section id tag for 7.1.\n> > > >\n> > > > 2001-05-31 13:32 tgl\n> > > >\n> > > > \t* src/backend/utils/adt/: ri_triggers.c (REL7_1_STABLE),\n> > > > \tri_triggers.c: RI triggers would fail for datatypes using old-style\n> > > > \tequal function, because cached fmgr info contained reference to a\n> > > > \tshorter-lived data structure. 
Also guard against possibility that\n> > > > \tfmgr_info could fail, leaving an incomplete entry present in the\n> > > > \thash table.\n> > > >\n> > > > 2001-05-27 21:00 ishii\n> > > >\n> > > > \t* src/backend/utils/mb/: conv.c (REL7_1_STABLE), conv.c: Fix a\n> > > > \tmessage error in utf_to_local\n> > > >\n> > > > ---------------------------(end of broadcast)---------------------------\n> > > > TIP 6: Have you searched our list archives?\n> > > >\n> > > > http://www.postgresql.org/search.mpl\n> > > >\n> > >\n> > > \tRegards,\n> > > \t\tOleg\n> > > _____________________________________________________________\n> > > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > > Sternberg Astronomical Institute, Moscow University (Russia)\n> > > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > > phone: +007(095)939-16-83, +007(095)939-23-83\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 4: Don't 'kill -9' the postmaster\n> > >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n>\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Thu, 9 Aug 2001 08:28:39 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3? "
},
{
"msg_contents": "\ngo for it, and I'll try and do up a v7.1.3 during the day tomorrow ...\n\nOn Wed, 8 Aug 2001, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> current list of 7.1.2 -> 7.1.3 changes. Bruce, are you going to handle\n> >> the documentation updates and release branding?\n>\n> > Yes, just tell me when to start.\n>\n> I don't know of any reason to wait... anyone else?\n>\n> \t\t\tregards, tom lane\n>\n\n",
"msg_date": "Thu, 9 Aug 2001 08:29:42 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3? "
},
{
"msg_contents": "\nI think we are on hold for Hiroshi, right?\n\n> \n> go for it, and I'll try and do up a v7.1.3 during the day tomorrow ...\n> \n> On Wed, 8 Aug 2001, Tom Lane wrote:\n> \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > >> current list of 7.1.2 -> 7.1.3 changes. Bruce, are you going to handle\n> > >> the documentation updates and release branding?\n> >\n> > > Yes, just tell me when to start.\n> >\n> > I don't know of any reason to wait... anyone else?\n> >\n> > \t\t\tregards, tom lane\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 9 Aug 2001 10:45:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I think we are on hold for Hiroshi, right?\n\nYes. I believe I know which patch he's referring to --- it's the only\nchange in src/backend/utils/adt/tid.c since 7.1. If he hasn't committed\nin a few hours I can take care of back-patching it. It must be pushing\nmidnight in Japan by now...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Aug 2001 10:51:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3? "
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I think we are on hold for Hiroshi, right?\n> \n> Yes. I believe I know which patch he's referring to --- it's the only\n> change in src/backend/utils/adt/tid.c since 7.1. If he hasn't committed\n> in a few hours I can take care of back-patching it. It must be pushing\n> midnight in Japan by now...\n\nThat's a couple of days ago now... anything happening?\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "12 Aug 2001 21:03:43 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "> Tom Lane wrote:\n> > \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > >> current list of 7.1.2 -> 7.1.3 changes. Bruce, are you going to handle\n> > >> the documentation updates and release branding?\n> > \n> > > Yes, just tell me when to start.\n> > \n> > I don't know of any reason to wait... anyone else?\n> > \n> \n> I have to fix my old fault in TID handling.\n> I am able to have a cvs access now and would\n> commit the fix to 7.1 branch.\n\nHiroshi, are you done with changes you want in 7.1.3?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Aug 2001 11:25:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> That's a couple of days ago now... anything happening?\n\nBruce is evidently waiting on Hiroshi's confirmation that he's done\napplying his back-patches. I believe he is, though; he did apply\nwhat I thought was the patch he had in mind.\n\nBruce, have you finished the documentation updates, or is that still\nopen? You could probably get that done while waiting for Hiroshi's\nanswer...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Aug 2001 16:02:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3? "
},
{
"msg_contents": "> teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> > That's a couple of days ago now... anything happening?\n> \n> Bruce is evidently waiting on Hiroshi's confirmation that he's done\n> applying his back-patches. I believe he is, though; he did apply\n> what I thought was the patch he had in mind.\n\nYes, but I wanted him to state that.\n\n> Bruce, have you finished the documentation updates, or is that still\n> open? You could probably get that done while waiting for Hiroshi's\n> answer...\n\nAll I need to do is go through CVS.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 13 Aug 2001 16:49:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Tom Lane wrote:\n> > >\n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > >> current list of 7.1.2 -> 7.1.3 changes. Bruce, are you going to handle\n> > > >> the documentation updates and release branding?\n> > >\n> > > > Yes, just tell me when to start.\n> > >\n> > > I don't know of any reason to wait... anyone else?\n> > >\n> >\n> > I have to fix my old fault in TID handling.\n> > I am able to have a cvs access now and would\n> > commit the fix to 7.1 branch.\n> \n> Hiroshi, are you done with changes you want in 7.1.3?\n> \n\nOops I missed your mail sorry. Unfortunately my mail server\nis down and I could access pgsql-hackers only by the news server.\n\nMy answer is Yes. \n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 14 Aug 2001 06:25:35 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "\nI will get on this today.\n\n> teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> > That's a couple of days ago now... anything happening?\n> \n> Bruce is evidently waiting on Hiroshi's confirmation that he's done\n> applying his back-patches. I believe he is, though; he did apply\n> what I thought was the patch he had in mind.\n> \n> Bruce, have you finished the documentation updates, or is that still\n> open? You could probably get that done while waiting for Hiroshi's\n> answer...\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 14 Aug 2001 11:46:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "\nlet me know once you are complete, and i'll wrap her up ...\n\nOn Tue, 14 Aug 2001, Bruce Momjian wrote:\n\n>\n> I will get on this today.\n>\n> > teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> > > That's a couple of days ago now... anything happening?\n> >\n> > Bruce is evidently waiting on Hiroshi's confirmation that he's done\n> > applying his back-patches. I believe he is, though; he did apply\n> > what I thought was the patch he had in mind.\n> >\n> > Bruce, have you finished the documentation updates, or is that still\n> > open? You could probably get that done while waiting for Hiroshi's\n> > answer...\n> >\n> > \t\t\tregards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n",
"msg_date": "Tue, 14 Aug 2001 11:55:04 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "On Tuesday 07 August 2001 14:56, Tom Lane wrote:\n\nOk. This is the second time I have seen this message -- but this one is \ndelayed by a week. Marc?\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 14 Aug 2001 14:40:41 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "> > > I have to fix my old fault in TID handling.\n> > > I am able to have a cvs access now and would\n> > > commit the fix to 7.1 branch.\n> > \n> > Hiroshi, are you done with changes you want in 7.1.3?\n> > \n> \n> Oops I missed your mail sorry. Unfortunately my mail server\n> is down and I could access pgsql-hackers only by the news server.\n> \n> My answer is Yes. \n\nOK, 7.1.3 is packaged and ready to go, date stamped August 15. Can\npeople with cvs 7.1 branches review it?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 14 Aug 2001 18:17:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "> OK, 7.1.3 is packaged and ready to go, date stamped Auguest 15. Can\n> people with cvs 7.1 branches review it?\n\nCompiling/regression tests seem fine on my box (Linux kernel\n2.2/egcs-2.91/glibc 2.1.3). Documents are also compiled fine.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 15 Aug 2001 18:35:00 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Re: To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "Is it possible to include patch for libpgtcl & tcl >8.0\nin this release?\n\nRegards,\nMikhail Terekhov\n\nBruce Momjian wrote:\n> \n> > > > I have to fix my old fault in TID handling.\n> > > > I am able to have a cvs access now and would\n> > > > commit the fix to 7.1 branch.\n> > >\n> > > Hiroshi, are you done with changes you want in 7.1.3?\n> > >\n> >\n> > Oops I missed your mail sorry. Unfortunately my mail server\n> > is down and I could access pgsql-hackers only by the news server.\n> >\n> > My answer is Yes.\n> \n> OK, 7.1.3 is packaged and ready to go, date stamped Auguest 15. Can\n> people with cvs 7.1 branches review it?\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n",
"msg_date": "Wed, 15 Aug 2001 09:10:00 -0400",
"msg_from": "Mikhail Terekhov <terekhov@emc.com>",
"msg_from_op": false,
"msg_subject": "Re: To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "None of my Solaris boxes have direct internet access, but if someone is\nwilling to make a snapshot tar.gz/tar.bz2 file, I can download and test\non Solaris 8 SPARC/INTEL.\n\n:)\n\nRegards and best wishes,\n\nJustin Clift\n\nTatsuo Ishii wrote:\n> \n> > OK, 7.1.3 is packaged and ready to go, date stamped Auguest 15. Can\n> > people with cvs 7.1 branches review it?\n> \n> Compling/regression tests seems fine on my box (Linux kernel\n> 2.2/egcs-2.91/glic 2.1.3). Documents are also compiled fine.\n> --\n> Tatsuo Ishii\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Wed, 15 Aug 2001 23:14:30 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: Re: To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "By the way,\n\nAre we going to do the \"official\" testing-on-all-supported-platforms\nbefore making this (hopefully final 7.1.x) release? It'd be embarrasing\nif it failed on something it's supposed to work on...\n\nIf so, do we make use of Vince's Regression Test form?\nhttp://www.ca.postgresql.org/~vev/regress/\n\nRegards and best wishes,\n\nJustin Clift\n\n\nMikhail Terekhov wrote:\n> \n> Is it possible to include patch for libpgtcl & tcl >8.0\n> in this release?\n> \n> Regards,\n> Mikhail Terekhov\n> \n> Bruce Momjian wrote:\n> >\n> > > > > I have to fix my old fault in TID handling.\n> > > > > I am able to have a cvs access now and would\n> > > > > commit the fix to 7.1 branch.\n> > > >\n> > > > Hiroshi, are you done with changes you want in 7.1.3?\n> > > >\n> > >\n> > > Oops I missed your mail sorry. Unfortunately my mail server\n> > > is down and I could access pgsql-hackers only by the news server.\n> > >\n> > > My answer is Yes.\n> >\n> > OK, 7.1.3 is packaged and ready to go, date stamped Auguest 15. Can\n> > people with cvs 7.1 branches review it?\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Wed, 15 Aug 2001 23:55:47 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: Re: To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "> Is it possible to include patch for libpgtcl & tcl >8.0\n> in this release?\n\nMuch too risky. We don't know about compatibility with earlier tcl\nversions.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 15 Aug 2001 11:42:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "> By the way,\n> \n> Are we going to do the \"official\" testing-on-all-supported-platforms\n> before making this (hopefully final 7.1.x) release? It'd be embarrasing\n> if it failed on something it's supposed to work on...\n> \n> If so, do we make use of Vince's Regression Test form?\n> http://www.ca.postgresql.org/~vev/regress/\n\nNot usually. We didn't change that much.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 15 Aug 2001 11:43:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "So... will current 7.1.1 databases upgrade without problems to 7.1.3?\n\n",
"msg_date": "Wed, 15 Aug 2001 14:50:41 -0400",
"msg_from": "Dwayne Miller <dmiller@espgroup.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: To be 7.1.3 or not to be 7.1.3?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Are we going to do the \"official\" testing-on-all-supported-platforms\n\n> Not usually. We didn't change that much.\n\nMore to the point, I don't believe there were any changes that had any\nsignificance for portability...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Aug 2001 17:23:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: To be 7.1.3 or not to be 7.1.3? "
},
{
"msg_contents": "The current CVS tree does not compile ODBC. All sorts of failure due to\nconst and undefined variables.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 Sep 2001 12:10:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "CVS ODBC does not compile"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> The current CVS tree does not compile ODBC. All sorts of failure due to\n> const and undefined variables.\n\nI just got a ton of errors in odbc too, trying to build it with HP's cc.\nI have not tried to build ODBC at all lately, so I'm not sure\nhow new the problem is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 07 Sep 2001 13:06:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CVS ODBC does not compile "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The current CVS tree does not compile ODBC. All sorts of failure due to\n> > const and undefined variables.\n> \n> I just got a ton of errors in odbc too, trying to build it with HP's cc.\n> I have not tried to build ODBC at all lately, so I'm not sure\n> how new the problem is.\n\nDon't bother. Some are const prototype, non-const definition, but\nothers are undefined variable and possible variable used but not\ninitialized. I think we have to wait for Hiroshi.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 Sep 2001 13:08:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CVS ODBC does not compile"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian\n>\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > The current CVS tree does not compile ODBC. All sorts of\n> failure due to\n> > > const and undefined variables.\n> >\n> > I just got a ton of errors in odbc too, trying to build it with HP's cc.\n> > I have not tried to build ODBC at all lately, so I'm not sure\n> > how new the problem is.\n>\n> Don't bother. Some are const prototype, non-const definition, but\n> others are undefined variable and possible variable used but not\n> initialized. I think we have to wait for Hiroshi.\n\nOK I removed the errors on cygwin port and will commit\nthe fix soon. However I couldn't check it on linux box now\nunfortunately. I'm very happy if you could check it on your\nenvironment.\n\nregards,\nHiroshi Inoue\n\n",
"msg_date": "Sat, 8 Sep 2001 11:26:30 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: CVS ODBC does not compile"
},
{
"msg_contents": "> > Don't bother. Some are const prototype, non-const definition, but\n> > others are undefined variable and possible variable used but not\n> > initialized. I think we have to wait for Hiroshi.\n> \n> OK I removed the errors on cygwin port and will commit\n> the fix soon. However I couldn't check it on linux box now\n> unfortunately. I'm very happy if you could check it on your\n> environment.\n\nOK, I will wait for your commit.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 Sep 2001 22:40:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CVS ODBC does not compile"
},
{
"msg_contents": "> > -----Original Message-----\n> > From: Bruce Momjian\n> >\n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > The current CVS tree does not compile ODBC. All sorts of\n> > failure due to\n> > > > const and undefined variables.\n> > >\n> > > I just got a ton of errors in odbc too, trying to build it with HP's cc.\n> > > I have not tried to build ODBC at all lately, so I'm not sure\n> > > how new the problem is.\n> >\n> > Don't bother. Some are const prototype, non-const definition, but\n> > others are undefined variable and possible variable used but not\n> > initialized. I think we have to wait for Hiroshi.\n> \n> OK I removed the errors on cygwin port and will commit\n> the fix soon. However I couldn't check it on linux box now\n> unfortunately. I'm very happy if you could check it on your\n> environment.\n\nLooks great. Compiles cleanly. I moved updateCommons() into the Win32\nblock so I don't get a \"function not used\" warning.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 Sep 2001 22:52:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CVS ODBC does not compile"
}
] |
[
{
"msg_contents": "Oracle PL/SQL supports a very convenient feature in which you can say\nsomething like\n DECLARE\n CURSOR cur IS SELECT * FROM RECORD;\n BEGIN\n OPEN cur;\n UPDATE record SET field = value WHERE CURRENT OF cur;\n CLOSE cur;\n END\n\nWe have cursors in the development version of PL/pgSQL, but they don't\nsupport CURRENT OF. In the patch I wrote a few months back to add\ncursor support to PL/pgSQL, which was not adopted, I included support\nfor CURRENT OF. I did it by using OIDs. Within PL/pgSQL, I modified\nthe cursor select statement to also select the OID. Then I changed\nWHERE CURRENT OF cur to oid = oidvalue. Of course this only works in\nlimited situations, and in particular doesn't work after OID\nwraparound.\n\nAnyhow, I see that there is a move afoot to eliminate mandatory OIDs.\nMy question now is: if there is no OID, is there any comparable way to\nimplement CURRENT OF cursor? Basically what is needed is some way to\nidentify a particular row between a SELECT and an UPDATE.\n\nIan\n",
"msg_date": "07 Aug 2001 08:59:13 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": true,
"msg_subject": "CURRENT OF cursor without OIDs"
},
{
"msg_contents": "Ian Lance Taylor <ian@airs.com> writes:\n> Anyhow, I see that there is a move afoot to eliminate mandatory OIDs.\n> My question now is: if there is no OID, is there any comparable way to\n> implement CURRENT OF cursor? Basically what is needed is some way to\n> identify a particular row between a SELECT and an UPDATE.\n\nI'd look at using TID. Seems like that is more efficient anyway (no\nindex needed). Hiroshi has opined that TID is not sufficient for ODBC\ncursors, but it seems to me that it is sufficient for SQL cursors.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Aug 2001 14:01:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CURRENT OF cursor without OIDs "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Ian Lance Taylor <ian@airs.com> writes:\n> > Anyhow, I see that there is a move afoot to eliminate mandatory OIDs.\n> > My question now is: if there is no OID, is there any comparable way to\n> > implement CURRENT OF cursor? Basically what is needed is some way to\n> > identify a particular row between a SELECT and an UPDATE.\n> \n> I'd look at using TID. Seems like that is more efficient anyway (no\n> index needed). Hiroshi has opined that TID is not sufficient for ODBC\n> cursors, but it seems to me that it is sufficient for SQL cursors.\n> \n\nYes TID is available and I introduced Tid Scan in order\nto support this kind of implementation. However there\nare some notices.\n1) Is *FOR UPDATE* cursor allowed in PL/pgSQL ?\n (It doesn't seem easy for me).\n2) If no, there could be UPDATE operations for the\n current tuple from other backends between a\n SELECT and an UPDATE and the TID may be changed.\n In that case, you couldn't find the tuple using\n saved TID but you could use the functions to\n follow the UPDATE link which I provided when\n I introduced Tid Scan.\n There could be DELETE operations for the tuple\n from other backends also and the TID may disappear.\n Because FULL VACUUM couldn't run while the cursor\n is open, it could neither move nor remove the tuple\n but I'm not sure if the new VACUUM could remove\n the deleted tuple and other backends could re-use\n the space under such a situation. If it's possible,\n there must be other information like OID to\n identify tuples.\n\nAnyway optional OIDs aren't preferable IMHO.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 08 Aug 2001 09:12:15 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: CURRENT OF cursor without OIDs"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n\n> > Ian Lance Taylor <ian@airs.com> writes:\n> > > Anyhow, I see that there is a move afoot to eliminate mandatory OIDs.\n> > > My question now is: if there is no OID, is there any comparable way to\n> > > implement CURRENT OF cursor? Basically what is needed is some way to\n> > > identify a particular row between a SELECT and an UPDATE.\n> > \n> > I'd look at using TID. Seems like that is more efficient anyway (no\n> > index needed). Hiroshi has opined that TID is not sufficient for ODBC\n> > cursors, but it seems to me that it is sufficient for SQL cursors.\n> > \n> \n> Yes TID is available and I introduced Tid Scan in order\n> to support this kind of implementation. However there\n> are some notices.\n> 1) Is *FOR UPDATE* cursor allowed in PL/pgSQL ?\n> (It doesn't seem easy for me).\n\nNo, it is not supported right now.\n\nConceptually, however, PL/pgSQL could pull out the FOR UPDATE clause\nand turn it into an explicit LOCK statement. The TID hack will only\nwork for a cursor which selects from a single table, so this is the\nonly case for which turning FOR UPDATE into LOCK has to work.\n\nAdmittedly, this is not the same as SELECT FOR UPDATE, because I think\nPL/pgSQL would have to lock the table in ROW EXCLUSIVE mode. But I\nthink it would work, albeit not with maximal efficiency.\n\nIan\n",
"msg_date": "07 Aug 2001 17:46:15 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": true,
"msg_subject": "Re: CURRENT OF cursor without OIDs"
},
{
"msg_contents": "Ian Lance Taylor wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> \n> > > Ian Lance Taylor <ian@airs.com> writes:\n> > > > Anyhow, I see that there is a move afoot to eliminate mandatory OIDs.\n> > > > My question now is: if there is no OID, is there any comparable way to\n> > > > implement CURRENT OF cursor? Basically what is needed is some way to\n> > > > identify a particular row between a SELECT and an UPDATE.\n> > >\n> > > I'd look at using TID. Seems like that is more efficient anyway (no\n> > > index needed). Hiroshi has opined that TID is not sufficient for ODBC\n> > > cursors, but it seems to me that it is sufficient for SQL cursors.\n> > >\n> >\n> > Yes TID is available and I introduced Tid Scan in order\n> > to support this kind of implementation. However there\n> > are some notices.\n> > 1) Is *FOR UPDATE* cursor allowed in PL/pgSQL ?\n> > (It doesn't seem easy for me).\n> \n> No, it is not supported right now.\n> \n> Conceptually, however, PL/pgSQL could pull out the FOR UPDATE clause\n> and turn it into an explicit LOCK statement.\n\nIt's impossible to realize *FOR UPDATE* using LOCK statement.\nEach row must be locked individually to prevent UPDATE/DELETE\noperations for the row. You could acquire an EXCLUSIVE\nLOCK on the table but it doesn't seem preferable.\n\nI'm planning to implement updatable cursors with no lock\nusing TID and OID. TID is for the fast access and OID is\nto verify the identity. OID doesn't provide a specific\naccess method in the first place and the access would be\nveeery slow for large tables unless there's an index on OID.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 08 Aug 2001 10:04:10 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: CURRENT OF cursor without OIDs"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n\n> > > > Ian Lance Taylor <ian@airs.com> writes:\n> > > > > Anyhow, I see that there is a move afoot to eliminate mandatory OIDs.\n> > > > > My question now is: if there is no OID, is there any comparable way to\n> > > > > implement CURRENT OF cursor? Basically what is needed is some way to\n> > > > > identify a particular row between a SELECT and an UPDATE.\n> > > >\n> > > > I'd look at using TID. Seems like that is more efficient anyway (no\n> > > > index needed). Hiroshi has opined that TID is not sufficient for ODBC\n> > > > cursors, but it seems to me that it is sufficient for SQL cursors.\n> > > >\n> > >\n> > > Yes TID is available and I introduced Tid Scan in order\n> > > to support this kind of implementation. However there\n> > > are some notices.\n> > > 1) Is *FOR UPDATE* cursor allowed in PL/pgSQL ?\n> > > (It doesn't seem easy for me).\n> > \n> > No, it is not supported right now.\n> > \n> > Conceptually, however, PL/pgSQL could pull out the FOR UPDATE clause\n> > and turn it into an explicit LOCK statement.\n> \n> It's impossible to realize *FOR UPDATE* using LOCK statement.\n> Each row must be locked individually to prevent UPDATE/DELETE\n> operations for the row. You could acquire an EXCLUSIVE\n> LOCK on the table but it doesn't seem preferable.\n\nIt's definitely not preferable, but how else can it be done?\n\n> I'm planning to implement updatable cursors with no lock\n> using TID and OID. TID is for the fast access and OID is\n> to verify the identity. OID doesn't provide a specific\n> access method in the first place and the access would be\n> veeery slow for large tables unless there's an index on OID.\n\nI apologize if I've missed something, but how will that work when OIDs\nbecome optional?\n\nIan\n",
"msg_date": "07 Aug 2001 18:05:02 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": true,
"msg_subject": "Re: CURRENT OF cursor without OIDs"
},
{
"msg_contents": "Ian Lance Taylor wrote:\n> \n\n[snip]\n\n> > > >\n> > > > Yes TID is available and I introduced Tid Scan in order\n> > > > to support this kind of implementation. However there\n> > > > are some notices.\n> > > > 1) Is *FOR UPDATE* cursor allowed in PL/pgSQL ?\n> > > > (It doesn't seem easy for me).\n> > >\n> > > No, it is not supported right now.\n> > >\n> > > Conceptually, however, PL/pgSQL could pull out the FOR UPDATE clause\n> > > and turn it into an explicit LOCK statement.\n> >\n> > It's impossible to realize *FOR UPDATE* using LOCK statement.\n> > Each row must be locked individually to prevent UPDATE/DELETE\n> > operations for the row. You could acquire an EXCLUSIVE\n> > LOCK on the table but it doesn't seem preferable.\n> \n> It's definitely not preferable, but how else can it be done?\n> \n> > I'm planning to implement updatable cursors with no lock\n> > using TID and OID. TID is for the fast access and OID is\n> > to verify the identity. OID doesn't provide a specific\n> > access method in the first place and the access would be\n> > veeery slow for large tables unless there's an index on OID.\n> \n> I apologize if I've missed something, but how will that work when OIDs\n> become optional?\n> \n\nSo I've objected optional OIDs.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 08 Aug 2001 10:11:24 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: CURRENT OF cursor without OIDs"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> 2) If no, there could be UPDATE operations for the\n> current tuple from other backends between a\n> SELECT and an UPDATE and the TID may be changed.\n> In that case, you couldn't find the tuple using\n> saved TID but you could use the functions to\n> follow the UPDATE link which I provided when I\n> I introduced Tis Scan.\n\nYes, you could either declare an error (if serializable mode) or follow\nthe TID links to find the latest version of the tuple, and update that\n(if read-committed mode). This is no different from the situation for\nany other UPDATE, AFAICS.\n\n> There could be DELETE operations for the tuple\n> from other backends also and the TID may disappear.\n> Because FULL VACUUM couldn't run while the cursor\n> is open, it could neither move nor remove the tuple\n> but I'm not sure if the new VACUUM could remove\n> the deleted tuple and other backends could re-use\n> the space under such a situation.\n\nOf course not. Concurrent VACUUM has to follow the same rules as\nold-style VACUUM: it must never remove or move any tuple that is still\nvisible to any open transaction. (Actually, it never moves tuples at\nall, but the point is that it cannot remove any tuple that the open\ncursor could have seen.) So, the fact that SQL cursors don't survive\nacross transactions is enough to guarantee that a TID returned by a\ncursor is good as long as the cursor is open.\n\nThe reason you have a harder time with ODBC cursors is that you aren't\nrestricting them to be good only within a transaction (or at least\nthat's how I interpreted what you said earlier).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Aug 2001 12:29:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CURRENT OF cursor without OIDs "
},
{
"msg_contents": "\n\nTom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n>\n> > There could be DELETE operations for the tuple\n> > from other backends also and the TID may disappear.\n> > Because FULL VACUUM couldn't run while the cursor\n> > is open, it could neither move nor remove the tuple\n> > but I'm not sure if the new VACUUM could remove\n> > the deleted tuple and other backends could re-use\n> > the space under such a situation.\n> \n> Of course not. Concurrent VACUUM has to follow the same rules as\n> old-style VACUUM: it must never remove or move any tuple that is still\n> visible to any open transaction. (Actually, it never moves tuples at\n> all, but the point is that it cannot remove any tuple that the open\n> cursor could have seen.) So, the fact that SQL cursors don't survive\n> across transactions is enough to guarantee that a TID returned by a\n> cursor is good as long as the cursor is open.\n> \n> The reason you have a harder time with ODBC cursors is that you aren't\n> restricting them to be good only within a transaction (or at least\n> that's how I interpreted what you said earlier).\n> \n\nYes mainly but I want the verification by OID even in\n*inside a transaction* cases. For example,\n\n1) A backend tx1 fetch a row using cursor.\n2) Very old backend tx_old deletes the row and commits.\n3) The new VACUUM starts to run and find the row to be\n completely dead.\n\nThe page is pinned by tx1, so the new VACUUM refuses\nto change the page ? Or there could be another story.\n\n2)' Very old backend tx_old updated the row and deletes\n the updated row and commits.\n3)' The new VACUUM starts to run and find the updated\n row to be completely dead but the page may not be\n pinned.\n\nBoth seems to be detected by FULL VACUUM as \n'NOTICE: Child itemid in update-chain marked as unused - can't\ncontinue repair_frag' though it may be too late.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 09 Aug 2001 09:20:55 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: CURRENT OF cursor without OIDs"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Yes mainly but I want the verification by OID even in\n> *inside a transaction* cases. For example,\n\n> 1) A backend tx1 fetch a row using cursor.\n> 2) Very old backend tx_old deletes the row and commits.\n> 3) The new VACUUM starts to run and find the row to be\n> completely dead.\n\nThis cannot happen. If VACUUM thought that, VACUUM would be completely\nbroken. Although the row is committed dead, it is still visible to the\ntransaction using the cursor, so it must not be deleted. This is true\n*whether or not the row has been fetched yet*, or ever will be fetched,\nby the cursor.\n\nIf cursors had this risk then ordinary UPDATE would be equally broken.\nWhat is a cursor except an externally-accessible scan-in-progress?\nThere is no difference.\n\n> The page is pinned by tx1, so the new VACUUM refuses\n> to change the page ? I there could be another story.\n\nThe pin stuff doesn't have anything to do with whether TIDs remain\nvalid. A pin guarantees that a *physical pointer* into a shared buffer\nwill remain valid --- it protects against VACUUM reshuffling the page\ndata to compact free space after it's deleted completely-dead tuples.\nBut reshuffling doesn't invalidate non-dead TIDs. A TID remains valid\nuntil there are no open transactions that could possibly consider the\ntuple visible.\n\n> Both seems to be detected by FULL VACUUM as \n> 'NOTICE: Child itemid in update-chain marked as unused - can't\n> continue repair_frag' though it may be too late.\n\nAFAICS, that code cannot be executed unless someone has violated the\nupdate protocol (or the on-disk tuple status bits have gotten trashed\nsomehow). We are never supposed to update a tuple that has been\ninserted or deleted by another, not-yet-committed transaction.\nTherefore the child tuple should have been inserted by a\nlater-committing transaction. 
There is no way that VACUUM can see the\nchild tuple as dead and the parent tuple as not dead.\n\nOr have I missed something?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Aug 2001 21:22:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CURRENT OF cursor without OIDs "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Yes mainly but I want the verification by OID even in\n> > *inside a transaction* cases. For example,\n> \n> > 1) A backend tx1 fetch a row using cursor.\n> > 2) Very old backend tx_old deletes the row and commits.\n> > 3) The new VACUUM starts to run and find the row to be\n> > completely dead.\n> \n> This cannot happen. If VACUUM thought that, VACUUM would be completely\n> broken. Although the row is committed dead, it is still visible to the\n> transaction using the cursor, so it must not be deleted.\n\nYes it should be but it could happen.\nGetXmaxRecent() ignores the backend tx_old because it had been\ncommitted when VACUUM started and may return the xid > the\nvery old xid of tx_old. As far as I see, the current VACUUM\nconsiders the row completely dead.\n\n> This is true\n> *whether or not the row has been fetched yet*, or ever will be fetched,\n> by the cursor.\n> \n\nI must apologize for leaving the bug unsolved.\nUnfortunately VACUUM and MVCC are ill-suited.\nFor example, complicated update chain handling wasn't\nneeded before MVCC. \n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 09 Aug 2001 11:04:02 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: CURRENT OF cursor without OIDs"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> GetXmaxRecent() ignores the backend tx_old because it had been\n> committed when VACUUM started and may return the xid > the\n> very old xid of tx_old.\n\nAbsolutely not; things would never work if that were true.\nGetXmaxRecent() returns the oldest TID that was running *when any\ncurrent transaction started*, not just VACUUM's transaction. Thus,\nno transaction that could be considered live by the cursor-holding\ntransaction will be considered dead by VACUUM.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Aug 2001 22:09:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CURRENT OF cursor without OIDs "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > GetXmaxRecent() ignores the backend tx_old because it had been\n> > committed when VACUUM started and may return the xid > the\n> > very old xid of tx_old.\n> \n> Absolutely not; things would never work if that were true.\n> GetXmaxRecent() returns the oldest TID that was running *when any\n> current transaction started*, not just VACUUM's transaction. Thus,\n> no transaction that could be considered live by the cursor-holding\n> transaction will be considered dead by VACUUM.\n> \n\nOops I've misunderstood GetXmaxRecent() until now.\nNow I'm checking the current source.\nHmm is there any place setting proc->xmin other than\nthe following ?\n\n[in storage/ipc/sinval.c]\n if (serializable)\n MyProc->xmin = snapshot->xmin;\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 09 Aug 2001 11:36:13 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: CURRENT OF cursor without OIDs"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Hmm is there any place setting proc->xmin other than\n> the following ?\n\n> [in storage/ipc/sinval.c]\n> if (serializable)\n> MyProc->xmin = snapshot->xmin;\n\nAFAICT that's the only place that sets it. It's cleared to zero during\ntransaction commit or abort in xact.c.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Aug 2001 22:59:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CURRENT OF cursor without OIDs "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Hmm is there any place setting proc->xmin other than\n> > the following ?\n> \n> > [in storage/ipc/sinval.c]\n> > if (serializable)\n> > MyProc->xmin = snapshot->xmin;\n> \n> AFAICT that's the only place that sets it. It's cleared to zero during\n> transaction commit or abort in xact.c.\n> \n\nYou are right.\nNow I understand I've completely misunderstood\n 'NOTICE: Child itemid in update-chain marked as unused - can't\n continue repair_frag'.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 09 Aug 2001 13:15:33 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: CURRENT OF cursor without OIDs"
}
] |