[ { "msg_contents": "Hi,\n\nI cannot decide if this is a serious bug or not --- some queries from\ncomplex views may give strange results. The next few days I will try to\nfind the point where the problem is but now I can only include the data\nstructure and the SELECT statements which don't give the correct result. A\nlot of rows (contained by the database) should be downloaded from\n\nhttp://www.math.u-szeged.hu/~kovzol/rows.pgsql.gz (25K, uncompressed 305K)\n\nif you want to check this error.\n\nHere are the definitions (rels-views.pgsql) and a RUNME.pgsql file (which\nmust be loaded with \\i in psql), it contains the SELECTs.\n\nI tried it with 7.1beta4 and 7.1.\n\nThere ARE workarounds. I am using SQL functions instead of subSELECTs now.\n\nRegards,\nZoltan", "msg_date": "Mon, 7 May 2001 18:37:18 +0200 (CEST)", "msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>", "msg_from_op": true, "msg_subject": "incorrect query result using complex structures (views?)" }, { "msg_contents": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> I cannot decide if this is a serious bug or not --- some queries from\n> complex views may give strange results. The next few days I will try to\n> find the point where the problem is but now I can only include the data\n> structure and the SELECT statements which don't give the correct result.\n\nSo ... um ... what do you consider incorrect about the results?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 May 2001 10:58:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: incorrect query result using complex structures (views?) " }, { "msg_contents": "On Tue, 8 May 2001, Tom Lane wrote:\n\n> Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> > I cannot decide if this is a serious bug or not --- some queries from\n> > complex views may give strange results. 
The next few days I will try to\n> > find the point where the problem is but now I can only include the data\n> > structure and the SELECT statements which don't give the correct result.\n> \n> So ... um ... what do you consider incorrect about the results?\n> \n> \t\t\tregards, tom lane\n\nThe SELECTs give something like this:\n\ntir=> select az, (select cikk from szallitolevel_tetele_ervenyes where\ncikk = c.az) from cikk c limit 20;\n az|?column?\n------+--------\n100191| \n100202| \n100203| \n100006| \n100016| \n100027| \n100028| \n100039| \n100080| \n100099| \n100100| \n100102| \n100105| \n100106| \n100107| \n100108| \n100109| \n100110| \n100111| \n100112| \n(20 rows)\n\nBut cikk.az and szallitolevel_tetele_ervenyes.cikk should be the same, so\nthe correct output for this query would be like this:\n\n\ntir=> select c.az, cikk from cikk c, szallitolevel_tetele_ervenyes s where\nc.az=s.cikk limit 20;\n az| cikk\n------+------\n100743|100743\n100742|100742\n101080|101080\n101075|101075\n101084|101084\n100124|100124\n100467|100467\n101080|101080\n101163|101163\n100517|100517\n101080|101080\n101163|101163\n100719|100719\n100406|100406\n101080|101080\n100286|100286\n100367|100367\n100406|100406\n101080|101080\n100546|100546\n(20 rows)\n\n\nThanks in advance. Zoltan\n\n", "msg_date": "Tue, 8 May 2001 19:30:19 +0200 (CEST)", "msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>", "msg_from_op": true, "msg_subject": "Re: incorrect query result using complex structures (views?)" }, { "msg_contents": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> Thanks in advance. Zoltan\n\nYou're welcome ;-)\n\n\t\t\tregards, tom lane\n\n\n*** src/backend/executor/nodeAppend.c.orig\tThu Mar 22 01:16:12 2001\n--- src/backend/executor/nodeAppend.c\tTue May 8 15:48:02 2001\n***************\n*** 8,14 ****\n *\n *\n * IDENTIFICATION\n! 
*\t $Header: /home/projects/pgsql/cvsroot/pgsql/src/backend/executor/nodeAppend.c,v 1.40 2001/03/22 06:16:12 momjian Exp $\n *\n *-------------------------------------------------------------------------\n */\n--- 8,14 ----\n *\n *\n * IDENTIFICATION\n! *\t $Header: /home/projects/pgsql/cvsroot/pgsql/src/backend/executor/nodeAppend.c,v 1.40.2.1 2001/05/08 19:48:02 tgl Exp $\n *\n *-------------------------------------------------------------------------\n */\n***************\n*** 362,375 ****\n \n \tfor (i = 0; i < nplans; i++)\n \t{\n! \t\tPlan\t *rescanNode;\n \n! \t\tappendstate->as_whichplan = i;\n! \t\trescanNode = (Plan *) nth(i, node->appendplans);\n! \t\tif (rescanNode->chgParam == NULL)\n \t\t{\n \t\t\texec_append_initialize_next(node);\n! \t\t\tExecReScan((Plan *) rescanNode, exprCtxt, (Plan *) node);\n \t\t}\n \t}\n \tappendstate->as_whichplan = 0;\n--- 362,386 ----\n \n \tfor (i = 0; i < nplans; i++)\n \t{\n! \t\tPlan\t *subnode;\n \n! \t\tsubnode = (Plan *) nth(i, node->appendplans);\n! \t\t/*\n! \t\t * ExecReScan doesn't know about my subplans, so I have to do\n! \t\t * changed-parameter signaling myself.\n! \t\t */\n! \t\tif (node->plan.chgParam != NULL)\n! \t\t\tSetChangedParamList(subnode, node->plan.chgParam);\n! \t\t/*\n! \t\t * if chgParam of subnode is not null then plan will be re-scanned by\n! \t\t * first ExecProcNode.\n! \t\t */\n! \t\tif (subnode->chgParam == NULL)\n \t\t{\n+ \t\t\t/* make sure estate is correct for this subnode (needed??) */\n+ \t\t\tappendstate->as_whichplan = i;\n \t\t\texec_append_initialize_next(node);\n! \t\t\tExecReScan(subnode, exprCtxt, (Plan *) node);\n \t\t}\n \t}\n \tappendstate->as_whichplan = 0;\n*** src/backend/executor/nodeSubqueryscan.c.orig\tThu Mar 22 01:16:13 2001\n--- src/backend/executor/nodeSubqueryscan.c\tTue May 8 15:48:02 2001\n***************\n*** 12,18 ****\n *\n *\n * IDENTIFICATION\n! 
*\t $Header: /home/projects/pgsql/cvsroot/pgsql/src/backend/executor/nodeSubqueryscan.c,v 1.6 2001/03/22 06:16:13 momjian Exp $\n *\n *-------------------------------------------------------------------------\n */\n--- 12,18 ----\n *\n *\n * IDENTIFICATION\n! *\t $Header: /home/projects/pgsql/cvsroot/pgsql/src/backend/executor/nodeSubqueryscan.c,v 1.6.2.1 2001/05/08 19:48:02 tgl Exp $\n *\n *-------------------------------------------------------------------------\n */\n***************\n*** 267,273 ****\n \t\treturn;\n \t}\n \n! \tExecReScan(node->subplan, NULL, node->subplan);\n \n \tsubquerystate->csstate.css_ScanTupleSlot = NULL;\n }\n--- 267,284 ----\n \t\treturn;\n \t}\n \n! \t/*\n! \t * ExecReScan doesn't know about my subplan, so I have to do\n! \t * changed-parameter signaling myself.\n! \t */\n! \tif (node->scan.plan.chgParam != NULL)\n! \t\tSetChangedParamList(node->subplan, node->scan.plan.chgParam);\n! \t/*\n! \t * if chgParam of subnode is not null then plan will be re-scanned by\n! \t * first ExecProcNode.\n! \t */\n! \tif (node->subplan->chgParam == NULL)\n! \t\tExecReScan(node->subplan, NULL, node->subplan);\n \n \tsubquerystate->csstate.css_ScanTupleSlot = NULL;\n }\n", "msg_date": "Tue, 08 May 2001 15:51:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: incorrect query result using complex structures (views?) " }, { "msg_contents": "> You're welcome ;-)\nMarvellous, it works! How much time did it take for you to find what have\nto be changed?\n\nThank you very much.\n\nRegards, Zoltan\n\n", "msg_date": "Wed, 9 May 2001 17:03:00 +0200 (CEST)", "msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>", "msg_from_op": true, "msg_subject": "Re: incorrect query result using complex structures (views?)" } ]
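For readers skimming the thread, the failure reduces to a correlated scalar subquery over a view built on a UNION, which is why the fix lands in the Append and SubqueryScan rescan logic. The table and view names below are the reporter's own; under the 7.1/7.1beta4 bug the subquery column came back empty for every row, while after the patch it returns the matching value (and is NULL only for rows of cikk with no match):

```sql
-- Correlated scalar subquery form (broken before the patch above):
SELECT az,
       (SELECT cikk
          FROM szallitolevel_tetele_ervenyes
         WHERE cikk = c.az)
  FROM cikk c
 LIMIT 20;

-- Join form used as the workaround. Note it is not strictly equivalent:
-- it also drops rows of cikk that have no match, whereas the subquery
-- form keeps them with a NULL in the second column.
SELECT c.az, s.cikk
  FROM cikk c, szallitolevel_tetele_ervenyes s
 WHERE c.az = s.cikk
 LIMIT 20;
```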
[ { "msg_contents": "Peter E. posted his proposal for the revamping of the \nauthentication/security system a few weeks ago. There was a \ndiscussion, but I don't know if he came to any definitive \nconclusions, such as implementing System Privileges as well as Object \nPrivileges. If he does, then the dba (or anyone who has been granted \nGRANT ANY PRIVILEGE system privilege & CREATE USER system privilege) \nshould be able to do:\n\nCREATE USER mascarm IDENTIFIED BY manager;\nGRANT CREATE TABLE to mascarm;\n\nIt would also be good if PostgreSQL came with 2 groups by default - \nconnect and dba.\n\nThe connect group would be granted these System Privileges:\n\nCREATE AGGREGATE privilege\nCREATE INDEX privilege\nCREATE FUNCTION privilege\nCREATE OPERATOR privilege\nCREATE RULE privilege\nCREATE SESSION privilege\nCREATE SYNONYM privilege\nCREATE TABLE privilege\nCREATE TRIGGER privilege\nCREATE TYPE privilege\nCREATE VIEW privilege\n\nThese allow the user to create the above objects in their own schema \nonly. We're getting schemas in 7.2, right? ;-).\n\nThe dba group would be granted the rest, like these:\n\nCREATE ANY AGGREGATE privilege\nCREATE ANY INDEX privilege...\n(and so on)\n\nas well as:\n\nCREATE/ALTER/DROP USER\nGRANT ANY PRIVILEGE\nCOMMENT ANY TABLE\nINSERT ANY TABLE\nUPDATE ANY TABLE\nDELETE ANY TABLE\nSELECT ANY TABLE\nANALYZE ANY TABLE\nLOCK ANY TABLE\nCREATE PUBLIC SYNONYM (needed when schemas roll around)\nDROP PUBLIC SYNONYM\n(and so on)\n\nThen, the dba could do a:\n\nGRANT connect TO mascarm;\n\nOr a:\n\nCREATE USER mascarm\nIDENTIFIED BY manager\nIN GROUP connect;\n\nIt seems Karel's patch is a solution to the problem of people who \nwant to create separate PostgreSQL user accounts, but want to ensure \nthat a user can't create tables. 
In Oracle, I would just do a:\n\nCREATE USER mascarm\nIDENTIFIED BY manager;\n\nGRANT CREATE SESSION TO mascarm;\n\nNow mascarm has the ability to connect, but that's it.\n\nCurrently, if I know for instance that a background process DROPS a \ntable, CREATES a new one, and then imports some data, I can create my \nown table by the same name, in between the DROP and CREATE and can \ncause havoc (if its not done in a single transaction). Hopefully \nPeter E's ACL design will allow for Oracle-like System Privileges to \ntake place. That would allow for a much finer granularity of \npermissions then everyone either being the Unix equivalent of 'root' \nor 'user'.\n\nJust my humble opinion though,\n\nMike Mascari\nmascarm@mascari.com\n\n-----Original Message-----\nFrom:\tBruce Momjian [SMTP:pgman@candle.pha.pa.us]\n\nCan someone remind me what we are going to do with this?\n\n\n[ Charset ISO-8859-2 unsupported, converting... ]\n>\n> On Fri, 26 Jan 2001, [koi8-r] ______ _. _______ wrote:\n>\n> > Good Day, Dear Karel Zak!\n> >\n> > Please, forgive me for my bad english and if i do not right with \nyour\n> > day time.\n>\n> my English is more poor :-)\n>\n> You are right, it is (was?) in TODO and it will implemented - I \nhope -\n> in some next release (may be in 7.2 during ACL overhaul, Peter?).\n>\n> Before some time I wrote patch that resolve it for 7.0.2 (anyone -\n> I forgot his name..) 
port it to 7.0.2, my original patch was for \n7.0.0.\n> May be will possible use it for last stable 7.0.3 too.\n>\n> The patch is at:\n> \t ftp://ftp2.zf.jcu.cz/users/zakkr/pg/7.0.2-user.patch.gz\n>\n> This patch add to 7.0.2 code NOCREATETABLE and NOLOCKTABLE feature:\n>\n> CREATE USER username\n> [ WITH\n> [ SYSID uid ]\n> [ PASSWORD 'password' ] ]\n> [ CREATEDB | NOCREATEDB ] [ CREATEUSER | NOCREATEUSER ]\n> -> [ CREATETABLE | NOCREATETABLE ] [ LOCKTABLE | NOLOCKTABLE ]\n> ...etc.\n>\n> If CREATETABLE or LOCKTABLE is not specific in CREATE USER \ncommand,\n> as default is set CREATETABLE or LOCKTABLE (true).\n>\n>\n> But, don't forget - it's temporarily solution, I hope that some \nnext\n> release resolve it more systematic. More is in the \npatche@postgresql.org\n> archive where was send original patch.\n>\n> Because you are not first person that ask me, I re-post (CC:) it \nto\n> hackers@postgresql.org, more admins happy with this :-)\n>\n> \t\t\t\tKarel\n>\n> > I want to ask You about \"access control over who can create \ntables and\n> > use locks in PostgreSQL\". This message was placed in PostgreSQL \nsite\n> > TODO list. But now it was deleted. I so need help about this \nquestion,\n> > becouse i'll making a site witch will give hosting for our users. \n> > And i want to make a PostgreSQL access to their own databases. \nBut there\n> > is (how You now) one problem. Anyone user may to connect to the \ndifferent\n> > user database and he may to create himself tables.\n> > I don't like it.\n>\n>\n>\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania \n19026\n\n\n", "msg_date": "Mon, 7 May 2001 15:55:52 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": true, "msg_subject": "RE: NOCREATETABLE patch (was: Re: Please, help!(about Postgres))" }, { "msg_contents": "\nAdded to TODO.detail/privileges.\n\n> Peter E. posted his proposal for the revamping of the \n> authentication/security system a few weeks ago. There was a \n> discussion, but I don't know if he came to any definitive \n> conclusions, such as implementing System Privileges as well as Object \n> Privileges. If he does, then the dba (or anyone who has been granted \n> GRANT ANY PRIVILEGE system privilege & CREATE USER system privilege) \n> should be able to do:\n> \n> CREATE USER mascarm IDENTIFIED BY manager;\n> GRANT CREATE TABLE to mascarm;\n> \n> It would also be good if PostgreSQL came with 2 groups by default - \n> connect and dba.\n> \n> The connect group would be granted these System Privileges:\n> \n> CREATE AGGREGATE privilege\n> CREATE INDEX privilege\n> CREATE FUNCTION privilege\n> CREATE OPERATOR privilege\n> CREATE RULE privilege\n> CREATE SESSION privilege\n> CREATE SYNONYM privilege\n> CREATE TABLE privilege\n> CREATE TRIGGER privilege\n> CREATE TYPE privilege\n> CREATE VIEW privilege\n> \n> These allow the user to create the above objects in their own schema \n> only. We're getting schemas in 7.2, right? 
;-).\n> \n> The dba group would be granted the rest, like these:\n> \n> CREATE ANY AGGREGATE privilege\n> CREATE ANY INDEX privilege...\n> (and so on)\n> \n> as well as:\n> \n> CREATE/ALTER/DROP USER\n> GRANT ANY PRIVILEGE\n> COMMENT ANY TABLE\n> INSERT ANY TABLE\n> UPDATE ANY TABLE\n> DELETE ANY TABLE\n> SELECT ANY TABLE\n> ANALYZE ANY TABLE\n> LOCK ANY TABLE\n> CREATE PUBLIC SYNONYM (needed when schemas roll around)\n> DROP PUBLIC SYNONYM\n> (and so on)\n> \n> Then, the dba could do a:\n> \n> GRANT connect TO mascarm;\n> \n> Or a:\n> \n> CREATE USER mascarm\n> IDENTIFIED BY manager\n> IN GROUP connect;\n> \n> It seems Karel's patch is a solution to the problem of people who \n> want to create separate PostgreSQL user accounts, but want to ensure \n> that a user can't create tables. In Oracle, I would just do a:\n> \n> CREATE USER mascarm\n> IDENTIFIED BY manager;\n> \n> GRANT CREATE SESSION TO mascarm;\n> \n> Now mascarm has the ability to connect, but that's it.\n> \n> Currently, if I know for instance that a background process DROPS a \n> table, CREATES a new one, and then imports some data, I can create my \n> own table by the same name, in between the DROP and CREATE and can \n> cause havoc (if its not done in a single transaction). Hopefully \n> Peter E's ACL design will allow for Oracle-like System Privileges to \n> take place. That would allow for a much finer granularity of \n> permissions then everyone either being the Unix equivalent of 'root' \n> or 'user'.\n> \n> Just my humble opinion though,\n> \n> Mike Mascari\n> mascarm@mascari.com\n> \n> -----Original Message-----\n> From:\tBruce Momjian [SMTP:pgman@candle.pha.pa.us]\n> \n> Can someone remind me what we are going to do with this?\n> \n> \n> [ Charset ISO-8859-2 unsupported, converting... ]\n> >\n> > On Fri, 26 Jan 2001, [koi8-r] ______ _. 
_______ wrote:\n> >\n> > > Good Day, Dear Karel Zak!\n> > >\n> > > Please, forgive me for my bad english and if i do not right with \n> your\n> > > day time.\n> >\n> > my English is more poor :-)\n> >\n> > You are right, it is (was?) in TODO and it will implemented - I \n> hope -\n> > in some next release (may be in 7.2 during ACL overhaul, Peter?).\n> >\n> > Before some time I wrote patch that resolve it for 7.0.2 (anyone -\n> > I forgot his name..) port it to 7.0.2, my original patch was for \n> 7.0.0.\n> > May be will possible use it for last stable 7.0.3 too.\n> >\n> > The patch is at:\n> > \t ftp://ftp2.zf.jcu.cz/users/zakkr/pg/7.0.2-user.patch.gz\n> >\n> > This patch add to 7.0.2 code NOCREATETABLE and NOLOCKTABLE feature:\n> >\n> > CREATE USER username\n> > [ WITH\n> > [ SYSID uid ]\n> > [ PASSWORD 'password' ] ]\n> > [ CREATEDB | NOCREATEDB ] [ CREATEUSER | NOCREATEUSER ]\n> > -> [ CREATETABLE | NOCREATETABLE ] [ LOCKTABLE | NOLOCKTABLE ]\n> > ...etc.\n> >\n> > If CREATETABLE or LOCKTABLE is not specific in CREATE USER \n> command,\n> > as default is set CREATETABLE or LOCKTABLE (true).\n> >\n> >\n> > But, don't forget - it's temporarily solution, I hope that some \n> next\n> > release resolve it more systematic. More is in the \n> patche@postgresql.org\n> > archive where was send original patch.\n> >\n> > Because you are not first person that ask me, I re-post (CC:) it \n> to\n> > hackers@postgresql.org, more admins happy with this :-)\n> >\n> > \t\t\t\tKarel\n> >\n> > > I want to ask You about \"access control over who can create \n> tables and\n> > > use locks in PostgreSQL\". This message was placed in PostgreSQL \n> site\n> > > TODO list. But now it was deleted. I so need help about this \n> question,\n> > > becouse i'll making a site witch will give hosting for our users. \n> > > And i want to make a PostgreSQL access to their own databases. \n> But there\n> > > is (how You now) one problem. 
Anyone user may to connect to the \n> different\n> > > user database and he may to create himself tables.\n> > > I don't like it.\n> >\n> >\n> >\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania \n> 19026\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 8 May 2001 15:22:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: NOCREATETABLE patch (was: Re: Please, help!(about Postgres))" } ]
[ { "msg_contents": "I just checked out CVS this morning the REL7_1_STABLE branch. I \nconfigured it with:\n\n ./configure --enable-debug\n\nand ran the regression test fine on OpenBSD 2.8 (AMD processor) (The \nsame problem has been reproduced by someone else on RH6.2) \n\nI then proceed to load the OpenACS datamodel and had the backend crash.\nThis datamodel loads fine on 7.1.\n\nI can send the datamodel and core file if needed. I loaded GDB with the\ncore file and got the following:\n\n$ gdb /usr/local/pgsql/bin/postmaster postgres.core \nGNU gdb 4.16.1\n :: snip ::\nProgram terminated with signal 11, Segmentation fault.\n :: snip ::\n#0 SPI_gettypeid (tupdesc=0x0, fnumber=1) at spi.c:501\n501 if (tupdesc->natts < fnumber || fnumber <= 0)\n(gdb) where\n#0 SPI_gettypeid (tupdesc=0x0, fnumber=1) at spi.c:501\n#1 0x402946bf in exec_move_row (estate=0xdfbfcddc, rec=0x0, row=0x186420, \n tup=0x0, tupdesc=0x0) at pl_exec.c:2640\n#2 0x40292b71 in exec_stmt_select (estate=0xdfbfcddc, stmt=0x186600)\n at pl_exec.c:1455\n#3 0x40292252 in exec_stmt (estate=0xdfbfcddc, stmt=0x186600) at pl_exec.c:978\n#4 0x402920ea in exec_stmts (estate=0xdfbfcddc, stmts=0x276410)\n at pl_exec.c:920\n#5 0x40292044 in exec_stmt_block (estate=0xdfbfcddc, block=0x186660)\n at pl_exec.c:876\n#6 0x402914c1 in plpgsql_exec_function (func=0x27b500, fcinfo=0x22b65c)\n at pl_exec.c:381\n#7 0x4028edb6 in plpgsql_call_handler (fcinfo=0x22b65c) at pl_handler.c:128\n#8 0x83be1 in ExecMakeFunctionResult (fcache=0x22b648, arguments=0x22b058, \n econtext=0x22b0f8, isNull=0xdfbfcf2b \"\", isDone=0xdfbfcf2c)\n at execQual.c:807\n#9 0x83c9a in ExecEvalFunc (funcClause=0x22b140, econtext=0x22b0f8, \n isNull=0xdfbfcf2b \"\", isDone=0xdfbfcf2c) at execQual.c:901\n#10 0x840e9 in ExecEvalExpr (expression=0x22b140, econtext=0x22b0f8, \n isNull=0xdfbfcf2b \"\", isDone=0xdfbfcf2c) at execQual.c:1226\n#11 0x843e1 in ExecTargetList (targetlist=0x22b1b0, nodomains=1, \n :: snip ::\n#24 0x9314e in main (argc=3, 
argv=0xdfbfdc3c) at main.c:171\n(gdb) p tupdesc\n$1 = 0x0\n(gdb) \n\n\n", "msg_date": "Mon, 7 May 2001 17:14:48 -0500", "msg_from": "Robert Hentosh <hentosh@io.com>", "msg_from_op": true, "msg_subject": "backend dies on 7.1.1 loading large datamodel." }, { "msg_contents": "On Mon, May 07, 2001 at 05:14:48PM -0500, Robert Hentosh wrote:\n :: snip ::\n> I can send the datamodel and core file if needed. I loaded GDB with the\n> core file and got the following:\n\nI just put the datamodel at http://www.io.com/~hentosh/sql.tar.gz\n", "msg_date": "Mon, 7 May 2001 17:23:18 -0500", "msg_from": "Robert Hentosh <hentosh@io.com>", "msg_from_op": true, "msg_subject": "Re: backend dies on 7.1.1 loading large datamodel." }, { "msg_contents": "Robert Hentosh <hentosh@io.com> writes:\n> I then proceed to load the OpenACS datamodel and had the backend crash.\n> This datamodel loads fine on 7.1.\n\nUgh.\n\n> I can send the datamodel and core file if needed.\n\nDatamodel please, corefile no.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 May 2001 20:05:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: backend dies on 7.1.1 loading large datamodel. " }, { "msg_contents": "Robert Hentosh <hentosh@io.com> writes:\n> I just put the datamodel at http://www.io.com/~hentosh/sql.tar.gz\n\nHm. 
I notice that postgres.sql hardwires the location of the plpgsql\nhandler:\n\ncreate function plpgsql_call_handler() RETURNS opaque\nas '/usr/local/pgsql/lib/plpgsql.so' language 'c';\n\ncreate trusted procedural language 'plpgsql'\nHANDLER plpgsql_call_handler\nLANCOMPILER 'PL/pgSQL';\n\nIf this were to suck in a wrong-version copy of plpgsql.so (and yes,\nI think 7.1 vs 7.1.1 could be wrong version) then that could cause\nfailures.\n\npostgres-pgtcl.sql is equally unwise about the pltcl handler.\n\nThis is *not* the source of your problem, since I was able to\nreproduce the crash even with a proper \"createlang plpgsql\" used\ninstead of the bogus commands. But you might want to pass on the\nobservation to the OpenACS guys.\n\nOn with debugging ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 May 2001 20:28:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: backend dies on 7.1.1 loading large datamodel. " }, { "msg_contents": "Robert Hentosh <hentosh@io.com> writes:\n> I then proceed to load the OpenACS datamodel and had the backend crash.\n> This datamodel loads fine on 7.1.\n\nSigh. Looks like I managed to break plpgsql's handling of SELECT with\nno data found. Mea maxima culpa (though the regression tests perhaps\ndeserve a share of the blame too, for not covering that case).\n\nPatch to appear shortly. I guess there will be a 7.1.2 sooner than\nI thought, also :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 May 2001 20:54:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: backend dies on 7.1.1 loading large datamodel. " }, { "msg_contents": "On Mon, May 07, 2001 at 08:28:33PM -0400, Tom Lane wrote:\n> Robert Hentosh <hentosh@io.com> writes:\n> > I just put the datamodel at http://www.io.com/~hentosh/sql.tar.gz\n> \n> Hm. 
I notice that postgres.sql hardwires the location of the plpgsql\n> handler:\n> \n> create function plpgsql_call_handler() RETURNS opaque\n> as '/usr/local/pgsql/lib/plpgsql.so' language 'c';\n> \n> create trusted procedural language 'plpgsql'\n> HANDLER plpgsql_call_handler\n> LANCOMPILER 'PL/pgSQL';\n> \n> If this were to suck in a wrong-version copy of plpgsql.so (and yes,\n> I think 7.1 vs 7.1.1 could be wrong version) then that could cause\n> failures.\n\nI played with this a little. What would be the proper solution? \nDoesn't the backend go and cd to the data directory. I blindly \ntried:\n\tas 'plpgsql.so' language 'c';\nand\n\tas 'lib/plpgsql.so' language 'c'; \n\nand it can't find the file. Is there a way to correctly reference the \nlib directory associated with the execuables directory structure?\n\nOne of the examples in the docs shows the full path, too. At the bottom\nof this URL:\n\nhttp://postgresql.readysetnet.com/users-lounge/docs/7.0/postgres/sql-createlanguage.htm\n\n", "msg_date": "Mon, 7 May 2001 20:05:34 -0500", "msg_from": "Robert Hentosh <hentosh@io.com>", "msg_from_op": true, "msg_subject": "Re: Re: backend dies on 7.1.1 loading large datamodel." }, { "msg_contents": "Tom Lane wrote:\n> \n> Robert Hentosh <hentosh@io.com> writes:\n> > I then proceed to load the OpenACS datamodel and had the backend crash.\n> > This datamodel loads fine on 7.1.\n> \n> Ugh.\n> \n\nThere was a bug report in Japan that plpgsql\ncrashes if select returns no row. This seems\na bug introduced by your latest change on \npl_exec.c.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 08 May 2001 10:05:55 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: backend dies on 7.1.1 loading large datamodel." }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> There was a bug report in Japan that plpgsql\n> crashes if select returns no row. 
This seems\n> a bug introduced by your latest change on \n> pl_exec.c.\n\nIndeed. Too bad that report didn't arrive on Friday.\nI am mightily embarrassed :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 May 2001 21:08:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: backend dies on 7.1.1 loading large datamodel. " }, { "msg_contents": "Robert Hentosh <hentosh@io.com> writes:\n>> If this were to suck in a wrong-version copy of plpgsql.so (and yes,\n>> I think 7.1 vs 7.1.1 could be wrong version) then that could cause\n>> failures.\n\n> I played with this a little. What would be the proper solution? \n\nAt the moment, the solution is to use the createlang script rather than\nissuing the commands directly.\n\nIn the long run I think we should abandon the notion that full path\nspecifications are the preferred way to locate dynamic libraries.\nIt would be a lot better for portability if C function libraries could\nbe referred to like this:\n\ncreate function pltcl_call_handler() returns opaque\n\tas 'pltcl' language 'C';\n\nwhere the backend automatically assumes that a relative path is relative\nto $PGLIB. I'd like to see the backend adding the file extension too,\nto avoid platform dependencies (\".so\" is not universal). A function\ndefinition like the above could be dumped and reloaded without fear,\nwhereas the existing approach is pretty much guaranteed to break\nwhenever you change machines or install directories.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 May 2001 21:22:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Paths for C functions (was Re: Re: backend dies on 7.1.1 loading\n\tlarge datamodel.)" }, { "msg_contents": "> ... (though the regression tests perhaps\n> deserve a share of the blame too, for not covering that case).\n> Patch to appear shortly...\n\nWill the patch include a case for the regression test? Or could someone\n(other than me??!!) 
volunteer to cover that?\n\n - Thomas\n", "msg_date": "Tue, 08 May 2001 01:57:29 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: backend dies on 7.1.1 loading large datamodel." }, { "msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> Will the patch include a case for the regression test? Or could someone\n> (other than me??!!) volunteer to cover that?\n\nSeems like a good idea. As the embarrassee, I'm perhaps too close\nto the problem to write a good addition to the regress tests; any\nvolunteers?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 May 2001 22:01:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: backend dies on 7.1.1 loading large datamodel. " }, { "msg_contents": "> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > There was a bug report in Japan that plpgsql\n> > crashes if select returns no row. This seems\n> > a bug introduced by your latest change on \n> > pl_exec.c.\n> \n> Indeed. Too bad that report didn't arrive on Friday.\n> I am mightily embarrassed :-(\n\nHumbled, I would say. Happens to us all.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 7 May 2001 22:37:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: backend dies on 7.1.1 loading large datamodel." 
}, { "msg_contents": "Tom Lane writes:\n\n> In the long run I think we should abandon the notion that full path\n> specifications are the preferred way to locate dynamic libraries.\n> It would be a lot better for portability if C function libraries could\n> be referred to like this:\n>\n> create function pltcl_call_handler() returns opaque\n> \tas 'pltcl' language 'C';\n>\n> where the backend automatically assumes that a relative path is relative\n> to $PGLIB. I'd like to see the backend adding the file extension too,\n> to avoid platform dependencies (\".so\" is not universal).\n\nWe could have a run-time parameter that sets a path where to look for\nmodules. Eventually, we might also want to take a look at libtool's\nlibltdl, the portable interface to dynamic loading, which would do this\nand a number of other things for us.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 8 May 2001 18:07:27 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Paths for C functions (was Re: Re: backend dies on 7.1.1\n\tloading large datamodel.)" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> where the backend automatically assumes that a relative path is relative\n>> to $PGLIB. I'd like to see the backend adding the file extension too,\n>> to avoid platform dependencies (\".so\" is not universal).\n\n> We could have a run-time parameter that sets a path where to look for\n> modules.\n\nFor obvious security reasons, the library path must only be settable by\nthe dbadmin, and I see no good reason that it should be changeable\non-the-fly. 
But we could treat it as a postmaster-start-only GUC\nparameter...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 May 2001 12:21:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Paths for C functions (was Re: Re: backend dies on 7.1.1 loading\n\tlarge datamodel.)" }, { "msg_contents": "Tom Lane writes:\n\n> Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> > Will the patch include a case for the regression test? Or could someone\n> > (other than me??!!) volunteer to cover that?\n>\n> Seems like a good idea. As the embarrassee, I'm perhaps too close\n> to the problem to write a good addition to the regress tests; any\n> volunteers?\n\nThe query that showed the bug would serve just fine.\n\nActually, this practice should be much more widely deployed. For each\nbug, a test case should be added to guard against the bug coming back.\nAt least when a suitable testing infrastructure exists. For instance,\nthis would probably apply to each of the backend bug fixes that came in\nthe last few days.\n\nMaybe it's too cumbersome to update the regression tests? Should the\nfiles be split into smaller pieces?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 8 May 2001 22:17:56 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "New tests for new bugs (was Re: [BUGS] Re: backend dies on 7.1.1\n\tloading large datamodel.)" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The query that showed the bug would serve just fine.\n\nMost of the bug reports we get are far too bulky to be appropriate to\nadd to the regress tests as-is. 
IMHO anyway.\n\nWe do need more extensive regress tests, but I don't think that slapping\nbug-report samples into them is the right way to get there ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 May 2001 16:35:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: New tests for new bugs (was Re: [BUGS] Re: backend dies on 7.1.1\n\tloading large datamodel.)" }, { "msg_contents": "Tom Lane writes:\n\n> For obvious security reasons, the library path must only be settable by\n> the dbadmin, and I see no good reason that it should be changeable\n> on-the-fly. But we could treat it as a postmaster-start-only GUC\n> parameter...\n\nOn the fly could be useful for trying alternative sets of modules. Also,\nyou could include such a SET command into your function loading script.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 10 May 2001 20:32:54 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Paths for C functions (was Re: Re: backend dies on 7.1.1\n\tloading large datamodel.)" } ]
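The thread above proposes resolving a bare module name like 'pltcl' against a configurable library path and appending a platform-specific extension such as ".so". A minimal Python sketch of that lookup rule (illustrative only; the helper name and behavior are assumptions, not backend code):

```python
import os

def resolve_module(name, libdirs, ext=".so"):
    # Absolute paths are taken literally, as the backend does today.
    if os.path.isabs(name):
        return name
    # A bare name with no extension gets the platform default appended.
    if not os.path.splitext(name)[1]:
        name += ext
    # Search each configured library directory in order.
    for d in libdirs:
        candidate = os.path.join(d, name)
        if os.path.exists(candidate):
            return candidate
    return None
```

With this rule, `resolve_module("pltcl", ["$PGLIB"])` would find `$PGLIB/pltcl.so` without the CREATE FUNCTION statement hard-coding either the path or the ".so" suffix.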
[ { "msg_contents": "Hello, All\n\nHow I prevent a new user to create new tables in a Data Base ?\n\nThe Data Base is owned by \"postgres\" and I need that only the \"postgres\"\nuser can create new tables ...\n\n\n-------\n\nWhere are the default messages thats appears when the Referentian\nIntegrity is violated ? I need change this message to portuguese...\n\n\nregards,\n\n\ntulio oliveira\n\n-- \nUm velho homem s�bio disse uma vez: \"Quando voc� atualiza um exploit,\nvoc� � \nbom. Quando voc� � o primeiro a hackear cada sucessiva vers�o de um\nproduto \nque roda em milh�es de computadores pela Internet, voc� cria uma\nDinastia\".\n", "msg_date": "Mon, 07 May 2001 20:17:35 -0300", "msg_from": "Tulio Oliveira <mestredosmagos@marilia.com>", "msg_from_op": true, "msg_subject": "Denying user to create new tables" } ]
[ { "msg_contents": "I have ALLOW_ABSOLUTE_DBPATHS enabled. but it does not do what one would\nassume.\n\ntemplate1=# create database fubar with location = '/tmp' ;\nERROR: CREATE DATABASE: could not link '/postgres/data/base/12523613' to\n'/tmp/base/12523613': Operation not permitted\n\nIs this telling me it is creating the database as it always does, but is\nlinking it to the specified location ?\n\nIf so, what's the point of this?\n", "msg_date": "Mon, 07 May 2001 19:44:00 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "create database name with location = 'path';" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> template1=# create database fubar with location = '/tmp' ;\n> ERROR: CREATE DATABASE: could not link '/postgres/data/base/12523613' to\n> '/tmp/base/12523613': Operation not permitted\n\nTry using a filesystem that supports symbolic links ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 May 2001 21:42:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: create database name with location = 'path'; " }, { "msg_contents": "mlw wrote:\n\n> I have ALLOW_ABSOLUTE_DBPATHS enabled. but it does not do what one would\n> assume.\n>\n> template1=# create database fubar with location = '/tmp' ;\n> ERROR: CREATE DATABASE: could not link '/postgres/data/base/12523613' to\n> '/tmp/base/12523613': Operation not permitted\n>\n\nDo you have /tmp/base directory ?\n\n> Is this telling me it is creating the database as it always does, but is\n> linking it to the specified location ?\n>\n> If so, what's the point of this?\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n", "msg_date": "Tue, 08 May 2001 17:28:13 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: create database name with location = 'path';" } ]
[ { "msg_contents": "Hello:\n\nI can bet that in about a year's time, PostgreSQL user base will\nexplode.\n(there is broadband connection everywhere in USA and Europe!!! 63% of\namerican households\nhave internet connection compare that to India, where 0.002% of Indian\nhomes have internet\nconnection and there are 1.1 billion indians!!)\n\nPlease read the PostgreSQL HOWTO v42.0 is at -\n http://aldev.8m.com\nMirrors:\n http://aldev.webjump.com\n http://www.angelfire.com/nv/aldev\n http://www.geocities.com/alavoor/index.html\n http://aldev.virtualave.net\n http://aldev.50megs.com\n http://aldev.bizland.com\n http://members.theglobe.com/aldev1/index.html\n http://members.nbci.com/alavoor\n http://aldev.terrashare.com\n http://members.fortunecity.com/aldev\n http://aldev.freewebsites.com\n http://members.tripod.lycos.com/aldev\n\n http://members.spree.com/technology/aldev\n http://homepages.infoseek.com/~aldev1/index.html\n http://www3.bcity.com/aldev\n\nSee also the Benchmarks of Postgresql.\n\n", "msg_date": "Mon, 07 May 2001 23:44:20 GMT", "msg_from": "Al Dev <alavoor@yahoo.com>", "msg_from_op": true, "msg_subject": "PostgreSQL HOWTO Version 42.0 is available for public..." } ]
[ { "msg_contents": "> I don't mind contributing the script and schema that I used, but one thing\nI\n> failed to mention in my first post is that the first thing the script does\n> is open connections to 256 databases (all on this same machine), and the\n> transactions are relatively evenly dispersed among the 256 connections.\nThe\n> test was originally written to try out an idea to allow scalability by\n> partitioning the data into seperate databases (which could eventually each\n> live on its own server). If you are interested I can modify the test to\nuse\n> only one database and rerun the same tests this weekend.\n>\n\nI modified my test script to use just one (instead of 256) databases to be\nmore representative of a common installation. Then I ran more tests under\nboth ext2 and reiserfs. The summary follows. Short answer is that the\ndifferences are much smaller than under the first test, but ext2 is still\nfaster.\n\n-- Joe\n\ncase rfs_fdatasync ext_fdatasync rfs_fdatasync\next_fdatasync rfs_fdatasync ext_fdatasync\nfstab sync,noatime sync,noatime noatime noatime\ndefaults defaults\nstarting # tup 70k 70k 70k 70k\n70k 70k\ntotal time (min) 12.10 11.77 11.83 11.43\n11.88 11.42\ncpu util % 90-94% 95-98% 90-95% 95-99%\n90-95% 95-99%\nram - stable cpu 42M 42M 42M 42M\n42M 42M\nram - final 52M 52M 52M 52M\n52M 52M\navg trans/sec\n10000 tup 13.77 14.16 14.08 14.58\n14.03 14.60\n5000 tup 13.70 14.08 13.97 14.71\n13.93 14.75\n1000 tup 11.36 11.63 11.63 13.33\n11.63 13.51\n\n\nNotes:\n1. rfs_fdatasync: data and wal on rieserfs with wal_sync_method = fdatasync\n\n2. ext_fdatasync: data and wal on ext2 with wal_sync_method = fdatasync\n\n3. starting # tup: the database was pre-seeded with 70k tuples. I made a\ntarball of the starting database and refreshed the pgsql/data filestructure\nbefore each test to ensure a good comparison.\n\n4. cpu utilization + ram - stable cpu + ram - final: I eyeballed top while\nthe test was running. 
In general cpu % increased steadily through the first\n1500 or so transactions, along with ram usage. At the point when cpu\nutilization stabilized, ram was pretty consistently at 42M. From there, cpu\nutil % varied in the ranges noted, while ram usage slowly increased to 52M.\nIt seemed pretty linear in that I could estimate the number of transactions\nalready processes based on ram usage.\n\n5. avg trans/sec: These represent the total transactions/total elapsed time\nat the given number of transactions (as opposed to some instantaneous value\nat that point in time).\n\n\n", "msg_date": "Mon, 7 May 2001 23:03:35 -0700", "msg_from": "\"Joe Conway\" <joe@conway-family.com>", "msg_from_op": true, "msg_subject": "Re: New Linux xfs/reiser file systems" } ]
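As note 5 above says, the "avg trans/sec" figures are cumulative averages, total transactions over total elapsed time at that point, rather than instantaneous rates. A trivial recomputation (the elapsed-seconds figure below is an assumption back-derived from the reported rate):

```python
# Cumulative average throughput: total committed transactions divided by
# total elapsed seconds so far, not an instantaneous rate.
def avg_tps(total_transactions, elapsed_seconds):
    return total_transactions / elapsed_seconds

# e.g. 10000 transactions over roughly 685 elapsed seconds comes out
# near the ~14.6 tps reported for the ext2/fdatasync run.
```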
[ { "msg_contents": "I've posted RPMs for Mandrake, but could not put them in the obvious\nplace on the FTP site because the permissions do not allow group write\naccess. Lamar, could you open up that part of the tree to allow group\nwrite permissions? In the meantime, I've placed the files in\n/pub/binary/v7.1-Mandrake/.\n\nI've built RPMs for 7.1.1, but perhaps we should wait until 7.1.2 to\npost them given the pgtcl problem? Lamar, what are you planning for\n7.1.1?\n\n - Thomas\n", "msg_date": "Tue, 08 May 2001 07:06:42 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": true, "msg_subject": "Posted 7.1 RPMs for Mandrake 7.2" }, { "msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> I've built RPMs for 7.1.1, but perhaps we should wait until 7.1.2 to\n> post them given the pgtcl problem? Lamar, what are you planning for\n> 7.1.1?\n\nGiven my plpgsql screwup, and the dump-7.0-views thing that Philip wants\nto fix in pg_dump, I'd say there certainly will be a 7.1.2 pretty soon.\nBut I think we should wait a couple more days and see if any other bug\nreports turn up. Maybe we should plan for the end of the week?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 May 2001 10:16:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "7.1.2 schedule (was Re: Posted 7.1 RPMs for Mandrake 7.2)" }, { "msg_contents": "> HOWEVER, I _do_ have 7.1.1 RPMs built (minus some minor modifications) for\n> RedHat 7.1. Thomas, would you mind e-mailing me any changes you made to\n> anything (other than the version diff)? I have another patch from Trond to\n> apply to the initscript, and more testing would be nice.\n\nNo changes were necessary :))\n\n> Thomas, which pgtcl problem are you referring to?\n\nThe plpgsql one, of course. Got the name wrong...\n\n> As to the group write permissions, Thomas...... The perms on the RPMS subdir\n> now set g+w. Sorry. 
I'll need to set my umask a little more appropriately.\n\nGreat. I'll move things around. btw, I've found that things like \"scp\"\ndon't respect a .cshrc umask setting, so you will likely need to check\npermissions when you are working in those directories anyway.\n\n - Thomas\n", "msg_date": "Tue, 08 May 2001 15:14:58 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": true, "msg_subject": "Re: 7.1.2 schedule (was Re: Posted 7.1 RPMs for Mandrake 7.2)" }, { "msg_contents": "On Tuesday 08 May 2001 10:16, Tom Lane wrote:\n> Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> > I've built RPMs for 7.1.1, but perhaps we should wait until 7.1.2 to\n> > post them given the pgtcl problem? Lamar, what are you planning for\n> > 7.1.1?\n\n> Given my plpgsql screwup, and the dump-7.0-views thing that Philip wants\n> to fix in pg_dump, I'd say there certainly will be a 7.1.2 pretty soon.\n> But I think we should wait a couple more days and see if any other bug\n> reports turn up. Maybe we should plan for the end of the week?\n\nGiven a quick 7.1.2, I would rather go through the release pain once. Is it \njust me, or do we have terrible luck with .1 subreleases? IIRC, 6.2.1 was \nthe last good x.y.1 release. I'm not going to beat a dead horse, here, \nthough. :-)\n\nHOWEVER, I _do_ have 7.1.1 RPMs built (minus some minor modifications) for \nRedHat 7.1. Thomas, would you mind e-mailing me any changes you made to \nanything (other than the version diff)? I have another patch from Trond to \napply to the initscript, and more testing would be nice.\n\nIf 7.1.2 is over a week away, I'll go ahead and release 7.1.1 RPMs -- but I \nwould really like to incorporate any patch to the plpgsql code, Tom, being \nthat I am a member of the OpenACS team :-O. 
I can easily patch and release \n7.1.1 RPMs that don't have the bug -- not that that is the best idea, by any \nmeans, but it is for me just about a showstopper.\n\nOr I need to release a 7.1-2 set that includes RPM-specific bugfixes to the \ninitscript and files list.\n\nThomas, which pgtcl problem are you referring to?\n\nFWIW, my extant CHANGELOG entry for the 7.1.1 RPMs currently reads:\n* Mon May 07 2001 Lamar Owen <lamar@postgresql.org> <lamar.owen@wgcr.org>\n- 7.1.1\n- 7.1.1-1 RPM release\n- Changes to initscript courtesy Karl DeBisschop\n- pg_restore was not in 7.1-1\n- pl's back into /usr/lib/pgsql\n- use groupadd's -o and -r switches.\n\nAs to the group write permissions, Thomas...... The perms on the RPMS subdir \nnow set g+w. Sorry. I'll need to set my umask a little more appropriately.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Fri, 11 May 2001 12:30:46 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: 7.1.2 schedule (was Re: Posted 7.1 RPMs for Mandrake 7.2)" }, { "msg_contents": "On Tuesday 08 May 2001 11:14, Thomas Lockhart wrote:\n> > As to the group write permissions, Thomas...... The perms on the RPMS\n> > subdir now set g+w. Sorry. I'll need to set my umask a little more\n> > appropriately.\n\n> Great. I'll move things around. btw, I've found that things like \"scp\"\n> don't respect a .cshrc umask setting, so you will likely need to check\n> permissions when you are working in those directories anyway.\n\nAh. Of course, Idon't use the csh :-). But I _do_ use scp exclusively to \ncopy stuff.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Fri, 11 May 2001 12:31:52 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: 7.1.2 schedule (was Re: Posted 7.1 RPMs for Mandrake 7.2)" } ]
[ { "msg_contents": "\n> > Right now anyone can look in pg_statistic and discover the min/max/most\n> > common values of other people's tables. That's not a lot of info, but\n> > it might still be more than you want them to find out. And the\n> > statistical changes that I'm about to commit will allow a couple dozen\n> > values to be exposed, not only three values per column.\n> > \n> > It seems to me that only superusers should be allowed to read the\n> > pg_statistic table. Or am I overreacting? Comments?\n> \n> You are not overreacting. Imagine a salary column. I can imagine\n> max/min being quite interesting.\n> \n> I doubt it is worth letting non-super users see values in that table. \n> Their only value is in debugging the optimizer, which seems like a\n> super-user job anyway.\n\nHow about letting them see all statistics where they have select permission \non the base table (if that is possible with the new permission table) ?\n\nAndreas\n", "msg_date": "Tue, 8 May 2001 10:03:25 +0200 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Isn't pg_statistic a security hole?" }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> How about letting them see all statistics where they have select permission \n> on the base table (if that is possible with the new permission table) ?\n\nYeah, I was thinking the same thing. If we restrict the view on the\nbasis of current_user being the owner, then we'd have the annoying\nproblem that superusers *couldn't* use the view for tables they didn't\nown.\n\nTo implement this, we'd need a SQL function that answers the question\n\"does user A have read permission on table B?\", which is something that\npeople have asked for in the past anyway. 
(The existing SQL functions\nfor manipulating ACLs are entirely unhelpful for determining this.)\n\nSomeone needs to come up with a spec for such a function --- do we\nspecify user and table by names or by OIDs, how is the interesting\npermission represented, etc. Is there anything comparable defined by\nSQL99 or in other DBMSes?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 May 2001 10:25:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Isn't pg_statistic a security hole? " }, { "msg_contents": "I can say what oracle does in this regard. For information like this \nOracle will generally have three views in the data dictionary:\n\n1) USER_XXX - shows records where the current user is the owner of the \nitem in question\n2) ALL_XXX - shows records for all items accessible by the current user\n3) DBA_XXX - shows records for all items, only available for DBA's or \nsuperusers\n\nWhere XXX are things like: TABLES, VIEWS, TAB_COL_STATISTICS, INDEXES, \nTRIGGERS, etc (about 120 in all).\n\nthanks,\n--Barry\n\nTom Lane wrote:\n\n> Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> \n>> How about letting them see all statistics where they have select permission \n>> on the base table (if that is possible with the new permission table) ?\n> \n> \n> Yeah, I was thinking the same thing. If we restrict the view on the\n> basis of current_user being the owner, then we'd have the annoying\n> problem that superusers *couldn't* use the view for tables they didn't\n> own.\n> \n> To implement this, we'd need a SQL function that answers the question\n> \"does user A have read permission on table B?\", which is something that\n> people have asked for in the past anyway. 
(The existing SQL functions\n> for manipulating ACLs are entirely unhelpful for determining this.)\n> \n> Someone needs to come up with a spec for such a function --- do we\n> specify user and table by names or by OIDs, how is the interesting\n> permission represented, etc. Is there anything comparable defined by\n> SQL99 or in other DBMSes?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n> \n\n", "msg_date": "Tue, 08 May 2001 10:35:42 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: AW: Isn't pg_statistic a security hole?" }, { "msg_contents": "The recent discussions on pg_statistic got me started thinking about how to\nimplement a secure form of the view. Based on the list discussion, and a\nsuggestion from Tom, I did some research regarding how SQL92 and some of the\nlarger commercial database systems allow access to system privilege\ninformation.\n\nI reviewed the ANSI SQL 92 specification, Oracle, MSSQL, and IBM DB2\n(documentation only). Here's what I found:\n\nANSI SQL 92 does not have any functions defined for retrieving privilege\ninformation. It does, however define an \"information schema\" and \"definition\nschema\" which among other things includes a TABLE_PRIVILEGES view.\n\nWith this view available, it is possible to discern what privileges the\ncurrent user has using a simple SQL statement. In Oracle, I found this view,\nand some other variations. According to the Oracle DBA I work with, there is\nno special function, and a SQL statement on the view is how he would gather\nthis kind of information when needed.\n\nMSSQL Server 7 also has this same view. 
Additionally, SQL7 has a T-SQL\nfunction called PERMISSIONS with the following description:\n\"Returns a value containing a bitmap that indicates the statement, object,\nor column permissions for the current user.\nSyntax PERMISSIONS([objectid [, 'column']])\".\n\nI only looked briefly at the IBM DB2 documentation, but could find no\nmention of TABLE_PRIVILEGES or any privilege specific function. I imagine\nTABLE_PRIVILEGES might be there somewhere since it seems to be standard\nSQL92.\n\nBased on all of the above, I concluded that there is nothing compelling in\nterms of a specific function to be compatible with. I do think that in the\nlonger term it makes sense to implement the SQL 92 information schema views\nin PostgreSQL.\n\nSo, now for the proposal. I created a function (attached) which will allow\nany privilege type to be probed, called has_privilege. It is used like this:\n\n select relname from pg_class where has_privilege(current_user, relname,\n'update');\n\nor\n\n select has_privilege('postgres', 'pg_shadow', 'select');\n\nwhere\n the first parameter is any valid user name\n the second parameter can be a table, view, or sequence\n the third parameter can be 'select', 'insert', 'update', 'delete', or\n'rule'\n\nThe function is currently implemented as an external c function and designed\nto be built under contrib. This function should really be an internal\nfunction. If the proposal is acceptable, I would like to take on the task of\nturning the function into an internal one (with guidance, pointers,\nsuggestions greatly appreciated). 
This would allow a secure view to be\nimplemented over pg_statistic as:\n\ncreate view pg_userstat as (\n select\n s.starelid\n ,s.staattnum\n ,s.staop\n ,s.stanullfrac\n ,s.stacommonfrac\n ,s.stacommonval\n ,s.staloval\n ,s.stahival\n ,c.relname\n ,a.attname\n ,sh.usename\n from\n pg_statistic as s\n ,pg_class as c\n ,pg_shadow as sh\n ,pg_attribute as a\n where\n has_privilege(current_user,c.relname,'select')\n and sh.usesysid = c.relowner\n and a.attrelid = c.oid\n and c.oid = s.starelid\n);\n\nThen restrict pg_statistic from public viewing. This view would allow the\ncurrent user to view statistics only on relations for which they already\nhave 'select' granted.\n\nComments?\n\nRegards,\n-- Joe\n\ninstallation:\n\nplace in contrib\ntar -xzvf has_priv.tgz\ncd has_priv\n./install.sh\nNote: installs the function into template1 by default. Edit install.sh to\nchange.", "msg_date": "Sun, 13 May 2001 20:12:01 -0700", "msg_from": "\"Joe Conway\" <joe@conway-family.com>", "msg_from_op": false, "msg_subject": "Isn't pg_statistic a security hole - Solution Proposal" } ]
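The pg_userstat view above makes a statistics row visible only when the current user already holds SELECT on the underlying relation. That rule can be sketched with a toy ACL model in Python (the data structures here are hypothetical; PostgreSQL's real ACL machinery differs):

```python
# Toy ACL table: (user, relation) -> set of granted privilege names.
def has_privilege(acls, user, relation, privilege):
    return privilege in acls.get((user, relation), set())

# Mimics the view's WHERE clause: keep only statistics rows whose
# relation the user may already SELECT from.
def visible_stats(stat_rows, acls, user):
    return [row for row in stat_rows
            if has_privilege(acls, user, row["relname"], "select")]
```

Under this rule a user learns nothing from the statistics table that they could not already learn by selecting from the relation itself.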
[ { "msg_contents": "\n> > >From a portability standpoint, I think if we go anywhere, it would be to\n> > write directly into device files representing sections of a disk.\n> \n> That makes sense to me. On \"traditional\" Unices, we could use the raw \n> character device for a partition (eg /dev/rdsk/* on Solaris),\n\nOn Solaris this is (imho) the exact wrong way to do it. On Solaris you would \ntypically use logical volumes created with Veritas LVM.\nImho the times where you think of a raw device beeing one physical disk partition \nare fortunately over, since LVM's are widespread enough.\n\n> and on Linux we'd use /dev/raw*, which is a mapping to a specific partition\n> established before PG startup.\n\nUsually you would use a symlink to such a raw device if such a device needs to be \nin /dev.\n\n> I guess there would need to be a system table that keeps track of \n> (dev, offset, size) tuples for each WAL file.\n\nThis would need to be hidden in a new smgr layer.\n\nAndreas\n", "msg_date": "Tue, 8 May 2001 10:21:18 +0200 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Re: File system performance and pg_xlog (More info)" } ]
[ { "msg_contents": "First pleasu use a subject line, since I almost discarded this message.\nSecond please send your questions to an appropriate list like pgsql-general.\n \nSee if adding the next row is fast again.\nIf it is not, I do not have a clue.\n \nAndreas\n\n-----Ursprüngliche Nachricht-----\nVon: Saju [mailto:saju@apnaguide.com]\nGesendet: Dienstag, 08. Mai 2001 10:38\nAn: ZeugswetterA@wien.spardat.at\nBetreff: \n\n\nhi,\n \n I am saju working with Pereefi software ,Mumbai,India....i do have some probelm while working with large table in postgres. i am using a java program to get value from client table and in insert those data in to our user table . When the row no is increased to more than 100000 it takes nearly 2.5 minutes to insert another one more row. The table to which i am inserting data is not indexed .i have droped all the index in it .\n \nplease acknwoledge to it fast\nthanks in advance \nSaju( saju@apnaguide.com <mailto:saju@apnaguide.com> )\n\n\n\n\n\n\n\n\nFirst pleasu use a subject line, since I almost discarded this \nmessage.\nSecond please send your questions to an appropriate list like \npgsql-general.\n \nSee if adding the next row is fast again.\nIf it is not, I do not have a clue.\n \nAndreas\n\n-----Ursprüngliche Nachricht-----Von: Saju \n [mailto:saju@apnaguide.com]Gesendet: Dienstag, 08. Mai 2001 \n 10:38An: ZeugswetterA@wien.spardat.atBetreff:\n\nhi,\n \n I am  saju  working  with \n Pereefi software ,Mumbai,India....i do have  some  probelm  \n while  working  with  large  table in postgres. i am using \n a java program  to get  value  from client table  \n and  in insert those data in to our user table . When the  row no is \n increased to more than  100000 it takes nearly 2.5 minutes to insert \n another one more row. 
The table to which i am inserting  data is not \n indexed .i have droped all the index in it .\n \nplease acknwoledge to it fast\nthanks in advance \nSaju(saju@apnaguide.com)", "msg_date": "Tue, 8 May 2001 11:00:36 +0200 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: " } ]
[ { "msg_contents": "A fellow NetBSD developer has sent me some changes needed to build 7.1\ncleanly on NetBSD through its package system. I am asking him a few\nquestions about it but I thought I would just commit them then if no one\nhas a problem. They are mostly related to building cleanly on NetBSD.\nPerhaps someone could watch source changes and just review the changes.\nI am asking because I generally don't work on stuff outside of PyGreSQL.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 8 May 2001 06:55:19 -0400 (EDT)", "msg_from": "darcy@druid.net (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Changes needed to build on NetBSD" }, { "msg_contents": "darcy@druid.net (D'Arcy J.M. Cain) writes:\n> A fellow NetBSD developer has sent me some changes needed to build 7.1\n> cleanly on NetBSD through its package system. I am asking him a few\n> questions about it but I thought I would just commit them then if no one\n> has a problem. They are mostly related to building cleanly on NetBSD.\n> Perhaps someone could watch source changes and just review the changes.\n> I am asking because I generally don't work on stuff outside of PyGreSQL.\n\nIf you're not sure of them, I suggest posting the diffs on -patches\nrather than committing them yourself.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 May 2001 10:29:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Changes needed to build on NetBSD " }, { "msg_contents": "D'Arcy J.M. Cain writes:\n\n> A fellow NetBSD developer has sent me some changes needed to build 7.1\n> cleanly on NetBSD through its package system. I am asking him a few\n> questions about it but I thought I would just commit them then if no one\n> has a problem. 
They are mostly related to building cleanly on NetBSD.\n\nGiven that we had several successful reports for NetBSD for the 7.1\nrelease I am suspicious about the definition of \"clean\". Changes required\nfor binary(?) packaging generally do not count as bug fixes. At least I'd\nlike to see a description of the problems first.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 8 May 2001 17:00:46 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Changes needed to build on NetBSD" }, { "msg_contents": "If the changes relate only to NetBSD-specific parts of the code, I\nusually apply the patch it if looks OK. Shoot it to patches and we can\ngive it a quick review.\n\n\n> A fellow NetBSD developer has sent me some changes needed to build 7.1\n> cleanly on NetBSD through its package system. I am asking him a few\n> questions about it but I thought I would just commit them then if no one\n> has a problem. They are mostly related to building cleanly on NetBSD.\n> Perhaps someone could watch source changes and just review the changes.\n> I am asking because I generally don't work on stuff outside of PyGreSQL.\n> \n> -- \n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 8 May 2001 11:14:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Changes needed to build on NetBSD" }, { "msg_contents": "Thus spake Peter Eisentraut\n> D'Arcy J.M. Cain writes:\n> > A fellow NetBSD developer has sent me some changes needed to build 7.1\n> > cleanly on NetBSD through its package system. I am asking him a few\n> > questions about it but I thought I would just commit them then if no one\n> > has a problem. They are mostly related to building cleanly on NetBSD.\n> \n> Given that we had several successful reports for NetBSD for the 7.1\n> release I am suspicious about the definition of \"clean\". Changes required\n> for binary(?) packaging generally do not count as bug fixes. At least I'd\n> like to see a description of the problems first.\n\nNote the reference to \"through its package system\" above. The package\nbuilds fine on NetBSD. This is just to make it easier to build it in\nthe more automated way that the package system does.\n\nAnyway, I have sent the patches to the patches mailing list.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 9 May 2001 05:52:38 -0400 (EDT)", "msg_from": "darcy@druid.net (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Re: Changes needed to build on NetBSD" } ]
[ { "msg_contents": "Hi.\n\nAfter `configure --enable-depend' I try `make' and got\ngmake[3]: Entering directory\n`/tmp_mnt/hosts/wisdom/NewSoftware/Ask/build/pgsql/src/backend/port'\ngcc -Wall -Wmissing-prototypes -Wmissing-declarations\n-I../../../src/include -c -o ../../utils/strdup.o ../../utils/strdup.c\n-MMD\ncp: ../../utils/strdup.d: No such file or directory\ngmake[3]: *** [../../utils/strdup.o] Error 1\ngmake[3]: *** Deleting file `../../utils/strdup.o'\ngmake[3]: Leaving directory\n`/tmp_mnt/hosts/wisdom/NewSoftware/Ask/build/pgsql/src/backend/port'\n\nThe reason is that gcc has a bug (or changed feature) in determining where to\nput .d files: `gcc -MMD x/a.c -c -o x/a.o' puts a.d in the current\ndirectory if its version is `2.92.2 19991024', and in x, if its version is\n`3.1 20010430'. It looks like the makefile assumes the new\nbehaviour. (Actually, I guess, nobody faced the issue, because developers'\nsystems have `strdup')\n\nIt looks like the following patch solves the problem:\nIndex: src/Makefile.global.in\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/Makefile.global.in,v\nretrieving revision 1.123\ndiff -C2 -r1.123 Makefile.global.in\n*** src/Makefile.global.in 2001/03/10 10:38:59 1.123\n--- src/Makefile.global.in 2001/05/08 10:07:30\n***************\n*** 367,375 ****\n # subdirectory, with the dummy targets as explained above.\n define postprocess-depend\n! @if test ! -d $(DEPDIR); then mkdir $(DEPDIR); fi\n! @cp $*.d $(df).P\n! @sed -e 's/#.*//' -e 's/^[^:]*: *//' -e 's/ *\\\\$$//' \\\n! -e '/^$$/ d' -e 's/$$/ :/' < $*.d >> $(df).P\n! @rm -f $*.d\n endef\n \n--- 367,377 ----\n # subdirectory, with the dummy targets as explained above.\n define postprocess-depend\n! @if test ! -d $(DEPDIR); then mkdir $(DEPDIR); fi; \\\n! if test -f $*.d; then dfile=$*.d ; \\\n! else dfile=`basename $*.d` ; fi; \\\n! cp $$dfile $(df).P; \\\n! sed -e 's/#.*//' -e 's/^[^:]*: *//' -e 's/ *\\\\$$//' \\\n! -e '/^$$/ d' -e 's/$$/ :/' < $$dfile >> $(df).P; \\\n! rm -f $$dfile\n endef\n\n \nBTW: Is there documentation about the build process (makefile structure,\netc.)?\n\nAnother tiny (for a quick computer) thing: is it necessary for `make\ndistclean' to call configure, or is something wrong with my environment?\n\nAnd yet another: is it OK for `make depend' to produce\ngcc -MM -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error *.c>depend\nanalyze.c:14: postgres.h: No such file or directory\nanalyze.c:16: access/heapam.h: No such file or directory\n...\n\nRegards,\nASK\n\nP.S. Searching the mailing lists for `-MMD' failed with \nOutput from Glimpse: \nglimpse: error in options or arguments to `agrep'\n\n\n\n\n\n", "msg_date": "Tue, 8 May 2001 14:53:27 +0300 (IDT)", "msg_from": "Alexander Klimov <ask@wisdom.weizmann.ac.il>", "msg_from_op": true, "msg_subject": "Where `gcc -MMD' puts .d files" }, { "msg_contents": "Alexander Klimov writes:\n\n> The reason is that gcc has a bug (or changed feature) in determining where to\n> put .d files: `gcc -MMD x/a.c -c -o x/a.o' puts a.d in the current\n> directory if its version is `2.92.2 19991024', and in x, if its version is\n> `3.1 20010430'.\n\nI have a patch for (better) dependency tracking with gcc >= 3 which will\nhit CVS soon. Until then, use released compilers or don't use dependency\ntracking. :-( (The last couple of times I tried gcc >= 3 with PostgreSQL\nit was a complete disaster anyway, so the former is a good idea in any\ncase ;-).)\n\n> BTW: Is there documentation about the build process (makefile structure,\n> etc.)?\n\nNope. But some should perhaps be written.\n\n> Another tiny (for a quick computer) thing: is it necessary for `make\n> distclean' to call configure, or is something wrong with my environment?\n\nHard to tell. Make sure the config.cache isn't messing you up.\n\n> And yet another: is it OK for `make depend' to produce\n\nmake depend doesn't officially exist anymore.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 8 May 2001 17:15:24 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Where `gcc -MMD' puts .d files" }, { "msg_contents": "On Tue, 8 May 2001, Peter Eisentraut wrote:\n> I have a patch for (better) dependency tracking with gcc >= 3 which will\n> hit CVS soon. Until then, use released compilers or don't use dependency\n> tracking. :-( (The last couple of times I tried gcc >= 3 with PostgreSQL\n> it was a complete disaster anyway, so the former is a good idea in any\n> case ;-).)\nThe point is that I do use gcc 2.95.2, and it puts the .d file in the current\ndirectory, not in the directory where the files come from. Anyway, my patch\nsolves the problem at least for me.\n\nRegards,\nASK\n\n", "msg_date": "Tue, 8 May 2001 18:22:09 +0300 (IDT)", "msg_from": "Alexander Klimov <ask@wisdom.weizmann.ac.il>", "msg_from_op": true, "msg_subject": "Re: Where `gcc -MMD' puts .d files" }, { "msg_contents": "Alexander Klimov writes:\n\n> The point is that I do use gcc 2.95.2, and it puts the .d file in the current\n> directory, not in the directory where the files come from. Anyway, my patch\n> solves the problem at least for me.\n\nI see the problem: the port/Makefile needs some changes because it's\ntrying to put output files outside the current directory. Try this patch:\n\ndiff -c -r1.28 Makefile.in\n*** Makefile.in 2000/12/11 00:49:54 1.28\n--- Makefile.in 2001/05/08 16:42:20\n***************\n*** 22,29 ****\n include $(top_builddir)/src/Makefile.global\n\n OBJS = dynloader.o @INET_ATON@ @STRERROR@ @MISSING_RANDOM@ @SRANDOM@\n! OBJS+= @GETHOSTNAME@ @GETRUSAGE@ @STRCASECMP@ @STRDUP@ @TAS@ @ISINF@\n OBJS+= @STRTOL@ @STRTOUL@ @SNPRINTF@\n ifeq ($(PORTNAME), qnx4)\n OBJS += getrusage.o qnx4/SUBSYS.o\n endif\n--- 22,32 ----\n include $(top_builddir)/src/Makefile.global\n\n OBJS = dynloader.o @INET_ATON@ @STRERROR@ @MISSING_RANDOM@ @SRANDOM@\n! OBJS+= @GETHOSTNAME@ @GETRUSAGE@ @STRCASECMP@ @TAS@ @ISINF@\n OBJS+= @STRTOL@ @STRTOUL@ @SNPRINTF@\n+ ifdef STRDUP\n+ OBJS += $(top_builddir)/src/utils/strdup.o\n+ endif\n ifeq ($(PORTNAME), qnx4)\n OBJS += getrusage.o qnx4/SUBSYS.o\n endif\n***************\n*** 56,61 ****\n--- 59,68 ----\n\n tas.o: tas.s\n $(CC) $(CFLAGS) -c $<\n+\n+ $(top_builddir)/src/utils/strdup.o:\n+ $(MAKE) -C $(top_builddir)/src/utils strdup.o\n+\n\n distclean clean:\n rm -f SUBSYS.o $(OBJS)\n===snip\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 8 May 2001 18:57:06 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Where `gcc -MMD' puts .d files" } ]
[ { "msg_contents": "Fiddling with userlock stuff for the purposes of setting up an action\nqueue. Having the lock in the where clause causes the lock code to\nactually lock 2 rows, not just the one that is being returned. 0's in\nthe last section means it could not be locked. This is with 7.1.1.\nThe function itself is pretty simple, so I'm wondering whether the\nfunction isn't being evaluated for 2 rows where only 1 was wanted.\n\nUserlock code is in the contrib. section.\n\n\nCREATE TABLE testlock (\n id SERIAL PRIMARY KEY\n);\n\nINSERT INTO testlock DEFAULT VALUES;\nINSERT INTO testlock DEFAULT VALUES;\nINSERT INTO testlock DEFAULT VALUES;\nINSERT INTO testlock DEFAULT VALUES;\nINSERT INTO testlock DEFAULT VALUES;\n\nSELECT id FROM testlock WHERE user_write_lock_oid(oid) = '1' LIMIT 1;\n\n-- From another connection\n\nSELECT user_write_lock_oid(oid) FROM testlock;\n\n\n--\nRod Taylor\n BarChord Entertainment Inc.", "msg_date": "Tue, 8 May 2001 10:52:51 -0400", "msg_from": "\"Rod Taylor\" <rbt@barchord.com>", "msg_from_op": true, "msg_subject": "UserLock oddity with Limit" }, { "msg_contents": "\"Rod Taylor\" <rbt@barchord.com> writes:\n> Fiddling with userlock stuff for the purposes of setting up an action\n> queue. Having the lock in the where clause causes the lock code to\n> actually lock 2 rows, not just the one that is being returned.\n\nA WHERE clause should *never* contain function calls with side effects.\nI do not regard this behavior as a bug. Put the function call in the\nSELECT's output list if you want to know exactly which rows it is\nevaluated at.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 May 2001 11:35:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: UserLock oddity with Limit " }, { "msg_contents": "As a general rule I don't. But I'm having a hard time trying to find\nout if there is a lock on a given item without attempting to lock it.\nSeems to work that way with all locks but most delay until it can\nobtain it. Userlocks don't wait.\n\n\n--\nRod Taylor\n BarChord Entertainment Inc.\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Rod Taylor\" <rbt@barchord.com>\nCc: \"Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Tuesday, May 08, 2001 11:35 AM\nSubject: Re: [HACKERS] UserLock oddity with Limit\n\n\n> \"Rod Taylor\" <rbt@barchord.com> writes:\n> > Fiddling with userlock stuff for the purposes of setting up an\naction\n> > queue. Having the lock in the where clause causes the lock code\nto\n> > actually lock 2 rows, not just the one that is being returned.\n>\n> A WHERE clause should *never* contain function calls with side\neffects.\n> I do not regard this behavior as a bug. Put the function call in\nthe\n> SELECT's output list if you want to know exactly which rows it is\n> evaluated at.\n>\n> regards, tom lane\n>\n\n", "msg_date": "Tue, 8 May 2001 11:55:44 -0400", "msg_from": "\"Rod Taylor\" <rbt@barchord.com>", "msg_from_op": true, "msg_subject": "Re: UserLock oddity with Limit " } ]
[ { "msg_contents": "Hi.\n\nOn some systems /bin/sh is not a Bourne shell, e.g. /bin/sh is tcsh, but\nthere is /bin/sh5. It looks like there is already knowledge about it in\nthe system: Makefile.ultrix4 has `SHELL=/bin/sh5' in it, but configure\nthinks something else: config.status has `s%@SHELL@%/bin/sh%g'. (This is\nreally unrelated, because `src/bin/initdb/initdb.sh' has `#! /bin/sh'\nhardcoded in it)\n\nThe result of the mess is that scripts like initdb are installed with\n`#!/bin/sh', but they have function definitions and tcsh complains about\nusage of '('. \n\nBTW: After hand substitution I reach the point of \nIpcSemaphoreCreate: semget(key=4, num=17, 03600) failed: No space left on\ndevice\nThe problem is that I have no idea how to enlarge the parameters on\n`ULTRIX black 4.3 1 RISC', and it looks like PG has no FAQ for\nit. Does anybody know how to do it?\n\nRegards,\nASK\n\n\n", "msg_date": "Tue, 8 May 2001 18:29:38 +0300 (IDT)", "msg_from": "Alexander Klimov <ask@wisdom.weizmann.ac.il>", "msg_from_op": true, "msg_subject": "Is `#!/bin/sh' configurable?" }, { "msg_contents": "Alexander Klimov <ask@wisdom.weizmann.ac.il> writes:\n\n> Hi.\n> \n> On some systems /bin/sh is not a Bourne shell, e.g. /bin/sh is tcsh, but\n\n*violent retching sounds*\n\nIMHO, any system where /bin/sh doesn't point to an at-least-somewhat\nBourne-compatible shell is broken by definition... Who perpetrated\nthis atrocity?\n\n-Doug\n-- \nThe rain man gave me two cures; he said jump right in,\nThe first was Texas medicine--the second was just railroad gin,\nAnd like a fool I mixed them, and it strangled up my mind,\nNow people just get uglier, and I got no sense of time... --Dylan\n", "msg_date": "08 May 2001 12:14:03 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Is `#!/bin/sh' configurable?" }, { "msg_contents": "> Hi.\n> \n> On some systems /bin/sh is not a Bourne shell, e.g. /bin/sh is tcsh, but\n> there is /bin/sh5. It looks like there is already knowledge about it in\n> the system: Makefile.ultrix4 has `SHELL=/bin/sh5' in it, but configure\n> thinks something else: config.status has `s%@SHELL@%/bin/sh%g'. (This is\n> really unrelated, because `src/bin/initdb/initdb.sh' has `#! /bin/sh'\n> hardcoded in it)\n\nActually, Makefile.ultrix will override what is in config.status, so\nthat part is OK.\n\n> \n> The result of the mess is that scripts like initdb are installed with\n> `#!/bin/sh', but they have function definitions and tcsh complains about\n> usage of '('. \n\nIt is hard to feel sorry for OS's that have /bin/sh as something that is\nnot at least moderately compatible with the Bourne sh.\n\nHowever, I am applying the following patch to allow SHELL set in\nMakefile.* to control what is used by initdb. I have not changed any\nother commands because I don't want to start making this change all over\nwhen it is not necessary.\n\n> BTW: After hand substitution I reach the point of \n> IpcSemaphoreCreate: semget(key=4, num=17, 03600) failed: No space left on\n> device\n> The problem is that I have no idea how to enlarge the parameters on\n> `ULTRIX black 4.3 1 RISC', and it looks like PG has no FAQ for\n> it. Does anybody know how to do it?\n\nSorry, I don't know.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: src/bin/initdb/Makefile\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/initdb/Makefile,v\nretrieving revision 1.25\ndiff -c -r1.25 Makefile\n*** src/bin/initdb/Makefile\t2001/02/18 18:33:59\t1.25\n--- src/bin/initdb/Makefile\t2001/05/08 16:16:54\n***************\n*** 18,23 ****\n--- 18,24 ----\n initdb: initdb.sh $(top_builddir)/src/Makefile.global\n \tsed -e 's/@MULTIBYTE@/$(MULTIBYTE)/g' \\\n \t -e 's/@VERSION@/$(VERSION)/g' \\\n+ \t -e 's,@SHELL@,$(SHELL),g' \\\n \t -e 's,@bindir@,$(bindir),g' \\\n \t -e 's,@datadir@,$(datadir),g' \\\n \t $< >$@\nIndex: src/bin/initdb/initdb.sh\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/initdb/initdb.sh,v\nretrieving revision 1.123\ndiff -c -r1.123 initdb.sh\n*** src/bin/initdb/initdb.sh\t2001/03/27 05:45:50\t1.123\n--- src/bin/initdb/initdb.sh\t2001/05/08 16:16:54\n***************\n*** 1,4 ****\n! #! /bin/sh\n #-------------------------------------------------------------------------\n #\n # initdb creates (initializes) a PostgreSQL database cluster (site,\n--- 1,4 ----\n! #!@SHELL@\n #-------------------------------------------------------------------------\n #\n # initdb creates (initializes) a PostgreSQL database cluster (site,", "msg_date": "Tue, 8 May 2001 12:28:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is `#!/bin/sh' configurable?" }, { "msg_contents": "Alexander Klimov writes:\n\n> On some systems /bin/sh is not Burne Shell, e.g. 
/bin/sh is tcsh, but\n> there is /bin/sh5.\n\n/bin/sh is a Bourne shell on Ultrix, just a particularly old and wacky\none.\n\n> It is looks like there is already knowledge about it in\n> the system: Makefile.ultrix4 has `SHELL=/bin/sh5' in it, but configure\n> thinks something else: config.status has `s%@SHELL@%/bin/sh%g'.\n\nAutoconf inserts this automatically because some makes inherit the value\nof SHELL from the environment, which is a silly thing to do. We don't use\nthis because we use GNU make.\n\n> The result of the mess is that scripts like initdb are installed with\n> `#!/bin/sh', but they has function definition and tcsh complain about\n> usage of '('.\n\nNo, the Ultrix shell simply doesn't support shell functions. AFAIK it's\nthe only Bourne shell still on the planet that does that. The short\nanswer might be not to use shell functions. The one in initdb could\nprobably be replaced by a trap. But the Ultrix /bin/sh is broken in\nsubtle and weird ways beyond that and it's too much effort to work around\nthis. Even the autoconf guys think so these days.\n\nA better answer would probably be making the #! /bin/sh substitutable.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 8 May 2001 18:55:37 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Is `#!/bin/sh' configurable?" }, { "msg_contents": "On 8 May 2001, Doug McNaught wrote:\n\n> Alexander Klimov <ask@wisdom.weizmann.ac.il> writes:\n> \n> > Hi.\n> > \n> > On some systems /bin/sh is not Burne Shell, e.g. /bin/sh is tcsh, but\n> \n> *violent retching sounds*\n> \n> IMHO, any system where /bin/sh doesn't point to an at-least-somewhat\n> Bourne-compatible shell is broken by definition... 
Who perpetrated\n> this atrocity?\n\nSorry, I was misleaded by\n>sh -c 'echo $SHELL'\n/bin/tcsh\n\nThe /bin/sh is sh, but not SysV compatible -- there is /bin/sh5 for that.\n\nRegards,\nASK\n\n", "msg_date": "Thu, 10 May 2001 12:51:08 +0300 (IDT)", "msg_from": "Alexander Klimov <ask@wisdom.weizmann.ac.il>", "msg_from_op": true, "msg_subject": "Re: Is `#!/bin/sh' configurable?" }, { "msg_contents": "\nWe worked around the Ultrix /bin/sh problem for initdb. If that is the\nonly place there is a problem, we can keep the fix for 7.2.\n\n\n> On 8 May 2001, Doug McNaught wrote:\n> \n> > Alexander Klimov <ask@wisdom.weizmann.ac.il> writes:\n> > \n> > > Hi.\n> > > \n> > > On some systems /bin/sh is not Burne Shell, e.g. /bin/sh is tcsh, but\n> > \n> > *violent retching sounds*\n> > \n> > IMHO, any system where /bin/sh doesn't point to an at-least-somewhat\n> > Bourne-compatible shell is broken by definition... Who perpetrated\n> > this atrocity?\n> \n> Sorry, I was misleaded by\n> >sh -c 'echo $SHELL'\n> /bin/tcsh\n> \n> The /bin/sh is sh, but not SysV compatible -- there is /bin/sh5 for that.\n> \n> Regards,\n> ASK\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 09:39:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Is `#!/bin/sh' configurable?" } ]
[ { "msg_contents": "\nOK, now that we have started 7.2 development, I am going to go through\nthe outstanding patches and start to apply them or reject them. They\nare at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI could use help in identifying which patches are a problem. Most of\nthe ones there now have been reviewed by me or have received the\nrecommendation of another developer.\n\nI particularly need JDBC help because I have many JDBC patches.\n\nIf you would send email stating which patches should _not_ be applied, I\nwould appreciate it.\n\nOf course, patches can always be backed out.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 8 May 2001 15:57:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Outstanding patches" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, now that we have started 7.2 development, I am going to go through\n> the outstanding patches and start to apply them or reject them. They\n> are at:\n> \thttp://candle.pha.pa.us/cgi-bin/pgpatches\n> I could use help in identifying which patches are a problem. Most of\n> the ones there now have been reviewed by me or have received the\n> recommendation of another developer.\n\nOkay, I looked through these ...\n\nI do not like Ian Taylor's plpgsql cursor patch; trying to do cursors\ninside plpgsql with no SPI-level support is too much of a kluge. We\nshould first add cursor support to SPI, then fix plpgsql. Much of the\nparsing work he's done could be salvaged, but the implementation can't\nbe. 
(And I don't want to apply it now and back it out later --- it adds\ntoo many warts.)\n\nFernando Nasser's ANALYZE patch is superseded by already-applied work,\nthough if he wants to do the promised test additions I would be happy.\n\nThe PAM support patch concerns me --- it looks like yet another chunk\nof code that will tie up the postmaster in a single-threaded\nconversation with a remote daemon that may or may not respond promptly.\nI recommend holding off on this until we think about whether we\nshouldn't restructure the postmaster to do all authentication work in\nper-client subprocesses.\n\nWe need to discuss whether we like the %TYPE feature proposed by Ian\nTaylor. It seems awfully nonstandard to me, and I'm not sure that the\nvalue is great enough to be worth inventing a nonstandard feature.\nISTM that people don't normally tie functions to tables so tightly that\nit's better to define a function in terms of \"the type of column foo\nof table bar\" than just in terms of the type itself. Ian claims that\nthis is helpful, but is it really likely that you can change that column\ntype without making *any* other mods to the function? Moreover, in\nexchange for this possible benefit you are opening yourself to breaking\nthe function if you choose to rename either the column or the table.\nThe potential net gain seems really small. (If we do like the\nfunctionality, then the patch itself seems OK with the exception of the\ngram.y definition of func_type; the table name should be TokenId not\njust IDENT.)\n\nI did not look at any of the JDBC or libpq++ patches. 
The other stuff\nseemed OK on a first glance.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 May 2001 17:20:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Outstanding patches " }, { "msg_contents": "> Okay, I looked through these ...\n\nThanks.\n\n> I do not like Ian Taylor's plpgsql cursor patch; trying to do cursors\n> inside plpgsql with no SPI-level support is too much of a kluge. We\n> should first add cursor support to SPI, then fix plpgsql. Much of the\n> parsing work he's done could be salvaged, but the implementation can't\n> be. (And I don't want to apply it now and back it out later --- it adds\n> too many warts.)\n\nI know Jan is talking about SPI support for plpgsql. I will keep the\npatch but not apply it.\n\n> Fernando Nasser's ANALYZE patch is superseded by already-applied work,\n> though if he wants to do the promised test additions I would be happy.\n\nI have already emailed him to say you did it already. Removed.\n\n> The PAM support patch concerns me --- it looks like yet another chunk\n> of code that will tie up the postmaster in a single-threaded\n> conversation with a remote daemon that may or may not respond promptly.\n> I recommend holding off on this until we think about whether we\n> shouldn't restructure the postmaster to do all authentication work in\n> per-client subprocesses.\n\nI have no idea what PAM is. If it is a valuable feature, we can\ninstall it. But if it is yet another authentication scheme, it could\nadd more confusion to our already complicated setup. Seems you are\nsaying it is the latter, which is fine with me.\n\n\n> We need to discuss whether we like the %TYPE feature proposed by Ian\n> Taylor. It seems awfully nonstandard to me, and I'm not sure that the\n> value is great enough to be worth inventing a nonstandard feature.\n> ISTM that people don't normally tie functions to tables so tightly that\n> it's better to define a function in terms of \"the type of column foo\n> of table bar\" than just in terms of the type itself. Ian claims that\n> this is helpful, but is it really likely that you can change that column\n> type without making *any* other mods to the function? Moreover, in\n> exchange for this possible benefit you are opening yourself to breaking\n> the function if you choose to rename either the column or the table.\n> The potential net gain seems really small. (If we do like the\n> functionality, then the patch itself seems OK with the exception of the\n> gram.y definition of func_type; the table name should be TokenId not\n> just IDENT.)\n\nI thought it was valuable. I know in Informix 4gl you can set variables\nto track column types, and it helps, especially when you make a column\nlonger or something. It also better documents the variable. I remember\nsomeone else stating this was a nice feature, so I am inclined to apply\nit, with your suggested changes.\n\n> I did not look at any of the JDBC or libpq++ patches. The other stuff\n> seemed OK on a first glance.\n\nDitto. JDBC will need comment from JDBC people. The libpq++ stuff\nlooks pretty good to me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 8 May 2001 17:35:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Outstanding patches" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> We need to discuss whether we like the %TYPE feature proposed by Ian\n>> Taylor. 
It seems awfully nonstandard to me, and I'm not sure that the\n>> value is great enough to be worth inventing a nonstandard feature.\n>> ISTM that people don't normally tie functions to tables so tightly that\n>> it's better to define a function in terms of \"the type of column foo\n>> of table bar\" than just in terms of the type itself. Ian claims that\n>> this is helpful, but is it really likely that you can change that column\n>> type without making *any* other mods to the function? Moreover, in\n>> exchange for this possible benefit you are opening yourself to breaking\n>> the function if you choose to rename either the column or the table.\n>> The potential net gain seems really small. (If we do like the\n>> functionality, then the patch itself seems OK with the exception of the\n>> gram.y definition of func_type; the table name should be TokenId not\n>> just IDENT.)\n\n> I thought it was valuable. I know in Informix 4gl you can set variables\n> to track column types, and it helps, especially when you make a column\n> longer or something. It also better documents the variable.\n\nBut it's not really tracking the variable; with Ian's proposed\nimplementation, after\n\n\t\tcreate table foo(bar int4);\n\n\t\tcreate function fooey(foo.bar%type) ...;\n\n\t\tdrop table foo;\n\n\t\tcreate table foo(bar int8);\n\nyou would still have fooey declared as taking int4 not int8, because\nthe type meant by %type is resolved and frozen immediately upon being\nseen.\n\nMoreover, because of our function-name-overloading feature, fooey(int4)\nand fooey(int8) are two different functions. IMHO it would be a bad\nthing if we *did* try to change the signature. We'd break existing\ncallers of the function, not to mention possibly run into a naming\nconflict with a pre-existing fooey(int8).\n\nI presume that Ian is not thinking about such a scenario, but only about\nusing %type in a schema file that he will reload into a freshly created\ndatabase each time he edits it. 
That avoids the issue of whether %type\ndeclarations can or should track changes on the fly, but I think he's\nstill going to run into problems with function naming: do\nfooey(foo.bar%type) and fooey(foo.baz%type) conflict, or not? Maybe\ntoday the schema works and tomorrow you get an error.\n\nBasically I think that this feature does not coexist well with function\noverloading, and that it's likely to create as much or more grief as it\navoids.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 May 2001 17:49:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Outstanding patches " }, { "msg_contents": "> I presume that Ian is not thinking about such a scenario, but only about\n> using %type in a schema file that he will reload into a freshly created\n> database each time he edits it. That avoids the issue of whether %type\n> declarations can or should track changes on the fly, but I think he's\n> still going to run into problems with function naming: do\n> fooey(foo.bar%type) and fooey(foo.baz%type) conflict, or not? Maybe\n> today the schema works and tomorrow you get an error.\n> \n> Basically I think that this feature does not coexist well with function\n> overloading, and that it's likely to create as much or more grief as it\n> avoids.\n\nBut don't we already have problems with changing functions that use\ntables or does this open a new type of problem? Seems if you define a\nfunction to be based on a column, and the column changes, the person\nshould expect errors.\n\nIf we define things as %TYPE in plpgsql, do we handle cases when the\ncolumn type changes?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 8 May 2001 17:55:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Outstanding patches" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> But don't we already have problems with changing functions that use\n> tables or does this open a new type of problem?\n\nBut this feature isn't about functions that use tables internally;\nit's about tying the fundamental signature of the function to a table.\nI doubt that that's a good idea. It definitely does introduce potential\nfor problems that weren't there before, per the illustrations I already\ngave.\n\nYou commented earlier that it's easy to \"change the width of a column\"\nwith this approach, but that's irrelevant, because atttypmod is not part\nof a function's signature anyhow.\n\n> If we define things as %TYPE in plpgsql, do we handle cases when the\n> column type changes?\n\nSort of, because we just need to drop the cached precompiled version of\nthe function --- you can do that by starting a fresh backend if nothing\nelse, and we have speculated about making it happen automatically.\nChanging a function's signature automatically is a MUCH bigger and\nnastier can of worms, because it affects things outside the function.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 May 2001 18:08:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Outstanding patches " }, { "msg_contents": "On Tue, May 08, 2001 at 05:49:16PM -0400, Tom Lane wrote:\n> I presume that Ian is not thinking about such a scenario, but only about\n> using %type in a schema file that he will reload into a freshly created\n> database each time he edits it. 
That avoids the issue of whether %type\n> declarations can or should track changes on the fly, but I think he's\n> still going to run into problems with function naming: do\n> fooey(foo.bar%type) and fooey(foo.baz%type) conflict, or not? Maybe\n> today the schema works and tomorrow you get an error.\n\nHow about a feature in psql which would read something like '%type' and\nconvert it to the appropriate thing before it passed it to the backend?\nThen you could use it without thinking about it in a script which you\nwould \\i into psql. That would do what's wanted here without having\nany backend nasties. I'm not offering to implement it myself - at least\nnot right now - but does it seem like a sensible idea?\n\nRichard\n", "msg_date": "Tue, 8 May 2001 23:15:53 +0100", "msg_from": "Richard Poole <richard.poole@vi.net>", "msg_from_op": false, "msg_subject": "Re: Re: Outstanding patches" }, { "msg_contents": "Richard Poole <richard.poole@vi.net> writes:\n> How about a feature in psql which would read something like '%type' and\n> convert it to the appropriate thing before it passed it to the backend?\n\nThat's just about what Ian's patch does, only it does it during backend\nparsing instead of in the client. It seems to me that most of the\narguments against it apply either way.\n\nIf we are going to have it, the backend is certainly the right place to\ndo it, rather than adding a huge amount of new smarts to psql.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 May 2001 18:24:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Outstanding patches " }, { "msg_contents": "> But this feature isn't about functions that use tables internally;\n> it's about tying the fundamental signature of the function to a table.\n> I doubt that that's a good idea. 
It definitely does introduce potential\n> for problems that weren't there before, per the illustrations I already\n> gave.\n> \n> You commented earlier that it's easy to \"change the width of a column\"\n> with this approach, but that's irrelevant, because atttypmod is not part\n> of a function's signature anyhow.\n\nYea, that is more an Informix issue.\n\n> > If we define things as %TYPE in plpgsql, do we handle cases when the\n> > column type changes?\n> \n> Sort of, because we just need to drop the cached precompiled version of\n> the function --- you can do that by starting a fresh backend if nothing\n> else, and we have speculated about making it happen automatically.\n> Changing a function's signature automatically is a MUCH bigger and\n> nastier can of worms, because it affects things outside the function.\n\nOK, one idea is to throw a elog(NOTICE) when they use this feature,\nstating that it will not track column changes. Another option is to\njust forget about the feature entirely. Do we have people who like this\nfeature? Speak up now. If not, we will drop it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 8 May 2001 18:43:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: Outstanding patches" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> I do not like Ian Taylor's plpgsql cursor patch; trying to do cursors\n> inside plpgsql with no SPI-level support is too much of a kluge. We\n> should first add cursor support to SPI, then fix plpgsql. Much of the\n> parsing work he's done could be salvaged, but the implementation can't\n> be. 
(And I don't want to apply it now and back it out later --- it adds\n> too many warts.)\n\nI think most of the cursor patch will stand even after SPI-level\nsupport for cursors is added. But it's up to you, of course. 7.2 is\na long way away in any case. I would be happy to rework the patch\nwhen SPI supports cursors.\n\n> We need to discuss whether we like the %TYPE feature proposed by Ian\n> Taylor. It seems awfully nonstandard to me, and I'm not sure that the\n> value is great enough to be worth inventing a nonstandard feature.\n\nOracle PL/SQL supports this, and PL/SQL code that I've seen uses it\nextensively. PL/pgSQL supports %TYPE in all places a type may be\nused, except parameter and return types.\n\n> ISTM that people don't normally tie functions to tables so tightly that\n> it's better to define a function in terms of \"the type of column foo\n> of table bar\" than just in terms of the type itself. Ian claims that\n> this is helpful, but is it really likely that you can change that column\n> type without making *any* other mods to the function?\n\nSure. I've seen code in which all access to the database is done via\nstored procedures. It's natural to write that sort of code using\n%TYPE. Otherwise any change you make to the schema, you have to make\ntwo or three times.\n\nAdmittedly, this may apply mostly to what Postgres calls type\nmodifiers. But it's still a natural way to write the procedure. Why\nduplicate information?\n\n> Moreover, in\n> exchange for this possible benefit you are opening yourself to breaking\n> the function if you choose to rename either the column or the table.\n\nIf you do that you've most likely broken the function anyhow, since\nyou probably wouldn't use %TYPE if you weren't referring to the\ncolumn. Anyhow, if you don't use %TYPE you can break the function in\nthe other way, by changing the type of the column. 
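To make the syntax under discussion concrete, here is a sketch of both the existing and the proposed usage (hypothetical table and function names; per the discussion above, under the patch the parameter and return types would be resolved to int4 once, at CREATE FUNCTION time, and would not track later schema changes):

```sql
CREATE TABLE foo (bar int4);

-- The DECLARE section shows the long-standing PL/pgSQL use of %TYPE;
-- the parameter and return positions show what the patch would newly allow.
CREATE FUNCTION fooey(foo.bar%TYPE) RETURNS foo.bar%TYPE AS '
DECLARE
    v foo.bar%TYPE;
BEGIN
    v := $1;
    RETURN v;
END;' LANGUAGE 'plpgsql';
```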
So I think it's\nsix of one, half-dozen of the other.\n\n> (If we do like the\n> functionality, then the patch itself seems OK with the exception of the\n> gram.y definition of func_type; the table name should be TokenId not\n> just IDENT.)\n\nI think I tried that, and I think it led to lots of reduce/reduce\nerrors. But maybe that was only in 7.0.3.\n\nThe problem that the function type does not change when the schema\nchanges is problematical. I would have been happier if I could have\nleft the %TYPE as a string to be interpreted at execution time. But\nof course that does not work with the current system for function\noverloading.\n\nIan\n\n---------------------------(end of broadcast)---------------------------\nTIP 483: In Lowes Crossroads, Delaware, it is a violation of local law\nfor any pilot or passenger to carry an ice cream cone in their pocket\nwhile either flying or waiting to board a plane.\n", "msg_date": "08 May 2001 18:02:17 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: Re: Outstanding patches" }, { "msg_contents": "Tom Lane wrote:\n\n> But it's not really tracking the variable; with Ian's proposed\n> implementation, after\n> \n> create table foo(bar int4);\n> \n> create function fooey(foo.bar%type) ...;\n> \n> drop table foo;\n> \n> create table foo(bar int8);\n> \n> you would still have fooey declared as taking int4 not int8, because\n> the type meant by %type is resolved and frozen immediately upon being\n> seen.\n\nOk, this is a more general point: in Oracle (which, as Ian points out,\nuses this feature extensively) if you recreate table foo, function fooey\nis tagged as 'dirty' and recompiled on the spot next time is used. This\nis also true for VIEWs and other objects, so you don't have the problem\nwe have when a view breaks because you've updated the underlining table.\n\n-- \nAlessio F. 
Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n", "msg_date": "Wed, 09 May 2001 10:18:12 +0300", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": false, "msg_subject": "Re: Outstanding patches" }, { "msg_contents": "\nIs anybody planning to fix the problem with ALTER TABLE ADD CONSTRAINT...\nin which the constraints are not applied to child tables?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 09 May 2001 20:17:13 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Outstanding patches" }, { "msg_contents": "On Tue, 8 May 2001, Bruce Momjian wrote:\n\n> > The PAM support patch concerns me --- it looks like yet another chunk\n> > of code that will tie up the postmaster in a single-threaded\n> > conversation with a remote daemon that may or may not respond promptly.\n> > I recommend holding off on this until we think about whether we\n> > shouldn't restructure the postmaster to do all authentication work in\n> > per-client subprocesses.\n>\n> I have not idea what PAM is. If it is a valuable feature, we can\n> install it. But if it is yet another authentication scheme, it could\n> add more confusion to our already complicated setup. Seems you are\n> saying it is the latter, which is fine with me.\n\nPAM is a universal interface to many authentication schemes. 
If PostgreSQL\nsupports PAM properly, it can instantly support many different types of\nauthentication, such as UNIX, Kerberos, RADIUS, LDAP, or even\nWindows NT domain authentication. Solaris and most modern Linux\ndistributions (certainly Red Hat) support PAM:\n\nhttp://www.sun.com/solaris/pam/\nhttp://www.kernel.org/pub/linux/libs/pam/\n\nPAM modules are very flexible -- they are even stackable. I've used PAM to\nallow the UW IMAP server running on Red Hat Linux to get its passwords\neither from UNIX authentication or from a Windows NT server, for example.\n\nGiven that this has the potential to reduce the number of places that\nsystem administrators have to maintain passwords, I'd call it a win\noverall, except for that pesky single-threaded issue. You should keep in\nmind, though, that some PAM calls won't involve calls to daemons that\nmight not be responsive. Let's say PAM is configured to check\nUNIX authentication (/etc/passwd and /etc/shadow) for passwords -- there\nis no daemon involved, just calls to C libraries that will return\npromptly. If the PAM config file had something like LDAP authentication\nindicated, you would have a potential issue if the LDAP server did not\nrespond.\n\nAs long as this limitation was documented, though, this would be a very\nvaluable addition. 
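As an illustration of the stacking just described, a PAM service file for that kind of setup might look roughly like this (hypothetical service name and module choices; exact module names vary by platform):

```
# /etc/pam.d/postgresql  -- hypothetical service entry
# Try local UNIX passwords first; fall back to an NT domain check.
auth     sufficient  pam_unix.so
auth     required    pam_smb_auth.so
account  required    pam_unix.so
```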
A release note saying that the feature was\nexperimental, and outlining the limitations in the face of choosing an\nauthentication scheme that may fail to answer might be appropriate.\n\n --\n Richard Bullington-McGuire <rbulling@microstate.com>\n Chief Technology Officer, The Microstate Corporation\n Phone: 703-796-6446 URL: http://www.microstate.com/\n PGP key IDs: RSA: 0x93862305 DH/DSS: 0xDAC3028E\n\n\n", "msg_date": "Wed, 9 May 2001 08:03:18 -0400 (EDT)", "msg_from": "Richard Bullington-McGuire <rbulling@microstate.com>", "msg_from_op": false, "msg_subject": "Re: Outstanding patches" }, { "msg_contents": "> \n> Is anybody planning to fix the problem with ALTER TABLE ADD CONSTRAINT...\n> in which the constraints are not applied to child tables?\n\nI thought we had not figured out how to inherit those, or at least\ncertain constraints like UNIQUE. We do have on TODO:\n\n\t* Allow inherited tables to inherit index, UNIQUE constraint, and\n\tprimary key [inheritance]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 May 2001 09:36:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Outstanding patches" }, { "msg_contents": "At 09:36 9/05/01 -0400, Bruce Momjian wrote:\n>> \n>> Is anybody planning to fix the problem with ALTER TABLE ADD CONSTRAINT...\n>> in which the constraints are not applied to child tables?\n>\n>I thought we had not figured out how to inherit those, or at least\n>certain constraints like UNIQUE. 
We do have on TODO:\n>\n>\t* Allow inherited tables to inherit index, UNIQUE constraint, and\n>\tprimary key [inheritance]\n>\n\naaa=# create table t1(f1 integer check(f1<>0),primary key (f1));\naaa=# create table t1c() inherits (t1);\naaa=# \\d t1c\n Table \"t1c\"\n Attribute | Type | Modifier\n-----------+---------+----------\n f1 | integer | not null\nConstraint: (f1 <> 0)\n\nSo PK is not inherited, but CHECK (and implied NOT NULL) seem to be.\n\nWhereas,\n\naaa=# create table t1(f1 integer);\naaa=# create table t1c() inherits (t1);\naaa=# alter table t1 add constraint aaa check(f1<>0);\naaa=# \\d t1c\n Table \"t1c\"\n Attribute | Type | Modifier\n-----------+---------+----------\n f1 | integer |\n\nie. The CHECK constraints inherit only at the time of table creation. I\nthink this is a bug in ALTER TABLE for CHECK constraints.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 10 May 2001 00:29:41 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Outstanding patches" }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Is anybody planning to fix the problem with ALTER TABLE ADD CONSTRAINT...\n> in which the constraints are not applied to child tables?\n\nAFAIK no one is looking at it presently (although Stephan Szabo has\nprobably thought about it). If you want to tackle it, step right up,\nbut coordinate with Stephan.\n\nI was just in the vicinity of ALTER TABLE, and noted that that routine\ndidn't have the same loop-over-children superstructure that most of the\nother ALTER code does. 
Should be a relatively simple matter to graft\nthat logic onto it, unless there are semantic funnies that come up with\npropagating the new constraint.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 May 2001 11:00:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Outstanding patches " }, { "msg_contents": "Alessio Bragadini <alessio@albourne.com> writes:\n> Tom Lane wrote:\n>> But it's not really tracking the variable; with Ian's proposed\n>> implementation, after\n>> \n>> create table foo(bar int4);\n>> \n>> create function fooey(foo.bar%type) ...;\n>> \n>> drop table foo;\n>> \n>> create table foo(bar int8);\n>> \n>> you would still have fooey declared as taking int4 not int8, because\n>> the type meant by %type is resolved and frozen immediately upon being\n>> seen.\n\n> Ok, this is a more general point: in Oracle (which, as Ian points out,\n> uses this feature extensively) if you recreate table foo, function fooey\n> is tagged as 'dirty' and recompiled on the spot next time is used. This\n> is also true for VIEWs and other objects, so you don't have the problem\n> we have when a view breaks because you've updated the underlining table.\n\nIndeed, and we have plans to do something similar sometime soon. My\nreal objection to this proposed feature is that there is no way to\nhandle the update as a local matter within the function, because\nchanging the function's input datatypes actually means it's a different\nfunction. This creates all sorts of problems at both the definitional\nand implementation levels...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 May 2001 11:06:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Outstanding patches " }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> ie. The CHECK constraints inherit only at the time of table creation. 
I\n> think this is a bug in ALTER TABLE for CHECK constraints.\n\nMore like an \"unimplemented feature\" ;-).\n\nAfter thinking for a moment, I believe the only real gotcha that could\narise here is to make sure that the constraint is adjusted for the\npossibly-different column numbers in each child table. There is code\navailable to make this happen, or it might happen for free if you can\npostpone parse analysis of the raw constraint tree until you are looking\nat each child table. Just something to keep in mind and test while\nyou're doing it ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 May 2001 11:21:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Outstanding patches " }, { "msg_contents": "> > Ok, this is a more general point: in Oracle (which, as Ian points out,\n> > uses this feature extensively) if you recreate table foo, function fooey\n> > is tagged as 'dirty' and recompiled on the spot next time is used. This\n> > is also true for VIEWs and other objects, so you don't have the problem\n> > we have when a view breaks because you've updated the underlining table.\n> \n> Indeed, and we have plans to do something similar sometime soon. My\n> real objection to this proposed feature is that there is no way to\n> handle the update as a local matter within the function, because\n> changing the function's input datatypes actually means it's a different\n> function. This creates all sorts of problems at both the definitional\n> and implementation levels...\n\nDoes this relate to allowing functions to be recreated with the same OID\nas the original function? I think we need that badly for 7.2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 May 2001 11:53:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: Outstanding patches" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Does this relate to allowing functions to be recreated with the same OID\n> as the original function? I think we need that badly for 7.2.\n\nNo, I don't think that's very related; that's a simple matter of\nimplementing an ALTER FUNCTION command. The other thing will require\nfiguring out how to do dependency tracking.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 May 2001 12:01:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Outstanding patches " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Does this relate to allowing functions to be recreated with the same OID\n> > as the original function? I think we need that badly for 7.2.\n> \n> No, I don't think that's very related; that's a simple matter of\n> implementing an ALTER FUNCTION command. The other thing will require\n> figuring out how to do dependency tracking.\n\nGot it. Let me ask, if they change the column type, would they use\nALTER FUNCTION to then update to match the new column type. As I\nunderstand it, the problem is that this does not happen automatically,\nright?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 May 2001 12:03:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: Outstanding patches" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> No, I don't think that's very related; that's a simple matter of\n>> implementing an ALTER FUNCTION command. 
The other thing will require\n>> figuring out how to do dependency tracking.\n\n> Got it. Let me ask, if they change the column type, would they use\n> ALTER FUNCTION to then update to match the new column type. As I\n> understand it, the problem is that this does not happen automatically,\n> right?\n\nMy vision of ALTER FUNCTION is that it would let you change the function\nbody, and perhaps also the function language and attributes (isCachable,\nisStrict). It would NOT allow you to change the function's parameter\ntypes or return type, because that potentially breaks things that depend\non the function. To do that, you should have to create a new function.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 May 2001 12:07:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Outstanding patches " }, { "msg_contents": "\nOn Wed, 9 May 2001, Philip Warner wrote:\n\n> \n> Is anybody planning to fix the problem with ALTER TABLE ADD CONSTRAINT...\n> in which the constraints are not applied to child tables?\n> \n\nI'm working on the check constraint case (didn't realize that\nthose inherited since unique, primary key and foreign key\ncan't right now). We need to really figure out what we're going to\ndo with the other constraints to make them inherit.\n\n\n", "msg_date": "Wed, 9 May 2001 09:13:37 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Outstanding patches" }, { "msg_contents": "Bruce Momjian writes:\n\n> I could use help in identifying which patches are a problem. 
Most of\n> the ones there now have been reviewed by me or have received the\n> recommendation of another developer.\n\nI have a few stylistic issues with the createlang patch, but I can work on\ninstalling it myself.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 9 May 2001 20:59:38 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Outstanding patches" }, { "msg_contents": "I'll have a look-see since it's in the vicinity of the code I'm messing with\nat the moment. Time is my problem, though...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\n> Sent: Wednesday, 9 May 2001 11:00 PM\n> To: Philip Warner\n> Cc: PostgreSQL-development\n> Subject: Re: [HACKERS] Outstanding patches\n>\n>\n> Philip Warner <pjw@rhyme.com.au> writes:\n> > Is anybody planning to fix the problem with ALTER TABLE ADD\n> CONSTRAINT...\n> > in which the constraints are not applied to child tables?\n>\n> AFAIK no one is looking at it presently (although Stephan Szabo has\n> probably thought about it). If you want to tackle it, step right up,\n> but coordinate with Stephan.\n>\n> I was just in the vicinity of ALTER TABLE, and noted that that routine\n> didn't have the same loop-over-children superstructure that most of the\n> other ALTER code does. 
Should be a relatively simple matter to graft\n> that logic onto it, unless there are semantic funnies that come up with\n> propagating the new constraint.\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Thu, 10 May 2001 09:50:21 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: Outstanding patches " }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > OK, now that we have started 7.2 development, I am going to go through\n> > the outstanding patches and start to apply them or reject them. They\n> > are at:\n> > \n> > http://candle.pha.pa.us/cgi-bin/pgpatches\n> > \n> \n> Has the patch that makes MOVE return number of rows actually moved\n> (analoguous \n> to UPDATE and DELETE) been properly submitted to patches ?\n\nI know MOVE had fixes in 7.1. I don't know of any outstanding MOVE\nbugs.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 10:13:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Outstanding patches" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> OK, now that we have started 7.2 development, I am going to go through\n> the outstanding patches and start to apply them or reject them. 
They\n> are at:\n> \n> http://candle.pha.pa.us/cgi-bin/pgpatches\n> \n\nHas the patch that makes MOVE return number of rows actually moved\n(analoguous \nto UPDATE and DELETE) been properly submitted to patches ?\n\n--------------------\nHannu\n", "msg_date": "Thu, 10 May 2001 16:14:28 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Outstanding patches" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Has the patch that makes MOVE return number of rows actually moved\n>> (analoguous to UPDATE and DELETE) been properly submitted to patches ?\n\n> I know MOVE had fixes in 7.1. I don't know of any outstanding MOVE\n> bugs.\n\nIt wasn't a bug, it was a feature ;-)\n\nBruce did not have that patch on his list of things-to-apply, so either\nit was never properly submitted or it slipped through the cracks.\nAnyone want to dig it up and verify it against 7.1?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 May 2001 11:07:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Outstanding patches " }, { "msg_contents": "> Here are the patches. Please look at them, and if you think \n> it's a good idea, then please let me know where and how should\n> I post them, and approximately when will you finish with the\n> beta testing, so it can be really considered seriously.\n> \n> I included them also as an attachment, because my silly pine\n> insists to break the lines...\n\nLooks fine to me. 
I don't remember ever seeing this before.\n\n> p.s.: I read a page on your homepage, called \"unapplied patches\".\n> I would like to know if it means \"still unapplied patches\", or\n> it means \"bad, and not accepted ideas\".\n\nIt means it is probably good, waiting for approval from others, or in\nthe standard one-delay before applying patches.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 12:12:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: [HACKERS] Outstanding patches" }, { "msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Has the patch that makes MOVE return number of rows actually moved\n> >> (analoguous to UPDATE and DELETE) been properly submitted to patches ?\n> \n> > I know MOVE had fixes in 7.1. I don't know of any outstanding MOVE\n> > bugs.\n> \n> It wasn't a bug, it was a feature ;-)\n> \n> Bruce did not have that patch on his list of things-to-apply, so either\n> it was never properly submitted or it slipped through the cracks.\n> Anyone want to dig it up and verify it against 7.1?\n\nI forward it here so you don't have to dig it up:\n-----------------------------------------------------------------------\nHi.\n\nA few weeks (months?) ago I made a patch to the postgres\nbackend to get back the number of realized moves after\na MOVE command. So if I issue a \"MOVE 100 IN cusrorname\",\nbut there was only 66 rows left, I get back not only \"MOVE\",\nbut \"MOVE 66\". If the 100 steps could be realized, then\n\"MOVE 100\" would come back. \n\nI send this info to you, because I would like to ask you if\nit could be OK to include in future versions. 
I think you\nare in a beta testing phase now, so it is trivially not the\ntime to include it now...\n\nThe solution is 2 code lines into command.c, and then the\nmessage of move cames with the number into for example psql.\n1 other word into the jdbc driver, and this number is\navailable at update_count.\n\nI made the patch to the latest (one day old) CVS version.\n\nHere are the patches. Please look at them, and if you think \nit's a good idea, then please let me know where and how should\nI post them, and approximately when will you finish with the\nbeta testing, so it can be really considered seriously.\n\nI included them also as an attachment, because my silly pine\ninsists to break the lines...\n\n--- ./src/backend/commands/command.c.orig\tFri Mar 23 05:49:52 2001\n+++ ./src/backend/commands/command.c\tSat Apr 7 10:24:27 2001\n@@ -174,6 +174,12 @@\n \t\tif (!portal->atEnd)\n \t\t{\n \t\t\tExecutorRun(queryDesc, estate, EXEC_FOR, (long) count);\n+\n+\t\t\t/* I use CMD_UPDATE, because no CMD_MOVE or the like\n+\t\t\t exists, and I would like to provide the same\n+\t\t\t kind of info as CMD_UPDATE */\n+\t\t\tUpdateCommandInfo(CMD_UPDATE, 0, estate->es_processed);\n+\n \t\t\tif (estate->es_processed > 0)\n \t\t\t\tportal->atStart = false;\t\t/* OK to back up now */\n \t\t\tif (count <= 0 || (int) estate->es_processed < count)\n@@ -185,6 +191,12 @@\n \t\tif (!portal->atStart)\n \t\t{\n \t\t\tExecutorRun(queryDesc, estate, EXEC_BACK, (long) count);\n+\n+\t\t\t/* I use CMD_UPDATE, because no CMD_MOVE or the like\n+\t\t\t exists, and I would like to provide the same\n+\t\t\t kind of info as CMD_UPDATE */\n+\t\t\tUpdateCommandInfo(CMD_UPDATE, 0, -1*estate->es_processed);\n+\n \t\t\tif (estate->es_processed > 0)\n \t\t\t\tportal->atEnd = false;\t/* OK to go forward now */\n \t\t\tif (count <= 0 || (int) estate->es_processed < count)\n\n\n\nHere is the patch for the jdbc driver. >! 
I couldn't test it\nwith the current version, because it needs ant, and I didn't\nhave time and money today to download it... !< However, it\nis a trivial change, and if Peter T. Mount reads it, I ask\nhim to check if he likes it... Thanks for any kind of answer. \n\n--- ./src/interfaces/jdbc/org/postgresql/Connection.java.orig\tWed Jan 31\n09:26:01 2001\n+++ ./src/interfaces/jdbc/org/postgresql/Connection.java\tSat Apr 7\n16:42:04 2001\n@@ -490,7 +490,7 @@\n \t\t\t recv_status =\npg_stream.ReceiveString(receive_sbuf,8192,getEncoding());\n \n \t\t\t\t// Now handle the update count correctly.\n-\t\t\t\tif(recv_status.startsWith(\"INSERT\") ||\nrecv_status.startsWith(\"UPDATE\") || recv_status.startsWith(\"DELETE\")) {\n+\t\t\t\tif(recv_status.startsWith(\"INSERT\") ||\nrecv_status.startsWith(\"UPDATE\") || recv_status.startsWith(\"DELETE\") ||\nrecv_status.startsWith(\"MOVE\")) {\n \t\t\t\t\ttry {\n \t\t\t\t\t\tupdate_count =\nInteger.parseInt(recv_status.substring(1+recv_status.lastIndexOf(' ')));\n \t\t\t\t\t} catch(NumberFormatException nfe) {\n\n\n-------------------\n(This last looks a bit complex, but the change is really a new \n\"|| recv_status.startsWith(\"MOVE\")\" only...)\n\n\nThank you for having read this, \n\nBaldvin\n\np.s.: I read a page on your homepage, called \"unapplied patches\".\nI would like to know if it means \"still unapplied patches\", or\nit means \"bad, and not accepted ideas\".\n", "msg_date": "Thu, 10 May 2001 18:14:03 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Outstanding patches" }, { "msg_contents": "> +\t\t\t/* I use CMD_UPDATE, because no CMD_MOVE or the like\n> +\t\t\t exists, and I would like to provide the same\n> +\t\t\t kind of info as CMD_UPDATE */\n> +\t\t\tUpdateCommandInfo(CMD_UPDATE, 0, -1*estate->es_processed);\n\nI do not think it is a good idea to return a negative count for a\nbackwards move; that is too likely to break client code that parses\ncommand 
result strings and isn't expecting minus signs. The client\nshould know whether he issued MOVE FORWARD or MOVE BACKWARDS anyway,\nso just returning es_processed ought to be sufficient.\n\nOtherwise I think the patch is probably OK.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 May 2001 16:52:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Outstanding patches " }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > OK, now that we have started 7.2 development, I am going to go through\n> > the outstanding patches and start to apply them or reject them. They\n> > are at:\n> > \n> > http://candle.pha.pa.us/cgi-bin/pgpatches\n> > \n> \n> Has the patch that makes MOVE return number of rows actually moved\n> (analoguous \n> to UPDATE and DELETE) been properly submitted to patches ?\n\nYes, it is on the page to be applied, but the page also has Tom Lane's\nobjection to returning a negative value. Can you fix that and resubmit?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 May 2001 17:38:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: [HACKERS] Outstanding patches" }, { "msg_contents": "Ian Lance Taylor wrote:\n>\n> Oracle PL/SQL supports this, and PL/SQL code that I've seen uses it\n> extensively. PL/pgSQL supports %TYPE in all places a type may be\n> used, except parameter and return types.\n\n It's not PL/pgSQL's fault here. The pg_proc entries are\n created by the CREATE FUNCTION utility command that's used\n for all languages. So what we're talking about affects SQL,\n C, PL/Tcl, PL/Perl, PL/Python and whatnot too. 
PL/pgSQL might\n live with that very well, because it has some automatic type\n conversion (using the actual values typoutput and the needed\n types typinput functions) to convert values on the fly. But\n a C function receiving a different type all of a sudden is a\n good candidate to coredump the backend.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Tue, 15 May 2001 11:41:51 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Re: Outstanding patches" }, { "msg_contents": "\nCan this patch be resubmitted with a postive-only return value?\n\n\n> > +\t\t\t/* I use CMD_UPDATE, because no CMD_MOVE or the like\n> > +\t\t\t exists, and I would like to provide the same\n> > +\t\t\t kind of info as CMD_UPDATE */\n> > +\t\t\tUpdateCommandInfo(CMD_UPDATE, 0, -1*estate->es_processed);\n> \n> I do not think it is a good idea to return a negative count for a\n> backwards move; that is too likely to break client code that parses\n> command result strings and isn't expecting minus signs. 
The client\n> should know whether he issued MOVE FORWARD or MOVE BACKWARDS anyway,\n> so just returning es_processed ought to be sufficient.\n> \n> Otherwise I think the patch is probably OK.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 28 May 2001 10:14:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: [HACKERS] Outstanding patches" }, { "msg_contents": "> > +\t\t\t/* I use CMD_UPDATE, because no CMD_MOVE or the like\n> > +\t\t\t exists, and I would like to provide the same\n> > +\t\t\t kind of info as CMD_UPDATE */\n> > +\t\t\tUpdateCommandInfo(CMD_UPDATE, 0, -1*estate->es_processed);\n> \n> I do not think it is a good idea to return a negative count for a\n> backwards move; that is too likely to break client code that parses\n> command result strings and isn't expecting minus signs. The client\n> should know whether he issued MOVE FORWARD or MOVE BACKWARDS anyway,\n> so just returning es_processed ought to be sufficient.\n> \n> Otherwise I think the patch is probably OK.\n\nI have applied this patch with does MOVE output for both the backend and\njdbc. I tested the JDBC patch by compiling, and changed the backend to\nonly output postitive values.\n\nThanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/commands/command.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/commands/command.c,v\nretrieving revision 1.131\ndiff -c -r1.131 command.c\n*** src/backend/commands/command.c\t2001/05/30 13:00:03\t1.131\n--- src/backend/commands/command.c\t2001/06/07 00:03:44\n***************\n*** 176,181 ****\n--- 176,187 ----\n \t\tif (!portal->atEnd)\n \t\t{\n \t\t\tExecutorRun(queryDesc, estate, EXEC_FOR, (long) count);\n+ \t\t\t/*\n+ \t\t\t *\tI use CMD_UPDATE, because no CMD_MOVE or the like\n+ \t\t\t *\texists, and I would like to provide the same\n+ \t\t\t *\tkind of info as CMD_UPDATE\n+ \t\t\t */\n+ \t\t\tUpdateCommandInfo(CMD_UPDATE, 0, estate->es_processed);\n \t\t\tif (estate->es_processed > 0)\n \t\t\t\tportal->atStart = false;\t\t/* OK to back up now */\n \t\t\tif (count <= 0 || (int) estate->es_processed < count)\n***************\n*** 187,192 ****\n--- 193,204 ----\n \t\tif (!portal->atStart)\n \t\t{\n \t\t\tExecutorRun(queryDesc, estate, EXEC_BACK, (long) count);\n+ \t\t\t/*\n+ \t\t\t *\tI use CMD_UPDATE, because no CMD_MOVE or the like\n+ \t\t\t *\texists, and I would like to provide the same\n+ \t\t\t *\tkind of info as CMD_UPDATE\n+ \t\t\t */\n+ \t\t\tUpdateCommandInfo(CMD_UPDATE, 0, estate->es_processed);\n \t\t\tif (estate->es_processed > 0)\n \t\t\t\tportal->atEnd = false;\t/* OK to go forward now */\n \t\t\tif (count <= 0 || (int) estate->es_processed < count)\nIndex: src/interfaces/jdbc/org/postgresql/Connection.java\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/jdbc/org/postgresql/Connection.java,v\nretrieving revision 1.16\ndiff -c -r1.16 Connection.java\n*** src/interfaces/jdbc/org/postgresql/Connection.java\t2001/06/01 20:57:58\t1.16\n--- src/interfaces/jdbc/org/postgresql/Connection.java\t2001/06/07 00:03:56\n***************\n*** 
505,511 ****\n \t\t\t recv_status = pg_stream.ReceiveString(receive_sbuf,8192,getEncoding());\n \n \t\t\t\t// Now handle the update count correctly.\n! \t\t\t\tif(recv_status.startsWith(\"INSERT\") || recv_status.startsWith(\"UPDATE\") || recv_status.startsWith(\"DELETE\")) {\n \t\t\t\t\ttry {\n \t\t\t\t\t\tupdate_count = Integer.parseInt(recv_status.substring(1+recv_status.lastIndexOf(' ')));\n \t\t\t\t\t} catch(NumberFormatException nfe) {\n--- 505,511 ----\n \t\t\t recv_status = pg_stream.ReceiveString(receive_sbuf,8192,getEncoding());\n \n \t\t\t\t// Now handle the update count correctly.\n! \t\t\t\tif(recv_status.startsWith(\"INSERT\") || recv_status.startsWith(\"UPDATE\") || recv_status.startsWith(\"DELETE\") || recv_status.startsWith(\"MOVE\")) {\n \t\t\t\t\ttry {\n \t\t\t\t\t\tupdate_count = Integer.parseInt(recv_status.substring(1+recv_status.lastIndexOf(' ')));\n \t\t\t\t\t} catch(NumberFormatException nfe) {", "msg_date": "Wed, 6 Jun 2001 20:08:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: [HACKERS] Outstanding patches" } ]
[ { "msg_contents": "If it's true that the ALTER TABLE x ADD CONSTRAINT x CHECK (x) syntax is\nsupported in 7.1.1, here is a patch to that alter_table.sgml that documents\nit.\n\nChris", "msg_date": "Wed, 9 May 2001 15:00:08 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Patch to ALTER TABLE docs" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> If it's true that the ALTER TABLE x ADD CONSTRAINT x CHECK (x) syntax is\n> supported in 7.1.1, here is a patch to that alter_table.sgml that documents\n> it.\n> \n> Chris\n\n\nThanks. Applied to 7.2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 May 2001 09:27:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to ALTER TABLE docs" } ]
[ { "msg_contents": "\n> > The connect group would be granted these System Privileges:\n\nIf we keep it like others (e.g. Informix) this System Privilege would be called\n\"resource\". I like this name better, because it more closely describes the detailed \nprivileges.\n\n> > \n> > CREATE AGGREGATE privilege\n> > CREATE INDEX privilege\n> > CREATE FUNCTION privilege\n> > CREATE OPERATOR privilege\n> > CREATE RULE privilege\n> > CREATE SESSION privilege\n> > CREATE SYNONYM privilege\n> > CREATE TABLE privilege\n> > CREATE TRIGGER privilege\n> > CREATE TYPE privilege\n> > CREATE VIEW privilege\n\nThe \"connect\" group would only have the privilege to connect to the db [and\ncreate temp tables ?] and rights they were granted, or that were granted to public.\nThey would not be allowed to create anything.\n\nAndreas\n", "msg_date": "Wed, 9 May 2001 09:20:28 +0200 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: NOCREATETABLE patch (was: Re: Please, help!(about P\n\tostgres))" } ]
[ { "msg_contents": ">> We need to discuss whether we like the %TYPE feature proposed by Ian\n\n> OK, one idea is to throw an elog(NOTICE) when they use this feature,\n> stating that it will not track column changes. Another option is to\n> just forget about the feature entirely. Do we have people \n> who like this feature? Speak up now. If not, we will drop it.\n\nI say drop it. (Never used it in Informix either, even though we have some heavy\nstored procedure writers). In Informix the varlen is also not part of the signature, \nit only states the longest accepted value.\n\nAndreas\n", "msg_date": "Wed, 9 May 2001 09:42:05 +0200 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Re: Outstanding patches" } ]
[ { "msg_contents": "\n> > Tom's suggestion does not sound reasonable to me. If PostgreSQL is not\n> > built with MULTIBYTE, then it means there would be no such idea\n> > \"encoding\" in PostgreSQL becuase there is no program to handle\n> > encodings. Thus it would be meaningless to assign an \"encoding\" to a\n> > database if MULTIBYTE is not enabled.\n> \n> Why? Without the MULTIBYTE code, the backend cannot perform character\n> set translations --- but it's perfectly possible that someone might not\n> need translations. A lot of European sites are probably very happy\n> as long as the server gives them back the same 8-bit characters they\n> stored.\n\nYes, that is what we do (German language). Encoding is Latin1.\nWould it not be reasonable to return the machine LC_CTYPE in the non multibyte case ?\n\nAndreas\n", "msg_date": "Wed, 9 May 2001 09:51:01 +0200 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: MULTIBYTE and SQL_ASCII (was Re: [JDBC] Re: A bug w\n\tith pgsql 7.1/jdbc and non-ascii (8-bit) chars?)" } ]
[ { "msg_contents": "Hi all!\n\nI have to convert functions and procedures from Oracle to PostgreSQL. I \nlooked at all the stuff of the Pg-Homepage and I ask me if there are any \ntools, that support the conversion. \n\nWriting PS/PGSQL tools seems to be a bit hard, because of the existing \ntool-infrastructure on linux. Are there are tools I have overseen?\n\nI have implemented the following tools for my use yet:\n\n- A WWWdb-Application for editing and testing of SQL-Procedures over a\n WEB-frontend\n- A perl-script, that does basic conversions between PL/SQL <-> XML <->\n PL/PGSQL (The Procedure-definition is converted completely, the code-block\n a little bit)\n\nWho else is working in this area? Any tips?\n\nRegards, Klaus\n\nVisit WWWdb at\nhttp://wwwdb.org\n", "msg_date": "Wed, 9 May 2001 12:24:40 +0200", "msg_from": "Klaus Reger <K.Reger@wwwdb.de>", "msg_from_op": true, "msg_subject": "Converting PL/SQL to PL/PGSQL" }, { "msg_contents": "Hello,\n\nPgAdmin http://www.greatbridge.org/project/pgadmin/projdisplay.php is the \nwindows administration interface of PostgreSQL.\nThe new upcoming version features a function, trigger and view IDE. When \nfunctions are modified, it is possible to rebuild dependencies.\nIt is the perfect tool for writing PL/PgSQL Wait a few days before it is \nready...\n\nGreetings from Jean-Michel POURE, Paris, France\n\nAt 12:24 09/05/01 +0200, you wrote:\n>Hi all!\n>\n>I have to convert functions and procedures from Oracle to PostgreSQL. I\n>looked at all the stuff of the Pg-Homepage and I ask me if there are any\n>tools, that support the conversion.\n>\n>Writing PS/PGSQL tools seems to be a bit hard, because of the existing\n>tool-infrastructure on linux. 
Are there are tools I have overseen?\n>\n>I have implemented the following tools for my use yet:\n>\n>- A WWWdb-Application for editing and testing of SQL-Procedures over a\n> WEB-frontend\n>- A perl-script, that does basic conversions between PL/SQL <-> XML <->\n> PL/PGSQL (The Procedure-definition is converted completely, the code-block\n> a little bit)\n>\n>Who else is working in this area? Any tips?\n>\n>Regards, Klaus\n>\n>Visit WWWdb at\n>http://wwwdb.org\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n>subscribe-nomail command to majordomo@postgresql.org so that your\n>message can get through to the mailing list cleanly\n\n", "msg_date": "Fri, 11 May 2001 22:48:08 +0200", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": false, "msg_subject": "Re: Converting PL/SQL to PL/PGSQL" } ]
[ { "msg_contents": "That makes perfect sense to me. I was only going by what System \nPrivileges are granted to the Oracle roles of the same name. Oracle \nhas:\n\nCONNECT -\nALTER SESSION\t\nCREATE CLUSTER\t\nCREATE DATABASE LINK\t\nCREATE SEQUENCE\t\nCREATE SESSION\t\nCREATE SYNONYM\t\nCREATE TABLE\t\nCREATE VIEW\t\n\nRESOURCE -\nCREATE CLUSTER\t\nCREATE PROCEDURE\t\nCREATE SEQUENCE\t\nCREATE TABLE\t\nCREATE TRIGGER\t\n\nDBA -\nAll systems privileges WITH ADMIN OPTION\t\n\nBut I agree with you. When I was first learning Oracle, I thought it \nstrange that the CONNECT role had anything more than CREATE/ALTER \nSESSION privilege.\n\nMike Mascari\nmascarm@mascari.com\n\n-----Original Message-----\nFrom:\tZeugswetter Andreas SB [SMTP:ZeugswetterA@wien.spardat.at]\nSent:\tWednesday, May 09, 2001 3:20 AM\nTo:\t'Bruce Momjian'; mascarm@mascari.com\nCc:\tKarel Zak; pgsql-hackers\nSubject:\tAW: [HACKERS] NOCREATETABLE patch (was: Re: Please, \nhelp!(about P\tostgres))\n\n\n> > The connect group would be granted these System Privileges:\n\nIf we keep it like others (e.g. Informix) this System Privilege would \nbe called\n\"resource\". I like this name better, because it more describes the \ndetailed\npriviledges.\n\n> >\n> > CREATE AGGREGATE privilege\n> > CREATE INDEX privilege\n> > CREATE FUNCTION privilege\n> > CREATE OPERATOR privilege\n> > CREATE RULE privilege\n> > CREATE SESSION privilege\n> > CREATE SYNONYM privilege\n> > CREATE TABLE privilege\n> > CREATE TRIGGER privilege\n> > CREATE TYPE privilege\n> > CREATE VIEW privilege\n\nThe \"connect\" group would only have the priviledge to connect to the \ndb [and\ncreate temp tables ?] and rights they where granted, or that were \ngranted to public.\nThey would not be allowed to create anything.\n\nAndreas\n\n", "msg_date": "Wed, 9 May 2001 09:29:01 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": true, "msg_subject": "RE: NOCREATETABLE patch (was: Re: Please, help!(about P\tostgres))" } ]
[ { "msg_contents": "> Date: Fri, 16 Mar 2001 22:58:42 +1100\n> From: Philip Warner <pjw@rhyme.com.au>\n> To: kovzol@math.u-szeged.hu\n> Subject: Re: I cannot force pg_dump to disable triggers\n> \n> At 12:49 16/03/01 +0100, kovacsz wrote:\n> >I downloaded the current snapshot and realized that you changed the\n> >dumping behaviour about disabling and enabling triggers. Unfortunately I\n> >couldn't find the appropriate switches to get the same output before you\n> >made this change (I mean beta4). Could you please help? In beta4 I used\n> >-xacnDO for the desired result. Now I never get any lines in the output\n> >which contains \"-- Enable triggers\" or \"-- Disable triggers\".\n> \n> I just tried:\n> \n> pg_dump -xacnDO pjw\n> \n> and got the enable/disable stuff. Can you check that you did a 'make\n> distclean' in pg_dump before you ran the dump? The latest rule is that it\n> only does an enable/disable if the dump (or restore) is data-only.\nWell, I stopped trying it in March but I'm in a need of changing to 7.1 (I\nshould use Tom's patch). I did a 'make distclean' but no difference: there\nare no lines switching the triggers on/off. I'm using \"PostgreSQL 7.1 on\ni686-pc-linux-gnu, compiled by GCC egcs-2.91.66\".\n\nTIA, Zoltan\n\n", "msg_date": "Wed, 9 May 2001 17:23:34 +0200 (CEST)", "msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>", "msg_from_op": true, "msg_subject": "I still cannot force pg_dump to disable triggers" }, { "msg_contents": "At 17:23 9/05/01 +0200, Kovacs Zoltan wrote:\n>Well, I stopped trying it in March but I'm in a need of changing to 7.1 (I\n>should use Tom's patch). I did a 'make distclean' but no difference: there\n>are no lines switching the triggers on/off. I'm using \"PostgreSQL 7.1 on\n>i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\".\n\nIt's because the data-only dump is (incorrectly) dumping the COMMENTs, and\ntreats SEQUENCE SET entries as schema entries, not data entries. 
It will be\nfixed soon.\n \n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 10 May 2001 03:09:53 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: I still cannot force pg_dump to disable triggers" } ]
[ { "msg_contents": "I had a thought just now about how to deal with the TODO item about\ncoping with deferred trigger lists that are so long as to overrun\nmain memory. This might be a bit harebrained, but I offer it for\nconsideration:\n\nWhat we need to do, at the end of a transaction in which deferred\ntriggers were fired, is to find each tuple that was inserted or\nupdated in the current transaction in each table that has such\ntriggers. Well, we know where those tuples are: to a first\napproximation, they're all near the end of the table. Perhaps instead\nof storing each and every trigger-related tuple in memory, we only need\nto store one value per affected table: the lowest CTID of any tuple\nthat we need to revisit for deferred-trigger purposes. At the end of\nthe transaction, scan forward from that point to the end of the table,\nlooking for tuples that were inserted by the current xact. Process each\none using the table's list of deferred triggers.\n\nInstead of a list of all tuples subject to deferred triggers, we now\nneed only a list of all tables subject to deferred triggers, which\nshould pose no problems for memory consumption. It might be objected\nthat this means more disk activity --- but in an xact that hasn't\ninserted very many tuples, most likely the disk blocks containing 'em\nare still in memory and won't need a physical re-read. Once we get to\ninserting so many tuples that that's not true, this approach should\nrequire less disk activity overall than the previous idea of writing\n(and re-reading) a separate disk file for the tuple list.\n\nI am not sure exactly what the \"triggered data change violation\" test\ndoes or is good for, but if we want to keep it, I *think* that in these\nterms we'd just need to signal error if we come across a tuple that was\nboth inserted and deleted by the current xact. 
I'm a bit fuzzy on this\nthough.\n\nAn interesting property of this approach is that if the set of triggers\nfor the table changes during the xact (which could only happen if this\nsame xact created or deleted triggers; no other xact can, since changing\ntriggers requires an exclusive lock on the table), the set of triggers\napplied to a tuple is the set that exists at the end of the xact, not\nthe set that existed when the tuple was modified. Offhand I think this\nis a good change.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 May 2001 11:38:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Coping with huge deferred-trigger lists" }, { "msg_contents": "Tom Lane wrote:\n> I had a thought just now about how to deal with the TODO item about\n> coping with deferred trigger lists that are so long as to overrun\n> main memory. This might be a bit harebrained, but I offer it for\n> consideration:\n>\n> What we need to do, at the end of a transaction in which deferred\n> triggers were fired, is to find each tuple that was inserted or\n> updated in the current transaction in each table that has such\n> triggers. Well, we know where those tuples are: to a first\n> approximation, they're all near the end of the table. Perhaps instead\n> of storing each and every trigger-related tuple in memory, we only need\n> to store one value per affected table: the lowest CTID of any tuple\n> that we need to revisit for deferred-trigger purposes. At the end of\n> the transaction, scan forward from that point to the end of the table,\n> looking for tuples that were inserted by the current xact. Process each\n> one using the table's list of deferred triggers.\n>\n> Instead of a list of all tuples subject to deferred triggers, we now\n> need only a list of all tables subject to deferred triggers, which\n> should pose no problems for memory consumption. 
It might be objected\n> that this means more disk activity --- but in an xact that hasn't\n> inserted very many tuples, most likely the disk blocks containing 'em\n> are still in memory and won't need a physical re-read. Once we get to\n> inserting so many tuples that that's not true, this approach should\n> require less disk activity overall than the previous idea of writing\n> (and re-reading) a separate disk file for the tuple list.\n>\n> I am not sure exactly what the \"triggered data change violation\" test\n> does or is good for, but if we want to keep it, I *think* that in these\n> terms we'd just need to signal error if we come across a tuple that was\n> both inserted and deleted by the current xact. I'm a bit fuzzy on this\n> though.\n\n The check came from my possible wrong understanding of the\n SQL3 specs. The idea I had is that the SUMMARY of all\n changes during a transaction counts. If you INSERT a row into\n a table and have immediate triggers invoked, a later DELETE\n cannot undo the triggers. So the question is did this row\n ever exist?\n\n>\n> An interesting property of this approach is that if the set of triggers\n> for the table changes during the xact (which could only happen if this\n> same xact created or deleted triggers; no other xact can, since changing\n> triggers requires an exclusive lock on the table), the set of triggers\n> applied to a tuple is the set that exists at the end of the xact, not\n> the set that existed when the tuple was modified. Offhand I think this\n> is a good change.\n>\n> Comments?\n\n Giving you have two separate, named, deferred constraints on\n one table. Now after a couple of INSERTs and UPDATEs you SET\n one of them to IMMEDIATE and back to DEFERRED. 
This has to\n run the triggers for one and only one of the constraints now.\n If you don't worry about the need of running the checks later\n again, it's OK.\n\n The detail I'm wondering about most is how you'd know in an\n UPDATE case which two tuples (one deleted during this XACT\n and one inserted) are the two for OLD and NEW in the call to\n the trigger. Note that the referential action ON UPDATE SET\n NULL for example doesn't have to take place if the user\n didn't change the referenced key fields.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Mon, 14 May 2001 09:57:30 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Coping with huge deferred-trigger lists" }, { "msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> The detail I'm wondering about most is how you'd know in an\n> UPDATE case which two tuples (one deleted during this XACT\n> and one inserted) are the two for OLD and NEW in the call to\n> the trigger.\n\nUgh ... good point. 
There's no back-link from the updated tuple to\nits original on disk, is there?\n\nBack to the drawing board ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 May 2001 10:45:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Coping with huge deferred-trigger lists " }, { "msg_contents": "Tom Lane wrote:\n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > The detail I'm wondering about most is how you'd know in an\n> > UPDATE case which two tuples (one deleted during this XACT\n> > and one inserted) are the two for OLD and NEW in the call to\n> > the trigger.\n>\n> Ugh ... good point. There's no back-link from the updated tuple to\n> its original on disk, is there?\n\n AFAIK nothing other than the Oid. And that's IMHO a weak one.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Mon, 14 May 2001 11:06:30 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Coping with huge deferred-trigger lists" } ]
[ { "msg_contents": "Hello,\n\nI am running PostgreSQL on a FreeBSD machine with 1 Gig of ram, and dual \nP3-733mhz CPUs. This server also runs Apache and is a production/web server.\n\nI frequently run into the errno:55 on my site, if I simply click refresh it \ngoes away. Anyone have any ideas what is causing this, or how to fix it ?\n\nmy shared_buffers are set to 4000, and max_connections to 300.\n\nI run postmaster with this command line:\n\n./postmaster -Si -o -F -D /usr/local/pgsql/data/\n\nAny help would be MUCH appreciated.\n\nThanks,\n\nKeith Bussey\nkbussey@wisol.com \n", "msg_date": "Wed, 9 May 2001 12:18:07 -0400", "msg_from": "Keith Bussey <kbussey@wisol.com>", "msg_from_op": true, "msg_subject": "PostgreSQL 7.1 (current release) - frequent errno:55 (buffer space\n\terror)" } ]
[ { "msg_contents": "Is anyone particularly attached to the current debug output of the\nbootstrap backend? Otherwise I'm going to change it to something I can\nread.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 9 May 2001 20:26:34 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "bootstrap debug output" } ]
[ { "msg_contents": "We did not bump the shared library versions before the 7.1 release.\nMaybe we should do this before 7.1.2 goes out.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 9 May 2001 20:54:54 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Shared library versions" }, { "msg_contents": "> We did not bump the shared library versions before the 7.1 release.\n> Maybe we should do this before 7.1.2 goes out.\n\nI thought I did that long ago for 7.1, or I should have anyway. I don't\nsee the commits either. Seems we can't do it in a minor release. Will\nhave to wait for 7.2, but since there really wasn't much API change in\n7.1, I think we are OK. Not sure if we should update them if there are\nno API changes, or were there?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 May 2001 15:12:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Shared library versions" }, { "msg_contents": "On Wed, 9 May 2001, Peter Eisentraut wrote:\n\n> We did not bump the shared library versions before the 7.1 release.\n> Maybe we should do this before 7.1.2 goes out.\n\nUmmm ... unless there are any changes that would require someone to\nrecompile their apps between v7.1.1 and v7.1.2, I don't think so ... 
they\nwe are just creating potential problems for those upgrading from\nv7.1/v7.1.1 to the latest stable, where there are no changes ...\n\nIf we were to do it, it would have to be on the v7.x, not v7.x.y ...\n\n\n", "msg_date": "Wed, 9 May 2001 16:20:36 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Shared library versions" }, { "msg_contents": "On Wed, 9 May 2001, Bruce Momjian wrote:\n\n> > We did not bump the shared library versions before the 7.1 release.\n> > Maybe we should do this before 7.1.2 goes out.\n>\n> I thought I did that long ago for 7.1, or I should have anyway. I don't\n> see the commits either. Seems we can't do it in a minor release. Will\n> have to wait for 7.2, but since there really wasn't much API change in\n> 7.1, I think we are OK. Not sure if we should update them if there are\n> no API changes, or were there?\n\nIMHO, it should only be changed if there are incompatibilities between\nreleases ... we modify the API, mainly ... anything more then that, and\nwe're making ppl recompile to pull in libraries that only unlying code has\nchanged, but not the API ...\n\n\n", "msg_date": "Wed, 9 May 2001 16:28:31 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Shared library versions" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> We did not bump the shared library versions before the 7.1 release.\n>> Maybe we should do this before 7.1.2 goes out.\n\n> I thought I did that long ago for 7.1, or I should have anyway. I don't\n> see the commits either. Seems we can't do it in a minor release.\n\nI agree, too late now.\n\nIsn't there a checklist someplace of things to do while preparing a\nrelease? 
\"Check shared library version numbers\" should be on it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 May 2001 15:36:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Shared library versions " }, { "msg_contents": "\nWhat we should have done is ask which API's changed for 7.1. I know I\njust changed the libpq++ API for 7.2.\n\n\n> On Wed, 9 May 2001, Bruce Momjian wrote:\n> \n> > > We did not bump the shared library versions before the 7.1 release.\n> > > Maybe we should do this before 7.1.2 goes out.\n> >\n> > I thought I did that long ago for 7.1, or I should have anyway. I don't\n> > see the commits either. Seems we can't do it in a minor release. Will\n> > have to wait for 7.2, but since there really wasn't much API change in\n> > 7.1, I think we are OK. Not sure if we should update them if there are\n> > no API changes, or were there?\n> \n> IMHO, it should only be changed if there are incompatibilities between\n> releases ... we modify the API, mainly ... anything more then that, and\n> we're making ppl recompile to pull in libraries that only unlying code has\n> changed, but not the API ...\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 May 2001 15:36:40 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Shared library versions" }, { "msg_contents": "The Hermit Hacker writes:\n\n> On Wed, 9 May 2001, Peter Eisentraut wrote:\n>\n> > We did not bump the shared library versions before the 7.1 release.\n> > Maybe we should do this before 7.1.2 goes out.\n>\n> Ummm ... unless there are any changes that would require someone to\n> recompile their apps between v7.1.1 and v7.1.2, I don't think so ... 
they\n> we are just creating potential problems for those upgrading from\n> v7.1/v7.1.1 to the latest stable, where there are no changes ...\n\nI'm talking about the minor number. The only thing that effects is that\nexecutables would pick up the new version if they have the old one in the\npath as well, no potential problems.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 9 May 2001 21:36:57 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Shared library versions" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> We did not bump the shared library versions before the 7.1 release.\n> >> Maybe we should do this before 7.1.2 goes out.\n> \n> > I thought I did that long ago for 7.1, or I should have anyway. I don't\n> > see the commits either. Seems we can't do it in a minor release.\n> \n> I agree, too late now.\n> \n> Isn't there a checklist someplace of things to do while preparing a\n> release? \"Check shared library version numbers\" should be on it...\n\nYep, it is there in tools/RELEASE_CHANGES:\n\t\n\t* Version numbers\n\t configure.in\n\t doc/src/sgml/version.sgml\n\t bump interface version numbers\n\t ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\t update src/interfaces/libpq/libpq.rc\n\t update /src/include/config.h.win32\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 May 2001 15:39:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Shared library versions" }, { "msg_contents": "The Hermit Hacker writes:\n\n> IMHO, it should only be changed if there are incompatibilities between\n> releases ... we modify the API, mainly ... 
anything more then that, and\n> we're making ppl recompile to pull in libraries that only unlying code has\n> changed, but not the API ...\n\nISTM that you should read up on shared library versioning.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 9 May 2001 21:55:08 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Shared library versions" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n\n> The Hermit Hacker writes:\n> \n> > IMHO, it should only be changed if there are incompatibilities between\n> > releases ... we modify the API, mainly ... anything more then that, and\n> > we're making ppl recompile to pull in libraries that only unlying code has\n> > changed, but not the API ...\n> \n> ISTM that you should read up on shared library versioning.\n\nI second that... if new functionality is added, bump the minor. If\nfunctionality changes or is removed, bump the major.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "09 May 2001 16:20:47 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Shared library versions" }, { "msg_contents": "On Wed, 9 May 2001, Peter Eisentraut wrote:\n\n> I'm talking about the minor number. 
The only thing that effects is\n> > that executables would pick up the new version if they have the old\n> > one in the path as well, no potential problems.\n> \n> Okay, but, what does that buy you? One overwrites the old library, the\n> other creates one that will over-ride the old library ... either way, you\n> are using the new library, no?\n\nWhat happens when some libpq is in one directory, and another in a\ndifferent directory, both in ld.so.conf. Does it pick higher version of\nall available versions?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 May 2001 20:05:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Shared library versions" }, { "msg_contents": "Bruce Momjian wrote:\n\n> > On Wed, 9 May 2001, Peter Eisentraut wrote:\n> >\n> > > I'm talking about the minor number. The only thing that effects is\n> > > that executables would pick up the new version if they have the old\n> > > one in the path as well, no potential problems.\n> >\n> > Okay, but, what does that buy you? One overwrites the old library, the\n> > other creates one that will over-ride the old library ... either way, you\n> > are using the new library, no?\n>\n> What happens when some libpq is in one directory, and another in a\n> different directory, both in ld.so.conf. Does it pick higher version of\n> all available versions?\n\nAFAIK it finds the first in order of directories listed in ld.so.conf.\n\n>\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Fri, 11 May 2001 06:32:59 -0400", "msg_from": "\"Mark L. Woodward\" <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Shared library versions" }, { "msg_contents": "The Hermit Hacker writes:\n\n> Okay, but, what does that buy you? One overwrites the old library, the\n> other creates one that will over-ride the old library ... either way, you\n> are using the new library, no?\n\nThen we might as well get rid of the versions...\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 12 May 2001 00:07:35 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Shared library versions" }, { "msg_contents": "Bruce Momjian writes:\n\n> What happens when some libpq is in one directory, and another in a\n> different directory, both in ld.so.conf. Does it pick higher version of\n> all available versions?\n\nIt uses the highest one with the same major version.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 12 May 2001 00:08:43 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Shared library versions" } ]
[ { "msg_contents": "\nOK, I have loaded PL/Python into CVS. The problem is that the Makefile\ndoes not work with our build system. It assumes someone is\nhand-configuring the Makefile for python versions and paths.\n\nHelp.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 May 2001 16:53:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "PL/Python build" }, { "msg_contents": "Bruce Momjian writes:\n\n> OK, I have loaded PL/Python into CVS. The problem is that the Makefile\n> does not work with our build system. It assumes someone is\n> hand-configuring the Makefile for python versions and paths.\n\nMight be rather complex. I don't want more of the third party generated\nmakefiles in our build system because it messes up many things. As long\nas it's documented how to build it we can work with it for now.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 9 May 2001 23:49:14 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: PL/Python build" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > OK, I have loaded PL/Python into CVS. The problem is that the Makefile\n> > does not work with our build system. It assumes someone is\n> > hand-configuring the Makefile for python versions and paths.\n> \n> Might be rather complex. I don't want more of the third party generated\n> makefiles in our build system because it messes up many things. As long\n> as it's documented how to build it we can work with it for now.\n\nI disabled the plPython Makefile until we can deal with it:\n\n\t$ gmake\n\tDisabled until merged into our Makefile system, bjm 2001-05-09\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 May 2001 18:53:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: PL/Python build" }, { "msg_contents": "On Wed, 9 May 2001, Bruce Momjian wrote:\n\n> > Bruce Momjian writes:\n> > \n> > > OK, I have loaded PL/Python into CVS. The problem is that the Makefile\n> > > does not work with our build system. It assumes someone is\n> > > hand-configuring the Makefile for python versions and paths.\n> > \n> > Might be rather complex. I don't want more of the third party generated\n> > makefiles in our build system because it messes up many things. As long\n> > as it's documented how to build it we can work with it for now.\n> \n> I disabled the plPython Makefile until we can deal with it:\n> \n> \t$ gmake\n> \tDisabled until merged into our Makefile system, bjm 2001-05-09\n\nOne of the small problems of pl/python is going to similar to pl/perl...\nmany linux distro's don't come with a shared object library for python,\nbut come w/a static library only.\n\npl/python will work w/a static library (if you uncomment the lines in the\nmakefile to link all the modules against it directly); we can add a line\nto the faq about where packages, if any, are for python.so.\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n", "msg_date": "Wed, 9 May 2001 19:02:07 -0400 (EDT)", "msg_from": "Joel Burton <jburton@scw.org>", "msg_from_op": false, "msg_subject": "Re: PL/Python build" }, { "msg_contents": "> > I disabled the plPython Makefile until we can deal with it:\n> > \n> > \t$ gmake\n> > \tDisabled until merged into our Makefile system, bjm 2001-05-09\n> \n> One of the small problems of pl/python is going to similar to pl/perl...\n> many linux distro's don't come with a shared object library for python,\n> but come w/a static library only.\n> \n> pl/python will work w/a static library (if you uncomment the lines in the\n> makefile to link all the modules against it directly); we can add a line\n> to the faq about where packages, if any, are for python.so.\n\nGee, we really only just got PL/Perl working. Hopefully we can use the\nsame method for Python.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 May 2001 19:04:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: PL/Python build" }, { "msg_contents": "On Thu, 10 May 2001, Mark Hollomon wrote:\n\n> On Wednesday 09 May 2001 19:02, Joel Burton wrote:\n> >\n> > One of the small problems of pl/python is going to similar to pl/perl...\n> > many linux distro's don't come with a shared object library for python,\n> > but come w/a static library only.\n> >\n> > pl/python will work w/a static library (if you uncomment the lines in the\n> > makefile to link all the modules against it directly); we can add a line\n> > to the faq about where packages, if any, are for python.so.\n> \n> Be careful. That will only work if the static library contains relocatable \n> code so that the entire resulting library 'plpython.so' can be loaded \n> dynamically. From what I can tell, this is true on Linux, but not on say HPUX.\n> _That_ was the sticky point with pl/perl.\n\nErm, well, I probably should confess that I know nothing whatsoever about\nHPUX; I'm a pretty bright boy, but mostly limited to Linux and\nFreeBSD. So, please, take most of my comments with that important grain.\n\nMark -- Would it be possible to use a static python.a to create plpython,\nw/o trying to link it to the modules that author includes (hashing, etc),\nto create a core-python-only plpython?\n\nThanks,\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n", "msg_date": "Thu, 10 May 2001 14:29:59 -0400 (EDT)", "msg_from": "Joel Burton <jburton@scw.org>", "msg_from_op": false, "msg_subject": "Re: Re: PL/Python build" }, { "msg_contents": "On Wednesday 09 May 2001 19:02, Joel Burton wrote:\n>\n> One of the small problems of pl/python is going to similar to pl/perl...\n> many linux distro's don't come with a shared object library for python,\n> but come w/a static library only.\n>\n> pl/python will work w/a static library (if you uncomment the lines in the\n> makefile to link all the modules against it directly); we can add a line\n> to the faq about where packages, if any, are for python.so.\n\nBe careful. That will only work if the static library contains relocatable \ncode so that the entire resulting library 'plpython.so' can be loaded \ndynamically. From what I can tell, this is true on Linux, but not on say HPUX.\n_That_ was the sticky point with pl/perl.\n\n-- \nMark Hollomon\n", "msg_date": "Thu, 10 May 2001 15:26:07 -0400", "msg_from": "Mark Hollomon <mhh@mindspring.com>", "msg_from_op": false, "msg_subject": "Re: Re: PL/Python build" }, { "msg_contents": "\n\nOn Thu, May 10, 2001 at 03:26:07PM -0400, Mark Hollomon wrote:\n> On Wednesday 09 May 2001 19:02, Joel Burton wrote:\n> >\n> > One of the small problems of pl/python is going to similar to pl/perl...\n> > many linux distro's don't come with a shared object library for python,\n> > but come w/a static library only.\n\nI've only worked with Debian and shared libraries.\n\n> >\n> > pl/python will work w/a static library (if you uncomment the lines\n> > in the makefile to link all the modules against it directly); we\n> > can add a line to the faq about where packages, if any, are for\n> > python.so.\n\nThe problem there wasn't static libraries. The problem was when\npython loaded its dynamic modules, those python modules couldn't see\nany symbols in the python shared library. They would fail to load and\npl/python would die complaining of unresolved symbols. I solved this\nproblem by changing the flags passed in pg_dlopen to include\nRTLD_GLOBAL. The ugly work around changing the pg_dlopen call is to\nexplicitly link the python modules to the postgresql python language\nmodule.\n\nAndrew\n\n-- \n\n\n\n\n\n", "msg_date": "Fri, 11 May 2001 00:31:38 -0400", "msg_from": "andrew@corvus.biomed.brown.edu (Andrew Bosma)", "msg_from_op": false, "msg_subject": "Re: Re: PL/Python build" }, { "msg_contents": "On Thursday 10 May 2001 14:29, Joel Burton wrote:\n> > On Thu, 10 May 2001, Mark Hollomon wrote:\n> >\n> > Be careful. That will only work if the static library contains\n> > relocatable code so that the entire resulting library 'plpython.so' can\n> > be loaded dynamically. From what I can tell, this is true on Linux, but\n> > not on say HPUX. _That_ was the sticky point with pl/perl.\n>\n>\n> Mark -- Would it be possible to use a static python.a to create plpython,\n> w/o trying to link it to the modules that author includes (hashing, etc),\n> to create a core-python-only plpython?\n\nNo. The problem is that the code in python.a is not (necessarily) relocatable.\nAnd if it isn't, it can't go into a shared library.\n\nAppently GCC on an i86/Elf based Linux platform, compiles _all_ code as \nrelocatable. So you can get alway with all kinds of stuff. But at least on \nHPUX, the vendor compiler does not create relocatable objects unless \nspecifically asked to do so. And as a rule no-one does unless they are \ncreating a shared library.\n\n-- \nMark Hollomon\n", "msg_date": "Sat, 12 May 2001 21:46:32 -0400", "msg_from": "Mark Hollomon <mhh@mindspring.com>", "msg_from_op": false, "msg_subject": "Re: Re: PL/Python build" }, { "msg_contents": "> Appently GCC on an i86/Elf based Linux platform, compiles _all_ code as \n> relocatable. So you can get alway with all kinds of stuff. But at least on \n> HPUX, the vendor compiler does not create relocatable objects unless \n> specifically asked to do so. And as a rule no-one does unless they are \n> creating a shared library.\n\nYes, the performance hit for relocatable code is measurable. Good\npoint. Does anyone know of a good doc/web page that talks about PIC,\nshared libs, etc that is somewhat OS-independent?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 May 2001 22:30:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: PL/Python build" }, { "msg_contents": "Mark Hollomon <mhh@mindspring.com> writes:\n\n> No. The problem is that the code in python.a is not (necessarily) relocatable.\n> And if it isn't, it can't go into a shared library.\n> \n> Appently GCC on an i86/Elf based Linux platform, compiles _all_ code as \n> relocatable. So you can get alway with all kinds of stuff. But at least on \n> HPUX, the vendor compiler does not create relocatable objects unless \n> specifically asked to do so. And as a rule no-one does unless they are \n> creating a shared library.\n\nIn the interests of precision, I want to correct your terminology\nhere.\n\nAll files created by a Unix assembler, or by a Unix compiler with the\n-c option, are relocatable. That just means that the linker can\nrelocate them to run at some address chosen at link time.\n\nMany shared library systems use two types of code: normal code and\nposition independent code (PIC). Of those shared library systems\nwhich use PIC, some require that all code in a shared library be PIC.\nPosition independent code can run at any address, or more precisely,\nin many implementations, at some address chosen at run time just\nbefore the program starts.\n\n(For those shared library systems which use PIC but do not require it,\nshared libraries built with PIC are more efficient. I omit the case\nof Windows, which uses a primitive system which does not use PIC but\ninstead requires source code changes to run in a shared library.)\n\nSo we have three cases:\n\n1) Shared library system does not use PIC (e.g., SVR3, AIX).\n2) Shared library system uses PIC, but does not require it (e.g., ELF\n systems, including SVR4, Solaris, Linux and *BSD).\n3) Shared library system uses PIC, and requires it (e.g., HP/UX).\n\nWhen you say that gcc on an ix86/ELF based Linux platform compiles all\ncode as relocatable, you are correct. But that is a nearly vacuous\nstatement, as gcc on all platforms compiles all code as relocatable.\nYou might have intended to say that gcc on an ix86/ELF based Linux\nplatform compiles all code as PIC. If that is what you meant to say,\nyou were wrong.\n\nIan\n", "msg_date": "14 May 2001 00:56:34 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: Re: PL/Python build" } ]
[ { "msg_contents": "> Perhaps instead\n> of storing each and every trigger-related tuple in memory, we only need\n> to store one value per affected table: the lowest CTID of any tuple\n> that we need to revisit for deferred-trigger purposes. At the end of\n> the transaction, scan forward from that point to the end of the table,\n> looking for tuples that were inserted by the current xact.\n\nI thought that this current placing of new rows at end of file is subject to \nchange soon (overwrite smgr) ?\n\nI thus think it would be better to remember all ctids per table.\nThe rest imho sounds great.\n\nAndreas\n", "msg_date": "Thu, 10 May 2001 10:24:22 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Coping with huge deferred-trigger lists" }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n>> Perhaps instead\n>> of storing each and every trigger-related tuple in memory, we only need\n>> to store one value per affected table: the lowest CTID of any tuple\n>> that we need to revisit for deferred-trigger purposes. At the end of\n>> the transaction, scan forward from that point to the end of the table,\n>> looking for tuples that were inserted by the current xact.\n\n> I thought that this current placing of new rows at end of file is subject to \n> change soon (overwrite smgr) ?\n\nWell, the scheme would still *work* if rows were not always placed at\nthe end of file, though it might get inefficient. But you're right, the\nmerits of this trigger idea depend a lot on whether we decide to go to\nan overwriting smgr, and so we should probably wait till that's decided\nbefore we think about doing this. I just wanted to get the idea\nrecorded before I forgot about it.\n\nBTW, I don't think the overwriting-smgr idea is a done deal. We haven't\nseen any design yet for exactly how it should work. Moreover, I'm\nreally hesitant to throw away one of the fundamental design choices of\nPostgres: overwriting smgr is one of the things that got us to where we\nare today. Before we commit to that, we ought to do some serious study\nof the alternatives. ISTM the problem with VACUUM is not that you need\nto do a regular maintenance procedure, it's that it grabs an exclusive\nlock on the table for so long. We could live with VACUUM if it could be\nmade either incremental (do a few pages and release the lock) or capable\nof running in parallel with reader & writer transactions. Vadim's\nstill-not-integrated LAZY VACUUM code is an indicator that progress\nmight be made in that direction. (Actually, I suppose if you look at it\nin the right way, you might think that a backgroundable VACUUM *is* an\noverwriting smgr, just an asynchronous implementation of it...)\n\n> I thus think it would be better to remember all ctids per table.\n\nIf we do that then we still have a problem with overrunning memory\nafter a sufficiently large number of tuples. However, that'd improve\nthe constant factor by at least an order of magnitude, so it might be\nworth doing as an intermediate step. Still have to figure out whether\nthe triggered-data-change business is significant or not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 May 2001 10:12:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Coping with huge deferred-trigger lists " }, { "msg_contents": "> BTW, I don't think the overwriting-smgr idea is a done deal. We haven't\n> seen any design yet for exactly how it should work. Moreover, I'm\n> really hesitant to throw away one of the fundamental design choices of\n> Postgres: overwriting smgr is one of the things that got us to where we\n> are today. Before we commit to that, we ought to do some serious study\n> of the alternatives. ISTM the problem with VACUUM is not that you need\n> to do a regular maintenance procedure, it's that it grabs an exclusive\n> lock on the table for so long. We could live with VACUUM if it could be\n> made either incremental (do a few pages and release the lock) or capable\n> of running in parallel with reader & writer transactions. Vadim's\n> still-not-integrated LAZY VACUUM code is an indicator that progress\n> might be made in that direction. (Actually, I suppose if you look at it\n> in the right way, you might think that a backgroundable VACUUM *is* an\n> overwriting smgr, just an asynchronous implementation of it...)\n\nI agree overwriting storage manager is not a done deal, and I don't see\nus eliminating it entirely. We have to keep the old tuples in scope, so\nI assume we would just create new tuples, and reuse the expired tuples\nonce they were out of scope.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 10:57:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Coping with huge deferred-trigger lists" }, { "msg_contents": "Tom Lane wrote:\n> \n> BTW, I don't think the overwriting-smgr idea is a done deal. We haven't\n> seen any design yet for exactly how it should work. Moreover, I'm\n> really hesitant to throw away one of the fundamental design choices of\n> Postgres: overwriting smgr is one of the things that got us to where we\n> are today. Before we commit to that, we ought to do some serious study\n> of the alternatives. ISTM the problem with VACUUM is not that you need\n> to do a regular maintenance procedure, it's that it grabs an exclusive\n> lock on the table for so long. We could live with VACUUM if it could be\n> made either incremental (do a few pages and release the lock) or capable\n> of running in parallel with reader & writer transactions. Vadim's\n> still-not-integrated LAZY VACUUM code is an indicator that progress\n> might be made in that direction. (Actually, I suppose if you look at it\n> in the right way, you might think that a backgroundable VACUUM *is* an\n> overwriting smgr, just an asynchronous implementation of it...)\n\nAnd it allows the writes that need to be done quickly to be kept\ntogether\nand the slow part to be asynchronous. I suspect that we will never be\nable \nto get very good statistics without separate ANALYZE so we will have \nasynchronous processes anyhow.\n\nAlso, we might want to get time travel back sometime, which I guess is\nstill \ndone most effectively with current scheme + having VACUUM keeping some\nhistory \non a per-table basis.\n\nOther than that time travel only ;) needs recording wall-clock-time of\ncommits \nthat have modified data + some extended query features.\n\nthe (wall-clock-time,xid) table is naturally ordered by said\nwall-clock-time so \nit won't even need index, just a binary search access method.\n\n------------------\nHannu\n", "msg_date": "Thu, 10 May 2001 17:41:43 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: AW: Coping with huge deferred-trigger lists" }, { "msg_contents": "> If we do that then we still have a problem with overrunning memory\n> after a sufficiently large number of tuples. However, that'd improve\n> the constant factor by at least an order of magnitude, so it might be\n> worth doing as an intermediate step. Still have to figure out whether\n> the triggered-data-change business is significant or not.\n\nI think that was part of the misunderstanding of the spec. I think the\nspec means it to be within one statement (and its associated immediate\nactions) rather than rest of transaction. I think it's mostly to\nprevent loop cases A row 1 modifies B row 1 modifies A row 1 modifies ... \nHowever, I only looked at it briefly a while back.\n\n", "msg_date": "Thu, 10 May 2001 10:59:54 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: AW: Coping with huge deferred-trigger lists " }, { "msg_contents": "Tom Lane wrote:\n> \n> Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> >> Perhaps instead\n> >> of storing each and every trigger-related tuple in memory, we only need\n> >> to store one value per affected table: the lowest CTID of any tuple\n> >> that we need to revisit for deferred-trigger purposes. At the end of\n> >> the transaction, scan forward from that point to the end of the table,\n> >> looking for tuples that were inserted by the current xact.\n> \n> > I thought that this current placing of new rows at end of file is subject to\n> > change soon (overwrite smgr) ?\n> \n> Well, the scheme would still *work* if rows were not always placed at\n> the end of file, though it might get inefficient.\n\nEven under current smgr, new rows aren't necessarily at the end.\n\n[snip]\n\n> \n> BTW, I don't think the overwriting-smgr idea is a done deal. We haven't\n> seen any design yet for exactly how it should work. Moreover, I'm\n> really hesitant to throw away one of the fundamental design choices of\n> Postgres: overwriting smgr is one of the things that got us to where we\n> are today. \n\nI don't think we could/should introduce an overwriting smgr\nin 7.2 unless we give up current level of stablitity/\nreliability. We don't have an UNDO functionality yet even\nunder current simple no overwrite smgr.\n\n> Before we commit to that, we ought to do some serious study\n> of the alternatives. ISTM the problem with VACUUM is not that you need\n> to do a regular maintenance procedure, it's that it grabs an exclusive\n> lock on the table for so long. We could live with VACUUM if it could be\n> made either incremental (do a few pages and release the lock) or capable\n> of running in parallel with reader & writer transactions. Vadim's\n> still-not-integrated LAZY VACUUM code is an indicator that progress\n\n> might be made in that direction. (Actually, I suppose if you look at it\n> in the right way, you might think that a backgroundable VACUUM *is* an\n> overwriting smgr, just an asynchronous implementation of it...)\n> \n\nThe backgroundable VACUUM, reuse dead space etc .. could never\nbe an overwriting smgr. When a tuple is updated corresponding\nindex tuples must always be inserted. \n\nregrads,\nHiroshi Inoue\n", "msg_date": "Fri, 11 May 2001 11:44:32 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: AW: Coping with huge deferred-trigger lists" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n>> I thought that this current placing of new rows at end of file is subject to\n>> change soon (overwrite smgr) ?\n\n> Even under current smgr, new rows aren't necessarily at the end.\n\nHmm ... you're right, heap_update will try to store an updated tuple on\nthe same page as its original.\n\nThat doesn't make my suggestion unworkable, however, since this case is\nnot very likely to occur except on pages at/near the end of file. One\nway to deal with it is to keep a list of pages (still not individual\ntuples) that contain tuples we need to revisit for deferred triggers.\nThe list would be of the form \"scan these individual pages plus all\npages from point X to the end of file\", where point X would be at or\nperhaps a little before the end of file as it stood at the start of the\ntransaction. We'd only need to explicitly store the page numbers for\nrelatively few pages, usually.\n\nBTW, thanks for pointing that out --- it validates my idea in another\nthread that we can avoid locking on every single call to\nRelationGetBufferForTuple, if it's OK to store newly inserted tuples\non pages that aren't necessarily last in the file.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 May 2001 15:20:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Coping with huge deferred-trigger lists " } ]
[ { "msg_contents": "Hi,\nCan PostgreSQL's Stored Procedure return a ReccordSet?\nI want to send a parameter to stored procedure, and receive a recordset.\nIs that possible?\nHow to do it?\nCould I have a example? or where can I try that example?\n\nmy os: linux\nmy database: PostgreSQL\n\n\n==========================================================\n PC home: http://www.pchome.com.tw \n PC home Online \n==========================================================\n", "msg_date": "Thu, 10 May 2001 16:46:48 +0800", "msg_from": "=?big5?B?rEm7ytl5?= <danken@ms1.pchome.com.tw>", "msg_from_op": true, "msg_subject": "Can PostgreSQL's Stored Procedure return a ReccordSet?" } ]
[ { "msg_contents": "Hi all!\n\nI have to convert functions and procedures from Oracle to PostgreSQL. I \nlooked at all the stuff of the Pg-Homepage and I ask me if there are any \ntools, that support the conversion. \n\nWriting PS/PGSQL tools seems to be a bit hard, because of the existing \ntool-infrastructure on linux. Are there are tools I have overseen?\n\nI have implemented the following tools for my use yet:\n\n- A WWWdb-Application for editing and testing of SQL-Procedures over a\n WEB-frontend\n- A perl-script, that does basic conversions between PL/SQL <-> XML <->\n PL/PGSQL (The Procedure-definition is converted completely, the code-block\n a little bit)\n\nWho else is working in this area? Any tips?\n\nRegards, Klaus\n", "msg_date": "Thu, 10 May 2001 15:33:27 +0200", "msg_from": "Klaus Reger <K.Reger@gmx.de>", "msg_from_op": true, "msg_subject": "Converting PL/SQL to PL/PGSQL" }, { "msg_contents": "On Thu, May 10, 2001 at 03:33:27PM +0200, Klaus Reger wrote:\n> Hi all!\n> \n> I have to convert functions and procedures from Oracle to PostgreSQL. I \n> looked at all the stuff of the Pg-Homepage and I ask me if there are any \n> tools, that support the conversion. \n\n\tThat help you in the conversion, no.\n\tHave you looked at the \"Porting From Oracle PL/SQL\" chapter of the\nPostgreSQL Programmer's Guide? I am expanding that guide to include more\nthings, like queries. The goas is for it to become a \"Porting From\nOracle\" guide.\n \n> Writing PS/PGSQL tools seems to be a bit hard, because of the existing \n> tool-infrastructure on linux. Are there are tools I have overseen?\n\n\tHeh? What do you mean by this? There are zillions of editors, both\nconsole and graphical, where you can do this.\n\tI have found pgaccess to be vey useful in testing. 
In the OpenACS\nproject (www.openacs.org) we port thousands of lines of Oracle code to\nPostgreSQL, mostly using vim or Emacs.\n\tFor testing, I use pgaccess because it lets me drop/recreate a\nfunction easily, plus it escapes quotes. One thing I don't like about it\nis that it's hard to keep things indented.\n\n> - A WWWdb-Application for editing and testing of SQL-Procedures over a\n> WEB-frontend\n\n\tCool. Anywhere we can see this in action?\n\n> - A perl-script, that does basic conversions between PL/SQL <-> XML <->\n> PL/PGSQL (The Procedure-definition is converted completely, the code-block\n> a little bit)\n\t\n\tHmmm. *Very* interesting. Link? Source for this anywhere? We could\nprobably use this at OpenACS.\n\n\t-Roberto\n-- \n+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club |------+\n Roberto Mello - Computer Science, USU - http://www.brasileiro.net \n http://www.sdl.usu.edu - Space Dynamics Lab, Developer \n(D)inner not ready: (A)bort (R)etry (P)izza\n", "msg_date": "Thu, 10 May 2001 11:23:11 -0600", "msg_from": "Roberto Mello <rmello@cc.usu.edu>", "msg_from_op": false, "msg_subject": "Re: Converting PL/SQL to PL/PGSQL" }, { "msg_contents": "Am Donnerstag, 10. Mai 2001 19:23 schrieb Roberto Mello:\n> On Thu, May 10, 2001 at 03:33:27PM +0200, Klaus Reger wrote:\n> \tHave you looked at the \"Porting From Oracle PL/SQL\" chapter of the\n> PostgreSQL Programmer's Guide? I am expanding that guide to include more\n> things, like queries. The goas is for it to become a \"Porting From\n> Oracle\" guide.\nYes I did, and it was very helpful for me. Thank you for this stuff. I made a \nlist of the differences I found for me too. If you want it, I cam send it to \nyou.\n\n\n> > Writing PS/PGSQL tools seems to be a bit hard, because of the existing\n> > tool-infrastructure on linux. Are there are tools I have overseen?\n> \tHeh? What do you mean by this? There are zillions of editors, both\n> console and graphical, where you can do this.\nAh! 
That is right, I use emacs too.\n> \tI have found pgaccess to be vey useful in testing. In the OpenACS\n> project (www.openacs.org) we port thousands of lines of Oracle code to\n> PostgreSQL, mostly using vim or Emacs.\n> \tFor testing, I use pgaccess because it lets me drop/recreate a\n> function easily, plus it escapes quotes. One thing I don't like about it\n> is that it's hard to keep things indented.\nThe problem for me seems, that the code is in the database. When you want to \nedit it, you do this in three steps:\n1. Get source from the database\n2. Edit the source\n3. Put it back to the database\n\nWhen there are no syntax-problems in the proc-declarations, or any wrong \nnested things step 3 is no problem. But often, when I ram my procedures I get \nruntime-errors (without konowing, where the problem exactly is). So here some \ntype of compilation would be very useful.\n\nFirst, I used pgacess too. because it is very helpful to develop \npl/pgsql-procedures. But as the maintainer of my own Web-database-frontend I \ndecided to write my own tool, which is very similar to pgaccess.\n\n>\n> > - A WWWdb-Application for editing and testing of SQL-Procedures over a\n> > WEB-frontend\n> \tCool. Anywhere we can see this in action?\nWWWdb of course. Point your browser to http://WWWdb.org. The procedure part \nis very sensible (because I don't want everybody to change my procedures :-), \nso it is not testable on my site. I may send you some screenshots, or you \ncould install WWWdb at your computer and I send you the code separately, \nbecause it is not released as OpenSource yet.\n\n> > - A perl-script, that does basic conversions between PL/SQL <-> XML <->\n> > PL/PGSQL (The Procedure-definition is converted completely, the\n> > code-block a little bit)\n>\n> \tHmmm. *Very* interesting. Link? Source for this anywhere? We could\n> probably use this at OpenACS.\nI asked my boss, if he allows me to give out the sources, I will start a \nproject at sourceforge. 
Stay tuned.\n\nIn this way it is called:\n------------------------------------------------------------------------------------------\nwork@pc01:SqlProc$ ConvertPlsql.pl -h\n\n\n Call:\n ConvertPlsql.pl [-DVw] [-o file] [file ...]\n\n Switches:\n -D Debugging-mode\n -V show version\n -o file\n <file> is the file where the output should be directed to.\n If <file> is a directory, one source-file will be generated\n for every procedure. When <file> is a normal file, all output\n will be generated into this single file. Default is STDOUT,\n which can be passed explicitly as '-'\n -s\n Sort functions alphabetically at output (Default is unsorted)\n -S Source-language\n This is the language of the existing script-file(s).\n Valid values are (Default is PL_SQL):\n - pl_sql\n -T Target-language\n This is the language of the generated script-file(s).\n Valid values are (Default is PL_PGSQL):\n - xml\n - pl_pgsql\n -w Display warnings, that are found in conversion-process\n \n Description:\n ConvertPlsql.pl scans PL/SQL-Procedure-definitions and tries\n to convert them to PL/PGSQL.\n\nHere is an example of the conversion between Oracle, Postgres and XML:\n\n------------------------------------------------------------------------------------------\n\n<?xml version=\"1.0\" encoding=\"iso-8859-1\"?>\n<!DOCTYPE SOURCE SYSTEM \"./SqlProc.dtd\">\n <SOURCE>\n <FUNCTION\n NAME = \"chk_ip\"\n TYPE = \"FUNCTION\"\n RESULTTYPE = \"NUMBER\">\n <PARAMETER\n NAME = \"IPADRESSp\"\n INOUT = \"IN\"\n TYPE = \"VARCHAR,\"/>\n <PARAMETER\n NAME = \"N_uid\"\n INOUT = \"IN\"\n TYPE = \"NUMBER,\"/>\n <VARIABLE\n NAME = \"N_tmp\"\n TYPE = \"NUMBER\"/>\n <CODE>\n\n\n SELECT test.NEXTVAL INTO N_uid /* FROM DUAL */ ;\n\n N_tmp := 'That''s my quoted text!';\n\n RETURN N_tmp;\n\nEXCEPTION\n WHEN others THEN\n return -100;\n </CODE>\n </FUNCTION>\n\n </SOURCE>\n\n------------------------------------------------------------------------------------------\n\nDROP FUNCTION chk_ip (VARCHAR, NUMBER,);\nCREATE 
FUNCTION chk_ip (VARCHAR, NUMBER,)\nRETURNS INTEGER AS '\nDECLARE\n IPADRESSp ALIAS FOR $1;\n N_uid ALIAS FOR $2;\n N_tmp INTEGER;\nBEGIN\n\n\n\n SELECT nextval(''test'') INTO N_uid /* FROM DUAL */ ;\n\n N_tmp := ''That''''s my quoted text!'';\n\n RETURN N_tmp;\n\n-- ORA -- EXCEPTION\n-- ORA -- WHEN others THEN\n-- ORA -- return -100;\nEND;\n' language 'plpgsql';\n\n------------------------------------------------------------------------------------------\n\nCREATE OR REPLACE FUNCTION chk_ip\n (\n IPADRESSp IN VARCHAR2,\n N_uid IN NUMBER\n )\n RETURN NUMBER IS\n N_tmp NUMBER;\nBEGIN\n\n SELECT test.NEXTVAL INTO N_uid FROM dual;\n\n N_tmp := 'That''s my quoted text!';\n\n RETURN N_tmp;\n\nEXCEPTION\n WHEN others THEN\n return -100;\nEND;\n/\n\n------------------------------------------------------------------------------------------\n\nRegards, Klaus\n\n", "msg_date": "Fri, 11 May 2001 10:48:38 +0200", "msg_from": "Klaus Reger <K.Reger@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Converting PL/SQL to PL/PGSQL" }, { "msg_contents": "> > > - A perl-script, that does basic conversions between PL/SQL <-> XML <->\n> > > PL/PGSQL (The Procedure-definition is converted completely, the\n> > > code-block a little bit)\n> >\n> > \tHmmm. *Very* interesting. Link? Source for this anywhere? We could\n> > probably use this at OpenACS.\n> I asked my boss, if he allows me to give out the sources, I will start a \n> project at sourceforge. Stay tuned.\n\nWith our new /contrib policy, we could put it right in our PostgreSQL\nCVS contrib.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 11 May 2001 07:03:21 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Converting PL/SQL to PL/PGSQL" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> With our new /contrib policy, we could put it right in our PostgreSQL\n> CVS contrib.\n\n?? What \"new contrib policy\"? I didn't notice any discussion of policy\nchanges ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 May 2001 09:44:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Converting PL/SQL to PL/PGSQL " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > With our new /contrib policy, we could put it right in our PostgreSQL\n> > CVS contrib.\n> \n> ?? What \"new contrib policy\"? I didn't notice any discussion of policy\n> changes ...\n\nI was unsure what to do with Dbase and Oracle code recently contributed.\nVince and others said it should be in /contrib. We already have\nloadable modules and backend tools in /contrib. Seems conversion tools\nare also now to be placed in /contrib.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 May 2001 17:25:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Converting PL/SQL to PL/PGSQL" } ]
[ { "msg_contents": "\nI have added a dbase conversion utility into CVS under /contrib/dbase. \nI have made a Makefile to match our normal configuration. \n\nThe only problem is that the utility needs libiconv. I have it because I\nuse wv to convert MSWord documents, but I bet most don't have it. How\ndo we handle /contrib stuff that needs special libraries?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 10:45:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Dbase conversion utility" } ]
[ { "msg_contents": "I left the ones that look importante for my needs.\n\nOn Jue 10 May 2001 20:20, Bruce Momjian wrote:\n> Here is a small list of big TODO items.  I was wondering which ones\n> people were thinking about for 7.2?\n>\n> ---------------------------------------------------------------------------\n>\n> * Point-in-time data recovery using backup and write-ahead log\n> * Allow row re-use without vacuum (Vadim)\n\nYhis one is not very important for my, but I guess there are people out there \nthat have heavy updates on there DB and would be delighted with this.\n\n> * Add the concept of dataspaces/tablespaces [tablespaces]\n\nWhat would this be?\n\nWhat I'm about to write has nothing (at least I think) to do with this, but I \nwould like the database directoies to have the name of the databases, as it \nwas before, if it's posible.\nIt makes it easier to find out with database is growing from the command line.\n\n> * Allow better control over user privileges [privileges]\n\nIf this is related with the views and privileges, I'm on this one!\n\n> * Allow elog() to return error codes, module name, file name, line\n>   number, not just messages [elog]\n> * Make binary/file in/out interface for TOAST columns\n> * Large object interface improvements\n> * Add ALTER TABLE DROP COLUMN feature [drop]\n> * Add ALTER TABLE ... DROP CONSTRAINT\n> * Automatically drop constraints/functions when object is dropped\n\nSaludos... :-)\n\n-- \nEl mejor sistema operativo es aquel que te da de comer.\nCuida tu dieta.\n-----------------------------------------------------------------\nMartin Marques                  |        mmarques@unl.edu.ar\nProgramador, Administrador      |       Centro de Telematica\n                       Universidad Nacional\n                            del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Thu, 10 May 2001 17:52:03 +0300", "msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>", "msg_from_op": true, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "Here is a small list of big TODO items.  I was wondering which ones\npeople were thinking about for 7.2?\n\n---------------------------------------------------------------------------\n\n* Add replication of distributed databases [replication]\n\to automatic fallover\n\to load balancing\n\to master/slave replication\n\to multi-master replication\n\to partition data across servers\n\to sample implementation in contrib/rserv\n\to queries across databases or servers (two-phase commit)\n* Point-in-time data recovery using backup and write-ahead log\n* Allow row re-use without vacuum (Vadim)\n* Add the concept of dataspaces/tablespaces [tablespaces]\n* Allow better control over user privileges [privileges]\n* Allow elog() to return error codes, module name, file name, line\n  number, not just messages [elog]\n* Allow international error message support and add error codes [elog]\n* Make binary/file in/out interface for TOAST columns\n* Large object interface improvements\n* Allow inherited tables to inherit index, UNIQUE constraint, and primary key\n  [inheritance]\n* Add ALTER TABLE DROP COLUMN feature [drop]\n* Add ALTER TABLE ... DROP CONSTRAINT\n* Automatically drop constraints/functions when object is dropped\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 13:20:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "7.2 items" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> * Add replication of distributed databases [replication]\n>         o automatic fallover\n\nShouldn't that be 'failover'? I don't know if I want automatic 'fallover'!\n\n:-)\n\t\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n            Andrew McMillan, e-mail: Andrew@catalyst.net.nz\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64(21)635-694, Fax: +64(4)499-5596, Office: +64(4)499-2267xtn709\n", "msg_date": "Fri, 11 May 2001 08:27:13 +1200", "msg_from": "Andrew McMillan <andrew@catalyst.net.nz>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > * Add replication of distributed databases [replication]\n> >         o automatic fallover\n> \n> Shouldn't that be 'failover'? I don't know if I want automatic 'fallover'!\n\nJust one letter, but a huge difference. :-)\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 16:29:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "\n\n\n> Here is a small list of big TODO items.  I was wondering which ones\n> people were thinking about for 7.2?\n\nThe need for stored procedures that return a record set.\nThis is required to migrate from MSSQL, Interbase and others.\nThis is a commonly requested item.\n\nNested Transactions. This allows the logging of the execution of a failed\nSQL\nstatement even if the rest of the transaction is rolled back.\n\nStatement Level Triggers. Useful but not critically important.\n\nFull text indexing.\n\nPre parsed queries with variable substitutions.\n\nRegards\n\n\n\n\n", "msg_date": "Fri, 11 May 2001 10:41:57 +1200", "msg_from": "<john@mwk.co.nz>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Here is a small list of big TODO items.  I was wondering which ones\n> people were thinking about for 7.2?\n\nPeter E. had implied that he wanted to tackle the elog issues for 7.2,\nbut I'm not sure if he's committed to it or not.\n\nI am wanting to see SQL schemas happen, and it's possible that\ntablespaces should be dealt with in combination with that.\n\nOther than that, I'm mostly thinking about performance improvements\nfor 7.2, not features ... as far as my personal plans go, that is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 May 2001 18:54:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 items " }, { "msg_contents": "> > * Point-in-time data recovery using backup and write-ahead\n> log > * Allow row re-use without vacuum (Vadim)\n> \n> Yhis one is not very important for my, but I guess there are\n> people out there that have heavy updates on there DB and would\n> be delighted with this.\n\nYes, this important especially for databases that have to be up 24\nhours a day.\n\n> \n> > * Add the concept of dataspaces/tablespaces [tablespaces]\n> \n> What would this be?\n> \n> What I'm about to write has nothing (at least I think) to do\n> with this, but I would like the database directoies to have the\n> name of the databases, as it was before, if it's posible. It\n> makes it easier to find out with database is growing from the\n> command line.\n\nWe have a /contrib utility called oid2name for that.\n\n> > * Allow better control over user privileges [privileges]\n> \n> If this is related with the views and privileges, I'm on this\n> one!\n\nNot sure what the problem is there.  We already implement privileges on\nviews that are separate from the base tables.\n\n--\n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 19:54:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "> Full text indexing.\n> \n\nThis one is already done using GIST.  The GIST improvements are in 7.1,\nand I assume full text indexing will be more fully integrated into\nPostgreSQL in 7.2. \n\nThe PostgreSQL web search engine is using it now.  Oleg and team did the\nwork.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 19:58:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Here is a small list of big TODO items.  I was wondering which ones\n> > people were thinking about for 7.2?\n> \n> Peter E. had implied that he wanted to tackle the elog issues for 7.2,\n> but I'm not sure if he's committed to it or not.\n\nI put Peter E on that one with a question mark.\n\n> \n> I am wanting to see SQL schemas happen, and it's possible that\n> tablespaces should be dealt with in combination with that.\n\nUpdated TODO.\n\n> \n> Other than that, I'm mostly thinking about performance improvements\n> for 7.2, not features ... as far as my personal plans go, that is.\n\nSeems you already started.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 19:58:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "> Other than that, I'm mostly thinking about performance improvements\n> for 7.2, not features ... as far as my personal plans go, that is.\n\nI saw a few juicy TODO items I will tackle, though people will\ncertainly be cleaning up after me.  :-)\n\nI have reorganized the TODO list to make smaller groupings.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 20:01:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010510 17:02] wrote:\n> > > * Point-in-time data recovery using backup and write-ahead\n> > log > * Allow row re-use without vacuum (Vadim)\n> > \n> > Yhis one is not very important for my, but I guess there are\n> > people out there that have heavy updates on there DB and would\n> > be delighted with this.\n> \n> Yes, this important especially for databases that have to be up 24\n> hours a day.\n\nSorry for jumping in here, but any ideas on the expected date\nthat will become available?\n\n-- \n-Alfred Perlstein - [alfred@freebsd.org]\nInstead of asking why a piece of software is using \"1970s technology,\"\nstart asking why software is ignoring 30 years of accumulated wisdom.\n", "msg_date": "Thu, 10 May 2001 17:40:40 -0700", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "> * Add ALTER TABLE ... DROP CONSTRAINT\n\nI am working on this function at the moment, hoping to add the dropping of\nCHECK constraints. However, it'll take me a while because I keep having to\nlook up all the functions being called to see what they do, etc.\n\nWhat I'm thinking it that I'll try and at least get the structure all done\nand even compiling then the patch will have to be reviewed. (I'm doing it to\nstretch my programming muscles after working in PHP for so long!)\n\nChris\n\n", "msg_date": "Fri, 11 May 2001 09:30:33 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: 7.2 items" }, { "msg_contents": "> > * Add ALTER TABLE ... DROP CONSTRAINT\n> \n> I am working on this function at the moment, hoping to add the dropping of\n> CHECK constraints. However, it'll take me a while because I keep having to\n> look up all the functions being called to see what they do, etc.\n> \n> What I'm thinking it that I'll try and at least get the structure all done\n> and even compiling then the patch will have to be reviewed. (I'm doing it to\n> stretch my programming muscles after working in PHP for so long!)\n\nGood idea.  Certain people are great at looking at a patch and telling\nexactly how to improve it.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 21:34:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "At 01:20 PM 10-05-2001 -0400, Bruce Momjian wrote:\n>* Allow row re-use without vacuum (Vadim)\n\nWill this do away with the need for a lazy vacuum?\n\n>> Full text indexing.\n>> \n>\n>This one is already done using GIST.  The GIST improvements are in 7.1,\n>and I assume full text indexing will be more fully integrated into\n>PostgreSQL in 7.2. \n\nI hope it will. What will the interface be like? \n\nRight now I still don't know how to do FTI in 7.1 using _postgresql_ built\nin GIST :(. \n\nAny pointers to the relevant postgresql docs? \nCheerio,\nLink.\n\n\n", "msg_date": "Fri, 11 May 2001 11:52:00 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "Quoting Bruce Momjian <pgman@candle.pha.pa.us>:\n\n> > > * Point-in-time data recovery using backup and write-ahead\n> > log > * Allow row re-use without vacuum (Vadim)\n> > \n> > Yhis one is not very important for my, but I guess there are\n> > people out there that have heavy updates on there DB and would\n> > be delighted with this.\n> \n> Yes, this important especially for databases that have to be up 24\n> hours a day.\n\nI always thought that VACUUM was (especially) for 2 main reasons there:\n\t- Clean de tuples marked for deletion.\n\t- Make the new statistics. (-z)\n\nLots of tuples get marked for deletion on UPDATE and DELETE, am I right?\n> > \n> > > * Add the concept of dataspaces/tablespaces [tablespaces]\n> > \n> > What would this be?\n> > \n> > What I'm about to write has nothing (at least I think) to do\n> > with this, but I would like the database directoies to have the\n> > name of the databases, as it was before, if it's posible. It\n> > makes it easier to find out with database is growing from the\n> > command line.\n> \n> We have a /contrib utility called oid2name for that.\n\nI'll check that. :-)\n\n> > > * Allow better control over user privileges [privileges]\n> > \n> > If this is related with the views and privileges, I'm on this\n> > one!\n> \n> Not sure what the problem is there.  We already implement privileges on\n> views that are separate from the base tables.\n\nI personally have not had any problems, but heard on the general list. Could\nhave been bad configuration, or wrong GRANTS. I didn't follow the thread so\nclosely.\n\nSaludos... :-)\n\n-- \nEl mejor sistema operativo es aquel que te da de comer.\nCuida tu dieta.\n-----------------------------------------------------------------\nMartin Marques                  |        mmarques@unl.edu.ar\nProgramador, Administrador      |       Centro de Telematica\n                       Universidad Nacional\n                            del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Fri, 11 May 2001 08:19:24 +0300", "msg_from": "=?iso-8859-1?B?TWFydO1uIE1hcnF16XM=?= <martin@bugs.unl.edu.ar>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "> At 01:20 PM 10-05-2001 -0400, Bruce Momjian wrote:\n> >* Allow row re-use without vacuum (Vadim)\n> \n> Will this do away with the need for a lazy vacuum?\n> \n> >> Full text indexing.\n> >> \n> >\n> >This one is already done using GIST.  The GIST improvements are in 7.1,\n> >and I assume full text indexing will be more fully integrated into\n> >PostgreSQL in 7.2. \n> \n> I hope it will. What will the interface be like? \n\nWish I knew.  :-)\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 11 May 2001 07:04:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items" }, { "msg_contents": "Bruce Momjian wrote:\n\n> Here is a small list of big TODO items. I was wondering which ones\n> people were thinking about for 7.2?\n>\n> * Allow inherited tables to inherit index, UNIQUE constraint, and primary key\n>   [inheritance]\n\n\ni was wondering if there was any thought still being given to Oliver \nElphick's post from a while back that is still in TODO.detail \n[inheritance]: \nhttp://candle.pha.pa.us/mhonarc/todo.detail/inheritance/msg00010.html\n\ni kind of feel as though the inheritance semantics for postgres at the \nmoment are not fully fleshed out, and including further features without \nhaving a full plan for the semantics doesn't seem to advance the effort \nof making postgres a true Object-Relational DBMS.\n\nfor my part, as a user, i am excited that inheritance is available even \nin a limited fashion, but where i use it, i have basically had to invent \nmy own semantics for referential integrity based on a suite of triggers. \nthis issue is addressed in Oliver's post, but i was wondering if such \nissues were still a part of the development dialogue since Oliver's post \nwas the last in TODO.detail [inheritance] and seemed to merit no \nresponse (or any that i could find in the mailing list archives).\n\n-tfo\n\n\n\n\n\n", "msg_date": "Fri, 11 May 2001 10:25:41 -0500", "msg_from": "\"Thomas F. O'Connell\" <tfo@monsterlabs.com>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "> From: <john@mwk.co.nz>\n> Date: Fri, 11 May 2001 10:41:57 +1200\n> \n> > Here is a small list of big TODO items. I was wondering which ones\n> > people were thinking about for 7.2?\n> \n> The need for stored procedures that return a record set.\n> This is required to migrate from MSSQL, Interbase and others.\n> This is a commonly requested item.\n\nThis would be very useful, as well as the \"RETURNING\" clause that is\nsupported elsewhere with inserts.\n\n-- \nVirtually, \nNed Wolpert <ned.wolpert@knowledgenet.com>\n\nD08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n\n", "msg_date": "Fri, 11 May 2001 11:36:49 -0400 (EDT)", "msg_from": "Ned Wolpert <ned.wolpert@knowledgenet.com>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "Tom Lane writes:\n\n> Peter E. had implied that he wanted to tackle the elog issues for 7.2,\n> but I'm not sure if he's committed to it or not.\n\nWell...\n\n* Automatically add filename, line, function name:  Easy to code, lots of\n  labour.  Should be lumped in with some other large change.\n\n* Error codes:  I think there are only a handful of key messages that\n  users (programs) need to detect cleanly, mostly constraint violations.\n  The rest are \"the query you sent is wrong -- fix your application\" and\n  \"something went really wrong -- manual repair needed\"\n\n  So maybe this could be a smallish change.\n\n* Translation:  If we want to use gettext I can get started.  I don't\n  think I'm interested in using any other interface.\n\n-- \nPeter Eisentraut   peter_e@gmx.net   http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 12 May 2001 23:21:44 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: 7.2 items " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> * Translation:  If we want to use gettext I can get started.  I don't\n> think I'm interested in using any other interface.\n\nI have no objection to the gettext API, but I was and still am concerned\nabout depending on GNU gettext's code, because of license conflicts.\nThere is a BSD-license gettext clone project, but it doesn't look to be\nvery far along.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 May 2001 20:00:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 items " }, { "msg_contents": "> * Translation:  If we want to use gettext I can get started.  I don't\n> think I'm interested in using any other interface.\n> \n\nLicense?\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 May 2001 20:54:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "\nI think we just need someone to start a discussion then generate a patch\nto match.\n\n> Bruce Momjian wrote:\n> \n> > Here is a small list of big TODO items. I was wondering which ones\n> > people were thinking about for 7.2?\n> >\n> > * Allow inherited tables to inherit index, UNIQUE constraint, and primary key\n> >   [inheritance]\n> \n> \n> i was wondering if there was any thought still being given to Oliver \n> Elphick's post from a while back that is still in TODO.detail \n> [inheritance]: \n> http://candle.pha.pa.us/mhonarc/todo.detail/inheritance/msg00010.html\n> \n> i kind of feel as though the inheritance semantics for postgres at the \n> moment are not fully fleshed out, and including further features without \n> having a full plan for the semantics doesn't seem to advance the effort \n> of making postgres a true Object-Relational DBMS.\n> \n> for my part, as a user, i am excited that inheritance is available even \n> in a limited fashion, but where i use it, i have basically had to invent \n> my own semantics for referential integrity based on a suite of triggers. \n> this issue is addressed in Oliver's post, but i was wondering if such \n> issues were still a part of the development dialogue since Oliver's post \n> was the last in TODO.detail [inheritance] and seemed to merit no \n> response (or any that i could find in the mailing list archives).\n> \n> -tfo\n> \n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 May 2001 21:43:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items" }, { "msg_contents": "I'd like to have partial sorting implemented in 7.2.\nWhile it's rather narrow optimization for case ORDER BY ... LIMIT ...\nit has big (in my opinion) impact to Web application.\nWe get up to 6x performance improvement in our experiments with our very\ncrude patch for 7.1. The idea is very simple - stop sorting when we get\nrequested rows. Unfortunately, our knowledge of internals is poor and\nwe need some help.\n\n\tRegards,\n\t\tOleg\n\nOn Thu, 10 May 2001, Bruce Momjian wrote:\n\n> Here is a small list of big TODO items.  I was wondering which ones\n> people were thinking about for 7.2?\n>\n> ---------------------------------------------------------------------------\n>\n> * Add replication of distributed databases [replication]\n> \to automatic fallover\n> \to load balancing\n> \to master/slave replication\n> \to multi-master replication\n> \to partition data across servers\n> \to sample implementation in contrib/rserv\n> \to queries across databases or servers (two-phase commit)\n> * Point-in-time data recovery using backup and write-ahead log\n> * Allow row re-use without vacuum (Vadim)\n> * Add the concept of dataspaces/tablespaces [tablespaces]\n> * Allow better control over user privileges [privileges]\n> * Allow elog() to return error codes, module name, file name, line\n>   number, not just messages [elog]\n> * Allow international error message support and add error codes [elog]\n> * Make binary/file in/out interface for TOAST columns\n> * Large object interface improvements\n> * Allow inherited tables to inherit index, UNIQUE constraint, and primary key\n>   [inheritance]\n> * Add ALTER TABLE DROP COLUMN feature [drop]\n> * Add ALTER TABLE ... DROP CONSTRAINT\n> * Automatically drop constraints/functions when object is dropped\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sun, 13 May 2001 23:35:26 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "At 01:20 PM 10-05-2001 -0400, Bruce Momjian wrote:\n>Here is a small list of big TODO items.  I was wondering which ones\n>people were thinking about for 7.2?\n>\n>---------------------------------------------------------------------------\n\nWell since you asked, here's my wish list for Postgresql 7.2.\n\n1) Full text index to be used by LIKE queries.\ne.g.\ncreate index myfti_idx on mytable ( mysoundex(story,'british english')\nfti_ops);\nUsage:\nselect * from mytable where mysoundex(story,'british english') like\n'%tomato%';\nselect * from mytable where mysoundex(story,'us english') like '%either%';\nselect * from mytable where mysynonym(story) like '%excellent%';\n\nFirst select indexed. Other selects not indexed.\n\n2) Some form of synchronous \"wait\" which blocks till an event happens (no\nneed to poll at all).\ne.g.\nWAIT('sendmessagetomain');\n\nNOTIFY('sendmessagetomain') gets things going. If not possible to reuse\nNOTIFY, then something else will do.\n\nThis allows many programs on various hosts to wait for an event before\ndoing things.\n\nThe present async-io stuff has traces of polling left, can't be done in a\ntransaction and can't be used with Perl DBI (and maybe other standard DB\ninterfaces). \n\n3) And the notorious VACUUM and VACUUM analyze :).\nHow about:\nVACUUM <table> lazy; (don't lock table)\nVACUUM <table> [hardworking];\nanalyze <table> [randomsample];\nanalyze <table> full;\n\nProbably syntax should be different so as not to increase the number of\nreserved words.\n\n4) Not really important to me but can serial be a proper type or something\nso that drop table will drop the linked sequence as well? \nMaybe:\n serial = old serial for compatibility\n serial4 = new serial\n serial8 = new serial using bigint\n(OK so 2 billion is big, but...)\n\n5) How will the various rollovers be handled e.g. OID, TID etc? What\nhappens if OIDs are not unique? As things get faster and bigger a nonunique\nOID in a table might just happen.\n\nCheerio,\nLink.\n\n", "msg_date": "Mon, 14 May 2001 11:44:39 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "james@spunkysoftware.com wrote:\n> \n> > > * Allow elog() to return error codes, module name, file name, line\n> > > number, not just messages [elog]\n> \n> I bags this one. A nice relatively easy place for me to start hacken' the\n> Postges. Which source tree do I diff and patch against? Er, I have no idea\n> how to use these diff and patch things but I know that a manual exists.\n> \n> How do I get the CVS source tree? Surely I don't have to download the whole\n> thing every day? 
I only have 1KB/sec of connectivity and it's extremely\n> expensive ($300/month).\n\nsee the page:\n\nhttp://www.ca.postgresql.org/devel-corner/docs/postgres/cvs.html\n\nthe links are near the end of Developer's Corner page\n\n---------------------\nHannu\n", "msg_date": "Mon, 14 May 2001 10:03:22 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: todo - I want the elog() thingy" }, { "msg_contents": "Lincoln Yeoh wrote:\n> \n> \n> 2) Some form of synchronous \"wait\" which blocks till an event happens (no\n> need to poll at all).\n> e.g.\n> WAIT('sendmessagetomain');\n> \n> NOTIFY('sendmessagetomain') gets things going. If not possible to reuse\n> NOTIFY, then something else will do.\n> \n> This allows many programs on various hosts to wait for an event before\n> doing things.\n> \n> The present async-io stuff has traces of polling left, can't be done in a\n> transaction and can't be used with Perl DBI (and maybe other standard DB\n> interfaces).\n\nWhat do you do if you are waiting on some other message - drop it,\nreorder \nmessages, something else ?\n\n> 3) And the notorious VACUUM and VACUUM analyze :).\n> How about:\n> VACUUM <table> lazy; (don't lock table)\n> VACUUM <table> [hardworking];\n> analyze <table> [randomsample];\n> analyze <table> full;\n> \n> Probably syntax should be different so as not to increase the number of\n> reserved words.\n\nMaybe some SET variable ?\n\nSET VACUUM TO \"LAZY\";\nSET VACUUM TO \"ANALYZE EVERYTHING YOU CAN IN 15 MINUTES\";\n\n> 4) Not really important to me but can serial be a proper type or something\n> so that drop table will drop the linked sequence as well?\n> Maybe:\n> serial = old serial for compatibility\n> serial4 = new serial\n> serial8 = new serial using bigint\n> (OK so 2 billion is big, but...)\n> \n> 5) How will the various rollovers be handled e.g. OID, TID etc? What\n> happens if OIDs are not unique? 
As things get faster and bigger a nonunique\n> OID in a table might just happen.\n\nOID's should _not_ be allowed to be non-unique, it is like spending\nresources \non \"what if 2+2=5\" scenarios.\n\nI think that all system *IDs should be allowed to be 64 bits - XID reuse\nis \na kludge that can serve the immediate problem of DB freezing when\nrunning out \nof transaction IDs - but I don't like it as a long-term solution.\n\n-------------------\nHannu\n", "msg_date": "Mon, 14 May 2001 10:24:57 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items" }, { "msg_contents": "On Sat, May 12, 2001 at 11:21:44PM +0200, Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > Peter E. had implied that he wanted to tackle the elog issues for 7.2,\n> > but I'm not sure if he's committed to it or not.\n> \n> Well...\n> \n> * Automatically add filename, line, function name: Easy to code, lots of\n> labour. Should be lumped in with some other large change.\n> \n> * Error codes: I think there are only a handful of key messages that\n> users (programs) need to detect cleanly, mostly constraint violations.\n> The rest are \"the query you sent is wrong -- fix your application\" and\n> \"something went really wrong -- manual repair needed\"\n> \n> So maybe this could be a smallish change.\n> \n> * Translation: If we want to use gettext I can get started. I don't\n> think I'm interested in using any other interface.\n\n What dissect this work to two parts? First implement error codes and later\ntranslation. IMHO transaction hasn't big importance (and will encapsulate\nin elog() stuff) and is possible speculate about it later. Do you plannig \ngettext stuff as a ./configure option? 
\n\n\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Mon, 14 May 2001 09:40:09 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "#ifdef ENABLE_NLS\n# include <libintl.h>\n# define _(String) gettext (String)\n# define N_(String) (String)\n#else\n/* Stubs that do something close enough. */\n# define textdomain(String)\n# define gettext(String) (String)\n# define dgettext(Domain,Message) (Message)\n# define dcgettext(Domain,Message,Type) (Message)\n# define bindtextdomain(Domain,Directory)\n# define _(String) (String)\n# define N_(String) (String)\n#endif\n\nJust add the above code to each file, and each time that you use a string \"my\nstring\" encapsulate it with _(\"my string\"). gettext will parse the code and\nextract all the strings for future translation.\n\nCheers\n\n\nKarel Zak wrote:\n\n> On Sat, May 12, 2001 at 11:21:44PM +0200, Peter Eisentraut wrote:\n> > Tom Lane writes:\n> >\n> > > Peter E. had implied that he wanted to tackle the elog issues for 7.2,\n> > > but I'm not sure if he's committed to it or not.\n> >\n> > Well...\n> >\n> > * Automatically add filename, line, function name: Easy to code, lots of\n> > labour. Should be lumped in with some other large change.\n> >\n> > * Error codes: I think there are only a handful of key messages that\n> > users (programs) need to detect cleanly, mostly constraint violations.\n> > The rest are \"the query you sent is wrong -- fix your application\" and\n> > \"something went really wrong -- manual repair needed\"\n> >\n> > So maybe this could be a smallish change.\n> >\n> > * Translation: If we want to use gettext I can get started. I don't\n> > think I'm interested in using any other interface.\n>\n> What dissect this work to two parts? First implement error codes and later\n> translation. 
IMHO transaction hasn't big importance (and will encapsulate\n> in elog() stuff) and is possible speculate about it later. Do you plannig\n> gettext stuff as a ./configure option?\n>\n> Karel\n>\n> --\n> Karel Zak <zakkr@zf.jcu.cz>\n> http://home.zf.jcu.cz/~zakkr/\n>\n> C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n\n", "msg_date": "Mon, 14 May 2001 20:24:26 +1200", "msg_from": "Franck Martin <franck@sopac.org>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "At 10:24 AM 14-05-2001 +0500, Hannu Krosing wrote:\n>Lincoln Yeoh wrote:\n>> 2) Some form of synchronous \"wait\" which blocks till an event happens (no\n>> need to poll at all).\n>> e.g.\n>> WAIT('sendmessagetomain');\n>> \n>> NOTIFY('sendmessagetomain') gets things going. If not possible to reuse\n>> NOTIFY, then something else will do.\n>> \n>> This allows many programs on various hosts to wait for an event before\n>> doing things.\n>> \n>> The present async-io stuff has traces of polling left, can't be done in a\n>> transaction and can't be used with Perl DBI (and maybe other standard DB\n>> interfaces).\n>\n>What do you do if you are waiting on come other message - drop it,\n>reorder \n>messages, something else ?\n\nSince the proposed WAIT is to block on a particular message immediately,\nyou can't really wait on other messages at the same time. Multiple WAITs\nwill just be done in the order issued. \n\nIt can be considered to be a waste of one backend/db connection, but it\nallows some things to be much simpler - each program just does one thing,\nand hopefully does it well. 
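The queue-in-order WAIT semantics described here can be modelled outside the database with an ordinary condition variable. This is a toy illustration of the proposed contract (block until the named event fires; TRUE on the event, FALSE on shutdown or timeout), not backend code:

```python
import threading

class EventBus:
    """Toy model of the proposed WAIT/NOTIFY contract (illustration only)."""

    def __init__(self):
        self._cond = threading.Condition()
        self._fired = set()
        self._shutdown = False

    def notify(self, name):
        # NOTIFY('name'): record the event and wake every waiter.
        with self._cond:
            self._fired.add(name)
            self._cond.notify_all()

    def shutdown(self):
        # Backend going away: wake waiters so they can return FALSE.
        with self._cond:
            self._shutdown = True
            self._cond.notify_all()

    def wait(self, name, timeout=5.0):
        # WAIT('name'): block until the named event fires.  Returns True
        # on the event, False on shutdown or timeout.
        with self._cond:
            self._cond.wait_for(
                lambda: name in self._fired or self._shutdown, timeout)
            return name in self._fired
```

Here WAIT('sendmessagetomain') corresponds to bus.wait('sendmessagetomain'), with the NOTIFY arriving from another thread of control via bus.notify('sendmessagetomain').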
\n\nWAIT should return a TRUE if successful - received desired event and\nstopped blocking, and a FALSE if not - something else happened (SIGTERM,\nbackend disconnected/died), and stopped blocking.\n\nHmm hang on, what will happen if pgsql is shutdown. Tons of WAITing\nprocesses waking up at the same time? Use FreeBSD? :).\n\nCheerio,\nLink.\n\n", "msg_date": "Mon, 14 May 2001 17:24:31 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items" }, { "msg_contents": "Lincoln Yeoh wrote:\n> \n> At 01:20 PM 10-05-2001 -0400, Bruce Momjian wrote:\n> >Here is a small list of big TODO items. I was wondering which ones\n> >people were thinking about for 7.2?\n> >\n> >---------------------------------------------------------------------------\n> \n> Well since you asked, here's my wish list for Postgresql 7.2.\n> \n> 1) Full text index to be used by LIKE queries.\n> e.g.\n> create index myfti_idx on mytable ( mysoundex(story,'british english')\n> fti_ops);\n> Usage:\n> select * from mytable where mysoundex(story,'british english') like\n> '%tomato%';\n> select * from mytable where mysoundex(story,'us english') like '%either%';\n> select * from mytable where mysynonym(story) like '%excellent%';\n> \n> First select indexed. Other selects not indexed.\n\nThis is not as easy as it looks. Full text search requires one of two\napproaches, either a trigger function which updates a full text index on insert\nor update, or a system which periodically scans a database and builds a full\ntext index. The fulltextindex method that is in contrib and my FTSS system are\nexamples of both respectively.\n\nEither way it is a bit of overhead, and typically outside normal SQL. 
Most\npeople would not want the amount of overhead required to maintain a full text\nindex on each insert or update.\n\nAlso, I have been trying to talk the guys into doing some things with indexes,\nbut my understanding is that indexes are one of the last bastions of black\nmagic in Postgres.\n\n\n-- \n42 was the answer, 49 was too soon.\n------------------------\nhttp://www.mohawksoft.com\n", "msg_date": "Mon, 14 May 2001 08:43:27 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "> > * Allow elog() to return error codes, module name, file name, line\n> > number, not just messages [elog]\n\nI bags this one. A nice relatively easy place for me to start hacken' the\nPostges. Which source tree do I diff and patch against? Er, I have no idea\nhow to use these diff and patch things but I know that a manual exists.\n\nHow do I get the CVS source tree? Surely I don't have to download the whole\nthing every day? I only have 1KB/sec of connectivity and it's extremely\nexpensive ($300/month).\n\nCan I just download the files for elog() and do it that way, and I'll write\nsome driver function to unit test it, and send the patch when I'm done to\nthe patches list.\n\nAny developers got some tips for me?\n\n---\nJames\n\n\n\n----- Original Message -----\nFrom: \"Oleg Bartunov\" <oleg@sai.msu.su>\nTo: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nCc: \"PostgreSQL-development\" <pgsql-hackers@postgresql.org>\nSent: Sunday, May 13, 2001 9:35 PM\nSubject: Re: [HACKERS] 7.2 items\n\n\n> I'd like to have partial sorting implemented in 7.2.\n> While it's rather narrow optimization for case ORDER BY ... LIMIT ...\n> it has big (in my opinion) impact to Web application.\n> We get up to 6x performance improvement in our experiments with our very\n> crude patch for 7.1. The idea is very simple - stop sorting when we get\n> requested rows. 
Unfortunately, our knowledge of internals is poor and\n> we need some help.\n>\n> Regards,\n> Oleg\n>\n> On Thu, 10 May 2001, Bruce Momjian wrote:\n>\n> > Here is a small list of big TODO items. I was wondering which ones\n> > people were thinking about for 7.2?\n> >\n>\n> --------------------------------------------------------------------------\n-\n> >\n> > * Add replication of distributed databases [replication]\n> > o automatic fallover\n> > o load balancing\n> > o master/slave replication\n> > o multi-master replication\n> > o partition data across servers\n> > o sample implementation in contrib/rserv\n> > o queries across databases or servers (two-phase commit)\n> > * Point-in-time data recovery using backup and write-ahead log\n> > * Allow row re-use without vacuum (Vadim)\n> > * Add the concept of dataspaces/tablespaces [tablespaces]\n> > * Allow better control over user privileges [privileges]\n> > * Allow elog() to return error codes, module name, file name, line\n> > number, not just messages [elog]\n> > * Allow international error message support and add error codes [elog]\n> > * Make binary/file in/out interface for TOAST columns\n> > * Large object interface improvements\n> > * Allow inherited tables to inherit index, UNIQUE constraint, and\nprimary key\n> > [inheritance]\n> > * Add ALTER TABLE DROP COLUMN feature [drop]\n> > * Add ALTER TABLE ... 
DROP CONSTRAINT\n> > * Automatically drop constraints/functions when object is dropped\n> >\n> >\n>\n> Regards,\n> Oleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n>\n\n", "msg_date": "Mon, 14 May 2001 13:44:04 +0100", "msg_from": "<james@spunkysoftware.com>", "msg_from_op": false, "msg_subject": "Re: todo - I want the elog() thingy" }, { "msg_contents": "Quoting Peter Eisentraut <peter_e@gmx.net>:\n\n> Patrick Welche writes:\n> \n> > > > What's missing with it?\n> > >\n> > > * portability\n> > >\n> > > At first glance, uses strlcat and strlcpy. Didn't look further.\n> >\n> > As I said, I didn't change anything within the GNU make source to get it\n> to\n> > work.\n> \n> I am talking about the source of the thing (libintl) itself.\n\n[snip]\n\nSorry if I enter in a rush....\n\nwhats wrong with GNU gettext?\n\nSaludos... 
:-)\n\n-- \nEl mejor sistema operativo es aquel que te da de comer.\nCuida tu dieta.\n-----------------------------------------------------------------\nMartin Marques | mmarques@unl.edu.ar\nProgramador, Administrador | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Mon, 14 May 2001 18:13:10 +0300", "msg_from": "=?iso-8859-1?B?TWFydO1uIE1hcnF16XM=?= <martin@bugs.unl.edu.ar>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "On Sat, May 12, 2001 at 08:00:42PM -0400, Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > * Translation: If we want to use gettext I can get started. I don't\n> > think I'm interested in using any other interface.\n> \n> I have no objection to the gettext API, but I was and still am concerned\n> about depending on GNU gettext's code, because of license conflicts.\n> There is a BSD-license gettext clone project, but it doesn't look to be\n> very far along.\n\nWhat's missing with it? (eg managed to force gmake's configure to use it\nrather than its own, and didn't have to fiddle anything for it to just work)\n\n% ldd `which gmake` \n/usr/local/bin/gmake:\n -lutil.5 => /usr/lib/libutil.so.5\n -lkvm.5 => /usr/lib/libkvm.so.5\n -lintl.0 => /usr/lib/libintl.so.0 << BSD license lib\n -lc.12 => /usr/lib/libc.so.12\n% env LANGUAGE=fr gmake\ngmake: *** Pas de cibles spécifiées et aucun makefile n'a été trouvé. 
Arrêt.\n\nCheers,\n\nPatrick\n", "msg_date": "Mon, 14 May 2001 18:13:35 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "Please, consider a BLOB column type without having to do lo_import, \nlo_export.\n\n", "msg_date": "Mon, 14 May 2001 14:08:42 -0500", "msg_from": "Thomas Swan <tswan@olemiss.edu>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "Patrick Welche writes:\n\n> > I have no objection to the gettext API, but I was and still am concerned\n> > about depending on GNU gettext's code, because of license conflicts.\n> > There is a BSD-license gettext clone project, but it doesn't look to be\n> > very far along.\n>\n> What's missing with it?\n\n* portability\n\nAt first glance, uses strlcat and strlcpy. Didn't look further.\n\n* dedication to portability\n\nOnly plans to support *BSD.\n\n* source code availability\n\nDidn't find anything outside NetBSD CVS and the CVS rep where they got it\nfrom.\n\n* documentation\n\nRelated to above.\n\n* English support forum\n\nOnly Japanese mailing list available.\n\n\nIf you can address these things we might have a winner, otherwise we might\nhave to fork it.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 14 May 2001 21:36:56 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "> If you can address these things we might have a winner, otherwise we might\n> have to fork it.\n\nI am going to have to ask for clarification on that last point. Are\nyou suggesting we have two versions?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 14 May 2001 15:44:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "> Please, consider a BLOB column type without having to do lo_import, \n> lo_export.\n\nYep, big needed item.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 14 May 2001 15:44:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "On Mon, May 14, 2001 at 09:36:56PM +0200, Peter Eisentraut wrote:\n> Patrick Welche writes:\n> \n> > > I have no objection to the gettext API, but I was and still am concerned\n> > > about depending on GNU gettext's code, because of license conflicts.\n> > > There is a BSD-license gettext clone project, but it doesn't look to be\n> > > very far along.\n> >\n> > What's missing with it?\n> \n> * portability\n> \n> At first glance, uses strlcat and strlcpy. Didn't look further.\n\nAs I said, I didn't change anything within the GNU make source to get it to\nwork. grep strlcat on GNU make, which you must have in order to build\npostgresql, returns nothing, however grep gettext does. 
I chose gmake as an\nexample which is probably written with portability in mind.\n\n> * dedication to portability\n> \n> Only plans to support *BSD.\n\nWhat does this imply?\n\nHISTORY\n The functions are implemented by Citrus project, based on the documenta-\n tions for GNU gettext.\n\n> * source code availability\n> \n> Didn't find anything outside NetBSD CVS and the CVS rep where they got it\n> from.\n\n>From libintl.h\n\n/*-\n * Copyright (c) 2000 Citrus Project,\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions\n * are met:\n * 1. Redistributions of source code must retain the above copyright\n * notice, this list of conditions and the following disclaimer.\n * 2. Redistributions in binary form must reproduce the above copyright\n * notice, this list of conditions and the following disclaimer in the\n * documentation and/or other materials provided with the distribution.\n *\n * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND\n * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE\n * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\n * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n * SUCH DAMAGE.\n */\n\nwhich I think counts as a postgresql compatible license? Is that what you\nmeant?\n\n> * documentation\n> \n> Related to above.\n\nThe HISTORY bit was quoted from the gettext man page.. 
What more\ndocumentation is required? AFAIK it's meant to be a direct replacement..\n\n> * English support forum\n> \n> Only Japanese mailing list available.\n\nYes, I wondered about that to.. Luckily PostgreSQL is international!\n\nCheers,\n\nPatrick\n", "msg_date": "Mon, 14 May 2001 21:18:15 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "Patrick Welche writes:\n\n> > > What's missing with it?\n> >\n> > * portability\n> >\n> > At first glance, uses strlcat and strlcpy. Didn't look further.\n>\n> As I said, I didn't change anything within the GNU make source to get it to\n> work.\n\nI am talking about the source of the thing (libintl) itself.\n\n> > * dedication to portability\n> >\n> > Only plans to support *BSD.\n>\n> What does this imply?\n\nIt implies it won't easily work on non-BSD platforms, which makes it\nunusable to many folks.\n\n> > * source code availability\n> >\n> > Didn't find anything outside NetBSD CVS and the CVS rep where they got it\n> > from.\n\n> which I think counts as a postgresql compatible license? Is that what you\n> meant?\n\nNo, I meant I can't find the source code anywhere in a polished form.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 14 May 2001 22:36:30 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [010514 13:39] wrote:\n> Patrick Welche writes:\n> \n> > > I have no objection to the gettext API, but I was and still am concerned\n> > > about depending on GNU gettext's code, because of license conflicts.\n> > > There is a BSD-license gettext clone project, but it doesn't look to be\n> > > very far along.\n> >\n> > What's missing with it?\n> \n> * portability\n> \n> At first glance, uses strlcat and strlcpy. 
Didn't look further.\n> \n> * dedication to portability\n> \n> Only plans to support *BSD.\n> \n> * source code availability\n> \n> Didn't find anything outside NetBSD CVS and the CVS rep where they got it\n> from.\n> \n> * documentation\n> \n> Related to above.\n> \n> * English support forum\n> \n> Only Japanese mailing list available.\n> \n> \n> If you can address these things we might have a winner, otherwise we might\n> have to fork it.\n\nPlease don't fork it. If you base off the FreeBSD gettext I will\nmerge your changes into ours as long as they follow the style of\nthe existing code.\n\nI'd really like to see a \"bsd userland\" out there not tied to a\nparticular version of UNIX so this means a lot to me.\n\n-- \n-Alfred Perlstein - [alfred@freebsd.org]\nDaemon News Magazine in your snail-mail! http://magazine.daemonnews.org/\n", "msg_date": "Mon, 14 May 2001 13:55:33 -0700", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "Karel Zak writes:\n\n> > * Translation: If we want to use gettext I can get started. I don't\n> > think I'm interested in using any other interface.\n>\n> What dissect this work to two parts? First implement error codes and later\n> translation. IMHO transaction hasn't big importance (and will encapsulate\n> in elog() stuff) and is possible speculate about it later.\n\nIt's important to me. 
And it's not contained to elog(), I want to\ntranslate the whole thing, including all the frontends.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 14 May 2001 23:09:40 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> On Sat, May 12, 2001 at 08:00:42PM -0400, Tom Lane wrote:\n>> There is a BSD-license gettext clone project, but it doesn't look to be\n>> very far along.\n\n> What's missing with it?\n\nWhere did you find an actual release meant for public consumption?\nI had a hard time even finding a CVS server.\n\nNo release history == not very far along, in my book.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 May 2001 19:26:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 items " }, { "msg_contents": "hi, there!\n\nOn Mon, 14 May 2001, Peter Eisentraut wrote:\n\n> > > I have no objection to the gettext API, but I was and still am concerned\n> > > about depending on GNU gettext's code, because of license conflicts.\n> > > There is a BSD-license gettext clone project, but it doesn't look to be\n> > > very far along.\n> >\n> > What's missing with it?\n> \n> * portability\n> \n> At first glance, uses strlcat and strlcpy. Didn't look further.\n\nyou can pull strlcat and strlcpy from *BSD source tree either\nthey are pretty portable :)\n\n/fjoe\n\n", "msg_date": "Tue, 15 May 2001 12:50:57 +0700 (NOVST)", "msg_from": "Max Khon <fjoe@newst.net>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "Tom Lane writes:\n\n> Where did you find an actual release meant for public consumption?\n\nNetBSD is using it in production. FreeBSD too? Some people from those\ncamps offered to cooperate in adopting this for our uses, so it's worth a\ntry. 
I'll see if I can make a self-contained portable package out of that\ncode later this week.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 15 May 2001 19:11:49 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: 7.2 items " }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> Here is a small list of big TODO items. I was wondering which ones\n> people were thinking about for 7.2?\n\nA friend of mine wants to use PostgreSQL instead of Oracle for a large\napplication, but has run into a snag when speed comparisons looked\ngood until the Oracle folks added a couple of BITMAP indexes. I can't\nrecall seeing any discussion about that here -- are there any plans?\n\n-tih\n-- \nThe basic difference is this: hackers build things, crackers break them.\n", "msg_date": "07 Jun 2001 13:08:17 +0200", "msg_from": "Tom Ivar Helbekkmo <tih@kpnQwest.no>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "Tom Ivar Helbekkmo wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > Here is a small list of big TODO items. I was wondering which ones\n> > people were thinking about for 7.2?\n> \n> A friend of mine wants to use PostgreSQL instead of Oracle for a large\n> application, but has run into a snag when speed comparisons looked\n> good until the Oracle folks added a couple of BITMAP indexes. I can't\n> recall seeing any discussion about that here -- are there any plans?\n\nI have tried to bring this up in several different forms, and hardly ever get a\nnibble.\n\nBitmap indexes are great for text searching. Perhaps you can use\n\"fulltextindex\" in the contrib directory. 
It isn't as fast as a bitmap index,\nand the syntax would be different, but it would perform better.\n", "msg_date": "Thu, 07 Jun 2001 07:36:41 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > Here is a small list of big TODO items. I was wondering which ones\n> > people were thinking about for 7.2?\n> \n> A friend of mine wants to use PostgreSQL instead of Oracle for a large\n> application, but has run into a snag when speed comparisons looked\n> good until the Oracle folks added a couple of BITMAP indexes. I can't\n> recall seeing any discussion about that here -- are there any plans?\n\nIt is not on our list and I am not sure what they do.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 7 Jun 2001 11:03:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "Compare price of implementation.\n\nFor that $100k on the oracle license you can toss in a few more gigs\nof memory and a few extra CPU's and perhaps 15k drives rather than 10k\nones :)\n\nThen toss in the monthly support contracts between Oracle & Great\nBridge (or Pgsql.inc if you can get anyone on their staff) and all of\na sudden you can afford to upgrade the hardware more often and have a\ndeveloper write some work arounds for the slower parts of the program.\n\nAnyway, the point is to compare the 2 products where the price is\nsimilar (installation wide). If postgres is faster, then reduce the\nhardware till they are a similar speed level. Now you do a price\ncomparison and take it to business. 
'Cheaper software, but slightly\nmore expensive hardware means a lower priced package with similar\nperformance'.\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Tom Ivar Helbekkmo\" <tih@kpnQwest.no>\nCc: \"PostgreSQL-development\" <pgsql-hackers@postgresql.org>\nSent: Thursday, June 07, 2001 11:03 AM\nSubject: Re: [HACKERS] 7.2 items\n\n\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >\n> > > Here is a small list of big TODO items. I was wondering which\nones\n> > > people were thinking about for 7.2?\n> >\n> > A friend of mine wants to use PostgreSQL instead of Oracle for a\nlarge\n> > application, but has run into a snag when speed comparisons looked\n> > good until the Oracle folks added a couple of BITMAP indexes. I\ncan't\n> > recall seeing any discussion about that here -- are there any\nplans?\n>\n> It is not on our list and I am not sure what they do.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n>\n\n", "msg_date": "Thu, 7 Jun 2001 12:05:56 -0400", "msg_from": "\"Rod Taylor\" <rod.taylor@inquent.com>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "Bruce Momjian wrote:\n\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >\n> > > Here is a small list of big TODO items. 
I was wondering which ones\n> > > people were thinking about for 7.2?\n> >\n> > A friend of mine wants to use PostgreSQL instead of Oracle for a large\n> > application, but has run into a snag when speed comparisons looked\n> > good until the Oracle folks added a couple of BITMAP indexes. I can't\n> > recall seeing any discussion about that here -- are there any plans?\n>\n> It is not on our list and I am not sure what they do.\n\nDo you have access to any Oracle Documentation? There is a good explanation\nof them.\n\nHowever, I will try to explain.\n\nIf you have a table, locations. It has 1,000,000 records.\n\nIn oracle you do this:\n\ncreate bitmap index bitmap_foo on locations (state) ;\n\nFor each unique value of 'state' oracle will create a bitmap with 1,000,000\nbits in it. With a one representing a match and a zero representing no\nmatch. Record '0' in the table is represented by bit '0' in the bitmap,\nrecord '1' is represented by bit '1', record two by bit '2' and so on.\n\nIn a table where comparatively few different values are to be indexed in a\nlarge table, a bitmap index can be quite small and not suffer the N * log(N)\ndisk I/O most tree based indexes suffer. If the bitmap is fairly sparse or\ndense (or have periods of denseness and sparseness), it can be compressed\nvery efficiently as well.\n\nWhen the statement:\n\nselect * from locations where state = 'MA';\n\nIs executed, the bitmap is read into memory in very few disk operations.\n(Perhaps even as few as one or two). It is a simple operation of rifling\nthrough the bitmap for '1's that indicate the record has the property,\n'state' = 'MA';\n\n", "msg_date": "Thu, 07 Jun 2001 14:36:59 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "I think it's possible to implement bitmap indexes with a little\neffort using GiST. 
at least I know one implementation\nhttp://www.it.iitb.ernet.in/~rvijay/dbms/proj/\nif you have interests you could implement bitmap indexes yourself\nunfortunately, we're very busy\n\n\tOleg\nOn Thu, 7 Jun 2001, mlw wrote:\n\n> Bruce Momjian wrote:\n>\n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > >\n> > > > Here is a small list of big TODO items. I was wondering which ones\n> > > > people were thinking about for 7.2?\n> > >\n> > > A friend of mine wants to use PostgreSQL instead of Oracle for a large\n> > > application, but has run into a snag when speed comparisons looked\n> > > good until the Oracle folks added a couple of BITMAP indexes. I can't\n> > > recall seeing any discussion about that here -- are there any plans?\n> >\n> > It is not on our list and I am not sure what they do.\n>\n> Do you have access to any Oracle Documentation? There is a good explanation\n> of them.\n>\n> However, I will try to explain.\n>\n> If you have a table, locations. It has 1,000,000 records.\n>\n> In oracle you do this:\n>\n> create bitmap index bitmap_foo on locations (state) ;\n>\n> For each unique value of 'state' oracle will create a bitmap with 1,000,000\n> bits in it. With a one representing a match and a zero representing no\n> match. Record '0' in the table is represented by bit '0' in the bitmap,\n> record '1' is represented by bit '1', record two by bit '2' and so on.\n>\n> In a table where comparatively few different values are to be indexed in a\n> large table, a bitmap index can be quite small and not suffer the N * log(N)\n> disk I/O most tree based indexes suffer. If the bitmap is fairly sparse or\n> dense (or have periods of denseness and sparseness), it can be compressed\n> very efficiently as well.\n>\n> When the statement:\n>\n> select * from locations where state = 'MA';\n>\n> Is executed, the bitmap is read into memory in very few disk operations.\n> (Perhaps even as few as one or two). 
It is a simple operation of rifling\n> through the bitmap for '1's that indicate the record has the property,\n> 'state' = 'MA';\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Thu, 7 Jun 2001 22:38:20 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items" }, { "msg_contents": "> I think it's possible to implement bitmap indexes with a little\n> effort using GiST. at least I know one implementation\n> http://www.it.iitb.ernet.in/~rvijay/dbms/proj/\n> if you have interests you could implement bitmap indexes yourself\n> unfortunately, we're very busy\n> \n\nI have added this thread to TODO.detail and TODO:\n\n\t* Add bitmap indexes [performance] \n\nVery interesting to use GIST for this. \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 7 Jun 2001 16:04:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > > Please, consider a BLOB column type without having to do lo_import,\n> > > lo_export.\n> > \n> > Yep, big needed item.\n> \n> as we have now and unlimited rowlength it seems to be more of an \n> interface issue than the actual implementation one (mod seek/read).\n> \n> Is there an ISO/ANSI SQL interface to BLOB's defined someplace ?\n\nYes, clearly interface. Someone is working on code to insert/export\nbinary stuff using bytea and base64 encoding. Seems like a good idea.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 26 Jun 2001 11:36:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Please, consider a BLOB column type without having to do lo_import,\n> > lo_export.\n> \n> Yep, big needed item.\n\nas we have now and unlimited rowlength it seems to be more of an \ninterface issue than the actual implementation one (mod seek/read).\n\nIs there an ISO/ANSI SQL interface to BLOB's defined someplace ?\n\n----------\nHannu\n", "msg_date": "Tue, 26 Jun 2001 17:45:27 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: 7.2 items" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian wrote:\n> > >\n> > > > Please, consider a BLOB column type without having to do lo_import,\n> > > > lo_export.\n> > >\n> > > Yep, big needed item.\n> >\n> > as we have now and unlimited rowlength it seems to be more of an\n> > interface issue than the actual implementation one (mod 
seek/read).\n> >\n> > Is there an ISO/ANSI SQL interface to BLOB's defined someplace ?\n> \n> Yes, clearly interface. Someone is working on code to insert/export\n> binary stuff using bytea and base64 encoding. Seems like a good idea.\n\nWill there also be support current file-like BLOB ops like seek and \nread(n) ?\n\n----------------\nHannu\n", "msg_date": "Wed, 27 Jun 2001 10:11:57 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items" }, { "msg_contents": "On Wed, 27 Jun 2001, Hannu Krosing wrote:\n\n> Bruce Momjian wrote:\n> > \n> > > Bruce Momjian wrote:\n> > > >\n> > > > > Please, consider a BLOB column type without having to do lo_import,\n> > > > > lo_export.\n> > > >\n> > > > Yep, big needed item.\n> > >\n> > > as we have now and unlimited rowlength it seems to be more of an\n> > > interface issue than the actual implementation one (mod seek/read).\n> > >\n> > > Is there an ISO/ANSI SQL interface to BLOB's defined someplace ?\n> > \n> > Yes, clearly interface. Someone is working on code to insert/export\n> > binary stuff using bytea and base64 encoding. Seems like a good idea.\n> \n> Will there also be support current file-like BLOB ops like seek and \n> read(n) ?\n\nSure, via substring(). Unfortunately, TOASTed tuple must be detoasted\ncompletely, and you cannot get any performance boost by, for example,\nreading first 8k out of a 500k bytea value. 
All 500k must be detoasted\nfirst.\n\n-alex\n\n", "msg_date": "Wed, 27 Jun 2001 06:31:58 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items" }, { "msg_contents": "Alex Pilosov wrote:\n> \n> On Wed, 27 Jun 2001, Hannu Krosing wrote:\n> \n> > Bruce Momjian wrote:\n> > >\n> > > > Bruce Momjian wrote:\n> > > > >\n> > > > > > Please, consider a BLOB column type without having to do lo_import,\n> > > > > > lo_export.\n> > > > >\n> > > > > Yep, big needed item.\n> > > >\n> > > > as we have now and unlimited rowlength it seems to be more of an\n> > > > interface issue than the actual implementation one (mod seek/read).\n> > > >\n> > > > Is there an ISO/ANSI SQL interface to BLOB's defined someplace ?\n> > >\n> > > Yes, clearly interface. Someone is working on code to insert/export\n> > > binary stuff using bytea and base64 encoding. Seems like a good idea.\n> >\n> > Will there also be support current file-like BLOB ops like seek and\n> > read(n) ?\n> \n> Sure, via substring().\n\nThat would _not_ be seek()+read() by default but can be possibly\nimplemented \nas such in cooperation with toast machinery.\n\n> Unfortunately, TOASTed tuple must be detoasted\n> completely, and you cannot get any performance boost by, for example,\n> reading first 8k out of a 500k bytea value. All 500k must be detoasted\n> first.\n\nI suspect that this can be avoided with a smarter toast-aware substring \nand possibly also disallowing compression (just using spreading overs \nmultiple toast-table records).\n\nIIRC there exists machinery (if not interface) for influencing TOAST\nprocessor \ndecisions to compress or not.\n\nAFAIK, oracle LONGs have some smart schema where you can acess them in\nsome \nsmart ways if the cursor is on current row, but you will get full\nbytestrings\nif you request more than one row at a time, i.e. 
some combined BLOB/LONG\nscheme.\n\nI think this is worth considering, especially for seek/read/write type\noperations.\n\n--------------\nHannu\n", "msg_date": "Wed, 27 Jun 2001 14:43:33 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items" }, { "msg_contents": "Bruce Momjian writes:\n\n> > Please, consider a BLOB column type without having to do lo_import,\n> > lo_export.\n>\n> Yep, big needed item.\n\nMaybe we could make the BLOB type a wrapper around the lo_* functions?\nThe BLOB value would only store the oid.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 27 Jun 2001 17:04:49 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items" }, { "msg_contents": "Hannu Krosing writes:\n\n> as we have now and unlimited rowlength it seems to be more of an\n> interface issue than the actual implementation one (mod seek/read).\n\nunlimited = 2 GB, IIRC\n\n> Is there an ISO/ANSI SQL interface to BLOB's defined someplace ?\n\nIt's basically no different from regular character strings, i.e.,\nsubstring(), position(), ||, etc.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 27 Jun 2001 17:06:20 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Bruce Momjian writes:\n> Please, consider a BLOB column type without having to do lo_import,\n> lo_export.\n>> \n>> Yep, big needed item.\n\n> Maybe we could make the BLOB type a wrapper around the lo_* functions?\n> The BLOB value would only store the oid.\n\nWhat for/why bother? 
A toastable bytea column would do just as well.\nWhat we need is an easy-to-use set of access functions, which we haven't\ngot in either case (without additional work).\n\nI'd prefer to see that work invested in access functions for toasted\ncolumns, because LOs have all sorts of administrative problems ---\nsecurity and garbage collection, to name two. We don't really want to\nencourage their use in the long term.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jun 2001 11:40:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items " }, { "msg_contents": "Tom Lane writes:\n\n> > Maybe we could make the BLOB type a wrapper around the lo_* functions?\n> > The BLOB value would only store the oid.\n>\n> What for/why bother? A toastable bytea column would do just as well.\n\nThere's still a 1 or 2 GB limit for data stored in that.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 27 Jun 2001 18:29:51 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> What for/why bother? A toastable bytea column would do just as well.\n\n> There's still a 1 or 2 GB limit for data stored in that.\n\n1 Gb, I believe ... 
but LOs are not a lot better; they'd max out at 2 or\nat most 4 Gb, depending on whether the code always treats offsets as\nunsigned.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jun 2001 12:44:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items " }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Hannu Krosing writes:\n> \n> > as we have now and unlimited rowlength it seems to be more of an\n> > interface issue than the actual implementation one (mod seek/read).\n> \n> unlimited = 2 GB, IIRC\n\nYes, about that unlimited ;)\n\n> > Is there an ISO/ANSI SQL interface to BLOB's defined someplace ?\n> \n> It's basically no different from regular character strings, i.e.,\n> substring(), position(), ||, etc.\n\nSo no standard seek/read/write type interface ?\n\n------------\nHannu\n", "msg_date": "Wed, 27 Jun 2001 23:33:35 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items" }, { "msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Tom Lane writes:\n> >> What for/why bother? A toastable bytea column would do just as well.\n>\n> > There's still a 1 or 2 GB limit for data stored in that.\n>\n> 1 Gb, I believe ... but LOs are not a lot better; they'd max out at 2 or\n> at most 4 Gb, depending on whether the code always treats offsets as\n> unsigned.\n\nThat can be fixed by adding a 64-bit aware equivalent of the existing lo_*\nfunctions. 
I suppose it'd be a lot harder to make regular data types\nhandle long values.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 27 Jun 2001 20:54:54 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items " }, { "msg_contents": "Hannu Krosing writes:\n\n> > > Is there an ISO/ANSI SQL interface to BLOB's defined someplace ?\n> >\n> > It's basically no different from regular character strings, i.e.,\n> > substring(), position(), ||, etc.\n>\n> So no standard seek/read/write type interface ?\n\nSQL is not a procedural language, so this has to be expected.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 27 Jun 2001 23:55:29 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items" }, { "msg_contents": "At 23:55 27/06/01 +0200, Peter Eisentraut wrote:\n>Hannu Krosing writes:\n>\n>> > > Is there an ISO/ANSI SQL interface to BLOB's defined someplace ?\n>> >\n>> > It's basically no different from regular character strings, i.e.,\n>> > substring(), position(), ||, etc.\n>>\n>> So no standard seek/read/write type interface ?\n>\n>SQL is not a procedural language, so this has to be expected.\n>\n\nWouldn't this logic also imply that there would be no cursor positioning?\nNo update cursors etc? seek, read, write don't seem that different to\nMOVE/FETCH/UPDATE.\n\nYou also missed out mentioning the character overlay functions (which I\ndon't think we have), that allow updates of parts of BLOBs.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 28 Jun 2001 09:25:37 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Hannu Krosing writes:\n> \n> > > > Is there an ISO/ANSI SQL interface to BLOB's defined someplace ?\n> > >\n> > > It's basically no different from regular character strings, i.e.,\n> > > substring(), position(), ||, etc.\n> >\n> > So no standard seek/read/write type interface ?\n> \n> SQL is not a procedural language, so this has to be expected.\n\nSQL is about 3-5 languages which share some syntax DDL,DML,DQL,cursor\nmanipulation,...\n\nAnd we do currently have seek/read/write for LOs, possibly as a relict\nfrom postquel.\n\nWe also have PL/PGSQL and other PL's that can be used from wihin SQL, so\nfor me the \nborders between different languages seem quite blurred. \n\nWhat I hoped the standard would have is something like cursor ops on a\nfield in on \nouter cursors current record.\n\n------------------\nHannu\n", "msg_date": "Thu, 28 Jun 2001 11:33:24 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items" } ]
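The bitmap-index layout mlw describes in the thread above — one bit array per distinct column value, where bit i records whether row i matches — can be sketched in a few lines of Python. This is a toy illustration of the idea only, not Oracle's or PostgreSQL's actual index format; the function names and the use of a Python int as a bitset are my own.

```python
# Toy bitmap index for a low-cardinality column: one bitmap per distinct
# value; bit i of a value's bitmap is set iff row i holds that value.

def build_bitmap_index(column):
    """Map each distinct value to a bitset (a Python int used as a bit array)."""
    index = {}
    for row, value in enumerate(column):
        index[value] = index.get(value, 0) | (1 << row)
    return index

def matching_rows(index, value):
    """Row numbers where the column equals `value`, by scanning its bitmap."""
    bits = index.get(value, 0)
    return [row for row in range(bits.bit_length()) if (bits >> row) & 1]

states = ["MA", "NY", "MA", "CA", "NY", "MA"]
index = build_bitmap_index(states)
print(matching_rows(index, "MA"))  # -> [0, 2, 5]
```

Predicates over several bitmap-indexed columns combine with cheap bitwise AND/OR over the bitmaps, which is where this scheme beats tree indexes for multi-condition scans over low-cardinality data — the point of the speed comparison discussed above.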
[ { "msg_contents": "\nCan anyone suggest why this might be happening (I think it's in 7.1b4):\n\n SELECT definition as viewdef, \n (select oid from pg_rewrite where\n rulename='_RETszallitolevel_tetele_ervenyes') as view_oid \n from pg_views where viewname = 'szallitolevel_tetele_ervenyes';\n\n=> view_oid is 133652.\n\n \n SELECT definition as viewdef, \n (select oid from pg_rewrite where \n rulename='_RET' || 'szallitolevel_tetele_ervenyes') as view_oid \n from pg_views where viewname = 'szallitolevel_tetele_ervenyes';\n\n=> view_oid is NULL\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 11 May 2001 02:32:24 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": true, "msg_subject": "Odd results in SELECT" }, { "msg_contents": "On Fri, 11 May 2001, Philip Warner wrote:\n> Can anyone suggest why this might be happening (I think it's in 7.1b4):\n> \n> SELECT definition as viewdef, \n> (select oid from pg_rewrite where\n> rulename='_RETszallitolevel_tetele_ervenyes') as view_oid \n> from pg_views where viewname = 'szallitolevel_tetele_ervenyes';\n> \n> => view_oid is 133652.\n> \n> \n> SELECT definition as viewdef, \n> (select oid from pg_rewrite where \n> rulename='_RET' || 'szallitolevel_tetele_ervenyes') as view_oid \n> from pg_views where viewname = 'szallitolevel_tetele_ervenyes';\n> \n> => view_oid is NULL\nI get the same result in 7.1 final. 
Tom, isn't this in relation with my\ncomplex query you solved yesterday?\n\nZoltan\n\n", "msg_date": "Thu, 10 May 2001 18:50:01 +0200 (CEST)", "msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>", "msg_from_op": false, "msg_subject": "Re: Odd results in SELECT" }, { "msg_contents": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> On Fri, 11 May 2001, Philip Warner wrote:\n>> Can anyone suggest why this might be happening (I think it's in 7.1b4):\n>> \n>> SELECT definition as viewdef, \n>> (select oid from pg_rewrite where\n>> rulename='_RETszallitolevel_tetele_ervenyes') as view_oid \n>> from pg_views where viewname = 'szallitolevel_tetele_ervenyes';\n>> \n>> => view_oid is 133652.\n>> \n>> \n>> SELECT definition as viewdef, \n>> (select oid from pg_rewrite where \n>> rulename='_RET' || 'szallitolevel_tetele_ervenyes') as view_oid \n>> from pg_views where viewname = 'szallitolevel_tetele_ervenyes';\n>> \n>> => view_oid is NULL\n> I get the same result in 7.1 final. Tom, isn't this in relation with my\n> complex query you solved yesterday?\n\nNot in that form --- there isn't any parameter being passed down to the\nsubquery. 
What plan does EXPLAIN show for the failing query?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 May 2001 16:57:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Odd results in SELECT " }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Can anyone suggest why this might be happening (I think it's in 7.1b4):\n\nCan't duplicate in current sources:\n\nregression=# SELECT definition as viewdef,\nregression-# (select oid from pg_rewrite where\nregression(# rulename='_RETstreet') as view_oid\nregression-# from pg_views where viewname = 'street';\n viewdef\n | view_oid\n-------------------------------------------------------------------------------------------------+----------\n SELECT r.name, r.thepath, c.cname FROM ONLY road r, real_city c WHERE (c.outline ## r.thepath); | 4001276\n(1 row)\n\nregression=# SELECT definition as viewdef,\nregression-# (select oid from pg_rewrite where\nregression(# rulename='_RET' || 'street') as view_oid\nregression-# from pg_views where viewname = 'street';\n viewdef\n | view_oid\n-------------------------------------------------------------------------------------------------+----------\n SELECT r.name, r.thepath, c.cname FROM ONLY road r, real_city c WHERE (c.outline ## r.thepath); | 4001276\n(1 row)\n\nWhat does EXPLAIN show for your two queries? 
(Maybe you'd better make\nit EXPLAIN VERBOSE.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 May 2001 17:31:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Odd results in SELECT " }, { "msg_contents": "On Thu, 10 May 2001, Tom Lane wrote:\n\n> Philip Warner <pjw@rhyme.com.au> writes:\n> > Can anyone suggest why this might be happening (I think it's in 7.1b4):\n> \n> Can't duplicate in current sources:\n> \n> regression=# SELECT definition as viewdef,\n> regression-# (select oid from pg_rewrite where\n> regression(# rulename='_RETstreet') as view_oid\n> regression-# from pg_views where viewname = 'street';\n> viewdef\n> | view_oid\n> -------------------------------------------------------------------------------------------------+----------\n> SELECT r.name, r.thepath, c.cname FROM ONLY road r, real_city c WHERE (c.outline ## r.thepath); | 4001276\n> (1 row)\n> \n> regression=# SELECT definition as viewdef,\n> regression-# (select oid from pg_rewrite where\n> regression(# rulename='_RET' || 'street') as view_oid\n> regression-# from pg_views where viewname = 'street';\n> viewdef\n> | view_oid\n> -------------------------------------------------------------------------------------------------+----------\n> SELECT r.name, r.thepath, c.cname FROM ONLY road r, real_city c WHERE (c.outline ## r.thepath); | 4001276\n> (1 row)\n> \n> What does EXPLAIN show for your two queries? (Maybe you'd better make\n> it EXPLAIN VERBOSE.)\nI attached both.\n\n Kov\\'acs, Zolt\\'an\n kovacsz@pc10.radnoti-szeged.sulinet.hu\n http://www.math.u-szeged.hu/~kovzol\n ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz", "msg_date": "Fri, 11 May 2001 14:42:22 +0200 (CEST)", "msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>", "msg_from_op": false, "msg_subject": "Re: Odd results in SELECT " }, { "msg_contents": "See my prior reply to Philip: the problem is that the given string is\nlonger than NAMEDATALEN. 
When you write\n\trulename = 'foo'\n(rulename is of type NAME) the untyped literal string 'foo' gets coerced\nto NAME, ie truncated to fit, and all is well. When you write\n\trulename = ('foo' || 'bar')\nthe result of the || operator is type TEXT, so instead rulename is\nconverted to TEXT and a text comparison is performed. In this case the\nrighthand value is not truncated and so the match will always fail.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 May 2001 09:53:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Odd results in SELECT " } ]
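Tom's NAMEDATALEN explanation in the thread above can be modeled in a few lines of Python. This is a simplified sketch only: NAMEDATALEN = 32 (the 7.1-era value) is an assumption here, plain string slicing stands in for the real NAME coercion, and the function name is mine.

```python
NAMEDATALEN = 32  # assumed 7.1-era value; NAME keeps NAMEDATALEN-1 chars

def as_name(s):
    # Coercing a string to type NAME silently truncates it to fit.
    return s[: NAMEDATALEN - 1]

view = "szallitolevel_tetele_ervenyes"
stored_rulename = as_name("_RET" + view)  # what the catalog actually holds

# rulename = '_RETszallitolevel_tetele_ervenyes': the untyped literal is
# coerced to NAME, so both sides are truncated the same way and they match.
print(stored_rulename == as_name("_RET" + view))  # -> True

# rulename = '_RET' || '...': the || result is TEXT, so the truncated NAME
# is compared as text against the full-length string and never matches.
print(stored_rulename == "_RET" + view)  # -> False
```

The model only misbehaves when the concatenated name exceeds NAMEDATALEN-1 characters, which is exactly why Tom could not reproduce the failure with the short rule name `_RETstreet`.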
[ { "msg_contents": "My nightly regression tests for OBSD failed for i386 and sparc. Attached\nis the regression.diff, I don't know what to make of it.. Looks like\nproblems w/ foreign keys.\n\n...\nparallel group (5 tests): portals_p2 rules select_views alter_table\nforeign_key\n select_views ... ok\n alter_table ... FAILED\n portals_p2 ... ok\n rules ... ok\n foreign_key ... FAILED\nparallel group (3 tests): limit temp plpgsql\n...\n\n\n\nb. palmer, bpalmer@crimelabs.net\npgp: www.crimelabs.net/bpalmer.pgp5", "msg_date": "Thu, 10 May 2001 13:25:11 -0400 (EDT)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": true, "msg_subject": "Regression tests for OBSD scrammed.." }, { "msg_contents": "\nIs this CVS or 7.1.X?\n\n\n> My nightly regression tests for OBSD failed for i386 and sparc. Attached\n> is the regression.diff, I don't know what to make of it.. Looks like\n> problems w/ foreign keys.\n> \n> ...\n> parallel group (5 tests): portals_p2 rules select_views alter_table\n> foreign_key\n> select_views ... ok\n> alter_table ... FAILED\n> portals_p2 ... ok\n> rules ... ok\n> foreign_key ... FAILED\n> parallel group (3 tests): limit temp plpgsql\n> ...\n> \n> \n> \n> b. palmer, bpalmer@crimelabs.net\n> pgp: www.crimelabs.net/bpalmer.pgp5\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 13:43:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests for OBSD scrammed.." }, { "msg_contents": "\nYes, I see it too. Must have been one of the patches I applied\nyesterday. 
Not sure which one.\n\n> My nightly regression tests for OBSD failed for i386 and sparc. Attached\n> is the regression.diff, I don't know what to make of it.. Looks like\n> problems w/ foreign keys.\n> \n> ...\n> parallel group (5 tests): portals_p2 rules select_views alter_table\n> foreign_key\n> select_views ... ok\n> alter_table ... FAILED\n> portals_p2 ... ok\n> rules ... ok\n> foreign_key ... FAILED\n> parallel group (3 tests): limit temp plpgsql\n> ...\n> \n> \n> \n> b. palmer, bpalmer@crimelabs.net\n> pgp: www.crimelabs.net/bpalmer.pgp5\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 13:48:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests for OBSD scrammed.." }, { "msg_contents": "> Is this CVS or 7.1.X?\n>\n> > My nightly regression tests for OBSD failed for i386 and sparc. Attached\n> > is the regression.diff, I don't know what to make of it.. Looks like\n> > problems w/ foreign keys.\n\nDoh, sorry all.\n\nCVS as of 3am last night..\n\n\n\nb. palmer, bpalmer@crimelabs.net\npgp: www.crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Thu, 10 May 2001 13:49:55 -0400 (EDT)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": true, "msg_subject": "Re: Regression tests for OBSD scrammed.." }, { "msg_contents": "> > Is this CVS or 7.1.X?\n> >\n> > > My nightly regression tests for OBSD failed for i386 and sparc. Attached\n> > > is the regression.diff, I don't know what to make of it.. 
Looks like\n> > > problems w/ foreign keys.\n> \n> Doh, sorry all.\n> \n> CVS as of 3am last night..\n\nOK, do an initdb and watch the errors disappear. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 15:43:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests for OBSD scrammed.." }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, do an initdb and watch the errors disappear. :-)\n\nDid someone forget to bump catversion.h?\n\nI didn't notice any recent commits that sounded like they'd need to\nforce an initdb, but maybe I missed something.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 May 2001 17:19:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests for OBSD scrammed.. " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, do an initdb and watch the errors disappear. :-)\n> \n> Did someone forget to bump catversion.h?\n> \n> I didn't notice any recent commits that sounded like they'd need to\n> force an initdb, but maybe I missed something.\n\nI will bump it now. I didn't see anything either, but initdb fixed my\nregression test errors.\n\nCatversion updated. Done. \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 18:39:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests for OBSD scrammed.." }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I will bump it now. 
I didn't see anything either, but initdb fixed my\n> regression test errors.\n\nThat was probably a waste of effort. I just finished pulling down and\nrecompiling CVS tip. I see no regression failures, either using a data\ndirectory left over from yesterday or with a fresh initdb. So I don't\nthink it's a catalog compatibility issue. I suspect a \"make distclean\"\nand \"make all\" might have more to do with fixing it.\n\nAlternatively, we might have a genuine portability problem lurking.\nDoes either of you still see a problem after doing a full rebuild?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 May 2001 18:50:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests for OBSD scrammed.. " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I will bump it now. I didn't see anything either, but initdb fixed my\n> > regression test errors.\n> \n> That was probably a waste of effort. I just finished pulling down and\n> recompiling CVS tip. I see no regression failures, either using a data\n> directory left over from yesterday or with a fresh initdb. So I don't\n> think it's a catalog compatibility issue. I suspect a \"make distclean\"\n> and \"make all\" might have more to do with fixing it.\n> \n> Alternatively, we might have a genuine portability problem lurking.\n> Does either of you still see a problem after doing a full rebuild?\n\nI don't.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 19:34:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests for OBSD scrammed.." }, { "msg_contents": "Hmm ... 
Bruce, were you doing serial or parallel regress tests?\nI'm finding that the serial tests work and the parallels blow up.\nIt might be that this is because of my anti-lseek hacking, but\nI kinda doubt it because the failures occur right about where\nbpalmer saw trouble with last night's CVS ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 May 2001 22:29:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests for OBSD scrammed.. " }, { "msg_contents": "> Hmm ... Bruce, were you doing serial or parallel regress tests?\n> I'm finding that the serial tests work and the parallels blow up.\n> It might be that this is because of my anti-lseek hacking, but\n> I kinda doubt it because the failures occur right about where\n> bpalmer saw trouble with last night's CVS ...\n\nI was doing serial.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 22:31:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests for OBSD scrammed.." }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I was doing serial.\n\nAnd ... ?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 May 2001 22:37:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests for OBSD scrammed.. " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I was doing serial.\n> \n> And ... ?\n> \n\nIt is fine now. I did see the error, but an initdb fixed it. I am 100%\nOK now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 22:42:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests for OBSD scrammed.." }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I was doing serial.\n\nTry the parallels a few times, and you *will* see it fail.\n\nReason: Stephan added a bunch of tests to alter_table.sql that\ncreate/modify/delete tables named pktable and fktable.\n\nUnfortunately, foreign_key.sql uses those same names for its\ntest tables ... and the parallel tests run these two tests\nin parallel. Ooops.\n\nPossible solutions: (a) rename tables in one test or the other,\nor (b) use TEMPORARY tables in one test or the other. I kinda\nlike (b), just to exercise temp tables in some interesting new\nways. Whaddya think?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 May 2001 22:51:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests for OBSD scrammed.. " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I was doing serial.\n> \n> Try the parallels a few times, and you *will* see it fail.\n> \n> Reason: Stephan added a bunch of tests to alter_table.sql that\n> create/modify/delete tables named pktable and fktable.\n> \n> Unfortunately, foreign_key.sql uses those same names for its\n> test tables ... and the parallel tests run these two tests\n> in parallel. Ooops.\n> \n> Possible solutions: (a) rename tables in one test or the other,\n> or (b) use TEMPORARY tables in one test or the other. I kinda\n> like (b), just to exercise temp tables in some interesting new\n> ways. Whaddya think?\n\nSounds good to me, and makes sense.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 23:10:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests for OBSD scrammed.." }, { "msg_contents": "> Alternatively, we might have a genuine portability problem lurking.\n> Does either of you still see a problem after doing a full rebuild?\n\nSorry this is late, but..\n\nI was still having the problem, even after a rm -rf of pgsql and a cvs co\nof tip.  Tom, let me know if you need any more information.  I'll speak\nup again if tonight's regression tests still fail.\n\n- brandon\n\nb. palmer, bpalmer@crimelabs.net\npgp: www.crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Thu, 10 May 2001 23:35:11 -0400 (EDT)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": true, "msg_subject": "Re: Regression tests for OBSD scrammed.. " }, { "msg_contents": "\nTom says it is the parallel regression tests, and he knows the cause.\n\n> > Alternatively, we might have a genuine portability problem lurking.\n> > Does either of you still see a problem after doing a full rebuild?\n> \n> Sorry this is late, but..\n> \n> I was still having the problem, even after a rm -rf of pgsql and a cvs co\n> of tip.  Tom, let me know if you need any more information.  I'll speak\n> up again if tonight's regression tests still fail.\n> \n> - brandon\n> \n> b. palmer, bpalmer@crimelabs.net\n> pgp: www.crimelabs.net/bpalmer.pgp5\n> \n> \n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 23:41:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests for OBSD scrammed.." 
}, { "msg_contents": "> > Possible solutions: (a) rename tables in one test or the other,\n> > or (b) use TEMPORARY tables in one test or the other. I kinda\n> > like (b), just to exercise temp tables in some interesting new\n> > ways. Whaddya think?\n\nI have a preference for (a). If we want to test temporary tables, let's\nhave a test which does that. But having a possible name conflict mixed\nin to another test seems to be asking for trouble, or at least does not\ndecouple things as much as they could be.\n\nThere is a benefit to having complex tests (a great benefit) but without\nalso having decoupled tests the diagnostic benefit is not as clear cut\nimho.\n\nBruce, would you have time to generate a regression test for temporary\ntables? If we don't have one now, we should.\n\n - Thomas\n", "msg_date": "Fri, 11 May 2001 04:25:53 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: Regression tests for OBSD scrammed.." }, { "msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> Possible solutions: (a) rename tables in one test or the other,\n> or (b) use TEMPORARY tables in one test or the other. I kinda\n> like (b), just to exercise temp tables in some interesting new\n> ways. Whaddya think?\n\n> I have a preference for (a). If we want to test temporary tables, let's\n> have a test which does that. But having a possible name conflict mixed\n> in to another test seems to be asking for trouble, or at least does not\n> decouple things as much as they could be.\n\nBut we already have a ton of regress tests that work on nonconflicting\ntable names. Seems like we add coverage if we try a few that are doing\nparallel uses of plain and temp tables of the same name.\n\n> Bruce, would you have time to generate a regression test for temporary\n> tables? If we don't have one now, we should.\n\nThere is one. 
But as a single test, it proves nothing about whether\ntemp tables conflict with similarly-named tables from the point of view\nof another backend.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 May 2001 01:00:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests for OBSD scrammed.. " }, { "msg_contents": "> > > Possible solutions: (a) rename tables in one test or the other,\n> > > or (b) use TEMPORARY tables in one test or the other. I kinda\n> > > like (b), just to exercise temp tables in some interesting new\n> > > ways. Whaddya think?\n> \n> I have a preference for (a). If we want to test temporary tables, let's\n> have a test which does that. But having a possible name conflict mixed\n> in to another test seems to be asking for trouble, or at least does not\n> decouple things as much as they could be.\n> \n> There is a benefit to having complex tests (a great benefit) but without\n> also having decoupled tests the diagnostic benefit is not as clear cut\n> imho.\n> \n> Bruce, would you have time to generate a regression test for temporary\n> tables? If we don't have one now, we should.\n\nWe already have one as temp.out. Is there something more it should be\ntesting?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 11 May 2001 07:16:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests for OBSD scrammed.." }, { "msg_contents": "My regression tests ran clean this morning. Thanks guys.\n\n- brandon\n\n\nb. 
palmer, bpalmer@crimelabs.net\npgp: www.crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Fri, 11 May 2001 09:17:56 -0400 (EDT)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": true, "msg_subject": "Re: Re: Regression tests for OBSD scrammed.." }, { "msg_contents": "> My regression tests ran clean this morning. Thanks guys.\n> \n\nI figured out why recompile/initdb fixed it for me. I saw the same\ncompile errors you did because I had patched my expected.out files are\npart of applying a patch. I had not recompiled, so I saw the same\nfailures you did. \n\nI did a recompile and initdb just to clean everything out, and it went\naway. My failures were part of patch application, while yours where\nfrom the parallel regression tests used with the new patch.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 May 2001 21:34:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Regression tests for OBSD scrammed.." } ]
[ { "msg_contents": "We've known for a long time that Postgres does a lot of\nredundant-seeming \"lseek(fd,0,SEEK_END)\" kernel calls while inserting\ndata; one for each inserted tuple, in fact. This is coming from\nRelationGetBufferForTuple() in src/backend/access/heap/hio.c, which does\nRelationGetNumberOfBlocks() to ensure that it knows the currently last\npage of the relation to insert into. That results in the lseek() call,\nwhich is the only way to be sure we know the current file EOF exactly,\ngiven that other backends might be extending the file too.\n\nWe have talked about avoiding this kernel call by keeping an accurate\nEOF location somewhere in shared memory. However, I just had what is\neither a brilliant or foolish idea: who says that we absolutely must\ninsert the new tuple on the very last page of the table? If it fits on\na page that's not-quite-the-last-one, why shouldn't we put it there?\nIf that works, we could just use \"rel->rd_nblocks-1\" as our initial\nguess of the page to insert onto, and skip the lseek. It doesn't\nmatter if rd_nblocks is slightly out of date. The logic in \nRelationGetBufferForTuple would then be something like:\n\n\t/*\n\t * First, use cached rd_nblocks to guess which page to put tuple\n\t * on.\n\t */\n\tif (rel->rd_nblocks > 0)\n\t{\n\t\tsee if tuple will fit on page rel->rd_nblocks-1;\n\t\tif so, put it there and return.\n\t}\n\t/*\n\t * Before extending relation, make sure no one else has done\n\t * so more recently than our last rd_nblocks update. 
(If we\n\t * blindly extend the relation here, then probably most of the\n\t * page the other guy added will end up going to waste.)\n\t */\n\tnewlastblock = RelationGetNumberOfBlocks(relation);\n\tif (newlastblock > rel->rd_nblocks)\n\t{\n\t\t/*\n\t\t * Someone else has indeed extended the rel.\n\t\t * Update my idea of the rel length, and see if\n\t\t * I can fit my tuple on the page he made.\n\t\t */\n\t\trel->rd_nblocks = newlastblock;\n\t\tsee if tuple will fit on page rel->rd_nblocks-1;\n\t\tif so, put it there and return.\n\t}\n\t/*\n\t * Otherwise, extend the rel by one block and put our tuple\n\t * there, same as before. (Be sure to update rel->rd_nblocks\n\t * for next time...)\n\t */\n\nAn additional small win is that we'd not have to do the\n\tif (!relation->rd_myxactonly)\n\t\tLockPage(relation, 0, ExclusiveLock);\nbit unless the first insertion attempt fails. This lock is only needed\nto ensure that just one backend extends the rel at a time, so as long as\nwe are adding a tuple to a pre-existing page there's no need to grab it.\nThat would improve concurrency some more, since the majority of tuple\ninsertions will succeed in adding to an existing page.\n\nSo the question is, is it safe to insert on non-last pages? AFAIK,\nthe only aspect of the system that really makes assumptions about tuple\npositioning that sequential scans stop when they reach rel->rd_nblocks\n(which they update at the beginning of the scan). They are assuming\nthat tuples appearing on pages added after a scan starts are\nuninteresting because they can't be committed from the point of view of\nthe scanning transaction. But that assumption is not violated by\nplacing new tuples in pages earlier than the last possible place.\n\nComments? Is there a hole in my reasoning?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 May 2001 13:40:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Getting rid of excess lseeks()" } ]
[ { "msg_contents": "Hello all-\n\nYesterday I upgraded my database from Pg v7.1RC1 to v7.1.1. Since this\nupgrade, I have been having unbelievable performance problems with updates\nto a particular table, and I've tracked the problem down to a rule within\nthat table.\n\nI've enclosed a simple case study at the end of this email (the real\nexample is basically the same, except that there are many more fields in\nthe tables). I will send the real table definitions if anyone thinks it\nwould be useful.\n\nThe problem is that in Pg v7.1RC1 (and previously with Pg v7.0.3) a simple\nupdate to the child table, changing the boolean active='t' to active='f'\nwould be basically instantaneous. Now, it takes about an hour. The real\ndatabase has ~10000 records in total between the \"child\" and \"parent\" \ntables.\n\nBasically, the rule \"r_inactivate_child\" below is the problem. If I drop \nthat rule, everything runs fast again.\n\nThe idea of this rule is to set active='f' in the parent table whenever\nall of the children (things in the child table) are inactive.\n\nAny suggestions would be *greatly* appreciated! 
Thanks!\n\nPS: Most likely the problem is in the design of the rule (I'm sure it \ncould be done better), but I would remind you that these same updates were \nvery, very fast in the older versions of Pg.\n\nPSS: I'm running linux, kernel v2.4.4, RH7.1, homerolled PG.\n\n-----------------------------------\nTables and rules:\n\nCREATE TABLE parent ( \n\tparentid int4 PRIMARY KEY, \n\tactive boolean \n);\nCREATE TABLE child ( \n\tchildid int4 PRIMARY KEY, \n\tparentid int4 references parent(parentid),\n\tactive boolean \n);\n\nCREATE RULE r_inactivate_child\n\tAS ON UPDATE TO child\n\t\tWHERE NEW.active='f' AND OLD.active='t'\n\tDO UPDATE parent SET active='f'\n\t\tWHERE parentid=NEW.parentid\n\t\t\n\t\tAND (SELECT count(*) FROM child\n\t\t\tWHERE parentid=NEW.parentid AND \n\t\t\t\tchildid<>NEW.childid AND active='t') = 0;\n\nCREATE RULE r_activate_child \n\tAS ON UPDATE TO child\n\t\tWHERE NEW.active='t' AND OLD.active='f'\n\tDO UPDATE parent SET active='t'\n\t\tWHERE parentid=NEW.parentid AND active='f';\n\n-----------------------------------\nPopulate with data:\nINSERT INTO parent (parentid, active) VALUES (1, 't');\nINSERT INTO child (childid, parentid, active) VALUES (1, 1, 't');\nINSERT INTO child (childid, parentid, active) VALUES (2, 1, 't');\nINSERT INTO child (childid, parentid, active) VALUES (3, 1, 't');\n\n(note, you will need *a lot* more data like this to see the slow \nupdates... 
but you get the idea, I hope).\n\n-----------------------------------\nPerform an update:\nUPDATE child SET active='f' WHERE childid=2;\n\n(this would take an hour on a ~8000 record child, ~3000 record parent \ndatabase)\n\n\n-----------------------------------\nExplain:\n\ntest=# explain update child set active='t' where childid=2;\nNOTICE:  QUERY PLAN:\n\nResult  (cost=0.00..30020.00 rows=1000000 width=10)\n  ->  Nested Loop  (cost=0.00..30020.00 rows=1000000 width=10)\n        ->  Seq Scan on parent  (cost=0.00..20.00 rows=1000 width=10)\n        ->  Seq Scan on child  (cost=0.00..20.00 rows=1000 width=0)\n\nNOTICE:  QUERY PLAN:\n\nNested Loop  (cost=0.00..49.28 rows=25 width=14)\n  ->  Index Scan using child_pkey on child  (cost=0.00..8.16 rows=5 \nwidth=4)\n  ->  Index Scan using parent_pkey on parent  (cost=0.00..8.16 rows=5 \nwidth=10)\n\nNOTICE:  QUERY PLAN:\n\nIndex Scan using child_pkey on child  (cost=0.00..8.14 rows=10 width=14)\n\n-- \n\n-**-*-*---*-*---*-*---*-----*-*-----*---*-*---*-----*-----*-*-----*---\n Jon Lapham\n Extracta Moléculas Naturais, Rio de Janeiro, Brasil\n email: lapham@extracta.com.br      web: http://www.extracta.com.br/\n***-*--*----*-------*------------*--------------------*---------------\n", "msg_date": "Thu, 10 May 2001 17:53:19 -0300", "msg_from": "Jon Lapham <lapham@extracta.com.br>", "msg_from_op": true, "msg_subject": "Problem with a rule on upgrade to v7.1.1" }, { "msg_contents": "Jon Lapham <lapham@extracta.com.br> writes:\n> Yesterday I upgraded my database from Pg v7.1RC1 to v7.1.1. Since this\n> upgrade, I have been having unbelievable performance problems with updates\n> to a particular table, and I've tracked the problem down to a rule within\n> that table.\n\nUh, have you VACUUM ANALYZEd yet? 
Those EXPLAIN numbers look\nsuspiciously like default statistics ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 May 2001 17:56:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem with a rule on upgrade to v7.1.1 " }, { "msg_contents": "On Thu, May 10, 2001 at 05:56:11PM -0400, Tom Lane wrote:\n> Jon Lapham <lapham@extracta.com.br> writes:\n> > Yesterday I upgraded my database from Pg v7.1RC1 to v7.1.1. Since this\n> > upgrade, I have been having unbelievable performance problems with updates\n> > to a particular table, and I've tracked the problem down to a rule within\n> > that table.\n> \n> Uh, have you VACUUM ANALYZEd yet? Those EXPLAIN numbers look\n> suspiciously like default statistics ...\n> \n> \t\t\tregards, tom lane\n\nNope, forgot to on the little demonstration tables I made. I tacked the \npost-VACUUM ANALYZE explain results (they look much better) at the end of \nthis email.\n\nHowever, I did run a VACUUM ANALYZE on my real database. And, just to be \nsure, I just ran it again. 
The updates still take a very, very long time \n(actually it is about 12 minutes, not an hour as I previously stated, it \njust feels like an hour).\n\nI also included the explain output for my real database (main_v0_8).\n\nThanks Tom!\n-Jon\n\nPS: anything else I should try?\n\n---------------------------------\ntest=# vacuum analyze;\nVACUUM\ntest=# explain update child set active='t' where \nchildid=2;\nNOTICE:  QUERY PLAN:\n\nResult  (cost=0.00..2.07 rows=3 width=10)\n  ->  Nested Loop  (cost=0.00..2.07 rows=3 width=10)\n        ->  Seq Scan on parent  (cost=0.00..1.01 rows=1 width=10)\n        ->  Seq Scan on child  (cost=0.00..1.03 rows=3 width=0)\n\nNOTICE:  QUERY PLAN:\n\nNested Loop  (cost=0.00..2.07 rows=1 width=14)\n  ->  Seq Scan on parent  (cost=0.00..1.01 rows=1 width=10)\n  ->  Seq Scan on child  (cost=0.00..1.04 rows=1 width=4)\n\nNOTICE:  QUERY PLAN:\n\nSeq Scan on child  (cost=0.00..1.04 rows=1 width=14)\n\nEXPLAIN\n\n-------------------------------------------\nmain_v0_8=# VACUUM ANALYZE;\nVACUUM\nmain_v0_8=# explain update tplantorgan set active='f' where \nsampleid=100430;\nNOTICE:  QUERY PLAN:\n\nNested Loop  (cost=0.00..2243933.76 rows=1 width=239)\n  ->  Seq Scan on tplantorgan  (cost=0.00..2243931.72 rows=1 width=4)\n        SubPlan\n          ->  Aggregate  (cost=258.96..258.96 rows=1 width=0)\n                ->  Seq Scan on tplantorgan  (cost=0.00..258.96 rows=1 \nwidth=0)\n  ->  Index Scan using tplant_pkey on tplant  (cost=0.00..2.03 rows=1 \nwidth=235)\nNOTICE:  QUERY PLAN:\n\nResult  (cost=0.00..1112558.20 rows=31883520 width=235)\n  ->  Nested Loop  (cost=0.00..1112558.20 rows=31883520 width=235)\n        ->  Seq Scan on tplant  (cost=0.00..167.80 rows=3680 width=235)\n        ->  Seq Scan on tplantorgan  (cost=0.00..215.64 rows=8664 width=0)\n\nNOTICE:  QUERY PLAN:\n\nSeq Scan on tplantorgan  (cost=0.00..237.30 rows=1 width=103)\n\nEXPLAIN\n\n\n-- \n\n-**-*-*---*-*---*-*---*-----*-*-----*---*-*---*-----*-----*-*-----*---\n Jon Lapham\n Extracta Moléculas Naturais, Rio de Janeiro, Brasil\n email: lapham@extracta.com.br      web: 
http://www.extracta.com.br/\n***-*--*----*-------*------------*--------------------*---------------\n", "msg_date": "Thu, 10 May 2001 19:27:06 -0300", "msg_from": "Jon Lapham <lapham@extracta.com.br>", "msg_from_op": true, "msg_subject": "Re: Problem with a rule on upgrade to v7.1.1" }, { "msg_contents": "Jon Lapham <lapham@extracta.com.br> writes:\n>> Uh, have you VACUUM ANALYZEd yet? Those EXPLAIN numbers look\n>> suspiciously like default statistics ...\n\n> Nope, forgot to on the little demonstration tables I made. I tacked the \n> post-VACUUM ANALYZE explain results (they look much better) at the end of \n> this email.\n\nAh. The plans for the little demo tables are uninteresting anyway,\nbut the plans for the real tables are interesting.\n\nNext question: do you still have your 7.0.* DB up? Can you get an\nEXPLAIN that shows how it did it (on the real tables)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 May 2001 18:44:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem with a rule on upgrade to v7.1.1 " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n> [snip]\n> Next question: do you still have your 7.0.* DB up? Can you get an\n> EXPLAIN that shows how it did it (on the real tables)?\n\nSetting it up right now... unfortunately I will have to do a recompile /\nreinstall as I have rm -r'ed the old versions. So, this may take a little bit\nof time, as I will also have to remove a few minor Pg v7.1 only items\nenhancements to the tables. 
I'll get back to you soon.\n\nI should be able to also give you the EXPLAIN data for a v7.1RC1 db.\n\n-- \n-**-*-*---*-*---*-*---*-----*-*-----*---*-*---*-----*-----*-*-----*---\n Jon Lapham\n Extracta Moléculas Naturais, Rio de Janeiro, Brasil\n email: lapham@extracta.com.br      web: http://www.extracta.com.br/\n***-*--*----*-------*------------*--------------------*---------------\n\n\n\n", "msg_date": "Fri, 11 May 2001 02:06:14 -0000", "msg_from": "\"Jon Lapham\" <lapham@extracta.com.br>", "msg_from_op": false, "msg_subject": "Re: Problem with a rule on upgrade to v7.1.1 " }, { "msg_contents": "On Thu, May 10, 2001 at 06:44:39PM -0400, Tom Lane wrote:\n> Next question: do you still have your 7.0.* DB up?  Can you get an\n> EXPLAIN that shows how it did it (on the real tables)?\n\nTom-\n\nOkay.  I started from a clean slate, by recompiling both Pgv7.1.1 and\nPgv7.1RC1, initdb'ing each (after appropriately changing /etc/ld.so.conf,\nrunning ldconfig, etc, etc), and restoring my real DB from a previously\ncreated dump file.  I didn't do Pgv7.0.3 b/c I think it may be unnecessary\nsince 7.1RC1 doesn't show this problem, while 7.1.1 does.  But, if you\nreally think it necessary, I will repeat this using 7.0.3.\n\nNotes: \n1) As usual, the 7.1RC1 returns from the \"UPDATE ... \" command as fast\nas I press enter.  The 7.1.1 returns from the \"UPDATE ... \" command in\nabout 10 minutes.\n2) The two explains are identical.\n3) Both updates succeed, it is only the time difference that is the \nproblem\n4) Running \"UPDATE tplantorgan SET active='t' WHERE sampleid=100430;\" \n(setting the boolean to true, instead of false) is instantaneous for both \n7.1RC1 and 7.1.1\n5) There are 8664 and 3680 tuples in the \"tplantorgan\" and \"tplant\" tables \nrespectively. 
So this is a relatively small DB.\n\nThe actual results:\n----------------------------------\nPg v7.1RC1 (restored from 2001-05-10 db dump):\n\nmain_v0_8=# vacuum ANALYZE ;\nVACUUM\nmain_v0_8=# explain update tplantorgan set active='f' where \nsampleid=100430;\nNOTICE:  QUERY PLAN:\n\nNested Loop  (cost=0.00..2243933.76 rows=1 width=239)\n  ->  Seq Scan on tplantorgan  (cost=0.00..2243931.72 rows=1 width=4)\n        SubPlan\n          ->  Aggregate  (cost=258.96..258.96 rows=1 width=0)\n                ->  Seq Scan on tplantorgan  (cost=0.00..258.96 rows=1 \nwidth=0)\n  ->  Index Scan using tplant_pkey on tplant  (cost=0.00..2.03 rows=1 \nwidth=235)\nNOTICE:  QUERY PLAN:\n\nResult  (cost=0.00..1112558.20 rows=31883520 width=235)\n  ->  Nested Loop  (cost=0.00..1112558.20 rows=31883520 width=235)\n        ->  Seq Scan on tplant  (cost=0.00..167.80 rows=3680 width=235)\n        ->  Seq Scan on tplantorgan  (cost=0.00..215.64 rows=8664 width=0)\n\nNOTICE:  QUERY PLAN:\n\nSeq Scan on tplantorgan  (cost=0.00..237.30 rows=1 width=103)
\n\nEXPLAIN\nmain_v0_8=# update tplantorgan set active='f' where sampleid=100430;\nUPDATE 1\n\n----------------------------------\nPg v7.1.1 (restored from 2001-05-10 db dump):\n\nmain_v0_8=# VACUUM ANALYZE ;\nVACUUM\nmain_v0_8=# explain update tplantorgan set active='f' where \nsampleid=100430;\nNOTICE:  QUERY PLAN:\n\nNested Loop  (cost=0.00..2243933.76 rows=1 width=239)\n  ->  Seq Scan on tplantorgan  (cost=0.00..2243931.72 rows=1 width=4)\n        SubPlan\n          ->  Aggregate  (cost=258.96..258.96 rows=1 width=0)\n                ->  Seq Scan on tplantorgan  (cost=0.00..258.96 rows=1 \nwidth=0)\n  ->  Index Scan using tplant_pkey on tplant  (cost=0.00..2.03 rows=1 \nwidth=235)\nNOTICE:  QUERY PLAN:\n\nResult  (cost=0.00..1112558.20 rows=31883520 width=235)\n  ->  Nested Loop  (cost=0.00..1112558.20 rows=31883520 width=235)\n        ->  Seq Scan on tplant  (cost=0.00..167.80 rows=3680 width=235)\n        ->  Seq Scan on tplantorgan  (cost=0.00..215.64 rows=8664 width=0)\n\nNOTICE:  QUERY PLAN:\n\nSeq Scan on tplantorgan  (cost=0.00..237.30 rows=1 width=103)
\n\nEXPLAIN\nmain_v0_8=# update tplantorgan set active='f' where sampleid=100430;\nUPDATE 1\nmain_v0_8=# select active from tplantorgan where sampleid=100430;\n active \n--------\n f\n(1 row)\n\n-- \n\n-**-*-*---*-*---*-*---*-----*-*-----*---*-*---*-----*-----*-*-----*---\n Jon Lapham\n Extracta Moléculas Naturais, Rio de Janeiro, Brasil\n email: lapham@extracta.com.br      web: http://www.extracta.com.br/\n***-*--*----*-------*------------*--------------------*---------------\n", "msg_date": "Fri, 11 May 2001 10:05:06 -0300", "msg_from": "Jon Lapham <lapham@extracta.com.br>", "msg_from_op": true, "msg_subject": "Re: Problem with a rule on upgrade to v7.1.1" }, { "msg_contents": "Jon Lapham <lapham@extracta.com.br> writes:\n> I didn't do Pgv7.0.3 b/c I think it may be unnecessary\n> since 7.1RC1 doesn't show this problem, while 7.1.1 does.  But, if you\n> really think it necessary, I will repeat this using 7.0.3.\n\nNo, that seems like useless work.\n\n> 1) As usual, the 7.1RC1 returns from the \"UPDATE ... \" command as fast\n> as I press enter.  The 7.1.1 returns from the \"UPDATE ... \" command in\n> about 10 minutes.\n> 2) The two explains are identical.\n\nOh, *that's* interesting.\n\n> 5) There are 8664 and 3680 tuples in the \"tplantorgan\" and \"tplant\" tables \n> respectively.  So this is a relatively small DB.\n\nWould you be willing to send me a dump of the whole DB (or at least the\ntables needed for this query)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 May 2001 09:20:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem with a rule on upgrade to v7.1.1 " }, { "msg_contents": "Jon Lapham <lapham@extracta.com.br> writes:\n> But, there is definitely something wrong here, b/c the rule that is \n> causing this *should* only need to run the subselect [SELECT count(*) FROM \n> tplantorgan WHERE tplantid=NEW.tplantid AND sampleid<>NEW.sampleid AND \n> active='t'] one time! 
I was able to make current\nsources run quickly by backing out the rev 1.59 change seen at\nhttp://www.ca.postgresql.org/cgi/cvsweb.cgi/pgsql/src/backend/optimizer/plan/initsplan.c\nThis explains why you saw different results between 7.1RC1 and 7.1.1.\nThere was probably some other change between 7.0.2 and 7.0.3 that caused\n7.0.3 to put the clauses in the \"right\" order whereas 7.0.2 didn't, but\nI don't feel like trawling the revision history right now to find it.\n\nThe long-term solution to this is for the planner to pay attention to\nthe execution cost of WHERE clauses and try to put the expensive ones\nlast in whatever list they end up in.\n\nMeanwhile, I don't really recommend that you hack up the code to reverse\nthe ordering yet again. The query is a mess anyway, and rewriting it\nseems the better pathway.\n\n> I'm beginning to suspect that my rule is just simply designed poorly...\n\nYes. Why not replace both of those rules with\n\nON UPDATE to tplantorgan DO\n UPDATE tplant\n SET active = EXISTS (SELECT 1 FROM tplantorgan WHERE\n tplantid=NEW.tplantid AND active)\n WHERE tplantid=NEW.tplantid;\n\nwhich seems a lot more obvious as well as quicker. BTW, an index on\ntplantorgan(tplantid) would likely help too...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 May 2001 14:34:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem with a rule on upgrade to v7.1.1 " }, { "msg_contents": "Tom-\n\nThanks for the detective work. This makes perfect sense now, and explains \nthe radically different behaviour between 7.1RC1 and 7.1.1. I will change \nmy rules to not have to depend on Pg choosing which WHERE clause to \nevaluate first.\n\n-Jon\n\nPS: (Talking *way* above my head now) Would be possible to have \nEXPLAIN flag this type of problem? 
Remember, the EXPLAIN output for \n7.1RC1 and 7.1.1 were identical.\n\n-- \n\n-**-*-*---*-*---*-*---*-----*-*-----*---*-*---*-----*-----*-*-----*---\n Jon Lapham\n Extracta Moléculas Naturais, Rio de Janeiro, Brasil\n email: lapham@extracta.com.br web: http://www.extracta.com.br/\n***-*--*----*-------*------------*--------------------*---------------\n", "msg_date": "Fri, 11 May 2001 16:46:58 -0300", "msg_from": "Jon Lapham <lapham@extracta.com.br>", "msg_from_op": true, "msg_subject": "Re: Problem with a rule on upgrade to v7.1.1" }, { "msg_contents": "Jon Lapham <lapham@extracta.com.br> writes:\n> PS: (Talking *way* above my head now) Would be possible to have \n> EXPLAIN flag this type of problem? Remember, the EXPLAIN output for \n> 7.1RC1 and 7.1.1 were identical.\n\nYeah, because EXPLAIN doesn't show the individual qual clauses attached\nto each plan node. EXPLAIN VERBOSE does ... but it's, um, too verbose\nfor most people.\n\nI've speculated to myself about designing some intermediate level of\nEXPLAIN display that would show things like qual clauses and indexscan\nconditions in a readable fashion (unlike EXPLAIN VERBOSE). It could use\nthe ruleutils.c code to produce the output, so there's not that much\ncoding involved, just some tasteful selection of exactly what details to\nshow and how to format the output. But I haven't gotten around to\nactually doing anything ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 May 2001 15:59:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem with a rule on upgrade to v7.1.1 " } ]
[ { "msg_contents": "I'm not sure about this one, but it should be possible to change the type of\na column without vacuum, to reorder the columns, change the name,...\n\nIt would allow people who design GUIs to PG to do a very nice designer\ninterface (like on MS SQL7.0).\n\nAlso on the design part, the UML standard and others allow the drawing of\ninterfaces (ArgoUML, Dia,..). Someone could work out an export from UML\ngraph to PG Table Creation.\n\nI'm still working on a new geographic type (geoobj) for PG following ISO\nstandards. Except the standard is still a draft. So I can't meet any\ndeadline... (fmaps.sourceforge.net)\n\nFranck Martin\nNetwork and Database Development Officer\nSOPAC South Pacific Applied Geoscience Commission\nFiji\nE-mail: franck@sopac.org <mailto:franck@sopac.org> \nWeb site: http://www.sopac.org/ <http://www.sopac.org/> \nSupport FMaps: http://fmaps.sourceforge.net/ <http://fmaps.sourceforge.net/>\n\n\nThis e-mail is intended for its addressees only. Do not forward this e-mail\nwithout approval. The views expressed in this e-mail may not necessarily be\nthe views of SOPAC.\n\n\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\nSent: Friday, 11 May 2001 5:21 \nTo: PostgreSQL-development\nSubject: [HACKERS] 7.2 items\n\n\nHere is a small list of big TODO items. 
I was wondering which ones\npeople were thinking about for 7.2?\n\n---------------------------------------------------------------------------\n\n* Add replication of distributed databases [replication]\n\to automatic fallover\n\to load balancing\n\to master/slave replication\n\to multi-master replication\n\to partition data across servers\n\to sample implementation in contrib/rserv\n\to queries across databases or servers (two-phase commit)\n* Point-in-time data recovery using backup and write-ahead log\n* Allow row re-use without vacuum (Vadim)\n* Add the concept of dataspaces/tablespaces [tablespaces]\n* Allow better control over user privileges [privileges]\n* Allow elog() to return error codes, module name, file name, line\n number, not just messages [elog]\n* Allow international error message support and add error codes [elog]\n* Make binary/file in/out interface for TOAST columns\n* Large object interface improvements\n* Allow inherited tables to inherit index, UNIQUE constraint, and primary\nkey\n [inheritance]\n* Add ALTER TABLE DROP COLUMN feature [drop]\n* Add ALTER TABLE ... DROP CONSTRAINT\n* Automatically drop constraints/functions when object is dropped\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n", "msg_date": "Fri, 11 May 2001 10:19:20 +1200", "msg_from": "Franck Martin <Franck@sopac.org>", "msg_from_op": true, "msg_subject": "RE: 7.2 items" } ]
[ { "msg_contents": "Just curious (and without having looked at a line of code),\n\nIf your idea works, would it be possible, or even a good idea, to \nhave PostgreSQL extend the relation in a non-linear fashion? So, for \na given statement, the second time it finds itself extending the \nrelation it does so by 2 x pagesize, the third time, now having \nexhausted 3 pages, it extends the relation by 4 x pagesize, etc. \nOracle has its STORAGE clause of the CREATE TABLE statement which \nallows for tuning of such things, but I'm wondering if PostgreSQL \ncan/should do some adaptive allocation of disk space. Perhaps it \nmight cut down on large bulk load times?\n\nJust curious,\n\nMike Mascari\nmascarm@mascari.com\n\n-----Original Message-----\nFrom:\tTom Lane [SMTP:tgl@sss.pgh.pa.us]\n\nWe have talked about avoiding this kernel call by keeping an accurate\nEOF location somewhere in shared memory. However, I just had what is\neither a brilliant or foolish idea: who says that we absolutely must\ninsert the new tuple on the very last page of the table? If it fits \non\na page that's not-quite-the-last-one, why shouldn't we put it there?\nIf that works, we could just use \"rel->rd_nblocks-1\" as our initial\nguess of the page to insert onto, and skip the lseek. \n", "msg_date": "Thu, 10 May 2001 20:03:45 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": true, "msg_subject": "RE: Getting rid of excess lseeks()" }, { "msg_contents": "Mike Mascari <mascarm@mascari.com> writes:\n> If your idea works, would it be possible, or even a good idea, to \n> have PostgreSQL extend the relation in a non-linear fashion?\n\nThe trick would be to ensure that the extra blocks actually got used\nfor something ... 
without more logic than is there now, all the backends\nwould glom onto the last new page and ignore the possibility of putting\ntuples into the other pages you'd added.\n\nThe hack I've proposed (and am currently testing) doesn't really do\nanything to reduce the per-page overhead of extending the relation.\nWhat it does do is reduce the per-tuple overhead of adding tuples\nto an extant last page. Basically we are down to an lseek per block\ninstead of an lseek per tuple ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 May 2001 22:21:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Getting rid of excess lseeks() " } ]
[ { "msg_contents": "Are we releasing tomorrow. I will stamp the CVS STABLE branch tonight\nas 7.1.2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 20:47:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "7.1.2 release" }, { "msg_contents": "At 20:47 10/05/01 -0400, Bruce Momjian wrote:\n>Are we releasing tomorrow. I will stamp the CVS STABLE branch tonight\n>as 7.1.2.\n>\n\nI have not applied the latest pg_dump patches, and I'm still working on a\nproblem with the view extract.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 11 May 2001 11:12:28 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: 7.1.2 release" }, { "msg_contents": "On Thu, 10 May 2001, Bruce Momjian wrote:\n\n> Are we releasing tomorrow. I will stamp the CVS STABLE branch tonight\n> as 7.1.2.\n\nNot that I'm aware of ... I heard mention something about a couple of\nfixes, but we *just* put out 7.1.1 ...\n\nIf ppl are affected by the bugs, use cvsup and set your tag to\nREL7_1_STABLE to download the latest STABLE code ... or use anon-cvs ...\n\n", "msg_date": "Thu, 10 May 2001 22:16:00 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: 7.1.2 release" }, { "msg_contents": "> On Thu, 10 May 2001, Bruce Momjian wrote:\n> \n> > Are we releasing tomorrow. 
I will stamp the CVS STABLE branch tonight\n> > as 7.1.2.\n> \n> Not that I'm aware of ... I heard mention something about a couple of\n> fixes, but we *just* put out 7.1.1 ...\n> \n> If ppl are affected by the bugs, use cvsup and set yoru tag to\n> REL7_1_STABLE to download the latest STABLE code ... or use anon-cvs ...\n\nThat is fine. I am not crazy about doing it now either. It is just\nthat Tom mentioned early in the week we need a release, and you said how\nabout Friday. I will brand 7.1.2 so it is ready whenever we want it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 21:24:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: 7.1.2 release" }, { "msg_contents": "> On Thu, 10 May 2001, Bruce Momjian wrote:\n> \n> > Are we releasing tomorrow. I will stamp the CVS STABLE branch tonight\n> > as 7.1.2.\n> \n> Not that I'm aware of ... I heard mention something about a couple of\n> fixes, but we *just* put out 7.1.1 ...\n> \n> If ppl are affected by the bugs, use cvsup and set yoru tag to\n> REL7_1_STABLE to download the latest STABLE code ... or use anon-cvs ...\n> \n\nIn fact, we can easily create a patch for this fix. It is only a few\nlines.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 21:25:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: 7.1.2 release" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> That is fine. I am not crazy about doing it now either. 
It is just\n> that Tom mentioned early in the week we need a release, and you said how\n> about Friday. I will brand 7.1.2 so it is ready whenever we want it.\n\nI think I said Friday, not Marc ... and I didn't hear anyone else\nagreeing with me anyway ;-)\n\nWe should certainly wait until Philip is happy with the state of\npg_dump. Right now I don't know of any other open issues.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 May 2001 22:25:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.1.2 release " }, { "msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > That is fine. I am not crazy about doing it now either. It is just\n> > that Tom mentioned early in the week we need a release, and you said how\n> > about Friday. I will brand 7.1.2 so it is ready whenever we want it.\n> \n> I think I said Friday, not Marc ... and I didn't hear anyone else\n> agreeing with me anyway ;-)\n> \n\nHmm I've thought no one has any objection :-).\nI agree with you because the bug is very critical.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Fri, 11 May 2001 11:39:12 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: 7.1.2 release" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> I agree with you because the bug is very critical.\n\nYes, I'd like to get that plpgsql bug fix out as soon as possible.\n\nBut the pg_dump things that Philip is fixing are important too,\nso I think we should wait a couple more days for those.\n(Philip, we are just talking about a few days, right?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 May 2001 22:43:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.1.2 release " }, { "msg_contents": "On Thu, 10 May 2001, Tom Lane wrote:\n\n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > I agree with you because the bug is very critical.\n>\n> Yes, I'd like 
to get that plpgsql bug fix out as soon as possible.\n\nIsn't this only critical for those that are using it? Does it affect\nthose that don't use plpgsql?\n\n\n", "msg_date": "Fri, 11 May 2001 00:18:35 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: 7.1.2 release " }, { "msg_contents": "While you guys are still awake, I may as well ask this:\n\nI gather that the following code goes through the heap and removes all check\nconstraints associated with a particular table, but how do I extend the code\nto match both a table relid and the constraint name? Basically I'm just\nasking how I express 'SQL SELECT' queries using the heap access functions?\n\n\trcrel = heap_openr(RelCheckRelationName, RowExclusiveLock);\n\tScanKeyEntryInitialize(&key, 0, Anum_pg_relcheck_rcrelid,\n\t\t\t\t\t\t F_OIDEQ, RelationGetRelid(rel));\n\n\trcscan = heap_beginscan(rcrel, 0, SnapshotNow, 1, &key);\n\n\twhile (HeapTupleIsValid(tup = heap_getnext(rcscan, 0)))\n\t\tsimple_heap_delete(rcrel, &tup->t_self);\n\n\theap_endscan(rcscan);\n\theap_close(rcrel, RowExclusiveLock);\n\nCheers,\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\n> Sent: Friday, 11 May 2001 10:43 AM\n> To: Hiroshi Inoue\n> Cc: Bruce Momjian; Philip Warner; The Hermit Hacker;\n> PostgreSQL-development\n> Subject: Re: [HACKERS] 7.1.2 release\n>\n>\n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > I agree with you because the bug is very critical.\n>\n> Yes, I'd like to get that plpgsql bug fix out as soon as possible.\n>\n> But the pg_dump things that Philip is fixing are important too,\n> so I think we should wait a couple more days for those.\n> (Philip, we are just talking about a few days, right?)\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": 
"Fri, 11 May 2001 11:21:50 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: 7.1.2 release " }, { "msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> Isn't this only critical for those that are using it? Does it affect\n> those that don't use plpgsql?\n\nNo, but I think it's pretty critical for those that do ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 May 2001 23:22:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.1.2 release " }, { "msg_contents": "At 22:43 10/05/01 -0400, Tom Lane wrote:\n>(Philip, we are just talking about a few days, right?)\n\nYes - it's waiting on the problem Zoltan reported (the select from\npg_rewrite etc). I can't reproduce the problem on any of my DBs.\n\nIf worst comes to worst, I have a (nasty) workaround, but I'm more worried\nthat it might reflect a larger problem with the way I have used\ncolumn-select expressions in pg_dump. \n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 11 May 2001 13:42:34 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: 7.1.2 release " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> I gather that the following code goes though the heap and removes all check\n> contraints associated with a particular table, but how do I extend the code\n> to match both a table relid and the constraint name?\n\nAdd another ScanKey. 
Look at uses of ScanKeyEntryInitialize for examples.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 May 2001 01:04:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.1.2 release " }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Yes - it's waiting on the problem Zoltan reported (the select from\n> pg_rewrite etc). I can't reproduce the problem on any of my DBs.\n\nI've just realized that the problem is a lot simpler than it appears.\nThe given string is too long for a NAME:\n\nregression=# select ('_RET' || 'szallitolevel_tetele_ervenyes')::name;\n ?column?\n---------------------------------\n _RETszallitolevel_tetele_erveny\n(1 row)\n\nWhen you write\n\n\tselect oid from pg_rewrite where \n rulename='_RETszallitolevel_tetele_ervenyes'\n\nthe unknown-type literal is coerced to NAME --- ie truncated --- and\nthen the comparison works. But when you write\n\n\tselect oid from pg_rewrite where \n rulename='_RET' || 'szallitolevel_tetele_ervenyes'\n\nthe result of the || will be type TEXT not type NAME. So the rulename\ngets promoted to TEXT and texteq is used ... with the result that\n_RETszallitolevel_tetele_ervenye does not match\n_RETszallitolevel_tetele_ervenyes.\n\nSolution: don't use ||, or explicitly cast its result to NAME:\n\n\tselect oid from pg_rewrite where \n rulename=('_RET' || 'szallitolevel_tetele_ervenyes')::name\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 May 2001 01:28:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.1.2 release " }, { "msg_contents": "At 01:28 11/05/01 -0400, Tom Lane wrote:\n>Philip Warner <pjw@rhyme.com.au> writes:\n>> Yes - it's waiting on the problem Zoltan reported (the select from\n>> pg_rewrite etc). I can't reproduce the problem on any of my DBs.\n>\n>I've just realized that the problem is a lot simpler than it appears.\n>The given string is too long for a NAME:\n\nUng. 
That's a bit nasty for views:\n\npjw=# create view szallitolevel_tetele_erveny01 as select * from t1;\nCREATE\npjw=# create view szallitolevel_tetele_erveny02 as select * from t1;\nERROR: Attempt to insert rule \"_RETszallitolevel_tetele_erveny\" failed:\nalready exists\n\nBut at least I can fix pg_dump now.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 11 May 2001 19:57:29 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: 7.1.2 release " }, { "msg_contents": "On Thu, 10 May 2001, Tom Lane wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > Isn't this only critical for those that are using it? Does it affect\n> > those that don't use plpgsql?\n>\n> No, but I think it's pretty critical for those that do ...\n\nSo, why not create a quick patch for those that need it, and let those\nwith the capability pull from CVS/CVSup ... that is why we have them setup\n...\n\n\n", "msg_date": "Fri, 11 May 2001 08:49:22 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: 7.1.2 release " }, { "msg_contents": "Bruce Momjian writes:\n\n> Are we releasing tomorrow. I will stamp the CVS STABLE branch tonight\n> as 7.1.2.\n\nMake sure you are labelling the documentation with the correct version.\nThomas should set up his documentation build area for the current and\nstable branches, and Marc has to make sure that the mk-release picks up\nthe right one. Right now it would probably pick up current, which is\ndefinitely not right. 
Man, this should be easier.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 11 May 2001 23:29:31 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: 7.1.2 release" }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> I have not applied the latest pg_dump patches, and I'm still working on a\n> problem with the view extract.\n\nPhilip, I see you applied some pg_dump patches yesterday. Have you\nresolved all your outstanding issues, or is there still more you want\nto do before 7.1.2?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 May 2001 13:47:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.1.2 release " }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > Are we releasing tomorrow. I will stamp the CVS STABLE branch tonight\n> > as 7.1.2.\n> \n> Make sure you are labelling the documentation with the correct version.\n> Thomas should set up his documentation build area for the current and\n> stable branches, and Marc has to make sure that the mk-release picks up\n> the right one. Right now it would probably pick up current, which is\n> definitely not right. Man, this should be easier.\n\nI have stamped the SGML version file for each release appropriately.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 May 2001 18:18:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: 7.1.2 release" }, { "msg_contents": "At 13:47 12/05/01 -0400, Tom Lane wrote:\n>\n>Philip, I see you applied some pg_dump patches yesterday. 
Have you\n>resolved all your outstanding issues, or is there still more you want\n>to do before 7.1.2?\n>\n\nEverything I know about is resolved.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 13 May 2001 11:12:16 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: 7.1.2 release " }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n>> Philip, I see you applied some pg_dump patches yesterday. Have you\n>> resolved all your outstanding issues, or is there still more you want\n>> to do before 7.1.2?\n\n> Everything I know about is resolved.\n\nOkay, good. I did some experimentation this afternoon with dumping the\n7.0 regression database using both native 7.0 pg_dump and the\ncurrent-sources one. Seemed to work pretty well, though I did make one\nchange: I think we should assume proisstrict = FALSE when dumping from a\n7.0 db, not TRUE. The forcing consideration for this is that I was\ngetting \"isstrict\" markers on plpgsql_call_handler, which would be a\nreally nasty problem if it got into the field: people would report that\nthey couldn't get plpgsql functions to work with NULLs, and we'd be\nunable to duplicate the misbehavior. More generally, it doesn't matter\nfor old C functions, because the fmgr_oldstyle wrapper will cause the\nright things to happen, and I don't think we want to force strictness\nfor SQL or PL functions. 
(That's why I chose CREATE FUNCTION's default\nto be non strict...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 May 2001 21:26:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.1.2 release " } ]
[ { "msg_contents": "CVSROOT:\t/home/projects/pgsql/cvsroot\nModule name:\tpgsql\nChanges by:\tmomjian@hub.org\t01/05/10 21:46:33\n\nModified files:\n\t. : README configure configure.in register.txt \n\tdoc : bug.template \n\tdoc/src/sgml : version.sgml \n\tsrc/include : config.h.win32 \n\tsrc/interfaces/ecpg/lib: Makefile \n\tsrc/interfaces/ecpg/preproc: Makefile \n\tsrc/interfaces/jdbc: jdbc.jpx \n\tsrc/interfaces/libpgeasy: Makefile \n\tsrc/interfaces/libpgtcl: Makefile \n\tsrc/interfaces/libpq: Makefile libpq.rc \n\tsrc/interfaces/libpq++: Makefile \n\tsrc/interfaces/odbc: GNUmakefile \n\tsrc/interfaces/perl5: Pg.pm \n\nLog message:\n\tStamp CVS as 7.2. Update all interface version numbers. This is the\n\ttime to do it, not during beta because people are using this stuff in\n\tproduction sometimes.\n\n", "msg_date": "Thu, 10 May 2001 21:46:34 -0400 (EDT)", "msg_from": "Bruce Momjian - CVS <momjian@hub.org>", "msg_from_op": true, "msg_subject": "pgsql/ /README /configure /configure.in /regis ..." }, { "msg_contents": "Bruce Momjian - CVS writes:\n\n> \tStamp CVS as 7.2. Update all interface version numbers. This is the\n> \ttime to do it, not during beta because people are using this stuff in\n> \tproduction sometimes.\n\nNo it's not. You don't know yet whether\n\na) the interface will change at all\n\nb) a change will require a major version bump\n\nThe ideal time to do this is just before the first beta release.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 11 May 2001 23:32:34 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pgsql/ /README /configure /configure.in /regis ..." }, { "msg_contents": "Bruce Momjian - CVS writes:\n\n> \tsrc/interfaces/perl5: Pg.pm\n>\n> Log message:\n> \tStamp CVS as 7.2. Update all interface version numbers. 
This is the\n> \ttime to do it, not during beta because people are using this stuff in\n> \tproduction sometimes.\n\nThere's already a version 1.9.0 (which is what this got changed to) out on\nCPAN. Who maintains this?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 11 May 2001 23:35:37 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/ /README /configure /configure.in /regis ..." }, { "msg_contents": "> Bruce Momjian - CVS writes:\n> \n> > \tsrc/interfaces/perl5: Pg.pm\n> >\n> > Log message:\n> > \tStamp CVS as 7.2. Update all interface version numbers. This is the\n> > \ttime to do it, not during beta because people are using this stuff in\n> > \tproduction sometimes.\n> \n> There's already a version 1.9.0 (which is what this got changed to) out on\n> CPAN. Who maintains this?\n\nThe CPAN I thought was DBI, while ours is old, right? As far as I know,\nwe are the only maintainers for the non-DBI version. Edmund is the DBI\nmaintainer, and I have asked if we can add the DBI version to CVS.\n\nIf there is a non-DBI version numbered 1.9.0 already, I have no idea\nhow.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 May 2001 20:57:14 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/ /README /configure /configure.in /regis ..." }, { "msg_contents": "> Bruce Momjian - CVS writes:\n> \n> > \tStamp CVS as 7.2. Update all interface version numbers. This is the\n> > \ttime to do it, not during beta because people are using this stuff in\n> > \tproduction sometimes.\n> \n> No it's not. 
You don't know yet whether\n> \n> a) the interface will change at all\n\nEvery interface changes in one way or another during a major development\ncycle.\n\n> b) a change will require a major version bump\n\nThat will be done when someone majorly breaks the API. Upping the minor\nnumber now doesn't prevent us from upping the major number later.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 May 2001 20:58:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql/ /README /configure /configure.in /regis ..." }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> There's already a version 1.9.0 (which is what this got changed to) out on\n>> CPAN. Who maintains this?\n\n> The CPAN I thought was DBI, while ours is old, right?\n\nNo. interfaces/perl5 is a stand-alone Perl client library that is\nunrelated (more or less anyway) to DBD/DBI. DBI might be better,\nbut that doesn't necessarily mean we should abandon support for\nthe non-DBI library.\n\n> As far as I know, we are the only maintainers for the non-DBI version.\n\n*Somebody* is maintaining it, because the version on CPAN is not the\nsame as ours. (Which appears to be sufficient proof that there's\nstill interest in it...)\n\nIt would be good to get our copy back in sync with CPAN.\n\nBTW, this is exactly what I was concerned about w.r.t. adopting the DBI\nlibrary into our CVS: if it drifts away from what's on CPAN then we\nhave a problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 May 2001 22:21:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/ /README /configure /configure.in /regis\n\t..." 
}, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> There's already a version 1.9.0 (which is what this got changed to) out on\n> >> CPAN. Who maintains this?\n> \n> > The CPAN I thought was DBI, while ours is old, right?\n> \n> No. interfaces/perl5 is a stand-alone Perl client library that is\n> unrelated (more or less anyway) to DBD/DBI. DBI might be better,\n> but that doesn't necessarily mean we should abandon support for\n> the non-DBI library.\n\nNo, I wasn't suggesting we eliminate ours. I was just thinking we\nshould have the DBD/DBI in there too.\n\n> \n> > As far as I know, we are the only maintainers for the non-DBI version.\n> \n> *Somebody* is maintaining it, because the version on CPAN is not the\n> same as ours. (Which appears to be sufficient proof that there's\n> still interest in it...)\n\nGood point, and why don't we have those patches, and who is it?\n\n> It would be good to get our copy back in sync with CPAN.\n\nYes!\n\n> BTW, this is exactly what I was concerned about w.r.t. adopting the DBI\n> library into our CVS: if it drifts away from what's on CPAN then we\n> have a problem.\n\nWe need the maintainers to give us patches or give them CVS access, if\nthey agree. This is similar to how we moved ODBC into our tree.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 May 2001 22:27:11 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: [COMMITTERS] pgsql/ /README /configure /configure.in\n\t/regis ..." } ]
[ { "msg_contents": "Hi!\nI am facing two problems in porting from oracle to Postgres SQL.\n\n1> There is a code in Oracle like\n\n Type Tstate is table of number(9)\n index by binary_integer;\n.\n........\nTo define a runtime table, basically it works like a array, How can it\nbe possible in Postgres SQL,\nI have tried create temp table.... But it not works..\nIs there any way to use arrays.\n\n\n2> There is one function in Oracle Executesql '...........' to execute\n and what i got in Postgres is Execute immediate '.........'\n But it is giving error at Execute.\n\nI will be very thankful if any one help me.\n\n Amit\n( India )\n\n", "msg_date": "Fri, 11 May 2001 12:24:25 +0530", "msg_from": "Amit <amitsaxena178@rediffmail.com>", "msg_from_op": true, "msg_subject": "Problems in porting from Oracle to Postgres" }, { "msg_contents": "\tThis is more appropriate for the pgsql-sql list, so im forwarding it\nthat way. The hackers list is for other purposes.\n\nOn Fri, May 11, 2001 at 12:24:25PM +0530, Amit wrote:\n> \n> 1> There is a code in Oracle like\n> \n> Type Tstate is table of number(9)\n> index by binary_integer;\n> ........\n> To define a runtime table, basically it works like a array, How can it\n> be possible in Postgres SQL,\n> I have tried create temp table.... But it not works..\n> Is there any way to use arrays.\n\n\tIt'd be much easier to help you if you posted the function/procedure\nyou're trying to port. Just one line is harder.\n \n> 2> There is one function in Oracle Executesql '...........' to execute\n> and what i got in Postgres is Execute immediate '.........'\n> But it is giving error at Execute.\n\n\tAgain, you're giving way too little detail. What error? What are you\ntrying? 
Without this, it's very hard to help.\n\n\t-Roberto\n\n-- \n+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club |------+\n Roberto Mello - Computer Science, USU - http://www.brasileiro.net \n http://www.sdl.usu.edu - Space Dynamics Lab, Developer \nCannot open CATFOOD.CAN - Eat logitech mouse instead (Y/n)?\n", "msg_date": "Fri, 11 May 2001 09:06:05 -0600", "msg_from": "Roberto Mello <rmello@cc.usu.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problems in porting from Oracle to Postgres" } ]
[ { "msg_contents": "Can someone confirm that REL7_1_STABLE is a branch and not a tag? I am\nhaving trouble doing 'cvs log -rREL7_1_STABLE' and wanted to make sure\neverything was set up properly. I can 'cvs update/commit' fine.\n\nIt is my understanding that it should be a branch.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 11 May 2001 07:12:14 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "REL7_1_STABLE tag/branch" }, { "msg_contents": "\nit is a branch ... for lack of a better way to work it:\n\nsymbolic names:\n REL7_1_STABLE: 1.106.0.2\n REL7_1_BETA: 1.79\n REL7_1_BETA3: 1.86\n REL7_1_BETA2: 1.86\n REL7_1: 1.102\n REL7_0_PATCHES: 1.70.0.2\n REL7_0: 1.70\n REL6_5_PATCHES: 1.52.0.2\n REL6_5: 1.52\n REL6_4: 1.44.0.2\n release-6-3: 1.33\n SUPPORT: 1.1.1.1\n PG95-DIST: 1.1.1\n\nthe big long numbers (1.106.0.2) denote branches ... the shorter ones\n(1.79) simple tags along the main trunk ...\n\nOn Fri, 11 May 2001, Bruce Momjian wrote:\n\n> Can someone confirm that REL7_1_STABLE is a branch and not a tag? I am\n> having trouble doing 'cvs log -rREL7_1_STABLE' and wanted to make sure\n> everything was set up properly. I can 'cvs update/commit' fine.\n>\n> It is my understanding that it should be a branch.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Fri, 11 May 2001 08:51:28 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: REL7_1_STABLE tag/branch" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Can someone confirm that REL7_1_STABLE is a branch and not a tag?\n\nSeems to work for committing stuff into the branch, so it must be\na branch ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 May 2001 09:37:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: REL7_1_STABLE tag/branch " }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> Can someone confirm that REL7_1_STABLE is a branch and not a tag? I am\n> having trouble doing 'cvs log -rREL7_1_STABLE' and wanted to make sure\n> everything was set up properly. I can 'cvs update/commit' fine.\n> \n> It is my understanding that it should be a branch.\n\nYou can find out by doing `cvs status -v' on some file which has the\ntag.\n\nLooks to me like a branch:\n REL7_1_STABLE (branch: 1.23.2)\n\nIan\n", "msg_date": "11 May 2001 10:19:45 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: REL7_1_STABLE tag/branch" } ]
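The revision numbers in Marc's list encode the branch/tag distinction mechanically: CVS stores a branch tag as a "magic" revision with a zero in the next-to-last position (e.g. 1.106.0.2), while a plain tag names a single revision (e.g. 1.79). A small illustrative check (the helper itself is hypothetical, not part of CVS):

```python
def classify(revision: str) -> str:
    # CVS "magic branch" tags carry a 0 as the next-to-last
    # component (1.106.0.2); plain tags point at one revision (1.79).
    parts = revision.split(".")
    if len(parts) >= 4 and parts[-2] == "0":
        return "branch"
    return "tag"

for name, rev in [("REL7_1_STABLE", "1.106.0.2"),
                  ("REL7_1", "1.102"),
                  ("REL7_0_PATCHES", "1.70.0.2")]:
    print(f"{name}: {classify(rev)}")
```

Run over the symbolic names quoted above, this flags exactly the *_STABLE and *_PATCHES entries as branches, consistent with `cvs status -v` reporting "(branch: 1.23.2)" for them.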
[ { "msg_contents": "\n> > As an aside, I do however think, that optimizing the O_SYNC path of\n> > the WAL code to block writes to larger blocks (doesn't need to be\n> > more than 256k) would lead to nearly the same performance as a raw\n> > device on most filesystems. (Maybe also add code to reuse backed up\n> > logfiles to avoid the need to preallocate space) Imho this is the part\n> > of the code where the brainwork should first be put into. It is also a\n> > prerequisite to make raw devices fast, since if you write 8k blocks to\n> > a raw device, that is slow (not faster than a fs).\n> \n> You cannot block writes to the WAL without blocking transactions waiting \n> on the write, because completion of that write is necessary for the \n> transaction to complete.\n\nYes, this is obvious, but:\n\nYou *can* block writes into larger blocks as long as no commit comes \ninbetween. This essentially increases performance e.g. for bulk loads\nwhere single transactions are > 8k of WAL. A typical example is even in the \nregression test, the \"copy ... from\" statements. They really suffer from\nthe O_SYNC mode. This mode is essentially what you would have now for a\nraw device WAL.\n\n> Moving the WAL volume's disk head into position is the major investment \n> you are amortizing with your large blocks. If the head is already in \n> position, it is about as efficient to write a little as to write a lot.\n\nThis is only half of the story for large transactions. For large transactions\nyou need to write more than the current 8k in one call (only in the raw device, \nor O_SYNC mode of course). Writing in large blocks also helps the fs to reduce \nhead movement. 
After every write call the OS suspends the current \nprocess, and makes room for another backend to e.g. read a block on the same drive, \nthus forcing head movement.\n\nI suggest you do some tests with raw devices, which I already did, to see what happens\nif you only write 8k blocks (you only get 50-60% performance compared to 256k).\n\nThe IO performance gain you can achieve on a raw device compared to a \npreallocated filesystem file is imho negligible. e.g. on AIX it is due to a global\nkernel parameter, that defaults to a max 32k block size for read ahead and write behind. \nI noted the advantages in a previous thread about why Oracle wants raw devices,\nand I don't think they are worth it at the current state of PostgreSQL.\n \nAndreas\n", "msg_date": "Fri, 11 May 2001 14:16:31 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: WAL and raw devices (was: volume management)" } ]
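Andreas's batching argument can be sketched numerically. The toy model below (block sizes taken from the discussion above; the flush policy itself is a hypothetical simplification, not PostgreSQL code) coalesces consecutive 8k WAL pages into writes of up to 256k, while a commit record still forces an immediate flush:

```python
PAGE = 8192             # one WAL block
MAX_BLOCK = 256 * 1024  # largest coalesced write (32 pages)

def batched_writes(n_pages, commit_after=()):
    # Return the sizes of the write() calls issued when consecutive
    # WAL pages are coalesced, flushing early at each commit record.
    writes, pending = [], 0
    commit_after = set(commit_after)
    for page in range(n_pages):
        pending += PAGE
        if pending == MAX_BLOCK or page in commit_after:
            writes.append(pending)
            pending = 0
    if pending:
        writes.append(pending)
    return writes

# A 512k bulk load with no intervening commits: two large writes ...
print(batched_writes(64))                                # -> [262144, 262144]
# ... versus one O_SYNC write per page when every page commits:
print(len(batched_writes(64, commit_after=range(64))))   # -> 64
```

For an O_SYNC (or raw-device) WAL, each entry in the returned list is one synchronous head-positioning event, which is why the bulk-load case benefits from the larger blocks.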
[ { "msg_contents": "Hi,\n\nI've just noticed that (after a upgrade from 7.0.3 to 7.1) the following\ndid'nt work anymore:\n\ncreate tabla a (n1 serial, n2 int);\ngrant all on a to nobody;\n\n<reconnect as user nobody>\n\ninsert into a (n2) value (1);\nn1.nextval: you don't have permission to set sequence n1\n\nIt worked on 7.0.3\n\nRegards,\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Fri, 11 May 2001 19:09:46 +0200", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "Bug or feature?" }, { "msg_contents": "Olivier PRENANT <ohp@pyrenet.fr> writes:\n> I've just noticed that (after a upgrade from 7.0.3 to 7.1) the following\n> did'nt work anymore:\n\n> create tabla a (n1 serial, n2 int);\n> grant all on a to nobody;\n\n> <reconnect as user nobody>\n\n> insert into a (n2) value (1);\n> n1.nextval: you don't have permission to set sequence n1\n\n> It worked on 7.0.3\n\nYou'll have to grant update rights on the sequence object to nobody ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 May 2001 15:08:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug or feature? " }, { "msg_contents": "Hi Tom,\n\nThanks for your quick reply. However, I knew (and did that). My post were\nmore ... 
philosophical:\n\nShouldn't postgres extend privileges to the sequences generated by a\ncreate table ???\n\nRegards,\n\nOn Fri, 11 May 2001, Tom Lane wrote:\n\n> Olivier PRENANT <ohp@pyrenet.fr> writes:\n> > I've just noticed that (after an upgrade from 7.0.3 to 7.1) the following\n> > didn't work anymore:\n> \n> > create table a (n1 serial, n2 int);\n> > grant all on a to nobody;\n> \n> > <reconnect as user nobody>\n> \n> > insert into a (n2) values (1);\n> > n1.nextval: you don't have permission to set sequence n1\n> \n> > It worked on 7.0.3\n> \n> You'll have to grant update rights on the sequence object to nobody ...\n> \n> \t\t\tregards, tom lane\n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Sat, 12 May 2001 18:36:59 +0200 (MET DST)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "Re: Bug or feature? " }, { "msg_contents": "Olivier PRENANT <ohp@pyrenet.fr> writes:\n> Shouldn't postgres extend privileges to the sequences generated by a\n> create table ???\n\nThat's not clear. The sequence is an independent object. Had you\nexplicitly done\n\n\tCREATE SEQUENCE myseq;\n\n\tCREATE TABLE mytab (f1 int default nextval('myseq'));\n\nwould you expect that granting permissions on mytab automatically\ngrants them on myseq as well? I think you might consider that\nsurprising. But there isn't any difference between this and what\nCREATE TABLE does.\n\nThere have been suggestions in the past that SERIAL should be a \"real\ndata type\" with the sequence object being hidden more effectively than\nit is now --- including auto-dropping it at table deletion, etc.\nIf that were to happen then the permissions issue would probably go away\ntoo. 
It doesn't seem to be a very high priority for anyone, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 May 2001 12:48:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug or feature? " }, { "msg_contents": "On Sat, 12 May 2001, Tom Lane wrote:\n\n> Olivier PRENANT <ohp@pyrenet.fr> writes:\n> > Shouldn't postgres extend privileges to the sequences generated by a\n> > create table ???\n> \n> That's not clear. The sequence is an independent object. Had you\n> explicitly done\n> \n> \tCREATE SEQUENCE myseq;\n> \n> \tCREATE TABLE mytab (f1 int default nextval('myseq'));\n> \n> would you expect that granting permissions on mytab automatically\n> grants them on myseq as well? I think you might consider that\n> surprising. But there isn't any difference between this and what\n> CREATE TABLE does.\nI'm aware of that.\n> \n> There have been suggestions in the past that SERIAL should be a \"real\n> data type\" with the sequence object being hidden more effectively than\n> it is now --- including auto-dropping it at table deletion, etc.\n> If that were to happen then the permissions issue would probably go away\n> too. It doesn't seem to be a very high priority for anyone, though.\n> \nIMHO, this would be \"cleaner\".\n1) When you have lots of auto generated sequences, it becomes difficult to\ntrack the ones you have to drop if you drop tables.\n2) This ACL problem could disappear if serial were a real type.\n\nAnyway what I'm concerned with is that I had no problems until I dumped\nfrom 7.0.3 and reloaded in 7.1.\n\nRegards\n> \t\t\tregards, tom lane\n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. 
(St Exupery)\n\n", "msg_date": "Sat, 12 May 2001 19:17:03 +0200 (MET DST)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "Re: Bug or feature? " } ]
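Tom's workaround in the thread above — granting update rights on each implicit sequence — is easy to script. Below is a hypothetical helper (not an existing tool) that emits the extra GRANT statements for a table's serial columns; it assumes the usual `<table>_<column>_seq` naming and ignores NAMEDATALEN truncation of long names:

```python
def serial_grants(table, serial_columns, grantee):
    # One GRANT for the table itself, plus UPDATE on the implicit
    # sequence behind every serial column (named <table>_<col>_seq).
    stmts = [f"GRANT ALL ON {table} TO {grantee};"]
    for col in serial_columns:
        stmts.append(f"GRANT UPDATE ON {table}_{col}_seq TO {grantee};")
    return stmts

for stmt in serial_grants("a", ["n1"], "nobody"):
    print(stmt)
```

Feeding the output to psql after each CREATE TABLE with serial columns avoids the "you don't have permission to set sequence" failure described above.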
[ { "msg_contents": "Hi,\nCan u suggest me a help site for creating arrays in SQL.\n\nI will greatly appreciate your help.\n\nThanks\nRavi\n\n", "msg_date": "Fri, 11 May 2001 13:40:20 -0400", "msg_from": "Ravi Garlapadu <rgarlapa@oconee.rms.slb.com>", "msg_from_op": true, "msg_subject": "Arrays in PL-SQL" } ]
[ { "msg_contents": "In developers corner, search returns this:\n\n#!/usr/bin/perl\n\nprint \"Content-type: text/html\\r\\n\\r\\n\";\n\nprint \"<html><head><META HTTP-EQUIV=\\\"Refresh\\\" \nCONTENT=\\\"0;URL=search.html\\\"></head></html>\\r\\n\\r\\n\";\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 �ben 14.00-18.00 Web: www.suse.dk\n2000 Frederiksberg L�rdag 11.00-17.00 Email: kar@webline.dk\n", "msg_date": "Fri, 11 May 2001 20:09:15 +0200", "msg_from": "Kaare Rasmussen <kar@webline.dk>", "msg_from_op": true, "msg_subject": "Search" }, { "msg_contents": "On Fri, 11 May 2001, Kaare Rasmussen wrote:\n\n> In developers corner, search returns this:\n>\n> #!/usr/bin/perl\n>\n> print \"Content-type: text/html\\r\\n\\r\\n\";\n>\n> print \"<html><head><META HTTP-EQUIV=\\\"Refresh\\\"\n> CONTENT=\\\"0;URL=search.html\\\"></head></html>\\r\\n\\r\\n\";\n>\n>\n\nWhat are you using for a browser and what mirror are you using? I just\ntried from the main site (www.ca) with Netscape 4.76, Netscape 6, Opera\nand Mozilla and all of them do the redirect to the search page.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 11 May 2001 16:27:13 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Search" } ]
[ { "msg_contents": "I've create a few types, including BOX3D (a simple bounding volume) and\nAGG_POINTS3D (a list of points with a BOX3D bounding volume).\n\nI've managed to get an rtree index on both the BOX3D type and\nAGG_POINTS3D.\nThe agg_points3d index cheats by using the bounding volume inside the\nAGG_POINTS3D type.\n\nI've \"SET ENABLE_SEQSCAN=OFF;\" so it usually uses the rtree index when I\ndo things like:\n\nselect * from box_table where the_box && <hard coded box3d>;\n\nfor example;\nselect * from test_box where the_box &&\n 'BOX3D (\n[4273.95215,12385.8281,0.0],[4340.80566,12459.7949,0.0])'::BOX3D;\n\nOR \n\nselect * from test_points3d where the_pts && <hard coded agg_points3d\nobject>;\n\nfor example;\nselect * from test_pts where the_pts &&\n \n'POINTS3D([10077.4414,14361.6172,1.0],[12370.2773,14595.5791,1.0],[13259.3379,11554.0596,1.0],[10872.915,10477.8301,1.0])'::AGG_POINTS3D;\n\nI'm sure it using the rtree index because 'explain' says it does and its\nabout 10* faster than a sequence scan.\n\nSo far, so good. I'm happy.\n\nNow I want to be able to do an index scan into the AGG_POINTS3D table\nagainst a BOX3D. This is essentually what the rtree index is doing\nanyways.\n\nI defined a function agg_points3d(BOX3D) that converts the BOX3D into\nan AGG_POINTS3D.\n\nThe query:\n\n select loc from test_pts where the_pts &&\n\n'BOX3D([10077.4414,10477.8301,1.0],[13259.3379,14595.5791,1.0])'::BOX3D;\n\ngives the correct results. Postgres automatically uses the\nagg_points3d() function to convert the BOX3D into an AGG_POINTS3D. 
\nUnfortunately, it doesn't use the index scan anymore; it does a sequence\nscan.\n\nI tried the following queries as well:\n\nselect * from test_points3d where the_pts &&\nagg_points3d(\n'BOX3D([10077.4414,10477.8301,1.0],[13259.3379,14595.5791,1.0])'::BOX3D\n);\n\n[Explicitly doing the above]\n\nselect * from test_points3d where the_pts &&\n(agg_points3d(\n'BOX3D([10077.4414,10477.8301,1.0],[13259.3379,14595.5791,1.0])'::BOX3D\n))::AGG_POINTS3D;\n\n[Ensuring postgres knows that the 2nd argument to && is an AGG_POINTS3D]\n\nMy question is why isn't it doing an index scan? And how do I get it to\nuse the index? The above 3 queries are really queries like:\n\nselect * from test_points3d where the_pts && <AGG_POINTS3D>;\n\nwhich does use an index scan.\n\nThanks,\n\ndave\nps. The tables are defined as:\ncreate table test_points3d (loc varchar(100), the_pts AGG_POINTS3D) ;\ncreate table test_box (loc varchar(100), the_box BOX3D);\nBoth tables have about 200,000 random rows in them for testing.\n\nI create the indexes with:\n\n create index rt_test_box on test_box using rtree (the_box\nrt_box3d_ops);\ncreate index rt_test_points on test_points3d using rtree (the_pts\nrt_points3d_ops);\n", "msg_date": "Fri, 11 May 2001 13:00:50 -0700", "msg_from": "Dave Blasby <dblasby@refractions.net>", "msg_from_op": true, "msg_subject": "Rtree on custom data types; type conversion stops index use." }, { "msg_contents": "Dave Blasby <dblasby@refractions.net> writes:\n> gives the correct results. Postgres automatically uses the\n> agg_points3d() function to convert the BOX3D into an AGG_POINTS3D. 
\n> Unfortunately, it doesn't use the index scan anymore; it does a sequence\n> scan.\n\nFirst question: what Postgres version?\n\nNext question (if PG >= 7.0): did you mark your type conversion routine\nas cachable?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 May 2001 16:19:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Rtree on custom data types; type conversion stops index use. " }, { "msg_contents": "I'm using 7.1.1, and your suggestion WORKED!\n\nThanks for your prompt reply!\n\nRefractions Research will be releasing (open source) \"postGIS\" very soon\n(as soon as its in a releasable state). \nIt will contain GIS data types (box3d, multi-point3d, multi-polyline3d,\nmulti-complex-polygon3d) and GIS operations.\n\n\nTom Lane wrote:\n...\n> First question: what Postgres version?\n> \n> Next question (if PG >= 7.0): did you mark your type conversion routine\n> as cachable?\n", "msg_date": "Fri, 11 May 2001 13:50:43 -0700", "msg_from": "Dave Blasby <dblasby@refractions.net>", "msg_from_op": true, "msg_subject": "Re: Rtree on custom data types; type conversion stops index\n use." }, { "msg_contents": "> Refractions Research will be releasing (open source) \"postGIS\" very soon\n> (as soon as its in a releasable state).\n> It will contain GIS data types (box3d, multi-point3d, multi-polyline3d,\n> multi-complex-polygon3d) and GIS operations.\n\nCool!\n\n - Thomas\n", "msg_date": "Sat, 12 May 2001 04:09:07 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: Rtree on custom data types; type conversion stops index use." } ]
[ { "msg_contents": "I'm not sure if this is still needed in postgres to define the length of\na variable/table name.\n\nIn postgres_ext.h, I changed:\n\n#define NAMEDATALEN 32\nto\n#define NAMEDATALEN 51\n\nEverything compiled and installed. However, the initdb started up but\nthen just said that it failed.\n\nI did a gmake clean and changed the 51 back to 32 and everything went\nthrough correctly (make, install, and initdb). Can anyone else verify if\nthis is correct or even makes sense?\n\nThanks.\n-Tony\n\n\n", "msg_date": "Fri, 11 May 2001 16:47:10 -0700", "msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>", "msg_from_op": true, "msg_subject": "Trouble with initdb when the #define NAMEDATALEN = 51" } ]
[ { "msg_contents": "The only really interesting things that tclConfig.sh (and tkConfig.sh)\ntells us are the names of various libraries. But those names can be used\nportably with any compiler, so I don't see why we need to subscribe to the\nwhole deal. AFAICT, the rest (TCL_CC, TCL_SHLIB_SUFFIX, etc.) is merely a\nhelp for people who don't know how to build shared libraries, but we do,\nso we should use our own way.\n\nNaah, don't tell me it breaks on HP-UX. Make it work... ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 12 May 2001 01:49:56 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Why do we use Tcl's compiler and flags?" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The only really interesting things that tclConfig.sh (and tkConfig.sh)\n> tells us are the names of various libraries. But those names can be used\n> portably with any compiler, so I don't see why we need to subscribe to the\n> whole deal. AFAICT, the rest (TCL_CC, TCL_SHLIB_SUFFIX, etc.) is merely a\n> help for people who don't know how to build shared libraries, but we do,\n> so we should use our own way.\n\nI think this may be a hangover from a time when Tcl was more likely to\nknow how to build shlibs than our own makefiles were.\n\nHowever, it doesn't appear to me that we'll get rid of all that much\ncruft if we change. We'll still need to find and read tclConfig.sh\nin order to find out such interesting things as which shlibs libtcl.so\nis dependent on. Why are you concerned about fixing something that's\nnot especially broken?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 May 2001 22:58:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why do we use Tcl's compiler and flags? 
" }, { "msg_contents": "Tom Lane writes:\n\n> However, it doesn't appear to me that we'll get rid of all that much\n> cruft if we change. We'll still need to find and read tclConfig.sh\n> in order to find out such interesting things as which shlibs libtcl.so\n> is dependent on. Why are you concerned about fixing something that's\n> not especially broken?\n\nIt *is* especially broken if the compiler referenced by TCL_CC does not\nexist on the system. This situation is not uncommon on commercial\noperating systems if the user did not shell out the extra cash for the\nvendor's compiler suite, but the vendor did provide a Tcl installation.\nWe hear these bug reports once in a while, mostly from Solaris users.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sun, 13 May 2001 12:11:12 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Why do we use Tcl's compiler and flags? " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> Why are you concerned about fixing something that's\n>> not especially broken?\n\n> It *is* especially broken if the compiler referenced by TCL_CC does not\n> exist on the system.\n\nUm. Good point... although I'd still say such an installation is\nbroken, since it'll preclude building most available Tcl extension\npackages. But we have gotten a lot of reports like that, so it\nseems worth working around the problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 May 2001 11:03:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why do we use Tcl's compiler and flags? " } ]
[ { "msg_contents": "Sorry, I forgot to include that I'm compiling this on RedHat 6.2,\nPentium III with Postgres 7.1.1.\n\n-Tony\n\n\n\n> I'm not sure if this is still needed in postgres to define the length of\n> a variable/table name.\n>\n> In postgres_ext.h, I changed:\n>\n> #define NAMEDATALEN 32\n> to\n> #define NAMEDATALEN 51\n>\n> Everything compiled and installed. However, the initdb started up but\n> then just said that it failed.\n>\n> I did a gmake clean and changed the 51 back to 32 and everything went\n> through correctly (make, install, and initdb). Can anyone else verify if\n> this is correct or even makes sense?\n>\n> Thanks.\n> -Tony\n>\n>\n\n", "msg_date": "Fri, 11 May 2001 16:54:35 -0700", "msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>", "msg_from_op": true, "msg_subject": "Addition to: Trouble with initdb when the #define NAMEDATALEN = 51" }, { "msg_contents": "\"G. Anthony Reina\" <reina@nsi.edu> writes:\n>> In postgres_ext.h, I changed:\n>> \n>> #define NAMEDATALEN 32\n>> to\n>> #define NAMEDATALEN 51\n>> \n>> Everything compiled and installed. However, the initdb started up but\n>> then just said that it failed.\n\nI have not tried that in awhile, but the last time I did, it worked\nfine. Are you sure you did a *complete* rebuild? I'd suggest make\ndistclean at the top level, configure, make all, install, initdb.\n\nBTW, 51 is a gratuitously wasteful setting --- given alignment\nconsiderations, any value that's not a multiple of 4 is pointless.\n(It should work ... but it's pointless.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 May 2001 00:35:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Addition to: Trouble with initdb when the #define NAMEDATALEN =\n\t51" }, { "msg_contents": "At 12:35 AM 5/12/01 -0400, Tom Lane wrote:\n>BTW, 51 is a gratuitously wasteful setting --- given alignment\n>considerations, any value that's not a multiple of 4 is pointless.\n>(It should work ... 
but it's pointless.)\n\nWould n^2-1 or n*8 -1 be better than n^2 or n*8? \n\nFor postgresql it's in the source somewhere, but assuming we can't look,\nwhich would be a better bet? \n\nCheerio,\nLink.\n\n", "msg_date": "Sat, 12 May 2001 21:41:16 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Addition to: Trouble with initdb when the #define\n\tNAMEDATALEN = 51" }, { "msg_contents": "Lincoln Yeoh writes:\n\n> At 12:35 AM 5/12/01 -0400, Tom Lane wrote:\n> >BTW, 51 is a gratuitously wasteful setting --- given alignment\n> >considerations, any value that's not a multiple of 4 is pointless.\n> >(It should work ... but it's pointless.)\n>\n> Would n^2-1 or n*8 -1 be better than n^2 or n*8?\n\nThe important issue here is the storage layout of the system catalog\ntuples. If you take a look you will see that in most tables the field\nafter a name field is either an oid or an int4, both of which generally\nrequire 4-byte alignment. Since names are also 4-byte aligned you will\nwaste up to 3 bytes in padding if you don't choose NAMEDATALEN a multiple\nof 4.\n\nNote that the system will actually only allow NAMEDATALEN-1 characters in\na name, maybe you were referring to that.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 12 May 2001 16:02:42 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: Addition to: Trouble with initdb when the #define\n\tNAMEDATALEN = 51" }, { "msg_contents": "Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> At 12:35 AM 5/12/01 -0400, Tom Lane wrote:\n>> BTW, 51 is a gratuitously wasteful setting --- given alignment\n>> considerations, any value that's not a multiple of 4 is pointless.\n>> (It should work ... but it's pointless.)\n\n> Would n^2-1 or n*8 -1 be better than n^2 or n*8? \n\nNo. 
There is a pad byte involved, but it's included in NAMEDATALEN\n(ie, if you set it to 64, you really get names up to 63 characters).\nYou want NAMEDATALEN itself to be a round number, since if it's not\nthe following byte(s) will just be wasted for alignment padding anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 May 2001 12:35:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Addition to: Trouble with initdb when the #define NAMEDATALEN =\n\t51" }, { "msg_contents": "Tom Lane wrote:\n\n> \"G. Anthony Reina\" <reina@nsi.edu> writes:\n> >> In postgres_ext.h, I changed:\n> >>\n> >> #define NAMEDATALEN 32\n> >> to\n> >> #define NAMEDATALEN 51\n> >>\n> >> Everything compiled and installed. However, the initdb started up but\n> >> then just said that it failed.\n>\n> I have not tried that in awhile, but the last time I did, it worked\n> fine. Are you sure you did a *complete* rebuild? I'd suggest make\n> distclean at the top level, configure, make all, install, initdb.\n>\n\nI did a 'gmake clean'. I'll try again today. Perhaps I'll find something\nthat I was overlooking on Friday.\n\n-Tony\n\n\n", "msg_date": "Mon, 14 May 2001 09:43:02 -0700", "msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>", "msg_from_op": true, "msg_subject": "Re: Addition to: Trouble with initdb when the #define\n\tNAMEDATALEN = 51" } ]
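The alignment arithmetic Tom and Peter describe in the thread above is quick to check: if the catalog column following a name field needs 4-byte alignment, any NAMEDATALEN that is not a multiple of 4 only buys padding. An illustration:

```python
def alignment_waste(namedatalen, align=4):
    # Padding bytes inserted after the name field so the next
    # column starts on an `align`-byte boundary.
    return (-namedatalen) % align

for n in (32, 51, 52, 64):
    print(f"NAMEDATALEN={n}: {alignment_waste(n)} padding byte(s)")
```

So 51 wastes one byte per name field relative to 52, while 32, 52, and 64 waste none, matching Tom's "round number" advice.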
[ { "msg_contents": "In addition to my RedHat 6.2 server, I'm installing Postgres 7.1.1 on an\nSGI O2 (IRIX 6.5.10). The configure works, but the 'gmake all' fails\nwhen it tries to compile 'xact.c':\n\ncc-1521 cc: WARNING File = /usr/include/setjmp.h, Line = 26\n A nonstandard preprocessing directive is used.\n\n #ident \"$Revision: 1.36 $\"\n ^\n\ncc-1070 cc: ERROR File = xact.c, Line = 696\n The indicated type is incomplete.\n\n struct timeval delay;\n ^\n\n1 error detected in the compilation of \"xact.c\".\ngmake[4]: *** [xact.o] Error 2\ngmake[4]: Leaving directory\n`/usr/src/postgresql-7.1.1/src/backend/access/transam'\ngmake[3]: *** [transam-recursive] Error 2\ngmake[3]: Leaving directory\n`/usr/src/postgresql-7.1.1/src/backend/access'\ngmake[2]: *** [access-recursive] Error 2\ngmake[2]: Leaving directory `/usr/src/postgresql-7.1.1/src/backend'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/usr/src/postgresql-7.1.1/src'\ngmake: *** [all] Error 2\no21.nsi.edu:postgres::/usr/src/postgresql-7.1.1 >\n\nI'm using the SGI MIPSPro 7.1 C compiler. I haven't had any problems\nlike this when compiling previous versions of Postgres. If necessary, I\ncould try to get gcc instead of the MIPSPro compiler, but I wonder if\nthe xact.c definition for timeval could be modified to pass on my\nmachine.\n\nThanks.\n-Tony\n\n\n", "msg_date": "Fri, 11 May 2001 16:59:47 -0700", "msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>", "msg_from_op": true, "msg_subject": "Installation on SGI IRIX 6.5.10" }, { "msg_contents": "\"G. Anthony Reina\" <reina@nsi.edu> writes:\n> In addition to my RedHat 6.2 server, I'm installing Postgres 7.1.1 on an\n> SGI O2 (IRIX 6.5.10). The configure works, but the 'gmake all' fails\n> when it tries to compile 'xact.c':\n\n> cc-1521 cc: WARNING File = /usr/include/setjmp.h, Line = 26\n> A nonstandard preprocessing directive is used.\n\n> #ident \"$Revision: 1.36 $\"\n> ^\n\nThat looks like IRIX's own fault. 
If their compiler doesn't like their\nown <setjmp.h>, it's their issue to resolve ...\n\n> cc-1070 cc: ERROR File = xact.c, Line = 696\n> The indicated type is incomplete.\n\n> struct timeval delay;\n> ^\n\nHm. Which system header file defines struct timeval on IRIX?\nI'd expect <time.h> or <sys/time.h>, but maybe they keep it\nsomeplace unusual.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 May 2001 14:11:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Installation on SGI IRIX 6.5.10 " }, { "msg_contents": "Tom Lane wrote:\n\n> > cc-1070 cc: ERROR File = xact.c, Line = 696\n> > The indicated type is incomplete.\n>\n> > struct timeval delay;\n> > ^\n>\n> Hm. Which system header file defines struct timeval on IRIX?\n> I'd expect <time.h> or <sys/time.h>, but maybe they keep it\n> someplace unusual.\n\nIn /usr/include/sys/time.h:\n\n#if _XOPEN4UX || defined(_BSD_TYPES) || defined(_BSD_COMPAT)\n/*\n * Structure returned by gettimeofday(2) system call,\n * and used in other calls.\n * Note this is also defined in sys/resource.h\n */\n#ifndef _TIMEVAL_T\n#define _TIMEVAL_T\nstruct timeval {\n#if _MIPS_SZLONG == 64\n __int32_t :32;\n#endif\n time_t tv_sec; /* seconds */\n long tv_usec; /* and microseconds */\n};\n\n\n-Tony\n\n\n", "msg_date": "Mon, 14 May 2001 09:41:04 -0700", "msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>", "msg_from_op": true, "msg_subject": "Re: Installation on SGI IRIX 6.5.10" }, { "msg_contents": "\"G. Anthony Reina\" <reina@nsi.edu> writes:\n> Tom Lane wrote:\n>> Hm. Which system header file defines struct timeval on IRIX?\n>> I'd expect <time.h> or <sys/time.h>, but maybe they keep it\n>> someplace unusual.\n\n> In /usr/include/sys/time.h:\n\n> #if _XOPEN4UX || defined(_BSD_TYPES) || defined(_BSD_COMPAT)\n\nNext thought is that maybe none of these control symbols are defined\nby default --- could you look into that possibility? 
Perhaps some\ncompiler switches or #defines are needed to get IRIX to allow\n\"struct timeval\"?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 May 2001 19:38:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Installation on SGI IRIX 6.5.10 " }, { "msg_contents": "Tom Lane wrote:\n\n> > #if _XOPEN4UX || defined(_BSD_TYPES) || defined(_BSD_COMPAT)\n>\n> Next thought is that maybe none of these control symbols are defined\n> by default --- could you look into that possibility? Perhaps some\n> compiler switches or #defines are needed to get IRIX to allow\n> \"struct timeval\"?\n>\n\n> regards, tom lane\n\nIn xact.c, I added:\n\n#define _BSD_COMPAT 1\n\nbefore\n\n#include <sys/time.h>\n\nIt seems to get through that part of the compilation okay now. I'm not\nsure if that will break anything else but it seems minor.\n\nThere's a new problem with async.c:\n\ncc-1515 cc: ERROR File = async.c, Line = 172\n A value of type \"int\" cannot be assigned to an entity of type \"char\n*\".\n\n notifyName = strdup(relname);\n ^\n\n1 error detected in the compilation of \"async.c\".\ngmake[3]: *** [async.o] Error 2\ngmake[3]: Leaving directory\n`/usr/src/postgresql-7.1.1/src/backend/commands'\ngmake[2]: *** [commands-recursive] Error 2\ngmake[2]: Leaving directory `/usr/src/postgresql-7.1.1/src/backend'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/usr/src/postgresql-7.1.1/src'\ngmake: *** [all] Error 2\n\nIt looks like I just need to change the code to explicitly cast the\nvariable.\n\n-Tony\n\n\n\n", "msg_date": "Mon, 14 May 2001 17:00:09 -0700", "msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>", "msg_from_op": true, "msg_subject": "Re: Installation on SGI IRIX 6.5.10" }, { "msg_contents": "\"G. 
Anthony Reina\" <reina@nsi.edu> writes:\n> There's a new problem with async.c:\n\n> cc-1515 cc: ERROR File = async.c, Line = 172\n> A value of type \"int\" cannot be assigned to an entity of type \"char\n> *\".\n\n> notifyName = strdup(relname);\n> ^\n\nEvidently IRIX also considers strdup() to be nonstandard :-(\n\nIt's hard to believe that SGI is quite this braindead. I think there is\nsomething broken about configure on your setup. Can't tell what from\nhere --- suggest you call in some IRIX gurus.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 May 2001 20:11:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Installation on SGI IRIX 6.5.10 " }, { "msg_contents": "Tom Lane wrote:\n\n> Evidently IRIX also considers strdup() to be nonstandard :-(\n>\n> It's hard to believe that SGI is quite this braindead. I think there is\n> something broken about configure on your setup. Can't tell what from\n> here --- suggest you call in some IRIX gurus.\n>\n\nYep. So goes SGI. I can't figure out why this error is showing up. When I\nlooked at the man page for strdup:\n\nchar *strdup (const char *s1);\n\nwhich is how it looks to be used in async.c. I simply added a specific\ntype-cast:\n\nnotifyName = (char *) strdup(relname);\n\nand it compiled async.c fine (of course, now I'll have to go through some\nof the other files that also have strdup and change them).\n\nI'm going to see if the SGI technical support considers this a bug or not.\n\n-Tony\n\n\n\n\n", "msg_date": "Mon, 14 May 2001 17:21:29 -0700", "msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>", "msg_from_op": true, "msg_subject": "Re: Installation on SGI IRIX 6.5.10" }, { "msg_contents": "\"G. Anthony Reina\" <reina@nsi.edu> writes:\n> which is how it looks to be used in async.c. 
I simply added a specific\n> type-cast:\n\n> notifyName = (char *) strdup(relname);\n\nThat absolutely should NOT be necessary; there should be a proper\nextern declaration of strdup visible. Perhaps it should be added\nto include/port/irix5.h (cf port/nextstep.h).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 May 2001 20:23:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Installation on SGI IRIX 6.5.10 " }, { "msg_contents": "Tom Lane wrote:\n\n>\n> That absolutely should NOT be necessary; there should be a proper\n> extern declaration of strdup visible. Perhaps it should be added\n> to include/port/irix5.h (cf port/nextstep.h).\n>\n> regards, tom lane\n\nJust to make sure, I tried compiling on another SGI. Everything went fine\nwithout any kludgy workarounds. It looks like somehow my compiler/OS has\nsome problems. I'm upgrading to the latest OS version and compiler version.\n\nThanks for the help.\n\n-Tony\n\n\n", "msg_date": "Tue, 15 May 2001 10:11:17 -0700", "msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>", "msg_from_op": true, "msg_subject": "Re: Installation on SGI IRIX 6.5.10" } ]
[ { "msg_contents": "Hello , mans\n\nYour server have a bug that sometimes cause coredumps.....\n\nSystem: FreeBSD 4.2-20001127-STABLE\nCompiler: gcc 2.95.2\nPlatform: x86 (PII-600)\nRam: 256 Mb\n\nWith server work only one program - IServerd, there is 10 parallel\nprocesses that have db connection.\n\nHere information that i get from cores\n\n/------------------------------------------------\n#0 0x80c11b3 in exec_append_initialize_next ()\n#1 0x80c1287 in ExecInitAppend () \n#2 0x80be3ee in ExecInitNode () \n#3 0x80be239 in EvalPlanQual () \n#4 0x80bdbb3 in ExecReplace () \n#5 0x80bd871 in ExecutePlan () \n#6 0x80bccea in ExecutorRun () \n#7 0x81036fb in ProcessQuery () \n#8 0x810217d in pg_exec_query_string () \n#9 0x81031ac in PostgresMain () \n#10 0x80eda66 in DoBackend () \n#11 0x80ed622 in BackendStartup () \n#12 0x80ec815 in ServerLoop () \n#13 0x80ec1fb in PostmasterMain () \n#14 0x80cd0a8 in main () \n#15 0x8064765 in _start ()\n/------------------------------------------------\n\nDump of assembler code for function exec_append_initialize_next:\n0x80c1174 <exec_append_initialize_next>: push %ebp \n0x80c1175 <exec_append_initialize_next+1>: mov %esp,%ebp \n0x80c1177 <exec_append_initialize_next+3>: push %esi \n0x80c1178 <exec_append_initialize_next+4>: push %ebx \n0x80c1179 <exec_append_initialize_next+5>: mov 0x8(%ebp),%esi \n0x80c117c <exec_append_initialize_next+8>: mov 0x20(%esi),%ebx\n0x80c117f <exec_append_initialize_next+11>: mov 0x54(%esi),%edx\n0x80c1182 <exec_append_initialize_next+14>: mov 0x18(%edx),%eax\n0x80c1185 <exec_append_initialize_next+17>: mov 0x1c(%edx),%ecx\n0x80c1188 <exec_append_initialize_next+20>: test %eax,%eax \n0x80c118a <exec_append_initialize_next+22>: \n jge 0x80c1198 <exec_append_initialize_next+36> \n0x80c118c <exec_append_initialize_next+24>: movl $0x0,0x18(%edx)\n0x80c1193 <exec_append_initialize_next+31>: xor %eax,%eax \n0x80c1195 <exec_append_initialize_next+33>: \n jmp 0x80c11be <exec_append_initialize_next+74> 
\n0x80c1197 <exec_append_initialize_next+35>: nop \n0x80c1198 <exec_append_initialize_next+36>: cmp %ecx,%eax \n0x80c119a <exec_append_initialize_next+38>: \n jl 0x80c11a4 <exec_append_initialize_next+48> \n0x80c119c <exec_append_initialize_next+40>: dec %ecx \n0x80c119d <exec_append_initialize_next+41>: mov %ecx,0x18(%edx)\n0x80c11a0 <exec_append_initialize_next+44>: xor %eax,%eax \n0x80c11a2 <exec_append_initialize_next+46>: \n jmp 0x80c11be <exec_append_initialize_next+74> \n0x80c11a4 <exec_append_initialize_next+48>: cmpb $0x0,0x50(%esi)\n0x80c11a8 <exec_append_initialize_next+52>: \n je 0x80c11b9 <exec_append_initialize_next+69> \n0x80c11aa <exec_append_initialize_next+54>: shl $0x5,%eax \n0x80c11ad <exec_append_initialize_next+57>: add 0x10(%ebx),%eax\n0x80c11b0 <exec_append_initialize_next+60>: mov %eax,0x18(%ebx)\n0x80c11b3 <exec_append_initialize_next+63>: mov 0x1c(%eax),%eax\n ^^^^^^ - <<<<<<<<<< here >>>>>>>>>>\n \n0x80c11b6 <exec_append_initialize_next+66>: mov %eax,0x1c(%ebx)\n0x80c11b9 <exec_append_initialize_next+69>: mov $0x1,%eax \n0x80c11be <exec_append_initialize_next+74>: pop %ebx \n0x80c11bf <exec_append_initialize_next+75>: pop %esi \n0x80c11c0 <exec_append_initialize_next+76>: leave \n0x80c11c1 <exec_append_initialize_next+77>: ret \n\nWith respect,\nA.V.Shutko mailto:AVShutko@mail.khstu.ru\n\n\n", "msg_date": "Sat, 12 May 2001 12:48:55 +1100", "msg_from": "\"A.V.Shutko\" <AVShutko@mail.khstu.ru>", "msg_from_op": true, "msg_subject": "Postgres bug (working with iserverd)" }, { "msg_contents": "\"A.V.Shutko\" <AVShutko@mail.khstu.ru> writes:\n> Your server have a bug that sometimes cause coredumps.....\n\nWhat version of postgres? 
What is the query being processed?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 May 2001 22:05:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres bug (working with iserverd) " }, { "msg_contents": "\"A.V.Shutko\" <AVShutko@mail.khstu.ru> writes:\n> Your server have a bug that sometimes cause coredumps.....\n\n> Tom Lane> What version of postgres?\n> # ./postgres -V\n> postgres (PostgreSQL) 7.1\n\nOkay, I think I understand the scenario here. Are you using table\ninheritance? I can produce a crash in the same place using UPDATE\nof an inheritance group:\n\n\nregression=# create table par(f1 int);\nCREATE\nregression=# create table child(f2 int) inherits (par);\nCREATE\nregression=# insert into par values(1);\nINSERT 1453231 1\nregression=# begin;\nBEGIN\nregression=# update par set f1 = f1 + 1;\nUPDATE 1\n\n<< now start a second backend, and in it also do >>\n\nregression=# update par set f1 = f1 + 1;\n\n<< second backend blocks waiting for first one to commit;\n go back to first backend and do >>\n\nregression=# end;\nCOMMIT\n\n<< now second backend crashes in exec_append_initialize_next >>\n\n\nThe direct cause of the problem is that EvalPlanQual isn't completely\ninitializing the estate that it sets up for re-evaluating the plan.\nIn particular it's not filling in es_result_relations and\nes_num_result_relations, which need to be set up if the top plan node\nis an Append. (That's probably my fault.) But there are a bunch of\nother fields that it's failing to copy, too.\n\nVadim, I'm thinking that EvalPlanQual would be better if it memcpy'd\nthe parent estate, and then changed the fields that should be different,\nrather than zeroing the child state and then copying the fields that\nneed to be copied. Seems like the default behavior should be to copy\nfields rather than leave them zero. What do you think? 
Which fields\nshould really be zero in the child?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 May 2001 00:22:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres bug (working with iserverd) " }, { "msg_contents": "I wrote:\n> The direct cause of the problem is that EvalPlanQual isn't completely\n> initializing the estate that it sets up for re-evaluating the plan.\n> In particular it's not filling in es_result_relations and\n> es_num_result_relations, which need to be set up if the top plan node\n> is an Append. (That's probably my fault.) But there are a bunch of\n> other fields that it's failing to copy, too.\n\nI believe I have fixed this problem in CVS sources for current and\nREL7_1, at least to the extent that EvalPlanQual processing produces\nthe right answers for updates/deletes in inheritance trees.\n\nHowever, EvalPlanQual still leaks more memory than suits me ---\nauxiliary memory allocated by the plan nodes is not recovered.\nI think the correct way to implement it would be to create a new\nmemory context for each level of EvalPlanQual execution and use\nthat context as the \"per-query context\" for the sub-query. The\nwhole context (including the copied plan) would be freed at the\nend of the sub-query. The notion of a stack of currently-unused\nepqstate nodes would go away.\n\nThis would mean a few more cycles per tuple to copy the plan tree over\nagain each time, but I think that's pretty trivial compared to the plan\nstartup/shutdown costs that we incur anyway. 
Besides, I have hopes of\nmaking plan trees read-only whenever we do the fabled querytree\nredesign, so the cost will someday go away.\n\nComments, objections?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 May 2001 20:49:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres bug (working with iserverd) " }, { "msg_contents": "> However, EvalPlanQual still leaks more memory than suits me ---\n> auxiliary memory allocated by the plan nodes is not recovered.\n> I think the correct way to implement it would be to create a new\n> memory context for each level of EvalPlanQual execution and use\n> that context as the \"per-query context\" for the sub-query. The\n> whole context (including the copied plan) would be freed at the\n> end of the sub-query. The notion of a stack of currently-unused\n> epqstate nodes would go away.\n> \n> This would mean a few more cycles per tuple to copy the plan tree over\n> again each time, but I think that's pretty trivial compared to the plan\n> startup/shutdown costs that we incur anyway. Besides, I have hopes of\n> making plan trees read-only whenever we do the fabled querytree\n> redesign, so the cost will someday go away.\n\nIsn't plan shutdown supposed to free memory? How subselects run queries\nagain and again? 
I wasn't in planner/executor areas for long time and\nhave no time to look there now -:(, so - just asking -:)\n\nVadim\n\n\n", "msg_date": "Mon, 14 May 2001 21:57:12 -0700", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, "msg_subject": "Re: Postgres bug (working with iserverd) " }, { "msg_contents": "\"Vadim Mikheev\" <vmikheev@sectorbase.com> writes:\n>> However, EvalPlanQual still leaks more memory than suits me ---\n>> auxiliary memory allocated by the plan nodes is not recovered.\n\n> Isn't plan shutdown supposed to free memory?\n\nYeah, but it leaks all over the place; none of the plan node types\nbother to free their state nodes, for example. There are lots of other\ncases. You really have to reset the per-query context to get rid of all\nthe cruft allocated during ExecInitNode.\n\n> How subselects run queries again and again?\n\nThey don't end and restart them; they just rescan them. If we had\nthis substitute-a-new-tuple hack integrated into the Param mechanism,\nthen EvalPlanQual could use ExecReScan too, but at the moment no...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 May 2001 01:17:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres bug (working with iserverd) " }, { "msg_contents": "> > How subselects run queries again and again?\n> \n> They don't end and restart them; they just rescan them. If we had\n\nThanks for recollection.\n\n> this substitute-a-new-tuple hack integrated into the Param mechanism,\n> then EvalPlanQual could use ExecReScan too, but at the moment no...\n\nI see.\n\nVadim\n\n\n", "msg_date": "Mon, 14 May 2001 23:57:58 -0700", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, "msg_subject": "Re: Postgres bug (working with iserverd) " } ]
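The design question at the end of the thread — zero the child state and copy selected fields, or copy everything and reset what must differ — is easy to illustrate outside the executor. The struct below is a hypothetical miniature, not PostgreSQL's actual `EState`; its fields merely stand in for things like `es_result_relations`:

```c
#include <string.h>

/* Hypothetical stand-in for an executor state. */
typedef struct {
    int result_relations;     /* must be inherited by the child */
    int num_result_relations; /* must be inherited by the child */
    int tuple_count;          /* must start at zero in the child */
} MiniState;

/* Fragile style: start from zero and copy field by field.  Forgetting one
 * inherited field (as happened with es_result_relations) silently leaves
 * it zero. */
MiniState child_selective(const MiniState *parent)
{
    MiniState child;
    memset(&child, 0, sizeof(child));
    child.result_relations = parent->result_relations;
    /* oops: num_result_relations is never copied */
    return child;
}

/* The style proposed in the thread: copy the parent wholesale, then reset
 * only the fields that genuinely must differ. */
MiniState child_by_copy(const MiniState *parent)
{
    MiniState child;
    memcpy(&child, parent, sizeof(child));
    child.tuple_count = 0;
    return child;
}
```

With the memcpy style, a newly added field defaults to being inherited, so an omission shows up as stale-but-plausible data rather than a zeroed field that crashes later — and the list of exceptions stays short and auditable.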
[ { "msg_contents": "I am having problems with transactions and foreign key constraints in\npostgres 7.0-3 (RPM distribution). . The foreign key constraints were\nblocking concurrent transactions. Here is an example where something blocked\nbut shouldn't have blocked:\n\ncreate table hello10 (myid serial primary key, myvalue int4);\n\ncreate table hello11(myvalue int4, foreign key (myvalue) references\nhello10);\n\ninsert into hello10 (myvalue) values (1);\n\n---- ok, now everything is set up for the blocking problem.\n\nNow have two logins to psql:\n\npsql1# begin;\npsql1# insert into hello11 (myvalue) values (1)\npsql1#\n\nswitch to the other login\npsql2# begin;\npsql2# insert into hello11 (myvalue) values (1)\n*** block ***\n\nIt shouldn't block there. Basically it happens when two transactions try to\ninsert something into tables (doesn't have to be the same one) which both\nhave a foreign key constraint to a common key. I did some poking around and\nluckily did find something in the archives that was similar here:\n\nhttp://fts.postgresql.org/db/mw/msg.html?mid=30149\n\nIt was mentioned that it was a problem, and there was a workaround (add\nINITIALLY DEFFERED to the constraint). The workaround works. My question is,\nis this fixed in Postgres 7.1 (i don't have a spare machine to test, sorry)?\n\n-rchit\n\n", "msg_date": "Fri, 11 May 2001 18:51:14 -0700", "msg_from": "Rachit Siamwalla <rachit@ensim.com>", "msg_from_op": true, "msg_subject": "inserts on a transaction blocking other inserts" }, { "msg_contents": "Rachit Siamwalla wrote:\n> [...]\n>\n> It shouldn't block there. Basically it happens when two transactions try to\n> insert something into tables (doesn't have to be the same one) which both\n> have a foreign key constraint to a common key. 
I did some poking around and\n> luckily did find something in the archives that was similar here:\n\n The required lock to ensure that the PK doesn't get changed\n after the constraint checked for it's existence would be a\n shared read lock. Unfortunately, there is no SQL syntax\n doing a SELECT that does it. So the only way for now is\n doing an exclusive write lock with SELECT FOR UPDATE.\n\n Not fixed in 7.1.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 16 May 2001 15:55:52 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: inserts on a transaction blocking other inserts" } ]
[ { "msg_contents": "On Sat, May 12, 2001 at 12:47:33AM -0400, Neil Conway wrote:\n> I've been experimenting with pgcrypto 0.3 (distributed with\n> Postgres 7.1.0), and I think I've found a bug.\n> \n> I compiled Pgcrypto with OpenSSL, using gcc 2.95.4 and\n> OpenSSL 0.9.6a (the latest Debian 'unstable' packages).\n\n> web=> select encode(digest('blah', 'sha1'), 'base64');\n> FATAL 1: pg_encode: overflow, encode estimate too small\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Succeeded.\n\n> Is this a bug? Can it be fixed?\n\nThis is a bug alright. And a silly one :)\n\nThanks for reporting. For standalone package apply this\npatch with -p2.\n\npgsql-hackers: this should get into REL7_1_STABLE.\n\n-- \nmarko\n\n\nIndex: contrib/pgcrypto/encode.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/contrib/pgcrypto/encode.c,v\nretrieving revision 1.4\ndiff -u -r1.4 encode.c\n--- contrib/pgcrypto/encode.c\t2001/03/22 03:59:10\t1.4\n+++ contrib/pgcrypto/encode.c\t2001/05/12 08:28:50\n@@ -349,7 +349,7 @@\n uint\n b64_enc_len(uint srclen)\n {\n-\treturn srclen + (srclen / 3) + (srclen / (76 / 2));\n+\treturn srclen + (srclen + 2 / 3) + (srclen / (76 / 2)) + 2;\n }\n \n uint\n", "msg_date": "Sat, 12 May 2001 10:51:29 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": true, "msg_subject": "Re: bug in pgcrypto 0.3" }, { "msg_contents": "\nApplied to 7.1.X and 7.2.\n\n\n> On Sat, May 12, 2001 at 12:47:33AM -0400, Neil Conway wrote:\n> > I've been experimenting with pgcrypto 0.3 (distributed with\n> > Postgres 7.1.0), and I think I've found a bug.\n> > \n> > I compiled Pgcrypto with OpenSSL, using gcc 2.95.4 and\n> > OpenSSL 0.9.6a (the latest Debian 'unstable' packages).\n> \n> > web=> select encode(digest('blah', 'sha1'), 
'base64');\n> > FATAL 1: pg_encode: overflow, encode estimate too small\n> > pqReadData() -- backend closed the channel unexpectedly.\n> > This probably means the backend terminated abnormally\n> > before or while processing the request.\n> > The connection to the server was lost. Attempting reset: Succeeded.\n> \n> > Is this a bug? Can it be fixed?\n> \n> This is a bug alright. And a silly one :)\n> \n> Thanks for reporting. For standalone package apply this\n> patch with -p2.\n> \n> pgsql-hackers: this should get into REL7_1_STABLE.\n> \n> -- \n> marko\n> \n> \n> Index: contrib/pgcrypto/encode.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/contrib/pgcrypto/encode.c,v\n> retrieving revision 1.4\n> diff -u -r1.4 encode.c\n> --- contrib/pgcrypto/encode.c\t2001/03/22 03:59:10\t1.4\n> +++ contrib/pgcrypto/encode.c\t2001/05/12 08:28:50\n> @@ -349,7 +349,7 @@\n> uint\n> b64_enc_len(uint srclen)\n> {\n> -\treturn srclen + (srclen / 3) + (srclen / (76 / 2));\n> +\treturn srclen + (srclen + 2 / 3) + (srclen / (76 / 2)) + 2;\n> }\n> \n> uint\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 May 2001 22:17:11 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: bug in pgcrypto 0.3" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> Applied to 7.1.X and 7.2.\n\nBut, but...\n\n> > -\treturn srclen + (srclen / 3) + (srclen / (76 / 2));\n> > +\treturn srclen + (srclen + 2 / 3) + (srclen / (76 / 2)) + 2;\n\n(srclen + 2 / 3) is always the same as (srclen).\n\nPerhaps this was meant to be ((srclen + 2) / 3)?\n\nThe current code is safe, but weird.\n\nIan\n", "msg_date": "14 May 2001 13:15:59 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: Re: bug in pgcrypto 0.3" }, { "msg_contents": "On Mon, May 14, 2001 at 01:15:59PM -0700, Ian Lance Taylor wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Applied to 7.1.X and 7.2.\n> \n> But, but...\n\n;)\n\n> > > -\treturn srclen + (srclen / 3) + (srclen / (76 / 2));\n> > > +\treturn srclen + (srclen + 2 / 3) + (srclen / (76 / 2)) + 2;\n> \n> (srclen + 2 / 3) is always the same as (srclen).\n> \n> Perhaps this was meant to be ((srclen + 2) / 3)?\n\nI guess too... Its no good to create patches half-asleep...\n\n> The current code is safe, but weird.\n\nBut I got very good response time :)\n\nWell, the correct code - that corresponds to current\nencode - is below. 
I even got the linefeed stuff wrong.\n\n-- \nmarko\n\n\n\nIndex: contrib/pgcrypto/encode.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/contrib/pgcrypto/encode.c,v\nretrieving revision 1.5\ndiff -u -r1.5 encode.c\n--- contrib/pgcrypto/encode.c\t2001/05/13 02:17:09\t1.5\n+++ contrib/pgcrypto/encode.c\t2001/05/14 21:29:43\n@@ -349,7 +349,8 @@\n uint\n b64_enc_len(uint srclen)\n {\n-\treturn srclen + (srclen + 2 / 3) + (srclen / (76 / 2)) + 2;\n+\t/* 3 bytes will be converted to 4, linefeed after 76 chars */\n+\treturn (srclen + 2) * 4 / 3 + srclen / (76 * 3 / 4);\n }\n \n uint\n", "msg_date": "Mon, 14 May 2001 23:47:13 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": true, "msg_subject": "Re: Re: bug in pgcrypto 0.3" }, { "msg_contents": "\nApplied for 7.1.X and 7.2.\n\n> On Mon, May 14, 2001 at 01:15:59PM -0700, Ian Lance Taylor wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Applied to 7.1.X and 7.2.\n> > \n> > But, but...\n> \n> ;)\n> \n> > > > -\treturn srclen + (srclen / 3) + (srclen / (76 / 2));\n> > > > +\treturn srclen + (srclen + 2 / 3) + (srclen / (76 / 2)) + 2;\n> > \n> > (srclen + 2 / 3) is always the same as (srclen).\n> > \n> > Perhaps this was meant to be ((srclen + 2) / 3)?\n> \n> I guess too... Its no good to create patches half-asleep...\n> \n> > The current code is safe, but weird.\n> \n> But I got very good response time :)\n> \n> Well, the correct code - that corresponds to current\n> encode - is below. 
I even got the linefeed stuff wrong.\n> \n> -- \n> marko\n> \n> \n> \n> Index: contrib/pgcrypto/encode.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/contrib/pgcrypto/encode.c,v\n> retrieving revision 1.5\n> diff -u -r1.5 encode.c\n> --- contrib/pgcrypto/encode.c\t2001/05/13 02:17:09\t1.5\n> +++ contrib/pgcrypto/encode.c\t2001/05/14 21:29:43\n> @@ -349,7 +349,8 @@\n> uint\n> b64_enc_len(uint srclen)\n> {\n> -\treturn srclen + (srclen + 2 / 3) + (srclen / (76 / 2)) + 2;\n> +\t/* 3 bytes will be converted to 4, linefeed after 76 chars */\n> +\treturn (srclen + 2) * 4 / 3 + srclen / (76 * 3 / 4);\n> }\n> \n> uint\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 15 May 2001 00:45:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: bug in pgcrypto 0.3" } ]
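The arithmetic behind the three successive versions of `b64_enc_len()` is worth spelling out, since the whole mishap is integer-division precedence: `2 / 3` is 0, so the first patch computed roughly `2*srclen + 2` — safe, but by accident. The functions below transcribe the three formulas from the thread plus the hard lower bound base64 imposes (4 output characters per 3 input bytes); only `b64_enc_len` itself exists in pgcrypto, and the names here are for illustration:

```c
/* 7.1.0 original: under-estimates, causing the reported FATAL overflow. */
unsigned len_710(unsigned srclen)
{
    return srclen + (srclen / 3) + (srclen / (76 / 2));
}

/* First patch: "(srclen + 2 / 3)" is just srclen, so this is about
 * 2*srclen + 2 -- large enough, but not the intended ceil(srclen/3). */
unsigned len_first_patch(unsigned srclen)
{
    return srclen + (srclen + 2 / 3) + (srclen / (76 / 2)) + 2;
}

/* Final fix: 3 bytes become 4 chars, plus a linefeed per 76-char line
 * (76 output chars consume 76 * 3 / 4 = 57 input bytes). */
unsigned len_final(unsigned srclen)
{
    return (srclen + 2) * 4 / 3 + srclen / (76 * 3 / 4);
}

/* What base64 needs before linefeeds: 4 * ceil(srclen / 3). */
unsigned b64_minimum(unsigned srclen)
{
    return 4 * ((srclen + 2) / 3);
}
```

A 20-byte SHA-1 digest needs 28 characters of base64; the 7.1.0 formula allots only 26, which is exactly the "encode estimate too small" failure in the original report, while the final formula always covers the minimum.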
[ { "msg_contents": "\nWould it be possible to allocate varibles that can be addressed with SET?\n\nCREATE [TEMP] VARIABLE fubar ;\n\nset FUBAR=5 ;\n\n\n\n\n-- \nI'm not offering myself as an example; every life evolves by its own laws.\n------------------------\nhttp://www.mohawksoft.com\n", "msg_date": "Sat, 12 May 2001 11:19:43 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "SET variables" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> Would it be possible to allocate varibles that can be addressed with SET?\n\nAnd what would you do with them?\n\nThere is a simple variable facility in psql these days, if that helps.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 May 2001 23:22:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SET variables " }, { "msg_contents": "Tom Lane wrote:\n\n> mlw <markw@mohawksoft.com> writes:\n> > Would it be possible to allocate varibles that can be addressed with SET?\n>\n> And what would you do with them?\n>\n> There is a simple variable facility in psql these days, if that helps.g\n\nI was thinking more like:\n\ncreate variable fubar ;\n\nset fubar = select max(column) from table;\n\nselect * from table where column = :fubar;\n\nObviously this is a very simple example. I guess I am asking for something\nanalogous to temporary tables, but on a single datum level.\n\nI like the way psql does it, but it would be better to have this available in\nthe native query language.\n\nThis is similar to a feature which Oracle has. It is mainly to avoid hitting\nthe query planner. Oracle caches query execution instructions, and using a\nvariable is a way to reuse cached queries with different data.\n\nBeing able to set variables and use them in queries may help some people port\nfrom Oracle to Postgres.\n\nBTW I am also working on the impression that a view is more efficient than\nreissuing a complex query. 
Or is there no difference?\n\n>\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n\n", "msg_date": "Sun, 13 May 2001 08:31:25 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: SET variables" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> Obviously this is a very simple example. I guess I am asking for something\n> analogous to temporary tables, but on a single datum level.\n\nWhat's wrong with a one-row temporary table?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 May 2001 11:05:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SET variables " }, { "msg_contents": "At 11:05 AM 5/13/01 -0400, Tom Lane wrote:\n>mlw <markw@mohawksoft.com> writes:\n>> Obviously this is a very simple example. I guess I am asking for something\n>> analogous to temporary tables, but on a single datum level.\n>\n>What's wrong with a one-row temporary table?\n\nWell, the following query would then be a join with that one-row table, which\nmeans that you pay the cost of creating the temporary table, optimizing the\njoin (which presumably isn't terribly bad since it's a trivial one), etc.\n\nThis might not be bad if this were something done rarely. But people in\nthe Oracle world use BIND variables a LOT (and essentially BIND vars are\nwhat are being asked for). As was pointed out, the use of BIND variables\nmake Oracle's brain-dead query caching useful (it does source caching, and\nwithout BIND variables \"select * from foo where foo_key = 1\" doesn't match\n\"select * from foo where foo_key = 2\", making caching not all that useful).\n\nThat's not the only reason to use them, though.\n\nThere are literally tens of thousands of them in OpenACS. 
We had to work around\nthe fact that PG doesn't offer this capability by hacking the AOLserver driver.\nIf we were working in an application that we didn't control at every level (i.e.\na closed-source webserver environment with a closed-source driver) the workaround\nyou suggest would involve the creation and deletion of tens of thousands of \ntemporary tables on a busy website.\n\nNot a very scalable workaround in my world ... obviously rewriting the application\nto remove BIND variables would be the solution we would've chosen if we hadn't\nbeen able to hack the functionality into the driver. One reason for the \nheavy use of BIND variables in the ACS is that you then get type checking in\nthe query, so removing them would require extensive type checking within the\napplication code before submitting dynamic queries to the database to help avoid\nthe \"smuggled SQL\" problem. (SQL snippets smuggled in via URL arguments).\n\nOur driver hack was able to provide the same safeguards against \"smuggled SQL\"\nso again, full control over our enviroment means we can live easily without BIND\nvars.\n\nBut it's easy for me to see why folks want them. \n\nThis reminds me a bit of the argument against incorporating the patch implementing\nthe Oracle parameter type mechanism. Folks with a lot of experience with PL/SQL\nwill just scratch their heads bemusedly when they read an statement saying \"I don't\nreally see that many people would write functions like this\", etc. This patch would\ngreatly simplify the mechanized translation of PL/SQL into PL/pgSQL, even if the\nfeature per se is \"useless\" (which I happen to disagree with). It's not uncommon\nfor large Oracle applications to include thousands of PL/SQL procedures and functions,\nsince many subscribe to the notion that application logic should reside entirely\nwithin the database if possible. 
So mechanical translation has a certain attraction\nto the person wanting to port a large-scale application from Oracle to PG.\n\nThe interesting thing to me doesn't simply lie in the debate over this or that feature.\nThe interesting thing to me is that more and more requests to ease porting from Oracle\nto Postgres are cropping up. \n\nThis says that more and more people from the \"real\" RDBMS world are starting to take\nPostgres seriously.\n\n\n\n- Don Baccus, Portland OR <dhogaza@pacifier.com>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sun, 13 May 2001 08:33:19 -0700", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: Re: SET variables " }, { "msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > Obviously this is a very simple example. I guess I am asking for something\n> > analogous to temporary tables, but on a single datum level.\n> \n> What's wrong with a one-row temporary table?\n> \nThe syntax would be too different from Oracle.\n\nselect * from table where field = :var\n\nIf it is fairly easy to do, it would be very helpful for would-be porting.\n\n\n-- \n42 was the answer, 49 was too soon.\n------------------------\nhttp://www.mohawksoft.com\n", "msg_date": "Sun, 13 May 2001 12:31:33 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: SET variables" }, { "msg_contents": "Don Baccus wrote:\n\n> The interesting thing to me doesn't simply lie in the debate over this or that feature.\n> The interesting thing to me is that more and more requests to ease porting from Oracle\n> to Postgres are cropping up.\n> \n> This says that more and more people from the \"real\" RDBMS world are starting to take\n> Postgres seriously.\n\nSpeaking for myself, I think Larry has enough money. The costs of Oracle are\nastounding. 
As I see it, I think Postgres could be the \"single server\" answer\nto the sky high per processor licensing that Oracle has.\n\nA Postgres with enough Oracle-isms would be a world beater. As it is, when I\nshow Oracle people what Postgres can do, they are blown away. They love the\nfact that temporary tables are in an isolated name space, sequences are more\nflexible, and a lot of the other neat features.\n\nIf we could do:\nselect * from database.table.field where database.table.field =\nlocaltable.field;\nselect * from table where field = :var;\n\nand not have to vacuum\n\nPostgres would be incredible. As it is, it is a great database. If it could\nhave features which make Oracle people comfortable it would be a very serious\nalternative to Oracle. Companies like Greatbridge and PostgreSQL inc. would\nhave a much easier sell.\n\n\n\n-- \n42 was the answer, 49 was too soon.\n------------------------\nhttp://www.mohawksoft.com\n", "msg_date": "Sun, 13 May 2001 12:45:20 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: SET variables" }, { "msg_contents": "At 12:45 PM 5/13/01 -0400, mlw wrote:\n\n>A Postgres with enough Oracle-isms would be a world beater.\n\nNo doubt. I'm not as extremist as Philip Greenspun has been in the past\nregarding how far PG should go in implementing Oracle-isms, though. His\nstated opinion in the past was that PG should implement Oracle's wretched\ndate arithmetic (which he recognizes is wretched) rather than stick with\nSQL92 date and timestamp types (which he recognizes is superior). I'd\noppose that.\n\nSo \"oracle-isms\" should be inspected for merit, that's for sure. The\ninclusion of \"to_char\" and friends not only helped people port from Oracle\nto PG but is useful on its own. \n\nI'd put both BIND vars and the enhanced types in parameter lists (which\nwe already have in PL/pgSQL var decls) in that class.\n\nThere are a lot of other features I'd question, though. 
\"CONNECT BY\" is\ndifficult to work around because there's no simplistic way to implement\nhierarchical queries in straight SQL92, but the solutions in SQL92 tend\nto scale a lot better and be more general. So I'd argue against putting\nmuch effort into \"CONNECT BY\", or at least putting it at a high priority,\nwhich would probably put me at odds with quite a few Oracle users.\n\n>Postgres would be incredible. As it is, it is a great database. If it could\n>have features which make Oracle people comfortable it would be a very serious\n>alternative to Oracle. Companies like Greatbridge and PostgreSQL inc. would\n>have a much easier sell.\n\nThere are actually very few gratuitous features in Oracle - the company's\nvery, very customer driven. Most of the really horrible differences from\nstandard SQL92 - date arithmetic, empty string is the same as NULL in DML\nstatements - are there for historical reasons, i.e. they predate SQL\nstandardization and Oracle's found it self locked-in/boxed-in by acres of\nexisting customer code.\n\n\n\n- Don Baccus, Portland OR <dhogaza@pacifier.com>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sun, 13 May 2001 10:28:08 -0700", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: SET variables" }, { "msg_contents": "> A Postgres with enough Oracle-isms would be a world beater. As it is, when I\n> show Oracle people what Postgres can do, they are blown away. They love the\n> fact that temporary tables are in an isolated name space, sequences are more\n\nAre you saying that multiple people can't create temp tables with the\nsame name, or that you can't create a temp table that masks a real\ntable? I know PostgreSQL does both.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 13 May 2001 17:16:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: SET variables" }, { "msg_contents": "> In Oracle, temp tables occupy the same name space. One can not have two or more\n> users with the same temp table name without it being the same table. This is\n> why temp tables are not as used as one would think in Oracle.\n> \n> To use a temp table in Oracle you have to come up with some random naming\n> scheme. It really blows. Because of this Oracle developers have long stayed\n> away from temp tables.\n\nWow, that really does stink. I know Informix can't have a temp table\nwith the same name as a real table, and I thought that was bad, but\nOracle is much worse.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 13 May 2001 17:29:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: SET variables" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > A Postgres with enough Oracle-isms would be a world beater. As it is, when I\n> > show Oracle people what Postgres can do, they are blown away. They love the\n> > fact that temporary tables are in an isolated name space, sequences are more\n> \n> Are you saying that multiple people can't create temp tables with the\n> same name, or that you can't create a temp table that masks a real\n> table? I know PostgreSQL does both.\n\nIn Oracle, temp tables occupy the same name space. One can not have two or more\nusers with the same temp table name without it being the same table. This is\nwhy temp tables are not as used as one would think in Oracle.\n\nTo use a temp table in Oracle you have to come up with some random naming\nscheme. It really blows. 
Because of this Oracle developers have long stayed\naway from temp tables.\n\n\n-- \n42 was the answer, 49 was too soon.\n------------------------\nhttp://www.mohawksoft.com\n", "msg_date": "Sun, 13 May 2001 17:31:28 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: SET variables" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > In Oracle, temp tables occupy the same name space. One can not have two or more\n> > users with the same temp table name without it being the same table. This is\n> > why temp tables are not as used as one would think in Oracle.\n> >\n> > To use a temp table in Oracle you have to come up with some random naming\n> > scheme. It really blows. Because of this Oracle developers have long stayed\n> > away from temp tables.\n> \n> Wow, that really does stink. I know Informix can't have a temp table\n> with the same name as a real table, and I thought that was bad, but\n> Oracle is much worse.\n>\n\nAlthough....\n\nOne can see the advantage in a globally shared temporary table. For instance,\nsomething like user web session management. One can insert and update against\nthe temp table and never have to worry about disk I/O or vacuuming. (Assuming a\ntemp table is implemented as a memory buffer)\n\n\n\n-- \n42 was the answer, 49 was too soon.\n------------------------\nhttp://www.mohawksoft.com\n", "msg_date": "Sun, 13 May 2001 18:45:10 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: SET variables" }, { "msg_contents": "> > Wow, that really does stink. I know Informix can't have a temp table\n> > with the same name as a real table, and I thought that was bad, but\n> > Oracle is much worse.\n> >\n> \n> Although....\n> \n> One can see the advantage in a globally shared temporary table. For instance,\n> something like user web session management. One can insert and update against\n> the temp table and never have to worry about disk I/O or vacuuming. 
(Assuming a\n> temp table is implemented as a memory buffer)\n\nYes, but having a temp table never hit disk is a different issue from\nits visibility. We could eventually implement the memory-only feature\nif we wanted to. Right now, we have it dumping to disk as a backing\nstore for the table, assuming it wouldn't fit all in memory.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 13 May 2001 20:16:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: SET variables" } ]
[ { "msg_contents": "The PL/Python build is now up to speed to as much of an extent as is under\nour control. I can also confirm that it works (as defined by the test\nsuite) with Python 1.5 and 2.1. I've also put a chapter with\ndocumentation in the programmer's guide.\n\nWe already know about the libpython (not-)shared library issue. As it\nturns out, not only is libpython generally not a shared library, there\nisn't even a designed-in way to make one. See this discussion:\n\nhttp://www.python.org/cgi-bin/faqw.py?req=show&file=faq03.030.htp\n\nThe way it's currently set up is that it will use whatever linking with\n-lpythonx.y gives you. Using the static library this way works on many\nplatforms, although it's not as efficient (each backend process image will\nhave its own copy of the library). On some platforms it won't work at\nall, of course.\n\nIn the distance you can hear the package maintainers rejoicing...\n\nThe second issue is that of dynamic loading. The way it currently looks\non platforms with the dlopen() interface (which is what most platforms\nhave) is that when your Python code imports a module which is implemented\nin a C shared library, the Python interpreter fires up its own dynamic\nloading facilities. However, due to the rules of relocation scoping,\nthose shared libraries cannot use the symbols defined in the plpython\nimage and therefore the libpython symbols themselves are not available.\nWithout those, no code works.\n\nThe way this is currently resolved is that all the C shared libraries that\nform part of the modules that you might want to use are linked into\nplpython.so directly, which puts them into a different relocation group,\nso they have access to the symbols of libpython. This is a stupid hack,\nof course.\n\nThe real fix is to change the dynamic loader(s) to make use of the\nRTLD_GLOBAL flag, which makes all dlopen'ed symbols available to everyone.\nPersonally, I don't see any harm in using this option. 
Does anyone else?\n(In fact, if someone were to create an untrusted PL/Perl language he would\nprobably run into this problem as well.)\n\nSounds a bit complicated? See this documentation:\n\nhttp://docs.sun.com:80/ab2/coll.40.5/REFMAN3/@Ab2PageView/325438?DwebQuery=dlopen&oqt=dlopen&Ab2Lang=C&Ab2Enc=iso-8859-1\n\nResolving this issue on non-dlopen platforms is left as an exercise.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 12 May 2001 19:50:20 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "State of PL/Python build" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> We already know about the libpython (not-)shared library issue. As it\n> turns out, not only is libpython generally not a shared library, there\n> isn't even a designed in way to make one.\n\nUgh. Can we get the Python boys to raise their level of concern about\nthat?\n\n> The way it's currently set up is that it will use whatever linking with\n> -lpythonx.y gives you. Using the static library this way works on many\n> platforms, although it's not as efficient (each backend process image will\n> have its own copy of the library).\n\nHuh? As I understand it, what will happen is that libpython.a will get\nphysically incorporated into plpython.so (.sl, whatever). So all\nPostgres backends that are using plpython should share one copy of that\nlibrary. It won't be shared with non-Postgres python processes, but\nit's not nearly as bad as \"one copy per backend\".\n\nThe real problem is that on systems where non-PIC code can't be used to\nbuild a shared library, the whole thing will not work at all. As with\nplperl, it'd be nice if we could detect this at configure time.\n\nI wonder whether people would like an option to statically link\nlibperl.a and/or libpython.a into the Postgres backend proper? 
That\nwould allow plperl/plpython to be used on platforms where this is an\nissue, without having to make a nonstandard build of perl/python.\n\n> The real fix is to change the dynamic loader(s) to make use of the\n> RTLD_GLOBAL flag, which makes all dlopen'ed symbols available to everyone.\n> Personally, I don't see any harm in using this option. Does anyone else?\n\nNo, on the platforms where it solves the problem...\n\n> Resolving this issue on non-dlopen platforms is left as an exercise.\n\nAs near as I can tell from the man page, HPUX's shl_load behaves this\nway all the time. Don't know about other non-dlopen platforms.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 May 2001 14:46:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: State of PL/Python build " }, { "msg_contents": "On Saturday 12 May 2001 14:46, Tom Lane wrote:\n>\n> I wonder whether people would like an option to statically link\n> libperl.a and/or libpython.a into the Postgres backend proper? That\n> would allow plperl/plpython to be used on platforms where this is an\n> issue, without having to make a nonstandard build of perl/python.\n\nI would certainly like this. Currently, I have to maintain a special perl \ninstall that exists for no reason but to give me a shared lib version I can\nlink against.\n\n-- \nMark Hollomon\n", "msg_date": "Sat, 12 May 2001 21:53:10 -0400", "msg_from": "Mark Hollomon <mhh@mindspring.com>", "msg_from_op": false, "msg_subject": "Re: State of PL/Python build" }, { "msg_contents": "> The real fix is to change the dynamic loader(s) to make use of the\n> RTLD_GLOBAL flag, which makes all dlopen'ed symbols available to everyone.\n> Personally, I don't see any harm in using this option. Does anyone else?\n> (In fact, if someone were to create an untrusted PL/Perl language he would\n> probably run into this problem as well.)\n\nI have RTLD_GLOBAL on BSD/OS, and it calls it a Linux-compatibility\nflag. 
I am pretty amazed you got it working without RTLD_GLOBAL.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 May 2001 22:22:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: State of PL/Python build" }, { "msg_contents": "\nOn Sat, May 12, 2001 at 02:46:45PM -0400, Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > We already know about the libpython (not-)shared library issue. As it\n> > turns out, not only is libpython generally not a shared library, there\n> > isn't even a designed-in way to make one.\n> \n> Ugh. Can we get the Python boys to raise their level of concern about\n> that?\n\nI looked through Python's configure script, and it appears that the\ndefault is to build as a shared library. \n\n> The real problem is that on systems where non-PIC code can't be used to\n> build a shared library, the whole thing will not work at all. As with\n> plperl, it'd be nice if we could detect this at configure time.\n\nPython versions greater than 1.5.2 ship with the distutils module; it\ncan be used to detect if python was built with a shared library. Is\nthere any way to detect whether non-PIC code can be used in a shared\nlibrary?\n\nAndrew\n\n-- \n\n\n\n\n", "msg_date": "Sat, 12 May 2001 23:25:42 -0400", "msg_from": "andrew@corvus.biomed.brown.edu (Andrew Bosma)", "msg_from_op": false, "msg_subject": "Re: State of PL/Python build" }, { "msg_contents": "Tom Lane writes:\n\n> I wonder whether people would like an option to statically link\n> libperl.a and/or libpython.a into the Postgres backend proper? 
That\n> would allow plperl/plpython to be used on platforms where this is an\n> issue, without having to make a nonstandard build of perl/python.\n\nNot unless you also link in plperl/plpython itself or mess with\n-whole-archive type linker flags.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 15 May 2001 19:02:22 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: State of PL/Python build " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> I wonder whether people would like an option to statically link\n>> libperl.a and/or libpython.a into the Postgres backend proper? That\n>> would allow plperl/plpython to be used on platforms where this is an\n>> issue, without having to make a nonstandard build of perl/python.\n\n> Not unless you also link in plperl/plpython itself or mess with\n> -whole-archive type linker flags.\n\nThe former is what I had in mind.\n\nYes, it's ugly and it bloats the binary, but people would presumably\nonly do this if they intended to use the language. So the bloat is\nsomewhat illusory. And it's less ugly than having to build a\nnonstandard install of python or perl.\n\nI could even see people doing this on platforms where they didn't have\nto (because a non-PIC libpython or libperl could be included into a\nshared library plpython or plperl). It should give a performance\nadvantage, which might be interesting to heavy users of those PLs.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 May 2001 13:09:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: State of PL/Python build " }, { "msg_contents": "On Tuesday 15 May 2001 13:09, Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Tom Lane writes:\n> >> I wonder whether people would like an option to statically link\n> >> libperl.a and/or libpython.a into the Postgres backend proper? 
That\n> >> would allow plperl/plpython to be used on platforms where this is an\n> >> issue, without having to make a nonstandard build of perl/python.\n> >\n> > Not unless you also link in plperl/plpython itself or mess with\n> > -whole-archive type linker flags.\n>\n> The former is what I had in mind.\n>\n> Yes, it's ugly and it bloats the binary, but people would presumably\n> only do this if they intended to use the language. So the bloat is\n> somewhat illusory. And it's less ugly than having to build a\n> nonstandard install of python or perl.\n>\n\nI'm not sure it is any uglier than including the (to me) useless geometry\ndatatypes.\n\nI would be happy to help get this working with plperl.\n\nAnother interesting idea is to allow the postmaster to 'preload' some of the \nlibraries. This would help cut down the startup cost of a new backend that \nneeded a widely used language.\n\n-- \nMark Hollomon\n", "msg_date": "Wed, 16 May 2001 16:35:33 -0400", "msg_from": "Mark Hollomon <mhh@mindspring.com>", "msg_from_op": false, "msg_subject": "static link of plpython/plperl - was Re: State of PL/Python build" } ]
[ { "msg_contents": "when creating a unique index on a table that accepts nulls the unique\nconstraint doesn't work. \n\nExample:\n---\ncreate table test_unique(i1 integer, i2 integer, unique(i1,i2));\ninsert into test_unique values (1,null);\ninsert into test_unique values (1,null);\ninsert into test_unique values (1,null);\n---\nall \"inserts\" above insert successfully.\n", "msg_date": "Sun, 13 May 2001 00:07:11 +0200", "msg_from": "Domingo Alvarez Duarte <domingo@dad-it.com>", "msg_from_op": true, "msg_subject": "bug in \"create unique index\"" }, { "msg_contents": "\nThis is correct by spec. NULLs are a special case.\n\n From UNIQUE <table subquery> which unique constraints are defined\n against:\n2) If there are no two rows in T such that the value of each column\n in one row is non-null and is equal to the value of the cor-\n responding column in the other row according to Subclause 8.2,\n \"<comparison predicate>\", then the result of the <unique predi-\n cate> is true; otherwise, the result of the <unique predicate>\n is false.\n\n[This means that there will be no two rows such that the value\nof each column is non-null and is equal to the value in the\nother since one of the columns is null]\n\nOn Sun, 13 May 2001, Domingo Alvarez Duarte wrote:\n\n> when creating a unique index on a table that accepts nulls the unique\n> constraint doesn't work. 
\n> \n> Example:\n> ---\n> create table test_unique(i1 integer, i2 integer, unique(i1,i2));\n> insert into test_unique values (1,null);\n> insert into test_unique values (1,null);\n> insert into test_unique values (1,null);\n> ---\n> all \"inserts\" above insert successfully.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n", "msg_date": "Sat, 12 May 2001 20:50:39 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: bug in \"create unique index\"" } ]
[ { "msg_contents": "\nSorry, worst Subject I've ever come up with, but this is one of those \"I\nhaven't got a clue how to describe\" emails ...\n\nSimple query:\n\n SELECT distinct s.gid, s.created, i.title\n FROM status s LEFT JOIN images i ON (s.gid = i.gid AND i.active), personal_data pd, relationship_wanted rw\n WHERE s.active AND s.status != 0\n AND s.gid = 17111\n AND (s.gid = pd.gid AND pd.gender = 0)\n AND (s.gid = rw.gid AND rw.gender = 0 );\n\nProduces:\n\n gid | created | title\n-------+------------------------+--------\n 17111 | 2000-10-19 15:20:46-04 | image1\n 17111 | 2000-10-19 15:20:46-04 | image2\n 17111 | 2000-10-19 15:20:46-04 | image3\n(3 rows)\n\nGreat, what I expect ...\n\nBut:\n\n SELECT distinct s.gid, s.created, count(i.title) AS images\n FROM status s LEFT JOIN images i ON (s.gid = i.gid AND i.active), personal_data pd, relationship_wanted rw\n WHERE s.active AND s.status != 0\n AND s.gid = 17111\n AND (s.gid = pd.gid AND pd.gender = 0)\n AND (s.gid = rw.gid AND rw.gender = 0 )\nGROUP BY s.gid, s.created;\n\nProduces:\n\n/tmp/psql.edit.70.62491: 7 lines, 353 characters.\n gid | created | images\n-------+------------------------+--------\n 17111 | 2000-10-19 15:20:46-04 | 15\n(1 row)\n\nSo why is it counting 12 more images then are actually found/exist:\n\ntestdb=# select title from images where gid = 17111;\n title\n--------\n image1\n image3\n image2\n(3 rows)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Sat, 12 May 2001 20:12:54 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "bug in JOIN or COUNT or ... ?" }, { "msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> So why is it counting 12 more images then are actually found/exist:\n\nHm. 
Could we see the EXPLAIN output for both of those?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 May 2001 19:52:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bug in JOIN or COUNT or ... ? " }, { "msg_contents": "On Sat, 12 May 2001, Tom Lane wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > So why is it counting 12 more images then are actually found/exist:\n>\n> Hm. Could we see the EXPLAIN output for both of those?\n\nwithout count:\n\nNOTICE: QUERY PLAN:\n\nUnique (cost=8.66..8.67 rows=1 width=37)\n -> Sort (cost=8.66..8.66 rows=1 width=37)\n -> Nested Loop (cost=0.00..8.65 rows=1 width=37)\n -> Nested Loop (cost=0.00..6.52 rows=1 width=33)\n -> Nested Loop (cost=0.00..4.26 rows=1 width=29)\n -> Index Scan using status_gid on status s (cost=0.00..2.23 rows=1 width=12)\n -> Index Scan using images_gid on images i (cost=0.00..2.02 rows=1 width=17)\n -> Index Scan using personal_data_gid on personal_data pd (cost=0.00..2.25 rows=1 width=4)\n -> Index Scan using relationship_wanted_gid on relationship_wanted rw (cost=0.00..2.11 rows=1 width=4)\n\nEXPLAIN\n\nwith count:\n\nNOTICE: QUERY PLAN:\n\nUnique (cost=8.68..8.69 rows=1 width=37)\n -> Sort (cost=8.68..8.68 rows=1 width=37)\n -> Aggregate (cost=8.66..8.67 rows=1 width=37)\n -> Group (cost=8.66..8.67 rows=1 width=37)\n -> Sort (cost=8.66..8.66 rows=1 width=37)\n -> Nested Loop (cost=0.00..8.65 rows=1 width=37)\n -> Nested Loop (cost=0.00..6.52 rows=1 width=33)\n -> Nested Loop (cost=0.00..4.26 rows=1 width=29)\n -> Index Scan using status_gid on status s (cost=0.00..2.23 rows=1 width=12)\n -> Index Scan using images_gid on images i (cost=0.00..2.02 rows=1 width=17)\n -> Index Scan using personal_data_gid on personal_data pd (cost=0.00..2.25 rows=1 width=4)\n -> Index Scan using relationship_wanted_gid on relationship_wanted rw (cost=0.00..2.11 rows=1 width=4)\n\nEXPLAIN\n\n\n\n>\n> \t\t\tregards, tom lane\n>\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n\n", "msg_date": "Sat, 12 May 2001 21:47:12 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: bug in JOIN or COUNT or ... ? " } ]
[ { "msg_contents": "Ah, I see it: your join against relationship_wanted isn't unique.\n\nglobalmatch=# select count(*) from personal_data pd\nglobalmatch-# where pd.gid = 17111 AND pd.gender = 0;\n count \n-------\n 1\n(1 row)\n\nglobalmatch=# select count(*) from relationship_wanted rw\nglobalmatch-# where rw.gid = 17111 AND rw.gender = 0;\n count \n-------\n 5\n(1 row)\n\nglobalmatch=# \n\nSo that inflates the number of rows coming out of the join by 5.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 May 2001 21:34:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: bug in JOIN or COUNT or ... ? " }, { "msg_contents": "On Sat, 12 May 2001, Tom Lane wrote:\n\n> Ah, I see it: your join against relationship_wanted isn't unique.\n>\n> globalmatch=# select count(*) from personal_data pd\n> globalmatch-# where pd.gid = 17111 AND pd.gender = 0;\n> count\n> -------\n> 1\n> (1 row)\n>\n> globalmatch=# select count(*) from relationship_wanted rw\n> globalmatch-# where rw.gid = 17111 AND rw.gender = 0;\n> count\n> -------\n> 5\n> (1 row)\n>\n> globalmatch=#\n>\n> So that inflates the number of rows coming out of the join by 5.\n\nOkay, then I'm lost ... why wouldn't that show up without the COUNT()? I\ndon't doubt your analysis, I just want to understand why ...\n\n", "msg_date": "Sat, 12 May 2001 22:39:29 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: bug in JOIN or COUNT or ... ? " }, { "msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n>> So that inflates the number of rows coming out of the join by 5.\n\n> Okay, then I'm lost ... why wouldn't that show up without the COUNT()? 
I\n> don't doubt your analysis, I just want to understand why ...\n\nYou had DISTINCT on your query, which hid the duplicated rows from you.\nBut that happens *after* aggregate processing, so it doesn't hide the\ndups from COUNT().\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 May 2001 21:44:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: bug in JOIN or COUNT or ... ? " }, { "msg_contents": "On Sat, 12 May 2001, Tom Lane wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> >> So that inflates the number of rows coming out of the join by 5.\n>\n> > Okay, then I'm lost ... why wouldn't that show up without the COUNT()? I\n> > don't doubt your analysis, I just want to understand why ...\n>\n> You had DISTINCT on your query, which hid the duplicated rows from you.\n> But that happens *after* aggregate processing, so it doesn't hide the\n> dups from COUNT().\n\nAhhhh, okay, that makes sense ... thanks for taking the time to check it\nfor me ... and explaining what I was missing ...\n\n", "msg_date": "Sat, 12 May 2001 22:51:47 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: bug in JOIN or COUNT or ... ? " } ]
[ { "msg_contents": "Internet is putting lot of competition fire & heat under Microsoft SQL\nServer\n\nYour boss will tell you - \"Now, that we have high speed internet\nconnection\nwhy do you need commercial SQL servers?? Simply use your mouse button,\nclick and\ndownload the open-source Postgresql, InterBase or MySQL !!!!\"\n\nThere will be no Oracle or Microsoft Corporation in about 2-3 years time\n(as Internet is\na very big threat to Oracle/Microsoft corporation). Microsoft corp will\nbe a dead\nmeat....because of lightening speed broad-band-internet.\n\nIf you are running MS Windows 2000/NT, then here is the chance for you\nto try this superb SQL RDBMS open-source database.\n\nNow, PostgreSQL is packaged in Cygwin32-setup.exe, simply download the\ncygwin setup.exe and double click on setup.exe\nto install and run PostgreSQL on MS Windows 2000/NT\n\nInstalling and running the pgsql on MS Windows is extremely easy now and\n\nis quite rock solid on Windows 2000/NT desktop or server\n\nPostgreSQL is a \"LINUX\" of database systems - very powerful and\nreliable..\n\nEverybody is asking \"What is the equivalent of Linux in SQL databases\n??\"\nThe answer is \"PostgreSQL\" RDBMS server.\n\nPlease try and go to http://sources.redhat.com/cygwin/download.html\nto download cygwin (which has pgsql)\n\nAfter installing the cygwin, read the user guides at\nhttp://www.postgresql.org\n\nYour boss will say - \"Purchase commercial support for PostgresQL from\nGreatBridge at http://greatbridge.com or from http://www.mysql.com .\nMicrosoft\nproducts costs too much money!!! 
Why do need these MS SQL servers\nanymore ????\"\n\nOpen-source SQL RDBMS and their rankings are -\nRanked 1st Gold Medal : PostgreSQL http://www.postgresql.org\nRanked 2nd Silver Medal : Interbase SQLserver\nhttp://www.borland.com/devsupport/interbase/opensource\nRanked 3rd Bronze Medal : MySQL http://www.mysql.com\nRanked 4th : Many others ....\n\n\n\n\n", "msg_date": "Sun, 13 May 2001 03:22:46 GMT", "msg_from": "universe_made_of_atoms <almighty99@hotmail.com>", "msg_from_op": true, "msg_subject": "Internet is putting lot of competition fire & heat under Microsoft\n\tSQL Server" }, { "msg_contents": "Seems little point in posting this to postgresql groups . . .preaching to\nthe converted??? Perhaps you should post on a micro$oft or a general SQL\ngroup?\n\nOh, and how about a good guide for upsizing from M$ SQL and / or M$ ACCESS\nto Postgresql - that'd be far more usefull ...\n\n(Aside: Money is not the real issue here. I had nothing against Microsoft in\nthe past, in fact with WIN98SE / Office 97, I was quite impressed with the\nease of use, and usefulness of the applications - now some stability issues\nneeded fixing - and I had looked forward to WinMe / Office 2K to fixing\nthese issues. Since 'upgrading' to WinMe / Office 2K, however, I've had at\nleast 1 or 2 crashes per day. Saving a Word document shouldn't bring down\nthe whole OS!! Thus, I'm now thoroughly disgruntled. If I'm now looking for\nalternatives to Windoze and M$ applications - it's entirely Microsoft's\nfault for rushing out incomplete and buggy software, and then trying to\n'dupe' their customers into buying 'upgrades', instead of admitting that\nthey're really beta releases. BTW, why the heck do we need new office\napplications every 1 to 2 years anyway? 
Most of the M$ Office users in my\ncompany haven't even got to grips with even one tenth the functionality of\nOffice 95 yet!).\n\nRegards,\nSGO.\n\"universe_made_of_atoms\" <almighty99@hotmail.com> wrote in message\nnews:3AFDFE9C.C8F3DECF@hotmail.com...\n> Internet is putting lot of competition fire & heat under Microsoft SQL\n> Server\n>\n> Your boss will tell you - \"Now, that we have high speed internet\n> connection\n> why do you need commercial SQL servers?? Simply use your mouse button,\n> click and\n> download the open-source Postgresql, InterBase or MySQL !!!!\"\n>\n> There will be no Oracle or Microsoft Corporation in about 2-3 years time\n> (as Internet is\n> a very big threat to Oracle/Microsoft corporation). Microsoft corp will\n> be a dead\n> meat....because of lightening speed broad-band-internet.\n>\n> If you are running MS Windows 2000/NT, then here is the chance for you\n> to try this superb SQL RDBMS open-source database.\n>\n> Now, PostgreSQL is packaged in Cygwin32-setup.exe, simply download the\n> cygwin setup.exe and double click on setup.exe\n> to install and run PostgreSQL on MS Windows 2000/NT\n>\n> Installing and running the pgsql on MS Windows is extremely easy now and\n>\n> is quite rock solid on Windows 2000/NT desktop or server\n>\n> PostgreSQL is a \"LINUX\" of database systems - very powerful and\n> reliable..\n>\n> Everybody is asking \"What is the equivalent of Linux in SQL databases\n> ??\"\n> The answer is \"PostgreSQL\" RDBMS server.\n>\n> Please try and go to http://sources.redhat.com/cygwin/download.html\n> to download cygwin (which has pgsql)\n>\n> After installing the cygwin, read the user guides at\n> http://www.postgresql.org\n>\n> Your boss will say - \"Purchase commercial support for PostgresQL from\n> GreatBridge at http://greatbridge.com or from http://www.mysql.com .\n> Microsoft\n> products costs too much money!!! Why do need these MS SQL servers\n> anymore ????\"\n>\n> Open-source SQL RDBMS and their rankings are -\n> Ranked 1st Gold Medal : PostgreSQL http://www.postgresql.org\n> Ranked 2nd Silver Medal : Interbase SQLserver\n> http://www.borland.com/devsupport/interbase/opensource\n> Ranked 3rd Bronze Medal : MySQL http://www.mysql.com\n> Ranked 4th : Many others ....\n>\n>\n>\n>\n\n\n", "msg_date": "Mon, 14 May 2001 11:17:11 +0800", "msg_from": "\"Steve O'Hagan\" <sohagan@dont-spam-me-stanger.com.hk>", "msg_from_op": false, "msg_subject": "Re: Internet is putting lot of competition fire & heat under\n\tMicrosoft SQL Server" }, { "msg_contents": "Hi,\n\nThis sort of post is giving open source software a bad name.\n\nI am glad that this post has only reached a small number of people who\nare already pro-PostgreSQL and has not reached the crowd who needs\nconvincing that their time, attention, enthusiasm, and money is better\nspent on PostgreSQL than on MS SQL Server (in the long run, due to the\n\"business model\". I know that Microsoft bought Jim Gray and a number\nof other people who are in a position to build amazing things; well,\nwe have Stonebraker ... or at least had him).\n\nIf anybody wishes to do good, they better start off by reading one of\nthe advocacy documents. Many of the advocacy arguments that apply to\nLinux can equally well be used in the context of PostgreSQL. 
One\nmight start reading here:\n\n\thttp://linuxtoday.com/stories/1847.html\n\nThe gist of the matter is, a post that contains expletives, excessive\nuse of the word \"boss\", derisive mockings of large companies (M$ would\nbe such a mocking), or other unqualified drivel without informational\ncontent, will make people think that the whole crowd around PostgreSQL\nis stupid.\n\nAn argued comparison of standards compliance with a list of 5\nintentional deviations by a Microsoft product might be more\nconvincing.\n\nSo please stop posting ignorant, but enthusiastic messages that give\nthe development team a bad name.\n\nThank you,\n\nOliver Seidel\n", "msg_date": "15 May 2001 09:05:45 +0200", "msg_from": "Oliver Seidel <seidel@in-medias-res.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Internet is putting lot of competition fire & heat\n\tunder Microsoft SQL Server" }, { "msg_contents": "Hi Steve,\n\nI don't know if you taken a look at them yet, but there are a number of\nMicrosoft Access to PostgreSQL conversion documents linked to from the\nmain page of the techdocs.postgresql.org website. \n(http://techdocs.postgresql.org)\n\nIf they need improving, then let us know in which way, etc. :-)\n\nRegards and best wishes,\n\nJustin Clift\n\nSteve O'Hagan wrote:\n> \n> Seems little point in posting this to postgresql groups . . .preaching to\n> the converted??? Perhaps you should post on a micro$oft or a general SQL\n> group?\n> \n> Oh, and how about a good guide for upsizing from M$ SQL and / or M$ ACCESS\n> to Postgresql - that'd be far more usefull ...\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Tue, 15 May 2001 20:40:08 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Re: Internet is putting lot of competition fire & heat\n\tunder Microsoft SQL Server" }, { "msg_contents": "Thus spake universe_made_of_atoms\n> PostgreSQL is a \"LINUX\" of database systems - very powerful and\n> reliable..\n> \n> Everybody is asking \"What is the equivalent of Linux in SQL databases\n> ??\"\n> The answer is \"PostgreSQL\" RDBMS server.\n\nOh please. Can we stop trying to tie everything to the current front\nrunner. I mean, PostgreSQL uses the BSD style license, development\nis done under the BSD (cathedral) model and hey, it was invented at\nBerkeley in the first place. How is it the \"equivalent of Linux\"\nother than that it has the same price tag more or less.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 15 May 2001 07:19:01 -0400 (EDT)", "msg_from": "darcy@druid.net (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Internet is putting lot of competition fire & heat\n under\n\tMicrosoft SQL Server" }, { "msg_contents": "Hi !\n\nI have 2 servers running apache, php and PG 7.0.3 running under Linux\n\nThe first one uses IDE disks, and everything run fine and fast. (uses linux \n2.0.36)\n\nThe second one uses SCSI drives with a Adaptec adapter. (uses Linux 2.2.14)\n\nMy problem is that acces to the DB on this server are very very slow (I \nguess that it s particulary slow when i do INSERTs)\nIt s so slow that even the keyboard and the display slow down ! 
The HD \nwrite or read a lot...\nWhen I do a \"ps\" or \"top\", I don't see any process taking all the ressources...\n\nSo, I guess there is something whith the SCSI...\n\nIs anybody has a clue ??\n\nthanks !\n\njean-arthur\nPS : I did a vacuum analyze, and the table are not so big (max tuples are \n5000 )\n\n----------------------------------------------------------------\nEuroVox\n4, place F�lix Eboue\n75583 Paris Cedex 12\nTel : 01 44 67 05 05\nFax : 01 44 67 05 19\nWeb : http://www.eurovox.fr\n----------------------------------------------------------------\n\n", "msg_date": "Tue, 15 May 2001 16:17:22 +0200", "msg_from": "Jean-Arthur Silve <jeanarthur@eurovox.fr>", "msg_from_op": false, "msg_subject": "Not a PG question: SCSI question" }, { "msg_contents": "On Tue, May 15, 2001 at 07:19:01AM -0400, D'Arcy J.M. Cain wrote:\n\n> > Everybody is asking \"What is the equivalent of Linux in SQL databases\n> > ??\"\n> > The answer is \"PostgreSQL\" RDBMS server.\n> \n> Oh please. Can we stop trying to tie everything to the current front\n> runner. I mean, PostgreSQL uses the BSD style license, development\n\n\tIt uses _the_ BSD license. \n\n> is done under the BSD (cathedral) model and hey, it was invented at\n> Berkeley in the first place. How is it the \"equivalent of Linux\"\n> other than that it has the same price tag more or less.\n\n\tI'm not defending the comparison/analogy, just saying that it makes\nsense to lay people who have heard of \"Linux\" when they are explained\nabout PostgreSQL.\n\n\t-Roberto\n-- \n+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club |------+\n Roberto Mello - Computer Science, USU - http://www.brasileiro.net \n http://www.sdl.usu.edu - Space Dynamics Lab, Developer \nBad command or file name. 
Go sit in corner.\n", "msg_date": "Tue, 15 May 2001 09:07:44 -0600", "msg_from": "Roberto Mello <rmello@cc.usu.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Internet is putting lot of competition fire & heat\n\tunder Microsoft SQL Server" }, { "msg_contents": "Hi,\n\n\nLet me get this straight...;-)\n\nThe IDE server is running Linux 2.0.36 with PHP and Apache.\n\nThe SCSI server is running with PostgreSQL 7.0.3 ??\n\nAnd you're talking about speed in inserts? \n\nTry to use a kernel after 2.2.16 or the current 2.4.4 series, they\nhandle memory management rather well.. You should also optimize your\nkernel depending upon your system's needs... like increasing the total\nnumber of process, the amount of memory one application can use.. The\nmaximum number of files.. etc... Most of these are runtime tweaks found\nat the /proc/sys directory...;-)\n\nOn the PostgreSQL side, you also need to optimize it a little bit... \nKindly read the fine manual for this...;-)\n\nApache and PHP... well, here's a sort of bottleneck in itself... You\nmay need to increase your PHP (PHP4) scripts performance via Zend's\nCache, Zend Optimizer, APC, and/or the PHP Smarty Template... \n\nYou can also further increase performance if you have all your compiled\nPHP scripts under a RAM Disk System...;-)\n\nBut one of the greatest bottlenecks that may not be so obvious... Is a\nrather simple one...;-) Each time you try to do an insert to your\nPostgreSQL database from your PHP application, you have to initialize a\nnew connection...;-)\n\nI don't know how you accessed your server... Via psql? or from an\napplication...;-) And how did you do you INSERTS? Did you use\nTransactions? Did you enable -F so that PostgreSQL doesn't write to\nyour HDD often???\n\nCheers,\n\n\nJohn Clark\n\nJean-Arthur Silve wrote:\n> \n> Hi !\n> \n> I have 2 servers running apache, php and PG 7.0.3 running under Linux\n> \n> The first one uses IDE disks, and everything run fine and fast. 
(uses linux\n> 2.0.36)\n> \n> The second one uses SCSI drives with a Adaptec adapter. (uses Linux 2.2.14)\n> \n> My problem is that acces to the DB on this server are very very slow (I\n> guess that it s particulary slow when i do INSERTs)\n> It s so slow that even the keyboard and the display slow down ! The HD\n> write or read a lot...\n> When I do a \"ps\" or \"top\", I don't see any process taking all the ressources...\n> \n> So, I guess there is something whith the SCSI...\n> \n> Is anybody has a clue ??\n> \n> thanks !\n> \n> jean-arthur\n> PS : I did a vacuum analyze, and the table are not so big (max tuples are\n> 5000 )\n> \n\n-- \n /) John Clark Naldoza y Lopez (\\\n / ) Software Design Engineer II ( \\\n _( (_ _ Web-Application Development _) )_\n (((\\ \\> /_> Cable Modem Network Management System <_\\ </ /)))\n (\\\\\\\\ \\_/ / NEC Telecom Software Phils., Inc. \\ \\_/ ////)\n \\ / \\ /\n \\ _/ phone: (+63 32) 233-9142 loc. 3112 \\_ /\n / / cellphone: (+63 919) 399-4742 \\ \\\n / / email: njclark@ntsp.nec.co.jp \\ \\\n", "msg_date": "Wed, 16 May 2001 08:44:16 +0800", "msg_from": "\"John Clark L. Naldoza\" <njclark@ntsp.nec.co.jp>", "msg_from_op": false, "msg_subject": "Re: Not a PG question: SCSI question" }, { "msg_contents": "Thanks, some useful stuff at http://techdocs.postgresql.org. I'd only found\nthe info given at http://www.ca.postgresql.org/interfaces.html , perhaps the\ntwo should be more tightly cross-referenced.\n\nThe 'POSTGRESQL & ACCESS FAQ' solved a couple of problems I've been\nexperiencing. 
However, still draft version of FAQ so, a lot of stuff not\nfinished yet.\n\nI did note that the link on the http://www.ca.postgresql.org/interfaces.html\npage :\n\n http://nsmsweb3.oucs.ox.ac.uk/pg/Pgsetup.html\n\nAlways gives me an 'access forbidden error', and the link on the\nhttp://techdocs.postgresql.org page:\n\nhttp://www.bcinternet.com/%7Ehilliard/postgresqlport.cfm\n\nappears to be dead...\n\n\n----- Original Message -----\nFrom: \"Justin Clift\" <justin@postgresql.org>\nTo: \"Steve O'Hagan\" <sohagan@stanger.com.hk>\nCc: <pgsql-general@postgresql.org>\nSent: Tuesday, May 15, 2001 6:40 PM\nSubject: Re: [GENERAL] Re: Internet is putting lot of competition fire &\nheat under Microsoft SQL Server\n\n\n> Hi Steve,\n>\n> I don't know if you taken a look at them yet, but there are a number of\n> Microsoft Access to PostgreSQL conversion documents linked to from the\n> main page of the techdocs.postgresql.org website.\n> (http://techdocs.postgresql.org)\n>\n> If they need improving, then let us know in which way, etc. :-)\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n> Steve O'Hagan wrote:\n> >\n> > Seems little point in posting this to postgresql groups . . .preaching\nto\n> > the converted??? Perhaps you should post on a micro$oft or a general SQL\n> > group?\n> >\n> > Oh, and how about a good guide for upsizing from M$ SQL and / or M$\nACCESS\n> > to Postgresql - that'd be far more usefull ...\n>\n> --\n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. 
He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n\n", "msg_date": "Wed, 16 May 2001 10:18:24 +0800", "msg_from": "\"Steve O'Hagan\" <sohagan@stanger.com.hk>", "msg_from_op": false, "msg_subject": "Re: Re: Internet is putting lot of competition fire & heat under\n\tMicrosoft SQL Server" }, { "msg_contents": "Hi\n\nJust a little miss I think John did.\nWith PHP you dont have to start a new connection everytime.\nYou can use the good ole pg_pconnect. It is a persistant connection to \nthe server.\nThe server keeps a pool of connections open for fast access.\n\nThere has been some problems with this so I recommend PHP 4.0.5.\nIt is, among other things, a matter of issuing a rollback at the end of \nthe request on each page.\n\nBest regards\nPer-Olof Pettersson\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 2001-05-16, 03:06:36, njclark@ntsp.nec.co.jp (\"John Clark L. Naldoza\") \nwrote regarding Re: Not a PG question: SCSI question:\n\n\n> Hi,\n\n\n> Let me get this straight...;-)\n\n> The IDE server is running Linux 2.0.36 with PHP and Apache.\n\n> The SCSI server is running with PostgreSQL 7.0.3 ??\n\n> And you're talking about speed in inserts?\n\n> Try to use a kernel after 2.2.16 or the current 2.4.4 series, they\n> handle memory management rather well.. You should also optimize your\n> kernel depending upon your system's needs... like increasing the total\n> number of process, the amount of memory one application can use.. The\n> maximum number of files.. etc... Most of these are runtime tweaks found\n> at the /proc/sys directory...;-)\n\n> On the PostgreSQL side, you also need to optimize it a little bit...\n> Kindly read the fine manual for this...;-)\n\n> Apache and PHP... well, here's a sort of bottleneck in itself... 
You\n> may need to increase your PHP (PHP4) scripts performance via Zend's\n> Cache, Zend Optimizer, APC, and/or the PHP Smarty Template...\n\n> You can also further increase performance if you have all your compiled\n> PHP scripts under a RAM Disk System...;-)\n\n> But one of the greatest bottlenecks that may not be so obvious... Is a\n> rather simple one...;-) Each time you try to do an insert to your\n> PostgreSQL database from your PHP application, you have to initialize a\n> new connection...;-)\n\n> I don't know how you accessed your server... Via psql? or from an\n> application...;-) And how did you do you INSERTS? Did you use\n> Transactions? Did you enable -F so that PostgreSQL doesn't write to\n> your HDD often???\n\n> Cheers,\n\n\n> John Clark\n\n> Jean-Arthur Silve wrote:\n> >\n> > Hi !\n> >\n> > I have 2 servers running apache, php and PG 7.0.3 running under Linux\n> >\n> > The first one uses IDE disks, and everything run fine and fast. (uses \nlinux\n> > 2.0.36)\n> >\n> > The second one uses SCSI drives with a Adaptec adapter. (uses Linux \n2.2.14)\n> >\n> > My problem is that acces to the DB on this server are very very slow (I\n> > guess that it s particulary slow when i do INSERTs)\n> > It s so slow that even the keyboard and the display slow down ! The HD\n> > write or read a lot...\n> > When I do a \"ps\" or \"top\", I don't see any process taking all the \nressources...\n> >\n> > So, I guess there is something whith the SCSI...\n> >\n> > Is anybody has a clue ??\n> >\n> > thanks !\n> >\n> > jean-arthur\n> > PS : I did a vacuum analyze, and the table are not so big (max tuples are\n> > 5000 )\n> >\n\n> --\n> /) John Clark Naldoza y Lopez (\\\n> / ) Software Design Engineer II ( \\\n> _( (_ _ Web-Application Development _) )_\n> (((\\ \\> /_> Cable Modem Network Management System <_\\ </ /)))\n> (\\\\\\\\ \\_/ / NEC Telecom Software Phils., Inc. \\ \\_/ ////)\n> \\ / \\ /\n> \\ _/ phone: (+63 32) 233-9142 loc. 
3112 \\_ /\n> / / cellphone: (+63 919) 399-4742 \\ \\\n> / / email: njclark@ntsp.nec.co.jp \\ \\\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n", "msg_date": "Wed, 16 May 2001 03:40:40 GMT", "msg_from": "Per-Olof Pettersson <pgsql@peope.net>", "msg_from_op": false, "msg_subject": "Re: Not a PG question: SCSI question" }, { "msg_contents": "On Wed, May 16, 2001 at 03:40:40AM +0000, some SMTP stream spewed forth: \n> Hi\n> \n> Just a little miss I think John did.\n> With PHP you dont have to start a new connection everytime.\n> You can use the good ole pg_pconnect. It is a persistant connection to \n> the server.\n> The server keeps a pool of connections open for fast access.\n\nThis should not be mistaken as connection pooling, per se.\nPersistent connections maintain a PostgreSQL backend for each unique\naccount connecting per httpd process.\n\n3 unique accounts -> 3 backends per httpd.\n10 httpd's -> 30 backends.\n\n> There has been some problems with this so I recommend PHP 4.0.5.\n> It is, among other things, a matter of issuing a rollback at the end of \n> the request on each page.\n\nI second this, or at least patching your sources and reinstalling if you\nplan to move to persistent connections under a non-new version of PHP. I\nhave wasted much first-hand time debugging silly PHP connection messes.\n\nCheers,\ndan\n\n> \n> Best regards\n> Per-Olof Pettersson\n> \n> >>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n> \n> On 2001-05-16, 03:06:36, njclark@ntsp.nec.co.jp (\"John Clark L. Naldoza\") \n> wrote regarding Re: Not a PG question: SCSI question:\n> \n> \n> > Hi,\n> \n> \n> > Let me get this straight...;-)\n*snip*\n> \n> > But one of the greatest bottlenecks that may not be so obvious... 
Is a\n> > rather simple one...;-) Each time you try to do an insert to your\n> > PostgreSQL database from your PHP application, you have to initialize a\n> > new connection...;-)\n> \n*snip*\n> \n> > Cheers,\n> \n> > John Clark\n> \n*snip original message*\n> \n> > --\n> > /) John Clark Naldoza y Lopez (\\\n> > / ) Software Design Engineer II ( \\\n> > _( (_ _ Web-Application Development _) )_\n> > (((\\ \\> /_> Cable Modem Network Management System <_\\ </ /)))\n> > (\\\\\\\\ \\_/ / NEC Telecom Software Phils., Inc. \\ \\_/ ////)\n> > \\ / \\ /\n> > \\ _/ phone: (+63 32) 233-9142 loc. 3112 \\_ /\n> > / / cellphone: (+63 919) 399-4742 \\ \\\n> > / / email: njclark@ntsp.nec.co.jp \\ \\\n\nWow, remember, it is not the size of the .sig that makes the wave; it is\nthe motion of the ocean. Nine lines? pshaw.\n\n", "msg_date": "Tue, 15 May 2001 23:54:44 -0500", "msg_from": "GH <grasshacker@over-yonder.net>", "msg_from_op": false, "msg_subject": "Re: Re: Not a PG question: SCSI question" } ]
[ { "msg_contents": "> > Here is a small list of big TODO items. I was wondering which ones\n> > people were thinking about for 7.2?\n>\n>\n>Other than that, I'm mostly thinking about performance improvements\n>for 7.2, not features\n\nMy wish list includes incorporating the ideas brought forward in this\npost \nhttp://www.ca.postgresql.org/mhonarc/pgsql-hackers/2000-09/msg00513.html \nwhich discusses a patch that allows queries to return data from\nthe index scan directly. In the thread it was noted that this should\nbe optional in that their is a small storage overhead but preliminary\nresults were showing a potential 75X increase in performance. Oracle\nhas a similar index only table. A special index type that includes a\ntid would be really nice and I think worth investigating.\nBest Regards,\nCarl Garland\n_________________________________________________________________\nGet your FREE download of MSN Explorer at http://explorer.msn.com\n\n", "msg_date": "Sun, 13 May 2001 01:25:18 -0400", "msg_from": "\"carl garland\" <carlhgarland@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: 7.2 items" } ]
[ { "msg_contents": "> > Here is a small list of big TODO items. I was wondering which ones\n> > people were thinking about for 7.2?\n>\n>Other than that, I'm mostly thinking about performance improvements\n>for 7.2, not features ... as far as my personal plans go, that is.\n\nThis issue was brought up before as well but after searching the\narchives I couldn't find original post, but it didnt seem to be\naddressed by any core hackers. From the linux kernel mailing lists:\nhttp://www.appwatch.com/lists/linux-kernel/Week-of-Mon-20010305/026326.html\n\n\n>Tridge and I tried out the postgresql benchmark you used here and this\n>contention is due to a bug in postgres. From a quick strace, we found\n>the threads do a load of select(0, NULL, NULL, NULL, {0,0}). Basically all\n>threads are pounding on schedule().\n\n>Our guess is that the app has some form of userspace synchronisation\n>(semaphores/spinlocks). I'd argue that the app needs to be fixed not the\n>kernel, or a more valid test case is put forwards. :)\n\n\nAnd later here \nhttp://www.appwatch.com/lists/linux-kernel/Week-of-Mon-20010305/027408.html \n:\n\n>>Thanks for looking into postgresql/pgbench related locking. Yes, \n>>apparently postgresql uses a synchronization scheme that uses select()\n>>to effect delays for backing off while attempting to acquire a lock.\n>>However, it seems to me that runqueue lock contention was not entirely due \n>>to postgresql code, since it was largely alleviated by the multiqueue \n>>scheduler patch.\n\n>Im not saying that the multiqueue scheduler patch isn't needed, just that\n>this test case is caused by a bug in postgres. 
We shouldn't run around\n>fixing symptoms - dropping the contention in the runqueue lock might not\n>change the overall performance of the benchmark, on the other hand\nfixing the spinlocks in postgres probably will.\n\nMight be worth a look from core members to see if there really are\nissues here, the thread is about 8 msgs.\n\nBest Regards,\nCarl Garland\nOn the other hand, if postgres still pounds on the runqueue lock after\nthe bug has been fixed then we need to look at the multiqueue patch.\n\n\n_________________________________________________________________\nGet your FREE download of MSN Explorer at http://explorer.msn.com\n\n", "msg_date": "Sun, 13 May 2001 04:19:38 -0400", "msg_from": "\"carl garland\" <carlhgarland@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: 7.2 items" } ]
[ { "msg_contents": "Franck Martin wrote:\n> \n> I think OID should be truly unique in the world as to make it easier for\n> replication. If OID are real unique number (not in a table, not in a\n> database, but in the world) then replication can be easily built with\n> OIDs...\n> \n\nExactly! That is what the Mariposa project did - they made OIDs uniqe\nand \nconsisting of 32bit site id + 32bit local OID. I guess this could be\nsplit \nsome other way too, like 20 bit site id + 44bit local or any other.\n\nIMHO the best would be a scheme of 32bit site id + 32bit local, but each \nsite can get additional site ids from some central (for a supersite)\ntable \nwhen it sees that it is near runnig out of oids.\n\n-----------------------\nHannu", "msg_date": "Mon, 14 May 2001 10:30:45 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: Re: 7.2 items" }, { "msg_contents": "I think OID should be truly unique in the world as to make it easier for\nreplication. If OID are real unique number (not in a table, not in a\ndatabase, but in the world) then replication can be easily built with\nOIDs...\n\nCheers.\n\nFranck Martin\nNetwork and Database Development Officer\nSOPAC South Pacific Applied Geoscience Commission\nFiji\nE-mail: franck@sopac.org <mailto:franck@sopac.org> \nWeb site: http://www.sopac.org/\n<http://www.sopac.org/> Support FMaps: http://fmaps.sourceforge.net/\n<http://fmaps.sourceforge.net/> \n\nThis e-mail is intended for its addresses only. Do not forward this e-mail\nwithout approval. The views expressed in this e-mail may not be necessarily\nthe views of SOPAC.\n\n\n\n-----Original Message-----\nFrom: Lincoln Yeoh [mailto:lyeoh@pop.jaring.my]\nSent: Monday, 14 May 2001 3:45 \nTo: Bruce Momjian; PostgreSQL-development\nSubject: [HACKERS] Re: 7.2 items\n\n\nAt 01:20 PM 10-05-2001 -0400, Bruce Momjian wrote:\n>Here is a small list of big TODO items. I was wondering which ones\n>people were thinking about for 7.2?\n>\n>---------------------------------------------------------------------------\n\n\n\n4) Not really important to me but can serial be a proper type or something\nso that drop table will drop the linked sequence as well? \nMaybe:\n serial = old serial for compatibility\n serial4 = new serial\n serial8 = new serial using bigint\n(OK so 2 billion is big, but...)\n\n5) How will the various rollovers be handled e.g. OID, TID etc? What\nhappens if OIDs are not unique? As things get faster and bigger a nonunique\nOID in a table might just happen.\n\nCheerio,\nLink.\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\nsubscribe-nomail command to majordomo@postgresql.org so that your\nmessage can get through to the mailing list cleanly\n", "msg_date": "Mon, 14 May 2001 17:34:02 +1200", "msg_from": "Franck Martin <Franck@sopac.org>", "msg_from_op": false, "msg_subject": "RE: Re: 7.2 items" }, { "msg_contents": "Hannu Krosing wrote:\n\n> Franck Martin wrote:\n> >\n> > I think OID should be truly unique in the world as to make it easier for\n> > replication. If OID are real unique number (not in a table, not in a\n> > database, but in the world) then replication can be easily built with\n> > OIDs...\n> >\n>\n> Exactly! That is what the Mariposa project did - they made OIDs uniqe\n> and\n> consisting of 32bit site id + 32bit local OID. 
I guess this could be\n> split\n> some other way too, like 20 bit site id + 44bit local or any other.\n>\n> IMHO the best would be a scheme of 32bit site id + 32bit local, but each\n> site can get additional site ids from some central (for a supersite)\n> table\n> when it sees that it is near runnig out of oids.\n>\n> -----------------------\n> Hannu\n\nAs I'm thinking about it there is a utility called uuidgen which generates\nsuch numbers.\n\nOn my Mandrake distro it is part of the e2fsprogs package. Orbit uses it to\ngenerate unique numbers too.\n\n--------------\nThe uuidgen program creates a new universally unique identifier (UUID)\nusing the libuuid(3)\n library. The new UUID can reasonably be considered unique among all\nUUIDs created on the\n local system, and among UUIDs created on other systems in the past and\nin the future.\n\n There are two types of UUID's which uuidgen can generate: time-based\nUUID's and random-based\n UUID's. By default uuidgen will generate a random-based UUID if a\nhigh-quality random number\n generator is present. Otherwise, it will chose a time-based UUID.\nIt is possible to force\n the generation of one of these two UUID types by using the -r or -t\noptions.\n\n The UUID of the form 1b4e28ba-2fa1-11d2-883f-b9a761bde3fb (in\nprintf(3) format\n \"%08x-%04x-%04x-%04x-%012x\") is output to the standard output.\n-------------------\n\nCheers.\nFranck@sopac.org\n\n", "msg_date": "Mon, 14 May 2001 23:01:23 +1200", "msg_from": "Franck Martin <franck@sopac.org>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items" }, { "msg_contents": "Franck Martin <franck@sopac.org> writes:\n> The uuidgen program creates a new universally unique identifier (UUID)\n> using the libuuid(3)\n> library. 
The new UUID can reasonably be considered unique among all\n> UUIDs created on the\n> local system, and among UUIDs created on other systems in the past and\n> in the future.\n\n\"Reasonably considered\"?\n\nIn other words, this is a 64-bit random number generator. Sorry, I\nthink the odds of collision would be uncomfortably high if we were to\nuse such a thing for OIDs ... certainly so on installations that are\nworried about running out of 32-bit OIDs. It sounds to me like uuidgen\nis built on the assumption that only relatively small numbers of IDs\nwill be demanded from it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 May 2001 10:39:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items " }, { "msg_contents": "\nFranck Martin wrote:\n\n> I think OID should be truly unique in the world as to make it easier for\n> replication. If OID are real unique number (not in a table, not in a\n> database, but in the world) then replication can be easily built with\n> OIDs...\n\nThe Apache server has a UNIQUE_ID implementation and it is really\nunique in the world, I use it for my web apps. Their implementation\nis really simple an works fine. It is 19 alphanumeric bytes long.\n\nRegards,\n\n\n", "msg_date": "Mon, 14 May 2001 17:03:18 +0200", "msg_from": "Gilles DAROLD <gilles@darold.net>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Franck Martin <franck@sopac.org> writes:\n> > The uuidgen program creates a new universally unique identifier (UUID)\n> > using the libuuid(3)\n> > library. The new UUID can reasonably be considered unique among all\n> > UUIDs created on the\n> > local system, and among UUIDs created on other systems in the past and\n> > in the future.\n> \n> \"Reasonably considered\"?\n> \n> In other words, this is a 64-bit random number generator. 
Sorry, I\n> think the odds of collision would be uncomfortably high if we were to\n> use such a thing for OIDs ... certainly so on installations that are\n> worried about running out of 32-bit OIDs. It sounds to me like uuidgen\n> is built on the assumption that only relatively small numbers of IDs\n> will be demanded from it.\n\nuuidgen with the -t option generates a UUID which includes the current\ntime and the Ethernet hardware address. The value is about as\nglobally unique as it is possible to create in 128 bits. The same\nalgorithm is used by DCE, and a variant is used by DCOM. To be used\nproperly, you need to coordinate on one machine to ensure that\ndifferent processes on that machine don't generate the same UUID.\n\nHere is a description:\n http://www.opengroup.org/onlinepubs/9629399/apdxa.htm#tagcjh_20\n\nIan\n", "msg_date": "14 May 2001 10:21:03 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: Re: 7.2 items" } ]
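Tom's point about collision odds for randomly generated identifiers can be made concrete with the standard birthday-bound approximation (an illustrative sketch added here, not part of the archived thread; it assumes an installation minting on the order of 2^32 ids, i.e. one already worried about exhausting 32-bit OIDs):

```python
import math

def collision_probability(n_ids: int, random_bits: int) -> float:
    """Birthday-paradox approximation: P(>=1 collision) ~= 1 - exp(-n^2 / 2^(bits+1))."""
    # expm1 keeps precision when the probability is tiny
    return -math.expm1(-(n_ids ** 2) / 2 ** (random_bits + 1))

n = 2 ** 32  # an installation that needs more than 32-bit OIDs
print(f"64 random bits : {collision_probability(n, 64):.2f}")   # ~0.39
print(f"122 random bits: {collision_probability(n, 122):.2e}")  # ~1.7e-18
```

At 2^32 draws from a 64-bit random space the chance of at least one clash is about 39%, which matches the "uncomfortably high" assessment; the 122 random bits of a version-4 UUID (or the time-plus-MAC construction Ian describes) avoid the problem.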
[ { "msg_contents": "Hi guys,\n\nI have an experimental piece of code like this (that tries to remove\npg_relcheck entries):\n\ndeleted = 0;\nwhile (HeapTupleIsValid(tup = heap_getnext(rcscan, 0))) {\n\tsimple_heap_delete(rcrel, &tup->t_self);\n\t++deleted;\n\n\t/* Keep catalog indices current */\n\tCatalogOpenIndices(Num_pg_relcheck_indices, Name_pg_relcheck_indices,\n\t\t\t\t\t relidescs);\n\tCatalogIndexInsert(relidescs, Num_pg_relcheck_indices, rcrel, tup);\n\tCatalogCloseIndices(Num_pg_relcheck_indices, relidescs);\n}\n\nWhat do I use instead of the CatalogIndexInsert command to tell the index\nthat a tuple has been removed? I don't see an equivalent CatalogIndexDelete\nfunction...\n\nIs there a better way of doing this?\n\nChris\n\n", "msg_date": "Mon, 14 May 2001 16:41:56 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Updating system catalogs after a tuple deletion" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> What do I use instead of the CatalogIndexInsert command to tell the index\n> that a tuple has been removed?\n\nNothing. The tuple isn't really gone, and neither are its index\nentries. Getting rid of them later is VACUUM's problem.\n\nBTW, there already is code that cleans out pg_relcheck: see\nRemoveRelCheck() in src/backend/catalog/heap.c.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 May 2001 10:36:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Updating system catalogs after a tuple deletion " }, { "msg_contents": "> Nothing. The tuple isn't really gone, and neither are its index\n> entries. 
Getting rid of them later is VACUUM's problem.\n\nSo the piece of code at the bottom of the AddRelationRawConstraints function\nthat adds the tuple to the indices in heap.c is only necessary because it's\n_adding_ a tuple?\n\n> BTW, there already is code that cleans out pg_relcheck: see\n> RemoveRelCheck() in src/backend/catalog/heap.c.\n\nI am aware of the RemoveRelCheck() function - however it seems to be\nsomewhat misnamed as it seems to simply delete all CHECKS on a relation.\n\nAnother question:\n\nHow should I handle RESTRICT/CASCADE? (For CHECK clauses only at the moment)\nSQL99 doesn't seem to be 100% clear on this, but it seems that if the user\nspecifies:\n\n* Nothing: Just drop the constraint, not worrying about dependencies.\n(Perhaps issuing a NOTICE if there are.)\n\n* RESTRICT: Refuse to drop the constraint if it's referred to by another\ncheck constraint (impossible?), or if it's referred to in a function or\ntrigger (how do I check this?) or if a view is dependent on it (impossible?)\n\n* CASCADE: Like RESTRICT, just drop any objects that depend on the\nconstraint. (This is not easy, and I think I will for the time being issue\nan ERROR saying that CASCADE is not implemented.) This is especially\ndifficult since I doubt that DROP FUNCTION x CASCADE or DROP VIEW x CASCADE\nare implemented...?\n\nLastly, inheritance? I plan to leave out worrying about inheritance for\nstarters, especially since it seems that half the constraints when added\ndon't even propagate themselves properly to child tables...\n\nRemarks?\n\nChris\n\n", "msg_date": "Tue, 15 May 2001 10:12:24 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "RE: Updating system catalogs after a tuple deletion " }, { "msg_contents": "On Tue, 15 May 2001, Christopher Kings-Lynne wrote:\n\n> Lastly, inheritance? 
I plan to leave out worrying about inheritance for\n> starters, especially since it seems that half the constraints when added\n> don't even propagate themselves properly to child tables...\n\nActually this brings up a problem I'm having with ALTER TABLE ADD\nCONSTRAINT and since it mostly affects you with DROP CONSTRAINT, I'll\nbring it up here. If you have a table that has check constraints or \nis inherited from multiple tables, what's the correct way to name an\nadded constraint that's being inherited? If it's $2 in the parent,\nbut the child already has a $2 defined, what should be done? The\nreason this affects drop constraint is knowing what to drop in the\nchild. If you drop $2 on the parent, what constraint(s) on the child\nget dropped?\n\n\n", "msg_date": "Mon, 14 May 2001 19:50:17 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "RE: Updating system catalogs after a tuple deletion " }, { "msg_contents": "> > Lastly, inheritance? I plan to leave out worrying about inheritance for\n> > starters, especially since it seems that half the constraints when added\n> > don't even propagate themselves properly to child tables...\n>\n> Actually this brings up a problem I'm having with ALTER TABLE ADD\n> CONSTRAINT and since it mostly affects you with DROP CONSTRAINT, I'll\n> bring it up here. If you have a table that has check constraints or\n> is inherited from multiple tables, what's the correct way to name an\n> added constraint that's being inherited? If it's $2 in the parent,\n> but the child already has a $2 defined, what should be done? The\n> reason this affects drop constraint is knowing what to drop in the\n> child. If you drop $2 on the parent, what constraint(s) on the child\n> get dropped?\n\nI recently had a patch of mine committed to heap.c (rev 1.163->1.164) in the\nAddRelationRawConstraints function that loops to make sure that\nautomatically generated constraint names are unique. 
It seems to me that it\nwould be relatively straightforward to make sure that the constraint name is\nunique in all the inherited tables as well.\n\nI've never messed with inheritance, so I probably won't look at implementing\nthat any time soon...\n\nChris\n\n", "msg_date": "Tue, 15 May 2001 10:56:16 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "RE: Updating system catalogs after a tuple deletion " }, { "msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> Actually this brings up a problem I'm having with ALTER TABLE ADD\n> CONSTRAINT and since it mostly affects you with DROP CONSTRAINT, I'll\n> bring it up here. If you have a table that has check constraints or \n> is inherited from multiple tables, what's the correct way to name an\n> added constraint that's being inherited? If it's $2 in the parent,\n> but the child already has a $2 defined, what should be done? The\n> reason this affects drop constraint is knowing what to drop in the\n> child. If you drop $2 on the parent, what constraint(s) on the child\n> get dropped?\n\nSeems like depending on the name is inadequate. Perhaps a column should\nbe added to pg_relcheck to show that a constraint has been inherited.\nMaybe \"rcinherit\" = OID of parent's equivalent constraint, or 0 if\nconstraint was not inherited. Then you could do the right things\nwithout making any assumptions about constraint names being the same.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 May 2001 23:10:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Updating system catalogs after a tuple deletion " }, { "msg_contents": "At 19:50 14/05/01 -0700, Stephan Szabo wrote:\n>\n>If it's $2 in the parent,\n>but the child already has a $2 defined, what should be done? The\n>reason this affects drop constraint is knowing what to drop in the\n>child. 
If you drop $2 on the parent, what constraint(s) on the child\n>get dropped?\n>\n\nIt is worth considering skipping the entire 'copy to children' approach?\nSomething like:\n\npg_constraints(constraint_id, constraint_name, constraint_details....)\npg_relation_constraints(rel_id, constraint_id)\n\nThen, when we drop constraint 'FRED', the relevant rows of these tables are\ndeleted. There is only ever one copy of the constraint definition.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 15 May 2001 13:24:46 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "RE: Updating system catalogs after a tuple deletion " }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> It is worth considering skipping the entire 'copy to children' approach?\n> Something like:\n> pg_constraints(constraint_id, constraint_name, constraint_details....)\n> pg_relation_constraints(rel_id, constraint_id)\n> Then, when we drop constraint 'FRED', the relevant rows of these tables are\n> deleted. There is only ever one copy of the constraint definition.\n\nThis would work if we abandon the idea that a table cannot have\nmultiple constraints of the same name (which seems like an unnecessary\nrestriction to me anyway).\n\nA small advantage of doing it this way is that it'd be easier to detect\nthe case where the same constraint is multiply inherited from more than\none parent, as in\n\n\ttable P has a constraint\n\n\tC1 inherits from P\n\n\tC2 inherits from P\n\n\tGC1 inherits from C1,C2\n\nCurrently, GC1 ends up with two duplicate constraints, which wastes time\non every insert/update. 
Not a very big deal, perhaps, but annoying.\nIt'd be nice to recognize and remove the extra constraint. (However,\nthe inherited-from link that I proposed a few minutes ago could do that\ntoo, if the link always points at the original constraint and not at the\nimmediate ancestor.)\n\nBTW, any proposed DROP CONSTRAINT algorithm should be examined to make\nsure it doesn't fail on this sort of structure ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 May 2001 23:34:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Updating system catalogs after a tuple deletion " }, { "msg_contents": "At 23:34 14/05/01 -0400, Tom Lane wrote:\n>Philip Warner <pjw@rhyme.com.au> writes:\n>> It is worth considering skipping the entire 'copy to children' approach?\n>> Something like:\n>> pg_constraints(constraint_id, constraint_name, constraint_details....)\n>> pg_relation_constraints(rel_id, constraint_id)\n>> Then, when we drop constraint 'FRED', the relevant rows of these tables are\n>> deleted. There is only ever one copy of the constraint definition.\n...\n>\n>A small advantage of doing it this way is that it'd be easier to detect\n>the case where the same constraint is multiply inherited from more than\n>one parent, as in\n>\n\nOther advantages include:\n\n - easy ALTER TABLE ALTER CONSTRAINT (does it exist?)\n - cleaner pg_dump code\n - possibility to have NULL names for system objects which avoids any\nnamespace corruption.\n\nIt's probably worth adding extra information to the pg_constraints table to\ninclude inform,ation about how it was created (pk, fk, user-defined etc).\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 15 May 2001 14:52:43 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Updating system catalogs after a tuple deletion " }, { "msg_contents": "On Mon, 14 May 2001, Tom Lane wrote:\n\n> Philip Warner <pjw@rhyme.com.au> writes:\n> > It is worth considering skipping the entire 'copy to children' approach?\n> > Something like:\n> > pg_constraints(constraint_id, constraint_name, constraint_details....)\n> > pg_relation_constraints(rel_id, constraint_id)\n> > Then, when we drop constraint 'FRED', the relevant rows of these tables are\n> > deleted. There is only ever one copy of the constraint definition.\n> \n> This would work if we abandon the idea that a table cannot have\n> multiple constraints of the same name (which seems like an unnecessary\n> restriction to me anyway).\n\nI'm not sure it would. You could have two constraint_ids with the same\nname still as long as there's no constraint on constraint_name, both would\npresumably be deleted on a drop. Since rel_id is only part of\npg_relation_constraints you wouldn't want the constraint_name to be forced\nunique (barring the spec definition) anyway, so there'd be nothing to\nprevent you from naming all your constraints FRED, just you'd have a\nbetter way to refer to a particular constraint than its name internally.\n\n", "msg_date": "Mon, 14 May 2001 21:55:31 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Updating system catalogs after a tuple deletion " }, { "msg_contents": "> Actually this brings up a problem I'm having with ALTER TABLE ADD\n> CONSTRAINT and since it mostly affects you with DROP CONSTRAINT, I'll\n> bring it up here. 
If you have a table that has check constraints or\n> is inherited from multiple tables, what's the correct way to name an\n> added constraint that's being inherited? If it's $2 in the parent,\n> but the child already has a $2 defined, what should be done? The\n> reason this affects drop constraint is knowing what to drop in the\n> child. If you drop $2 on the parent, what constraint(s) on the child\n> get dropped?\n\nIt occurs to me that there's a solution to this problem. All you need to do\nis in heap.c in the piece of code I modified earlier for generating\nconstraint names and checking specified ones you simply make sure it is\nunique for the parent table and for ALL its children.\n\nThis will stop people (1) adding named constraints that aren't unique across\nall children, noting that these new constraints need to be added to the\nchildren as well as the parent and (2) dynamically generated constraint\nnames will be unique across all children and also can then be immediately\npropagated to inherited tables.\n\nWith this enforced, surely there is a _guaranteed_ match between the name of\na parent constraint and the same constraint in the inherited tables? The\nonly problem, I guess, would be when you import data from old versions of\nPostgreSQL into a new version that has this assumption/restriction.\n\nChris\n\n", "msg_date": "Mon, 21 May 2001 09:43:33 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "RE: Updating system catalogs after a tuple deletion " }, { "msg_contents": "(This machine still is having trouble with mx records :( )\n\nOn Mon, 21 May 2001, Christopher Kings-Lynne wrote:\n\n> > Actually this brings up a problem I'm having with ALTER TABLE ADD\n> > CONSTRAINT and since it mostly affects you with DROP CONSTRAINT, I'll\n> > bring it up here. 
If you have a table that has check constraints or\n> > is inherited from multiple tables, what's the correct way to name an\n> > added constraint that's being inherited? If it's $2 in the parent,\n> > but the child already has a $2 defined, what should be done? The\n> > reason this affects drop constraint is knowing what to drop in the\n> > child. If you drop $2 on the parent, what constraint(s) on the child\n> > get dropped?\n> \n> It occurs to me that there's a solution to this problem. All you need to do\n> is in heap.c in the piece of code I modified earlier for generating\n> constraint names and checking specified ones you simply make sure it is\n> unique for the parent table and for ALL its children.\n> \n> This will stop people (1) adding named constraints that aren't unique across\n> all children, noting that these new constraints need to be added to the\n> children as well as the parent and (2) dynamically generated constraint\n> names will be unique across all children and also can then be immediately\n> propagated to inherited tables.\n> \n> With this enforced, surely there is a _guaranteed_ match between the name of\n> a parent constraint and the same constraint in the inherited tables? The\n> only problem, I guess, would be when you import data from old versions of\n> PostgreSQL into a new version that has this assumption/restriction.\n\nActually, I realized that in the face of multiple inheritance, dynamically\ngenerated constraint names still fail with our current default naming\nscheme. What happens when two tables both have a $1 and then you inherit\nfrom both of them, at this point it's pretty much too late to rename the\nconstraint on one of the parents and I think right now the constraints get\nnamed $1 and $2. 
Either, we should punt, and make it so they both end up\n$1, or perhaps we should change $1 to something like <table>_$1 where\ntable is the table name of the table on which the constraint was defined.\nSo if you have table1 with an unnamed constraint, it and all of its\nchildren would see the constraint as table1_$1.\n\n\n", "msg_date": "Tue, 22 May 2001 12:43:00 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "RE: Updating system catalogs after a tuple deletion " }, { "msg_contents": "> Actually, I realized that in the face of multiple inheritance, dynamically\n> generated constraint names still fail with our current default naming\n> scheme. What happens when two tables both have a $1 and then you inherit\n> from both of them, at this point it's pretty much too late to rename the\n> constraint on one of the parents and I think right now the constraints get\n> named $1 and $2. Either, we should punt, and make it so they both end up\n> $1, or perhaps we should change $1 to something like <table>_$1 where\n> table is the table name of the table on which the constraint was defined.\n> So if you have table1 with an unnamed constraint, it and all of its\n> children would see the constraint as table1_$1.\n\nEven if we implemented this, it wouldn't fix the problem of duplicated user\nspecified constraint names under multiple inheritance. It seems a many-many\npg_constraint table it the only clean solution...\n\nChris\n\n", "msg_date": "Wed, 23 May 2001 10:41:41 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "RE: Updating system catalogs after a tuple deletion " }, { "msg_contents": "\nOn Wed, 23 May 2001, Christopher Kings-Lynne wrote:\n\n> > Actually, I realized that in the face of multiple inheritance, dynamically\n> > generated constraint names still fail with our current default naming\n> > scheme. 
What happens when two tables both have a $1 and then you inherit\n> > from both of them, at this point it's pretty much too late to rename the\n> > constraint on one of the parents and I think right now the constraints get\n> > named $1 and $2. Either, we should punt, and make it so they both end up\n> > $1, or perhaps we should change $1 to something like <table>_$1 where\n> > table is the table name of the table on which the constraint was defined.\n> > So if you have table1 with an unnamed constraint, it and all of its\n> > children would see the constraint as table1_$1.\n> \n> Even if we implemented this, it wouldn't fix the problem of duplicated user\n> specified constraint names under multiple inheritance. It seems a many-many\n> pg_constraint table it the only clean solution...\n\nI'm not sure that there is a workable solution for user specified names\nwithout going the constraint names should be unique throughout solution\n(which Tom doesn't want, and actually neither do I really even though I\nbring it up as a compliance issue). I think that users will have to be\nassumed to be smart enough not to screw themselves up with badly named\nconstraints.\n\nWe definately need better storage of our constraints. I liked the\nconstraint is stored once with pointers from referencing tables\nidea.\n\n", "msg_date": "Tue, 22 May 2001 19:52:49 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "RE: Updating system catalogs after a tuple deletion " } ]
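Tom's suggestion above — an inherited-from link that always points at the *original* constraint rather than the immediate ancestor — would make the multiply-inherited duplicate in the P/C1/C2/GC1 example detectable. A rough sketch of that de-duplication idea (hypothetical Python illustration only; the `oid` and `inherited_from` names stand in for the proposed rcinherit-style catalog columns):

```python
def dedup_inherited(constraints):
    """Keep one copy per originating constraint.

    inherited_from is None (cf. a proposed rcinherit = 0) for a constraint
    defined directly on the table; otherwise it names the original
    constraint it descends from.
    """
    seen, kept = set(), []
    for c in constraints:
        origin = c["inherited_from"] if c["inherited_from"] is not None else c["oid"]
        if origin not in seen:
            seen.add(origin)
            kept.append(c)
    return kept

# GC1 receives P's check twice, once via C1 and once via C2:
gc1_checks = [
    {"oid": 301, "name": "$1", "inherited_from": 100},   # via C1
    {"oid": 302, "name": "$1", "inherited_from": 100},   # via C2, redundant
    {"oid": 303, "name": "$2", "inherited_from": None},  # GC1's own constraint
]
print([c["oid"] for c in dedup_inherited(gc1_checks)])  # [301, 303]
```

Keying on the originating constraint also sidesteps the name-matching problems discussed above, since two unrelated `$1` constraints never share an origin.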
[ { "msg_contents": "We found a little bug in gist code. Will do patch for 7.1.2 tomorrow.\nDo I have a time ?\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 14 May 2001 18:07:14 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "is't late to submit patch for 7.1.2 release ?" }, { "msg_contents": "\nI think so. I have heard no 7.1.2 date discussion yet.\n\n> We found a little bug in gist code. Will do patch for 7.1.2 tomorrow.\n> Do I have a time ?\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 14 May 2001 11:23:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: is't late to submit patch for 7.1.2 release ?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I think so. 
I have heard no 7.1.2 date discussion yet.\n\nSoon, I hope --- but we have a nasty EvalPlanQual bug to fix also,\nso it's not going to be today ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 May 2001 12:04:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: is't late to submit patch for 7.1.2 release ? " } ]
[ { "msg_contents": "Bruce Momjian wrote:\n> \n> Can someone tell me what we use indislossy for? \n\nIIRC it means that if you get something by this index you must check\nagain in the actual data \n\nI think that at least the GIST intarray (actually intset) methods use\nit.\n\nSo you probably should _not_ remove it ;)\n\n------------------\nHannu\n", "msg_date": "Mon, 14 May 2001 23:34:12 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: pg_index.indislossy" }, { "msg_contents": "Can someone tell me what we use indislossy for? The only comment I\ncould find was:\n\n\t/*\n\t * WITH clause reinstated to handle lossy indices. -- JMH, 7/22/96\n\t */\n\tforeach(pl, parameterList)\n\t{\n\t\tDefElem *param = (DefElem *) lfirst(pl);\n\n\t\tif (!strcasecmp(param->defname, \"islossy\"))\n\t\t\tlossy = true;\n\t\telse\n\t\t\telog(NOTICE, \"Unrecognized index attribute \\\"%s\\\" ignored\",\n\t\t\t\t param->defname);\n\t}\n\nand\n\n bool indislossy; /* do we fetch false tuples (lossy\n * compression)? */\n\nShould I remove this column?\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 14 May 2001 16:35:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "pg_index.indislossy" }, { "msg_contents": "On Mon, 14 May 2001, Bruce Momjian wrote:\n\n> > Bruce Momjian wrote:\n> > >\n> > > Can someone tell me what we use indislossy for?\n> >\n> > IIRC it means that if you get something by this index you must check\n> > again in the actual data\n> >\n> > I think that at least the GIST intarray (actually intset) methods use\n> > it.\n> >\n> > So you probably should _not_ remove it ;)\n>\n> I did a search and found it used only a few places. I do not see it\n> used as part of GIST. 
My rememberance is that it is involved in partial\n> indexes, where you index only certain values in a column. It was an old\n> idea that was never working in PostgreSQL.\n\nLet's avoid removing things for the sake of removing them ... might be an\nold idea that, if someone takes the time to research, might prove useful\n...\n\n\n", "msg_date": "Mon, 14 May 2001 18:33:02 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: pg_index.indislossy" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > Can someone tell me what we use indislossy for? \n> \n> IIRC it means that if you get something by this index you must check\n> again in the actual data \n> \n> I think that at least the GIST intarray (actually intset) methods use\n> it.\n> \n> So you probably should _not_ remove it ;)\n\nI did a search and found it used only a few places. I do not see it\nused as part of GIST. My rememberance is that it is involved in partial\nindexes, where you index only certain values in a column. It was an old\nidea that was never working in PostgreSQL.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 14 May 2001 17:49:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_index.indislossy" }, { "msg_contents": "> > > I think that at least the GIST intarray (actually intset) methods use\n> > > it.\n> > >\n> > > So you probably should _not_ remove it ;)\n> >\n> > I did a search and found it used only a few places. I do not see it\n> > used as part of GIST. My rememberance is that it is involved in partial\n> > indexes, where you index only certain values in a column. 
It was an old\n> > idea that was never working in PostgreSQL.\n> \n> Let's avoid removing things for the sake of removing them ... might be an\n> old idea that, if someone takes the time to research, might prove useful\n> ...\n\nYea, there is actually some code attached to this vs. the others that\nhad no code at all. Are we ever going to do partial indexes? I guess\nthat is the question.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 14 May 2001 19:25:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_index.indislossy" }, { "msg_contents": "> > Let's avoid removing things for the sake of removing them ... might be an\n> > old idea that, if someone takes the time to research, might prove useful\n> > ...\n> \n> Yea, there is actually some code attached to this vs. the others that\n> had no code at all. Are we ever going to do partial indexes? I guess\n> that is the question.\n\nOne problem with keeping it is that interface coders are getting\nconfused by some of the unused system table columns, assuming they mean\nsomething, when in fact they don't. Both ODBC and JDBC have had this\nproblem that I fixed today.\n\nMaybe the best solution is to mark the code as NOT_USED and remove the\ncolumn. That way, the code stays around but no one sees it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 14 May 2001 19:40:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_index.indislossy" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Should I remove this column?\n\nNO. The fact that we don't currently have any lossy index types doesn't\nmean that we won't ever have them again. I've been wondering, for\nexample, whether the frequently heard requests for case-insensitive\nindexes might not be answerable by implementing 'em as lossy indexes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 May 2001 19:53:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_index.indislossy " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Should I remove this column?\n> \n> NO. The fact that we don't currently have any lossy index types doesn't\n> mean that we won't ever have them again. I've been wondering, for\n> example, whether the frequently heard requests for case-insensitive\n> indexes might not be answerable by implementing 'em as lossy indexes.\n\nIs lossy and partial indexes the same? I can't see how they were\nsupposed to be used. The _only_ mention of this field I see in\nindexcmd.c, around line 132. All the other mentions are just passing\naround parameters.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 14 May 2001 19:57:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_index.indislossy" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Should I remove this column?\n> \n> NO. 
The fact that we don't currently have any lossy index types doesn't\n> mean that we won't ever have them again. I've been wondering, for\n> example, whether the frequently heard requests for case-insensitive\n> indexes might not be answerable by implementing 'em as lossy indexes.\n\nI see it now. It is just looking for a flag in the index creation:\n\n\t\tif (!strcasecmp(param->defname, \"islossy\"))\n\nThat is the extent of the lossy code. There is optimizer code checking\nfor it. Seems an ifdef NOT_USED around those would document what needs\nto be done, and document it doesn't do anything now, or I can just leave\nit alone.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 14 May 2001 20:02:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_index.indislossy" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Is lossy and partial indexes the same?\n\nNo.\n\n> I can't see how they were\n> supposed to be used. The _only_ mention of this field I see in\n> indexcmd.c, around line 132.\n\nYou missed a few then --- the most critical being in createplan.c.\n\nAFAIK this is a working feature, and I am going to insist that you\nkeep your hands off it ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 May 2001 20:13:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_index.indislossy " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is lossy and partial indexes the same?\n> \n> No.\n> \n> > I can't see how they were\n> > supposed to be used. 
The _only_ mention of this field I see in\n> > indexcmd.c, around line 132.\n> \n> You missed a few then --- the most critical being in createplan.c.\n> \n> AFAIK this is a working feature, and I am going to insist that you\n> keep your hands off it ...\n\nReally, it actually works? What are they?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 14 May 2001 20:24:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_index.indislossy" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> AFAIK this is a working feature, and I am going to insist that you\n>> keep your hands off it ...\n\n> Really, it actually works? What are they?\n\nRead create_indexscan_plan().\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 May 2001 20:27:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_index.indislossy " }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Is lossy and partial indexes the same?\n> >\n> > No.\n> >\n> > > I can't see how they were\n> > > supposed to be used. The _only_ mention of this field I see in\n> > > indexcmd.c, around line 132.\n> >\n> > You missed a few then --- the most critical being in createplan.c.\n> >\n> > AFAIK this is a working feature, and I am going to insist that you\n> > keep your hands off it ...\n> \n> Really, it actually works? What are they?\n\n\nFrom the readme of contrib/intarray/README.intarray\n\nThis is an implementation of RD-tree data structure using GiST interface\nof PostgreSQL. It has built-in lossy compression - must be declared\nin index creation - with (islossy). 
Current implementation provides\nindex\nsupport for one-dimensional array of int4's - gist__int_ops, suitable\nfor\nsmall and medium size of arrays (used on default), and gist__intbig_ops\nfor\nindexing large arrays (we use superimposed signature with length of 4096\nbits to represent sets).\n\n-----------------------\nHannu\n", "msg_date": "Tue, 15 May 2001 09:28:08 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: pg_index.indislossy" }, { "msg_contents": "GiST certainly use it ! Please don't remove.\nGiST uses compression/decompression technique which could\nproduce false drops.\n\n\tOleg\nOn Mon, 14 May 2001, Bruce Momjian wrote:\n\n> Can someone tell me what we use indislossy for? The only comment I\n> could find was:\n>\n> \t/*\n> \t * WITH clause reinstated to handle lossy indices. -- JMH, 7/22/96\n> \t */\n> \tforeach(pl, parameterList)\n> \t{\n> \t\tDefElem *param = (DefElem *) lfirst(pl);\n>\n> \t\tif (!strcasecmp(param->defname, \"islossy\"))\n> \t\t\tlossy = true;\n> \t\telse\n> \t\t\telog(NOTICE, \"Unrecognized index attribute \\\"%s\\\" ignored\",\n> \t\t\t\t param->defname);\n> \t}\n>\n> and\n>\n> bool indislossy; /* do we fetch false tuples (lossy\n> * compression)? */\n>\n> Should I remove this column?\n>\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 15 May 2001 12:04:14 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: pg_index.indislossy" }, { "msg_contents": "> GiST certainly use it ! Please don't remove.\n> GiST uses compression/decompression technique which could\n> produce false drops.\n\nNo problem. It is unchanged. 
I was just asking.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 15 May 2001 09:50:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_index.indislossy" } ]
[ { "msg_contents": "Hi,\n\nI see that a lot of people have the same problem and have posted questions\nbut I couldn't find an answer on the web.\n\nI have Postgresql 6.5.3 running on Debian Linux.\nAnd I use the iODBC driver manager 2.50.3 to make the ODBC calls.\n\nAm not able to insert data into Postgres tables across ODBC.\nI can do it using psql though.\n\nodbc.ini has an entry saying readonly = no.\nAlso from the C++ program, I set the attribute of connection object as\nwritable explicitly.\nIf I ping the connection object it says it's a writable connection.\nAnd immediately I have a \nconnection.prepareStatement() \ncall that does an INSERT.\nWhen this line is executed I get an error saying \n\"Connection is readonly, only select statements are allowed.\"\n\nThis error msg comes WITHOUT the [ODBC] tag.\n\nAgain when I catch{} the exception, I ping the connection object.\nIt's still a writable connection.\n\nI should also mention that I use FreeODBC++ wrapper.\nBut I've searched through their source code and such an error message is not\ndefined in .h or .cpp files\n\nI have tried to connect using the postgres superuser as well. \nI have also tried to change my 'pg_hba.conf' file to allow connection as\ntrust for all.\nThis table has a primary key and permissions GRANTed to public for all\noperations.\n\nAppreciate your help.\n\nThanx,\nNisha", "msg_date": "Mon, 14 May 2001 12:31:24 -0700", "msg_from": "\"Nisha Srinivasan\" <nisha@arch.sel.sony.com>", "msg_from_op": true, "msg_subject": "error on INSERT - connection is read only " } ]
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOk, since a release date (or even the fact of a release) for a v7.1.2 hasn't \nbeen set, I've gone ahead and uploaded the 7.1.1 set. There are a few \nchanges for this release, the most visible of which is the release numbering. \nI've gone to a system where the RPM release number will have a '.PGDG' \nappended -- this tags our RPM's versus the Polished Linux Distribution's \nRPM's, for example (the PLD's RPMset has a 'pl' after its release, but it \nwas a handy example). Further, our RPM's will have a Vendor: of 'PostgreSQL \nGlobal Development Group'. The next release will be cryptographically signed \nby me, once I learn the steps.\n\nftp://ftp.postgresql.org/pub/binary/v7.1.1/RPMS/ is the place, and \nREADME.rpm-dist is the first file you want to read.\n\nA minimal PostgreSQL server installation will have the main postgresql RPM as \nwell as the postgresql-libs and postgresql-server RPM's installed. You need \nto install all three on one command line, or install postgresql-libs first.\n\nBinary RPMs for Red Hat's 6.2 and 7.1 are uploaded or uploading right as I \ncompose this message. The source RPM is already up.\n\nEnjoy!\n- --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.4 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD4DBQE7AD4C5kGGI8vV9eERAraSAJjnQngeubzXMU7hLebgWISyftMDAJ9HjkaV\nHTGBV02eNLEw+3WBg9iHMQ==\n=CLrU\n-----END PGP SIGNATURE-----\n", "msg_date": "Mon, 14 May 2001 16:20:13 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": true, "msg_subject": "7.1.1-2.PGDG RPMset." }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Monday 14 May 2001 16:20, Lamar Owen wrote:\n> ftp://ftp.postgresql.org/pub/binary/v7.1.1/RPMS/ is the place, and\n> README.rpm-dist is the first file you want to read.\n\nWell, as others have seen, the main ftp server is asking for a username and \npassword. 
Thus, use one of the mirrors.\n\n> Binary RPMs for Red Hat's 6.2 and 7.1 are uploaded or uploading right as I\n> compose this message. The source RPM is already up.\n\nAlso, the pl/pgsql patch is NOT applied in this release.\n- --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.4 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7AK1e5kGGI8vV9eERApR4AKCjP0jYc/+8wMjL9zmJd/ZL8swZIwCg3zO9\ntXLkY+v/VZH6Q+Q4f0qkifM=\n=InMY\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 15 May 2001 00:15:23 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": true, "msg_subject": "Re: [PORTS] 7.1.1-2.PGDG RPMset." } ]
[ { "msg_contents": "> Hi,\n> \n> l see that a lot of people have the same problem and have posted questions\n> but I couldn't find an answer on the web.\n> \n> I have Postgresql 6.5.3 running on Debian Linux.\n> And I use the iODBC driver manager 2.50.3 to make the ODBC calls.\n> \n> Am not able to insert data into Postgres tables across ODBC.\n> I can do it using psql though.\n> \n> odbc.ini has an entry saying readonly = no.\n> Also from the C++ program, I set the attribute of connection object as\n> writable explicitly.\n> If I ping the the connection object it says it's a writable connection.\n> And immediately i have a \n> connection.prepareStatement() \n> call that does an INSERT.\n> When this line is executed I get an error saying \n> \"Connection is readonly, only select statements are allowed.\"\n> \n> This error msg comes WITHOUT the [ODBC] tag.\n> \n> Again when I catch{} the exception, I ping the connection object.\n> It's still a writable connection.\n> \n> I should also mention that I use FreeODBC++ wrapper.\n> But I've searched through their source code and such an error message is\n> not defined in .h or .cpp files\n> \n> I have tried to connect using the postgres superuser as well. 
\n> I have also tried to change my 'pg_hba.conf' file to allow connection as\n> trust for all.\n> This table has a primary key and permisions GRANTed to public for all\n> operations.\n> \n> Apprecitate your help.\n> \n> Thanx,\n> Nisha", "msg_date": "Mon, 14 May 2001 13:35:20 -0700", "msg_from": "\"Nisha Srinivasan\" <nisha@arch.sel.sony.com>", "msg_from_op": true, "msg_subject": "error on INSERT - connection is read only " }, { "msg_contents": "Nisha Srinivasan wrote:\n> \n> > Hi,\n> >\n> > l see that a lot of people have the same problem and have posted questions\n> > but I couldn't find an answer on the web.\n> >\n> > I have Postgresql 6.5.3 running on Debian Linux.\n> > And I use the iODBC driver manager 2.50.3 to make the ODBC calls.\n> >\n> > Am not able to insert data into Postgres tables across ODBC.\n> > I can do it using psql though.\n> >\n> > odbc.ini has an entry saying readonly = no.\n\nI may be off the point because I'm not an user of psqlodbc\ndriver under unix. There seems to be several restrictions\nto describe odbc.ini though I don't know the reason.\n\n1) Do you mean ~/.odbc.ini(user DSN?) by odbc.ini ?\n2) Don't you have tabs between 'readonly' and '=' ?\n3) Current driver is case sensitive in reading odbc.ini\n and 'readonly' must be 'ReadOnly'.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 15 May 2001 13:15:02 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: error on INSERT - connection is read only" } ]
[ { "msg_contents": "Hi, got an error while sending this..so resending. \n\n> Hi,\n> l see that a lot of people have the same problem and have posted questions\n> but I couldn't find an answer on the web.\n> \n> I have Postgresql 6.5.3 running on Debian Linux.\n> And I use the iODBC driver manager 2.50.3 to make the ODBC calls.\n> \n> Am not able to insert data into Postgres tables across ODBC.\n> I can do it using psql though.\n> \n> odbc.ini has an entry saying readonly = no.\n> Also from the C++ program, I set the attribute of connection object as\n> writable explicitly.\n> If I ping the the connection object it says it's a writable connection.\n> And immediately i have a \n> connection.prepareStatement() \n> call that does an INSERT.\n> When this line is executed I get an error saying \n> \"Connection is readonly, only select statements are allowed.\"\n> \n> This error msg comes WITHOUT the [ODBC] tag.\n> \n> Again when I catch{} the exception, I ping the connection object.\n> It's still a writable connection.\n> \n> I should also mention that I use FreeODBC++ wrapper.\n> But I've searched through their source code and such an error message is\n> not defined in .h or .cpp files\n> \n> I have tried to connect using the postgres superuser as well. \n> I have also tried to change my 'pg_hba.conf' file to allow connection as\n> trust for all.\n> This table has a primary key and permisions GRANTed to public for all\n> operations.\n> \n> Apprecitate your help.\n> \n> Thanx,\n> Nisha", "msg_date": "Mon, 14 May 2001 16:43:00 -0700", "msg_from": "\"Nisha Srinivasan\" <nisha@arch.sel.sony.com>", "msg_from_op": true, "msg_subject": "error on INSERT - connection is read only " } ]
[ { "msg_contents": "\nI can easily have pg_index.indisclustered updated to 'true' if you ever\nCLUSTER the index. Is that useful to anyone? Remember, clustering\ndoesn't remain if you modify the table.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 14 May 2001 21:26:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "pg_index.isclustered can work" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I can easily have pg_index.indisclustered updated to 'true' if you ever\n> CLUSTER the index. Is that useful to anyone? Remember, clustering\n> doesn't remain if you modify the table.\n\nI don't see any value in it as long as CLUSTER is in the disreputable\nshape it's in. I don't really like giving people the impression that\nCLUSTER is a supported operation ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 May 2001 22:29:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_index.isclustered can work " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I can easily have pg_index.indisclustered updated to 'true' if you ever\n> > CLUSTER the index. Is that useful to anyone? Remember, clustering\n> > doesn't remain if you modify the table.\n> \n> I don't see any value in it as long as CLUSTER is in the disreputable\n> shape it's in. 
I don't really like giving people the impression that\n> CLUSTER is a supported operation ;-)\n\nOK, I have an idea!\n\n1) Set pg_index.indisclustered during CLUSTER\n2) Clear pg_index.indisclustered during vacuum if any tuples are expired\n3) or, have vacuum auto-CLUSTER the table as part of vacuum\n4) Use pg_index.indisclustered in the optimizer\n\nOf course, this assumes we have all the CLUSTER problems fixed.\n\nFYI, we now have a CLUSTER section in the TODO list which says:\n\n* CLUSTER\n * cluster all tables at once\n * prevent lose of indexes, permissions, inheritance\n * Automatically keep clustering on a table\n * Keep statistics about clustering, perhaps during VACUUM ANALYZE\n [optimizer]\n\nDoesn't look too bad.\n\nFYI, the reference to pg_index.indisclustered in ODBC was assuming it\nmeant it was a hash index, which is just plain wrong, so that code is\nnot coming back.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 14 May 2001 22:51:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_index.isclustered can work" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n[snip]\n\n> \n> FYI, the reference to pg_index.indisclustered in ODBC was assuming it\n> meant it was a hash index,\n\nHmm where could I see it ?\n\n> which is just plain wrong, so that code is\n> not coming back.\n> \n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 15 May 2001 19:36:53 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: pg_index.isclustered can work" }, { "msg_contents": "[ Charset US-ASCII unsupported, converting... 
]\n> Bruce Momjian wrote:\n> > \n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> [snip]\n> \n> > \n> > FYI, the reference to pg_index.indisclustered in ODBC was assuming it\n> > meant it was a hash index,\n> \n> Hmm where could I see it ?\n> \n> > which is just plain wrong, so that code is\n> > not coming back.\n> > \n\nIt is in info.c, SQLStatistics():\n\n /*\n * Clustered index? I think non-clustered should be type\n * OTHER not HASHED\n */\n set_tuplefield_int2(&row->tuple[6], (Int2) (atoi(isclustered) ? \n\t\tSQL_INDEX_CLUSTERED : SQL_INDEX_OTHER));\n\nThe HASH mention has me confused. Is that code valid? Maybe so. What\ndoes ODBC think the column means, HASH or CLUSTER?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 15 May 2001 09:54:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_index.isclustered can work" }, { "msg_contents": "[ Charset US-ASCII unsupported, converting... ]\n> Bruce Momjian wrote:\n> > \n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> [snip]\n> \n> > \n> > FYI, the reference to pg_index.indisclustered in ODBC was assuming it\n> > meant it was a hash index,\n> \n> Hmm where could I see it ?\n> \n> > which is just plain wrong, so that code is\n> > not coming back.\n> > \n\nI now think the original ODBC code was right. It has defined as\npossible values:\n\t\n\t#define SQL_TABLE_STAT 0\n\t#define SQL_INDEX_CLUSTERED 1\n\t#define SQL_INDEX_HASHED 2\n\t#define SQL_INDEX_OTHER 3\n\nNot sure what SQL_TABLE_STAT is for, perhaps we should flag for\npg_statistics? Anyway, the test of the flag looks correct to me. 
Why\nthey would care only about HASH and CLUSTERED, I don't know.\n\nI will restore the code, and fix the HASH while I am at it.\n\nOf course, the cluster field is still alway false, but it will be ready\nif we ever get it working.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 15 May 2001 10:34:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_index.isclustered can work" } ]
[ { "msg_contents": "At 19:50 14/05/01 -0700, Stephan Szabo wrote:\n>\n>If it's $2 in the parent,\n>but the child already has a $2 defined, what should be done? The\n>reason this affects drop constraint is knowing what to drop in the\n>child. If you drop $2 on the parent, what constraint(s) on the child\n>get dropped?\n\nAFAIK, it is not possible to derive this. pg_dump makes the assumption that\nif the constraint source is the same, and both names start with '$', then\nit is inherited.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 15 May 2001 13:26:55 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": true, "msg_subject": "RE: Updating system catalogs after a tuple deletion " } ]
[ { "msg_contents": "I know Tom has talked about doing schemas for 7.2. I have an idea.\n\nI did temp tables by doing the temp table mapping as part of cache\nlookups. Though this seems like a strange idea, the cache is the\ncentral location for name/tuple lookups, and is a natural place for\nother mappings, perhaps even SCHEMA.\n\nComments?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 15 May 2001 00:40:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "SCHEMA idea" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I know Tom has talked about doing schemas for 7.2. I have an idea.\n> I did temp tables by doing the temp table mapping as part of cache\n> lookups. Though this seems like a strange idea, the cache is the\n> central location for name/tuple lookups, and is a natural place for\n> other mappings, perhaps even SCHEMA.\n\nWell, actually, I hope that one of the side effects of implementing\nreal schemas is that the current hack for temp tables goes away ;-)\n\nThe problem with the temp table mechanism is that its state is not\nvisible: you can't see the logical names of temp tables in pg_class,\nyou can't find out the mapping to real table names, etc. In a proper\nschema implementation, all that stuff *will* be in system catalogs\nwhere people can query it.\n\nI'm hoping that temp tables will be reimplemented as a per-backend\nschema that sits at the front of the search path for table names.\nBut I haven't looked yet to see what's involved in making that happen.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 May 2001 01:26:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SCHEMA idea " } ]
[ { "msg_contents": "Hi all,\n\nCan't for the life of me figure out the problem here:\n\nCREATE TABLE \"b\" (\n \"id\" bigint,\n \"string\" text\n);\n\nCREATE INDEX \"b_pkey\" on \"b\" using btree ( \"id\" \"int8_ops\" );\n\nGiven 2000 tuples in b, vacuum verbose analyze:\n\ntest=# vacuum verbose analyze b;\nNOTICE: --Relation b--\nNOTICE: Pages 13: Changed 0, reaped 0, Empty 0, New 0; Tup 2002: Vac 0,\nKeep/VTL 0/0, Crash 0, UnUsed 0, MinLen 48, MaxLen\n48; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU\n0.00s/0.01u sec.\nNOTICE: Index b_pkey: Pages 12; Tuples 2002. CPU 0.00s/0.03u sec.\nNOTICE: --Relation pg_toast_2140890--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0,\nKeep/VTL 0/0, Crash 0, UnUsed 0, MinLen 0, MaxLen\n0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU\n0.00s/0.00u sec.\nNOTICE: Index pg_toast_2140890_idx: Pages 1; Tuples 0. CPU 0.00s/0.00u\nsec.\nNOTICE: Analyzing...\nVACUUM\n\n\nSo, a select on b as follows:\n\nSELECT * FROM b WHERE id=1;\n\nshould not have an EXPLAIN like this:\n\ntest=# explain verbose select * from b where id=1;\nNOTICE: QUERY DUMP:\n\n{ SEQSCAN :startup_cost 0.00 :total_cost 38.02 :rows 2 :width 20\n:qptargetlist ({ TARGETENTRY :resdom { RESDOM :resno 1 :restype 20\n:restypmod -1 :resname id :reskey 0 :reskeyop 0 :ressortgroupref 0\n:resjunk false } :expr { VAR :varno 1 :varattno 1 :vartype 20 :vartypmod\n-1 :varlevelsup 0 :varnoold 1 :varoattno 1}} {\nTARGETENTRY :resdom { RESDOM :resno 2 :restype 25 :restypmod -1 :resname\nstring\n:reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR\n:varno 1 :varattno 2 :vartype 25 :vartypmod -1 :varlevelsup 0 :varnoold 1\n:varoattno 2}}) :qpqual ({ EXPR :typeOid 16 :opType op :oper { OPER :opno\n416 :opid 474 :opresulttype 16 } :args ({ VAR :varno 1 :varattno 1\n:vartype 20 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 1} {\nCONST :consttype 23 :constlen 4 :constbyval true :constisnull false\n:constvalue 4 [ 
1 0 0 0 ] })}) :lefttree <> :righttree <> :extprm\n() :locprm () :initplan <> :nprm 0 :scanrelid 1 }\nNOTICE: QUERY PLAN:\n\nSeq Scan on b (cost=0.00..38.02 rows=2 width=20)\n\nversion is 7.1.\n\nThanks\n\nGavin\n\n", "msg_date": "Tue, 15 May 2001 14:58:46 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": true, "msg_subject": "optimiser problem" }, { "msg_contents": "\nOn Tue, 15 May 2001, Gavin Sherry wrote:\n\n> Hi all,\n> \n> Can't for the life of me figure out the problem here:\n> \n> CREATE TABLE \"b\" (\n> \"id\" bigint,\n> \"string\" text\n> );\n> \n> CREATE INDEX \"b_pkey\" on \"b\" using btree ( \"id\" \"int8_ops\" );\n\nBecause of a problem with the typing of int constants, you'll need\nto explicitly cast your constant into an int8 in order to use the \nindex (where id=1::int8).\n\n> So, a select on b as follows:\n> \n> SELECT * FROM b WHERE id=1;\n\n\n", "msg_date": "Mon, 14 May 2001 22:12:16 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: optimiser problem" }, { "msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> CREATE TABLE \"b\" (\n> \"id\" bigint,\n> \"string\" text\n> );\n\n> SELECT * FROM b WHERE id=1;\n\nTry \"WHERE id = 1::bigint\".\n\n(Hey Bruce, is there anything about this in the FAQ?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 May 2001 01:31:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: optimiser problem " }, { "msg_contents": "> Gavin Sherry <swm@linuxworld.com.au> writes:\n> > CREATE TABLE \"b\" (\n> > \"id\" bigint,\n> > \"string\" text\n> > );\n> \n> > SELECT * FROM b WHERE id=1;\n> \n> Try \"WHERE id = 1::bigint\".\n> \n> (Hey Bruce, is there anything about this in the FAQ?)\n\nNo, there is something in the TODO but not the FAQ. 
Can you give me\nsome text?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 15 May 2001 10:00:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: optimiser problem" } ]
[ { "msg_contents": "Forgot to add that naturally, I have VACUUM ANALYZE'd.\n\nGavin\n\n\n", "msg_date": "Tue, 15 May 2001 15:06:36 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": true, "msg_subject": "Re: optimiser problem" } ]
[ { "msg_contents": "Wouldn't it be nice if there were a system table that linked the users with \ntheir respective group, instead of using an array field in the group table? \nIt would be much easier to search for the group(s) that a given user \nis in.\n\nSaludos... :-)\n\n-- \nCualquier administra un NT.\nEse es el problema, que cualquier adminstre.\n-----------------------------------------------------------------\nMartin Marques | mmarques@unl.edu.ar\nProgramador, Administrador | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Tue, 15 May 2001 11:26:04 +0300", "msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>", "msg_from_op": true, "msg_subject": "7.2 wish list (big change)" } ]
[ { "msg_contents": "Hi,\n\nwe found a problem in GiST with massive insert/update operations\nwith many NULLs ( inserting of NULL into indexed field cause\nERROR: MemoryContextAlloc: invalid request size)\nAs a workaround 'vacuum analyze' could be used.\n\nThis patch resolves the problem, please upply to 7.1.1 sources and\ncurrent cvs tree.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83", "msg_date": "Tue, 15 May 2001 12:19:29 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "please apply patch for GiST (7.1.1, current cvs)" }, { "msg_contents": "\nApplied to 7.1.1 and 7.2. Thanks.\n\n> Hi,\n> \n> we found a problem in GiST with massive insert/update operations\n> with many NULLs ( inserting of NULL into indexed field cause\n> ERROR: MemoryContextAlloc: invalid request size)\n> As a workaround 'vacuum analyze' could be used.\n> \n> This patch resolves the problem, please upply to 7.1.1 sources and\n> current cvs tree.\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 15 May 2001 10:14:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: please apply patch for GiST (7.1.1, current cvs)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Applied to 7.1.1 and 7.2. Thanks.\n\nYou seem to have missed the REL7_1_STABLE branch ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 May 2001 11:02:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: please apply patch for GiST (7.1.1, current cvs) " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Applied to 7.1.1 and 7.2. Thanks.\n> \n> You seem to have missed the REL7_1_STABLE branch ...\n\nGot it now. Not sure how I missed the apply the first pass. Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 15 May 2001 13:09:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: please apply patch for GiST (7.1.1, current cvs)" } ]
[ { "msg_contents": "\n> > Let's avoid removing things for the sake of removing them ... might be an\n> > old idea that, if someone takes the time to research, might prove useful\n> > ...\n> \n> Yea, there is actually some code attached to this vs. the others that\n> had no code at all. Are we ever going to do partial indexes? I guess\n> that is the question.\n\nThe idea is very very good, and since there is an example implementation in \npg 4 it should probably be possible to reimplement. (DB2 has this feature also)\n\nIn real life, you would e.g. index a status column for rows that need more work.\ncreate index deposit_status_index on deposit (status) where status <> 0;\n99% of your rows would have status = 0 thus the index would be extremely \nefficient for all select statements that search for a specific status other than 0. \n\nImho it would be a shame to give up that idea so easily.\n\nAndreas\n", "msg_date": "Tue, 15 May 2001 12:07:32 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: pg_index.indislossy" }, { "msg_contents": "> > Yea, there is actually some code attached to this vs. the others that\n> > had no code at all. Are we ever going to do partial indexes? I guess\n> > that is the question.\n> The idea is very very good, and since there is an example implementation in\n> pg 4 it should probably be possible to reimplement. (DB2 has this feature also)\n...\n> Imho it would be a shame to give up that idea so easily.\n\nAgreed. Another common example is to create an index on all non-null\nvalues of a column.\n\n - Thomas\n", "msg_date": "Tue, 15 May 2001 12:40:51 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: AW: pg_index.indislossy" } ]
[ { "msg_contents": "\n> One problem with keeping it is that interface coders are getting\n> confused by some of the unused system table columns, assuming they mean\n> something, when in fact they don't. Both ODBC and JDBC have had this\n> problem that I fixed today.\n\nImho the correct answer to this would be to implement the SQL standard \nsystem views, and make all interfaces use those.\n\nAndreas\n", "msg_date": "Tue, 15 May 2001 12:10:22 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: pg_index.indislossy" } ]
[ { "msg_contents": "\n> > > Can someone tell me what we use indislossy for? \n\nOk, so the interpretation of this field is:\n\tA match in the index needs to be reevaluated in the heap tuple data,\n\tsince a match in the index does not necessarily mean, that the heap tuple\n\tmatches.\n\tIf the heap tuple data matches, the index must always match.\n\nA very typical example for such an index is a hash index. This might explain the \nfact, that the ODBC driver misinterpreted that field as meaning that the index is a hash. \nThe field has nothing to do with partial index.\n\nAndreas\n", "msg_date": "Tue, 15 May 2001 14:34:39 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: pg_index.indislossy" }, { "msg_contents": "\nAdded to pg_index.h file as a comment.\n\n> \n> > > > Can someone tell me what we use indislossy for? \n> \n> Ok, so the interpretation of this field is:\n> \tA match in the index needs to be reevaluated in the heap tuple data,\n> \tsince a match in the index does not necessarily mean, that the heap tuple\n> \tmatches.\n> \tIf the heap tuple data matches, the index must always match.\n> \n> A very typical example for such an index is a hash index. This might explain the \n> fact, that the ODBC driver misinterpreted that field as meaning that the index is a hash. \n> The field has nothing to do with partial index.\n> \n> Andreas\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 9 Jul 2001 14:35:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: pg_index.indislossy" }, { "msg_contents": "Bruce Momjian writes:\n\n> > > > > Can someone tell me what we use indislossy for?\n> >\n> > Ok, so the interpretation of this field is:\n> > \tA match in the index needs to be reevaluated in the heap tuple data,\n> > \tsince a match in the index does not necessarily mean, that the heap tuple\n> > \tmatches.\n> > \tIf the heap tuple data matches, the index must always match.\n\nAFAIK, this is true for all indexes in PostgreSQL, because index rows\ndon't store the transactions status. Of course those are two different\nunderlying reasons why a heap lookup is always necessary, but there\nshouldn't be any functional difference in the current implementation.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 10 Jul 2001 16:53:11 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: AW: pg_index.indislossy" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > > > > > Can someone tell me what we use indislossy for?\n> > >\n> > > Ok, so the interpretation of this field is:\n> > > \tA match in the index needs to be reevaluated in the heap tuple data,\n> > > \tsince a match in the index does not necessarily mean, that the heap tuple\n> > > \tmatches.\n> > > \tIf the heap tuple data matches, the index must always match.\n> \n> AFAIK, this is true for all indexes in PostgreSQL, because index rows\n> don't store the transactions status. Of course those are two different\n> underlying reasons why a heap lookup is always necessary, but there\n> shouldn't be any functional difference in the current implementation.\n\nSeems it is something they added for the index abstraction and not for\npractical use by PostgreSQL.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Jul 2001 10:56:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: pg_index.indislossy" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Bruce Momjian writes:\n> A match in the index needs to be reevaluated in the heap tuple data,\n> since a match in the index does not necessarily mean, that the heap tuple\n> matches.\n\n> AFAIK, this is true for all indexes in PostgreSQL, because index rows\n> don't store the transactions status.\n\nNot true at all. The tuple commit status needs to be rechecked, yes,\nbut with a normal index it is not necessary to recheck whether the index\nkey field actually satisfies the index qual conditions. With a lossy\nindex it *is* necessary to recheck --- the index may return more tuples\nthan the ones that match the given qual. For example, an r-tree index\napplied to a \"distance from point X <= D\" query might return all the\ntuples lying within a bounding box of the circle actually wanted.\n\nThe LIKE index optimization can also be thought of as using an index as\na lossy index: the index scan gives you all the tuples you want, plus\nsome you don't, so you have to evaluate the LIKE operator over again at\neach returned tuple.\n\nBasically, what this is good for is using an index for more kinds of\nWHERE conditions than it could otherwise support. It is *not* a useless\nabstraction. It occurs to me though that marking the index itself\nas lossy is the wrong way to think about it --- the right way is to\nassociate the \"lossy\" flag with use of a particular operator with an\nindex. So maybe the flag should be in pg_amop or pg_amproc, instead.\nSomeday I'd also like to see those tables extended so that the LIKE\nindex optimization is described by the tables, rather than being\nhard-wired into the planner as it is now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jul 2001 11:47:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: pg_index.indislossy " }, { "msg_contents": "Tom Lane writes:\n\n> Not true at all. The tuple commit status needs to be rechecked, yes,\n> but with a normal index it is not necessary to recheck whether the index\n> key field actually satisfies the index qual conditions. With a lossy\n> index it *is* necessary to recheck --- the index may return more tuples\n> than the ones that match the given qual.\n\nOkay, this is not surprising. I agree that storing this in the index\nmight be suboptimal.\n\nBut why is this called lossy? Shouldn't it be called \"exceedy\"?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 10 Jul 2001 18:20:52 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: AW: pg_index.indislossy " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> But why is this called lossy? Shouldn't it be called \"exceedy\"?\n\nGood point ;-). \"lossy\" does sound like the index might \"lose\" tuples,\nwhich is exactly what it's not allowed to do; it must find all the\ntuples that match the query.\n\nThe terminology is correct by analogy to \"lossy compression\" --- the\nindex loses information, in the sense that its result isn't quite the\nresult you wanted. But I can see where it'd confuse the unwary.\nPerhaps we should consult the literature and see if there is another\nterm for this concept.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jul 2001 13:36:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: pg_index.indislossy " }, { "msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> > But why is this called lossy? Shouldn't it be called \"exceedy\"?\n> \n> Good point ;-). \"lossy\" does sound like the index might \"lose\" tuples,\n> which is exactly what it's not allowed to do; it must find all the\n> tuples that match the query.\n> \n> The terminology is correct by analogy to \"lossy compression\" --- the\n> index loses information, in the sense that its result isn't quite the\n> result you wanted. But I can see where it'd confuse the unwary.\n> Perhaps we should consult the literature and see if there is another\n> term for this concept.\n\nSeeing how our ODBC driver refrences it in previous releases, we are\ngoing to have trouble changing it. I always thought it was \"lossy\" in\nterms of compression too.\n\nI don't see it mentioned now in ODBC, but I think it used to be there. \nI changed it recently to check for word \"hash\" instead.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Jul 2001 13:46:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: pg_index.indislossy" }, { "msg_contents": "On Tue, Jul 10, 2001 at 01:36:33PM -0400, Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > But why is this called lossy? Shouldn't it be called \"exceedy\"?\n> \n> Good point ;-). \"lossy\" does sound like the index might \"lose\" tuples,\n> which is exactly what it's not allowed to do; it must find all the\n> tuples that match the query.\n> \n> The terminology is correct by analogy to \"lossy compression\" --- the\n> index loses information, in the sense that its result isn't quite the\n> result you wanted. But I can see where it'd confuse the unwary.\n> Perhaps we should consult the literature and see if there is another\n> term for this concept.\n\nHow about \"hinty\"? :-)\n\nSeriously, \"indislossy\" is a singularly poor name for a predicate.\nAlso, are we so poor that we can't afford whole words, or even word \nbreaks? I propose \"index_is_hint\". \n\nActually, is the \"ind[ex]\" part even necessary? \nHow about \"must_check_heap\"?\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Tue, 10 Jul 2001 12:26:22 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: AW: pg_index.indislossy" }, { "msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> Seriously, \"indislossy\" is a singularly poor name for a predicate.\n\nPerhaps, but it fits with the existing naming conventions for Postgres\ncatalog columns. Unless we want to indulge in wholesale renaming of\nthe system's catalog columns (and break an awful lot of applications)\nI'd resist any name for a pg_index column that's not of the form \"indFOO\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jul 2001 17:36:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: pg_index.indislossy " } ]
[ { "msg_contents": "\nOk, so this is off topic, but I know there are a few hams on this\nlist. I'm wondering if any of them are going to Dayton this weekend?\nProbably better to contact me off list.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 15 May 2001 10:40:41 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "I know there are few hams on this list..." } ]
[ { "msg_contents": "Hi!\n\nI'm having a problem with a SQL-sentence like this:\n\n\tSELECT * from DATABASE.Table;\n\nI get an error saying: 'parse error at or near \".\"'\n\nI've done this in MySQL and Oracle without any problem, and found it\nstrange that I couldn't do it in Postgres. \n\nAm I missing something, or do\nI have to use another syntax to be able to select from tables in another\ndatabases?\n", "msg_date": "Tue, 15 May 2001 17:24:33 +0200", "msg_from": "\"Trygve Falch\" <trf@ssb.no>", "msg_from_op": true, "msg_subject": "SELECT from a table in another database" }, { "msg_contents": "\n\nTrygve Falch wrote:\n> \n> Hi!\n> \n> I'm having a problem with a SQL-sentence like this:\n> \n> SELECT * from DATABASE.Table;\n> \n> I get an error saying: 'parse error at or near \".\"'\n> \n> I've done this in MySQL and Oracle without any problem, and found it\n> strange that I couldn't do it in Postgres.\n> \n> Am I missing something, or do\n> I have to use another syntax to be able to select from tables in another\n> databases?\n\nAFAIK cross database joins are not possible in PostgreSQL.\n\nRegards,\n\nNils Zonneveld\n", "msg_date": "Tue, 15 May 2001 18:29:38 +0200", "msg_from": "Nils Zonneveld <nils@mbit.nl>", "msg_from_op": false, "msg_subject": "Re: SELECT from a table in another database" }, { "msg_contents": "\"Nils Zonneveld\" <nils@mbit.nl> wrote in message\nnews:3B015964.AF072405@mbit.nl...\n\n> AFAIK cross database joins are not possible in PostgreSQL.\n\nHi! Thanks for the answer.\n\nActually I think I found 'Allow queries across multiple databases' in the\nTODO-list under something they call 'Exotic feature'. I thought that this\nfeature was relativly basic and standard-feature in most DB's.\n\nMaybe I am exotic to need it. *sob*\n\n\n\n\n", "msg_date": "Tue, 15 May 2001 22:26:14 +0200", "msg_from": "\"Trygve Falch\" <trf@ssb.no>", "msg_from_op": true, "msg_subject": "Re: SELECT from a table in another database" }, { "msg_contents": "\n\nTrygve Falch wrote:\n> \n> \"Nils Zonneveld\" <nils@mbit.nl> wrote in message\n> news:3B015964.AF072405@mbit.nl...\n> \n> > AFAIK cross database joins are not possible in PostgreSQL.\n> \n> Hi! Thanks for the answer.\n> \n> Actually I think I found 'Allow queries across multiple databases' in the\n> TODO-list under something they call 'Exotic feature'. I thought that this\n> feature was relativly basic and standard-feature in most DB's.\n> \n> Maybe I am exotic to need it. *sob*\n\nI don't know what you are using those database for, but nothing prevents\nyou from letting your clients connect to the different databases the\nsame time.\n\nI use for instance a MS Access front end (yes, I know but clients\nrequest etc.) to connect to different ODDBC sources and t works just fine.\n\nAnother solution is of course to integrate the tables that you need in\nyour joins in one database.\n\nHTH,\n\nNils\n", "msg_date": "Wed, 16 May 2001 00:26:03 +0200", "msg_from": "Nils Zonneveld <nils@mbit.nl>", "msg_from_op": false, "msg_subject": "Re: SELECT from a table in another database" }, { "msg_contents": "In article <3B01ACE4.2A47F2C6@mbit.nl>, \"Nils Zonneveld\" <nils@mbit.nl>\nwrote:\n\n>> Actually I think I found 'Allow queries across multiple databases' in\n>> the TODO-list under something they call 'Exotic feature'. I thought\n>> that this feature was relativly basic and standard-feature in most\n>> DB's.\n\n> I don't know what you are using those database for, but nothing prevents\n> you from letting your clients connect to the different databases the\n> same time.\n\nBut that requires me to make a new database connection for each database I\nneed to access.\n\nAnd putting 200+ tables in one single database is not an option.\n\n The application which needs to be able to do this is a\ncross-database-application (MSSQL, Oracle, Sybase) and I have almost no\nroom for doing major changes to the SQL which this application uses.\n\nBut the lack of this feature in Postgres makes it almost impossible to\nmake a structured database design for huge application. I know this\nquestion have been asked before in another postgres forum as early as\n1998, and what Bruce Momjian said then was that most commercial databases\ncouldn't do it, which was probably right for 1998, but today even MySQL\ncan do this! Sybase, Oracle and MSSQL can also do this. I think even DB2\nand Informix can.\n\nI was really suprised when I discovered that this was even an issue with\nPostgres, because everything else in this wonderful DBM is on an\nenterprise level of quality and functionality.\n\nSadly, this means I'll have to stick to Oracle (even if I really didn't\nwant to) until this issue is resolved in Postgres.\n\n(crossposted to comp.databases.postgresql.hackers).\n", "msg_date": "Wed, 16 May 2001 09:58:27 +0200", "msg_from": "\"Trygve Falch\" <trf@ssb.no>", "msg_from_op": true, "msg_subject": "Queries across multiple databases (was: SELECT from a table in\n\tanother database)." }, { "msg_contents": "\n\nTrygve Falch wrote:\n> \n> In article <3B01ACE4.2A47F2C6@mbit.nl>, \"Nils Zonneveld\" <nils@mbit.nl>\n> wrote:\n> \n> >> Actually I think I found 'Allow queries across multiple databases' in\n> >> the TODO-list under something they call 'Exotic feature'. I thought\n> >> that this feature was relativly basic and standard-feature in most\n> >> DB's.\n> \n> > I don't know what you are using those database for, but nothing prevents\n> > you from letting your clients connect to the different databases the\n> > same time.\n> \n> But that requires me to make a new database connection for each database I\n> need to access.\n> \n> And putting 200+ tables in one single database is not an option.\n> \n> The application which needs to be able to do this is a\n> cross-database-application (MSSQL, Oracle, Sybase) and I have almost no\n> room for doing major changes to the SQL which this application uses.\n> \n\nIf you have a cross-database-application you must already have multiple\nconnections to several database-engines at the same time. Or is that a\nsituation you want to get rid of?\n\n\n> But the lack of this feature in Postgres makes it almost impossible to\n> make a structured database design for huge application. I know this\n> question have been asked before in another postgres forum as early as\n> 1998, and what Bruce Momjian said then was that most commercial databases\n> couldn't do it, which was probably right for 1998, but today even MySQL\n> can do this! Sybase, Oracle and MSSQL can also do this. I think even DB2\n> and Informix can.\n> \n> I was really suprised when I discovered that this was even an issue with\n> Postgres, because everything else in this wonderful DBM is on an\n> enterprise level of quality and functionality.\n> \n> Sadly, this means I'll have to stick to Oracle (even if I really didn't\n> want to) until this issue is resolved in Postgres.\n> \n\nI'm not a PostgreSQL developer, just a humble user :-) If you have the\nmoney and resources to use Oracle, use Oracle if you really need schema\nsupport. If not, there are workarounds. At the moment PostgreSQL doesn't\nhave schema support. But there's light at the end of the tunnel: I've\nheard Tom Lane mention schema support several times (can you give us an\nestimate when schema support will be available in PostgreSQL Tom?).\n\nRegards,\n\nNils Zonneveld\n", "msg_date": "Wed, 16 May 2001 14:56:44 +0200", "msg_from": "Nils Zonneveld <nils@mbit.nl>", "msg_from_op": false, "msg_subject": "Re: Queries across multiple databases (was: SELECT from a table in\n\tanother database)." }, { "msg_contents": "Nils Zonneveld <nils@mbit.nl> writes:\n> support. If not, there are workarounds. At the moment PostgreSQL doesn't\n> have schema support. But there's light at the end of the tunnel: I've\n> heard Tom Lane mention schema support several times (can you give us an\n> estimate when schema support will be available in PostgreSQL Tom?).\n\nI'd like to see it happen in 7.2. When's 7.2? Who knows ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 16 May 2001 19:06:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Queries across multiple databases (was: SELECT from a table\n\tin another database)." }, { "msg_contents": "\"Trygve Falch\" <trf@ssb.no> writes:\n> And putting 200+ tables in one single database is not an option.\n\nWhy not?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 10:45:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Queries across multiple databases (was: SELECT from a table in\n\tanother database)." } ]
[ { "msg_contents": "About to be implemented, for your approval...\n\nVariable name: dynamic_library_path\n\nPermissions: superuser\n\nDefault value: empty string\n\nSpecification:\n\nWhen the dynamic loader attempts to load a file (initiated by create\nfunction, for example) and the file name does not contain a slash\n(anywhere) and this variable is not set to the empty string, the dynamic\nloader will look for the file in the search path specified by this\nvariable.\n\nThe search path is the usual colon-separated style. Empty components will\nbe ignored. If the directory name is not absolute, an error will be\nraised.\n\nIf no appropriate file is found in this path walk, the dynamic loader will\ntry to load the file as given, which may invoke a system-dependent lookup\nmechanism (e.g., LD_LIBRARY_PATH).\n\n(The fine points of this specification are intended to be compatible with\nLibtool's libltdl dynamic loading interface.)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 15 May 2001 17:40:11 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Configurable path to look up dynamic libraries" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> About to be implemented, for your approval...\n> Variable name: dynamic_library_path\n> Permissions: superuser\n> Default value: empty string\n\nThis is of little value unless the default is intelligently chosen.\nThe default should be \"$PGLIB\", IMHO (inserted from configure's data).\nUnless there is a usable default, we cannot start recommending that\npeople not use absolute paths in CREATE FUNCTION commands.\n\nI do not believe that it's a good idea to allow the value to be changed\nat runtime, either --- do you expect that backends will remove\nalready-loaded libraries in response to a change in the variable?\nI think setting the path at postmaster start is necessary and sufficient.\n\nAlso, it'd be really 
nice if a platform-dependent suffix (.so, .sl,\netc) were automatically added to the given library name, if one's not\npresent already.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 May 2001 11:58:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries " }, { "msg_contents": "Tom Lane writes:\n\n> This is of little value unless the default is intelligently chosen.\n> The default should be \"$PGLIB\", IMHO (inserted from configure's data).\n\nThis default has little value as well. Users don't generally put their\nloadable modules in the same directory as a PostgreSQL installation.\nMaybe they do for general-purpose contrib-like stuff, but then they might\nas well use an absolute path. (Remember that a PostgreSQL installation\ncould well be under /usr/lib; think of all the things that reside there\nthat we have no business with.)\n\nThis also ties in somewhat with the fact that we have no default for\nPGDATA, on purpose. If we can have arbitrarily located data locations the\nsystem should not have a hard-wired in default for libraries (which are\nusually tied to particular databases in particular database clusters).\n\n> I do not believe that it's a good idea to allow the value to be changed\n> at runtime, either --- do you expect that backends will remove\n> already-loaded libraries in response to a change in the variable?\n\nNo, I would expect it to use the path for loading new libraries from then\non. People that use loadable libraries and C functions are superusers and\nexperienced enough to cope with this little (logical) fact. 
(Analogy:\nWhen I change the PATH in my shell, the shell does not kill all processes\nalready running.)\n\nThe way I think this is most useful is in third-party provided\nload_all_my_stuff.sql scripts, like:\n\nset dynamic_library_path='/usr/local/foo/lib'; -- inserted by the package's build process\n\ncreate function foo_xxx() ...\n\n(Yes, you could do the same \"inserted by package's build process\" into\neach of the create function's, but this way it's much cleaner.)\n\nI also envision this to be used as part of dump/restore. pg_dump might\nhave an option \"do not dump full path\", and it would insert a 'set\ndynamic_library_path'. This would work like the previous case, really.\n\nAlso think of a developer wanting to try out different sets of libraries\nwith a common load script.\n\nIf we make this parameter postmaster start only then we really don't gain\nanything. We don't even gain the minimal expected convenience in pg_dump\nbecause you would force all modules to reside in a certain place where\nadministrators would least like them to be.\n\n\n> Also, it'd be really nice if a platform-dependent suffix (.so, .sl,\n> etc) were automatically added to the given library name, if one's not\n> present already.\n\nYes.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 15 May 2001 18:47:52 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Configurable path to look up dynamic libraries " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> This is of little value unless the default is intelligently chosen.\n>> The default should be \"$PGLIB\", IMHO (inserted from configure's data).\n\n> This default has little value as well. Users don't generally put their\n> loadable modules in the same directory as a PostgreSQL installation.\n\nThat's a sweeping statement with little to back it up. 
How do you know\nthat the usual procedure isn't to put things in $PGLIB? That's\ncertainly what all our contrib packages do. Even more to the point,\nthat's certainly where the PL call handler functions are. I will\nconsider this feature utterly without value unless it allows the\nstandard declaration of plpgsql_call_handler to become\ninstallation-independent, viz\n\tCREATE FUNCTION plpgsql_call_handler () RETURNS OPAQUE\n\tAS 'plpgsql' LANGUAGE 'C';\n\n> This also ties in somewhat with the fact that we have no default for\n> PGDATA, on purpose. If we can have arbitrarily located data locations the\n> system should not have a hard-wired in default for libraries (which are\n> usually tied to particular databases in particular database clusters).\n\nI'd be willing to accept a default path that points to somewhere under\n$PGDATA, although I consider this rather less useful. Maybe we could\nagree on a compromise two-entry default path: \"$PGDATA/functions:$PGLIB\"?\nThat would require some initdb-time shenanigans to set up, but if you\nwant to do it...\n\n> I also envision this to be used as part of dump/restore. pg_dump might\n> have an option \"do not dump full path\", and it would insert a 'set\n> dynamic_library_path'. This would work like the previous case, really.\n\nWhat? What value would it have for pg_dump to do a set path operation?\nThe dump script would be unlikely to actually invoke any of the\nfunctions it's loading. By the time anyone tries to use the functions,\nthey'd be running in a different backend with a different path setting,\nnamely the default for the installation.\n\n> If we make this parameter postmaster start only then we really don't gain\n> anything. We don't even gain the minimal expected convenience in pg_dump\n> because you would force all modules to reside in a certain place where\n> administrators would least like them to be.\n\nI fail to follow this claim also. 
The point as far as I'm concerned is\nthat paths mentioned in CREATE FUNCTION ought to be relative to\nsomeplace that's installation-dependent. That way, when you dump out\nand reload a CREATE FUNCTION command, the declaration is still good,\nyou just have to have put a copy of the function's shlib in the right\nplace for the new installation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 May 2001 12:59:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries " }, { "msg_contents": "Tom Lane writes:\n\n> The point as far as I'm concerned is that paths mentioned in CREATE\n> FUNCTION ought to be relative to someplace that's\n> installation-dependent. That way, when you dump out and reload a\n> CREATE FUNCTION command, the declaration is still good, you just have\n> to have put a copy of the function's shlib in the right place for the\n> new installation.\n\nOkay, I'm convinced that $libdir can be a useful default. But given the\ncase where users might want to *add* his directory to the path he needs to\nhave knowledge of what the default path is. (Unfortunately we can't do\nPATH=$PATH:xxx.) 
Perhaps it would be good to make the empty path\ncomponent equivalent to $libdir, e.g.,\n\n''\t\t\tdefault, search libdir\n':/my/own'\t\tsearch libdir before my own\n'/my/own:'\t\tsearch libdir after my own\n'/my/own'\t\tdon't seach libdir\n\nBut I think there are enough possibly useful applications for changing\nthis while the postmaster is running at no real harm.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 15 May 2001 20:10:47 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Configurable path to look up dynamic libraries " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Perhaps it would be good to make the empty path\n> component equivalent to $libdir, e.g.,\n\nHmm, that would work, and also avoid having to figure out how to stuff\n$PGLIB into postgresql.conf during initdb.\n\nSold as far as I'm concerned ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 May 2001 14:19:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Perhaps it would be good to make the empty path\n> > component equivalent to $libdir, e.g.,\n> \n> Hmm, that would work, and also avoid having to figure out how to stuff\n> $PGLIB into postgresql.conf during initdb.\n\nWhile on the subject of postgresql conf... shouldn't it be in\nsysconfdir instead of the database directory? 
And there's no switch to\nthe postmaster to tell it you've put it somewhere else either.\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "15 May 2001 14:44:22 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> While on the subject of postgresql conf... shouldn't it be in\n> sysconfdir instead of the database directory?\n\nNo. That would (a) not allow different postmasters to have different\nconfig files; (b) not allow a person to create an unprivileged\ninstallation (assuming that sysconfdir is root-owned).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 May 2001 14:49:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> > While on the subject of postgresql conf... shouldn't it be in\n> > sysconfdir instead of the database directory?\n> \n> No. That would (a) not allow different postmasters to have different\n> config files;\n\nYou could search in a path... first sysconfdir, then datadir. 
\n\n> (b) not allow a person to create an unprivileged\n> installation (assuming that sysconfdir is root-owned).\n\nSysconfdir defaults to $prefix/etc, so that's not a problem.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "15 May 2001 14:56:32 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Tuesday 15 May 2001 14:44, Trond Eivind Glomsrød wrote:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> > Peter Eisentraut <peter_e@gmx.net> writes:\n> > > Perhaps it would be good to make the empty path\n> > > component equivalent to $libdir, e.g.,\n\n> > Hmm, that would work, and also avoid having to figure out how to stuff\n> > $PGLIB into postgresql.conf during initdb.\n\n> While on the subject of postgresql conf... shouldn't it be in\n> sysconfdir instead of the database directory? And there's no switch to\n> the postmaster to tell it you've put it somewhere else either.\n\nWhile I understand and, to an extent, agree with this sentiment, it would be \nunworkable at present unless the postgresql.conf file contained constructs \nthat differentiated between multiple datadirs. 
While the RPM currently \ndoesn't support that possibility (not that it will never support such a \npossibility :-)), there are many who do use PostgreSQL with multiple \npostmasters and datadirs.\n\nI personally wouldn't mind a construct similar to that of a webserver that \nsupported multiple domain hosting -- you have a master config file or config \nfile section that is in a standard place, and you have either a separate \nconfig file or config file section for each datadir -- in the case of the \nmultiple config files, the master would point to each one.\n\nBUT, given the current mindset in the postgresql.conf file, keeping it with \nthe datadir is presently the only practical option.\n- --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.4 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7AXvy5kGGI8vV9eERAhS+AKDGhUchvGN5AEkBqE11wEq8xskrGwCgmIFs\nFEDp+xn6e9rdVskMOlhtEKI=\n=B0tI\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 15 May 2001 14:56:46 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> You could search in a path... first sysconfdir, then datadir. \n\nSurely the other way around.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 May 2001 15:11:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> > You could search in a path... first sysconfdir, then datadir. 
\n> \n> Surely the other way around.\n\nWhich could work as well - or just a switch to postmaster to tell it\nwhich file to use.\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "15 May 2001 15:13:22 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> You could search in a path... first sysconfdir, then datadir. \n>> \n>> Surely the other way around.\n\n> Which could work as well - or just a switch to postmaster to tell it\n> which file to use.\n\nI could live with a datadir-then-sysconfdir path search. (It should be\ndatadir first, since the sysconfdir file would serve as a system-wide\ndefault for multiple postmasters.) Given that approach I see no real\nneed for a postmaster switch.\n\nPossibly the same approach should apply to all the config files we\ncurrently store in datadir?\n\nThere is a security issue here: stuff stored in datadir is not visible\nto random other users on the machine (since datadir is mode 700), but\nI would not expect sysconfdir to be mode 700. We'd need to think about\nthe implications of allowing Postgres config files to be world-visible.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 May 2001 15:27:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> > Tom Lane <tgl@sss.pgh.pa.us> writes:\n> >> teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> > You could search in a path... first sysconfdir, then datadir. 
\n> >> \n> >> Surely the other way around.\n> \n> > Which could work as well - or just a switch to postmaster to tell it\n> > which file to use.\n> \n> I could live with a datadir-then-sysconfdir path search. (It should be\n> datadir first, since the sysconfdir file would serve as a system-wide\n> default for multiple postmasters.) Given that approach I see no real\n> need for a postmaster switch.\n> \n> Possibly the same approach should apply to all the config files we\n> currently store in datadir?\n> \n> There is a security issue here: stuff stored in datadir is not visible\n> to random other users on the machine (since datadir is mode 700), but\n> I would not expect sysconfdir to be mode 700. \n\nIt could be (the RPMs specify a sysconfdir of /etc/pgsql)\n\n> We'd need to think about the implications of allowing Postgres\n> config files to be world-visible. \n\nThe files doesn't need to be visible to others...\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "15 May 2001 15:30:53 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n>> There is a security issue here: stuff stored in datadir is not visible\n>> to random other users on the machine (since datadir is mode 700), but\n>> I would not expect sysconfdir to be mode 700. \n\n> It could be (the RPMs specify a sysconfdir of /etc/pgsql)\n\nThe usual install procedure would probably leave sysconfdir owned by\nroot, if one likes to install in such a way that the binaries are owned\nby root (ie make, su root, make install). 
They're not\nbroken where they are; and arguably they *are* data.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 May 2001 15:43:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries " }, { "msg_contents": "Trond Eivind Glomsrød writes:\n\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>\n> > teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> > > You could search in a path... first sysconfdir, then datadir.\n> >\n> > Surely the other way around.\n>\n> Which could work as well - or just a switch to postmaster to tell it\n> which file to use.\n\nMight as well use a symlink in this case.\n\nI could go for a solution that processed both files in order (possibly\neven ${sysconfdir}/postgresql.conf, $PGDATA/postgresql.conf,\n${sysconfdir}/postgresql.conf.fixed in order, à la PINE). It could be as\neasy as adding two or three lines in postmaster.c. However, I'm afraid\nusers will interpret a file in $sysconfdir as something clients should\nprocess as well, which is not part of this deal.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 15 May 2001 21:45:22 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> >> There is a security issue here: stuff stored in datadir is not visible\n> >> to random other users on the machine (since datadir is mode 700), but\n> >> I would not expect sysconfdir to be mode 700. \n> \n> > It could be (the RPMs specify a sysconfdir of /etc/pgsql)\n> \n> The usual install procedure would probably leave sysconfdir owned by\n> root, if one likes to install in such a way that the binaries are owned\n> by root (ie make, su root, make install). 
I'd object to a setup that's\n> insecure for people who aren't using RPMs.\n\nSo make the files unreadable, if so required.\n\n> The real bottom line here, though, is that you haven't shown me any\n> positive reason to move the config files out of datadir. \n\nIt conflicts with the FHS - and no, I don't consider configuration\nfiles and data as an identical item. \n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "15 May 2001 15:47:00 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n>> The real bottom line here, though, is that you haven't shown me any\n>> positive reason to move the config files out of datadir. \n\n> It conflicts with the FHS -\n\nAFAIK, the FHS is not designed to support multiple instances of\nunprivileged daemons. I'm not interested in forcing Postgres into\nthat straitjacket ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 May 2001 16:12:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> >> The real bottom line here, though, is that you haven't shown me any\n> >> positive reason to move the config files out of datadir. 
\n> \n> > It conflicts with the FHS -\n> \n> AFAIK, the FHS is not designed to support multiple instances of\n> unprivileged daemons.\n\nIt's OK to support such files, what I don't like is _requiring_ them.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "15 May 2001 16:15:44 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Tuesday 15 May 2001 16:12, Tom Lane wrote:\n> teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> >> The real bottom line here, though, is that you haven't shown me any\n> >> positive reason to move the config files out of datadir.\n\n> > It conflicts with the FHS -\n\n> AFAIK, the FHS is not designed to support multiple instances of\n> unprivileged daemons.\n\nReally? I've never read that into the FHS.\n\nA '/etc/pgsql' directory can easily accommodate multiple config files, or a \nsingle config file with multiple sections. It can just as easily have a \nwhole tree of stuff under it. And it can be owned by a non-privileged user. \nIt just cannot be installed into without root's help -- and I know that that \nis the main objection here.\n\nBut I have a hard time understanding this all-or-nothing approach. So what \nif we have a configure-time option to allow a more FHS-compliant approach? \nIt won't interfere with the 'traditional' /usr/local/pgsql or /opt/pgsql or \nwhatever way of doing things. Further, it won't uglify the code in the least \n- -- it just allows more choice. Further, it won't take a hacker long to make \nit work. It won't even touch 'real' backend code, either -- this is a \npostmaster thingy. \n\nWhat's the opposition about? 
We have all the configure options already for \nmany things that the 'traditional' postgresql user cares nothing about.\n\nBut, on a more pragmatic note, I am contemplating the ability for the RPM's \ninitscript to allow multiple postmasters -- even up to sane behavior in the \npresence of postmasters that it didn't start -- with multiple datadirs, etc. \n\nI don't want the RPM's to 'editorialize' any more than anyone else might -- \nbut unless a more FHS-compliant approach is at least _allowed_ (NOT \nmandated!), I guess there will need to be some editorializing in the \ninitscript as it will have to place its own config file somewhere in order to \nmake multiple postmasters happen.\n\nBut I don't want to go through that if I don't have to -- or if it's going to \nhappen anyway.\n\nAnd, I know, currently the '-D' postmaster directive does, indirectly, point \nto the location of postgresql.conf..... :-) I will be using that in the \ninitscript's logic if another option isn't done.\n\nBut, if I may editorialize a little myself, this is just indicative of a \n'Fortress PostgreSQL' attitude that is easy to get into. 'We've always done \nit this way' or 'This is the way PostgreSQL works' are pat answers to \nanything from 'Why can't I more smoothly upgrade?' to 'Why does PostgreSQL \nuse non-SQL-standard case-folding?' to 'Why does everything go in \n/usr/local/pgsql?' to 'Do I _really_ have to do an ASCII dump of my 100GB \ndatabase in order to go to the next major version?' to any number of other \nFAQ's.\n\nJust because we've always done it one way does not that one way correct make.\n\nWe're one component of a system -- and the PostgreSQL Group has done such a \ngood job of being platform agnostic that the platform and systems issues are \nalmost second-class citizens.\n\nWell, gotta get off the soap box now, and get to work producing some code, I \nguess. People are going to be expecting multiple postmaster support in the \nRPMset soon. 
:-)\n- --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.4 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7AZ+15kGGI8vV9eERAu+MAJ9bk8mY8n1qIk8zKqWM1K188/530wCeJnwd\nZZDjAosFhRnTENBWJ+THju4=\n=mPC9\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 15 May 2001 17:29:23 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "> But, if I may editorialize a little myself, this is just indicative of a \n> 'Fortress PostgreSQL' attitude that is easy to get into. 'We've always done \n\nI have to admit I like the sound of 'Fortress PostgreSQL'. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 15 May 2001 17:53:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Tuesday 15 May 2001 17:53, Bruce Momjian wrote:\n> > But, if I may editorialize a little myself, this is just indicative of a\n> > 'Fortress PostgreSQL' attitude that is easy to get into. 'We've always\n> > done\n\n> I have to admit I like the sound of 'Fortress PostgreSQL'. :-)\n\nI don't moonlight as an English professor for nothing! :-)\n\nI figured that phrase might get attention....... 
Just a friendly dig.\n- --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.4 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7AalJ5kGGI8vV9eERAtG4AJwKR/28NtQMWQ5LgfXaegbzq/jO9ACgxs0E\n/DH7QEhwqV2jqHZ9wF5TG3c=\n=YLQ3\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 15 May 2001 18:10:14 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Configuration file locations (was: Re: Configurable path to look up\n\tdynamic libraries)" }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Just because we've always done it one way does not that one way correct make.\n\nSure.\n\n> We're one component of a system -- and the PostgreSQL Group has done such a \n> good job of being platform agnostic that the platform and systems issues are \n> almost second-class citizens.\n\nIndeed, that I think is the underlying issue here. \"It's FHS compliant\"\ncuts no ice with people who don't run FHS-layout systems, and I don't\nwant to improve FHS compliancy at the price of making life more\ndifficult for others. (Likewise for other RPM installation issues, as\nyou well know ;-))\n\nI do think that the notion of a configure file path search (datadir then\nsysconfdir) is reasonable if the security and file protection issues can\nbe ironed out. But that will require some thought about separating\nsecurity-critical data from not-critical data. I think we ought to keep\npg_hba.conf and subsidiary files (especially password files!) in datadir\n*only*. I'm not sure about the other config files; up to now no one's\npaid any attention to security issues for those files, knowing that they\nwere all kept in the same place. 
We might need to reorganize their\ncontents.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 May 2001 18:23:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries " }, { "msg_contents": "Lamar Owen writes:\n\n> What's the opposition about?\n\nThe /etc directory is for system configuration files. The\n$PGDATA/postgresql.conf file is a database cluster configuration file.\n\nThe RPM set only allows one database cluster per system, so it's\nappropriate to think of this database cluster as \"the\" system database\ncluster, and of the associated configuration file as \"the\" system\nconfiguration file. But since the RPM set creates this situation it is\nonly fitting that the RPM set resolve this situation. For example, it\nwould be trivial to symlink the file /var/lib/pgsql/data/postgresql.conf\nto somewhere in /etc.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 16 May 2001 00:27:51 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "On Tue, May 15, 2001 at 05:53:36PM -0400, Bruce Momjian wrote:\n> > But, if I may editorialize a little myself, this is just indicative of a \n> > 'Fortress PostgreSQL' attitude that is easy to get into. 'We've always\n> \n> I have to admit I like the sound of 'Fortress PostgreSQL'. 
:-)\n\nYe Olde PostgreSQL Shoppe\nThe PostgreSQL of Giza\nOur Lady of PostgreSQL, Ascendant\nPostgreSQL International Airport\nPostgreSQL Galactica\nPostgreSQL's Tavern\n", "msg_date": "Tue, 15 May 2001 15:42:06 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Tuesday 15 May 2001 18:23, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > We're one component of a system -- and the PostgreSQL Group has done such\n> > a good job of being platform agnostic that the platform and systems\n> > issues are almost second-class citizens.\n\n> Indeed, that I think is the underlying issue here. \"It's FHS compliant\"\n> cuts no ice with people who don't run FHS-layout systems, and I don't\n> want to improve FHS compliancy at the price of making life more\n> difficult for others. \n\nBut that's my point -- it wouldn't, AFAICT, make anyone's life more difficult \nin the userbase. The existing layout would remain as the default, with the \nFHS behavior a configure-time or run-time option. Even being configured in \nan FHS compliant mode would not need to force FHS-compliance, just allow it.\n\nBut we currently are forcing FHS noncompliance on people. \n\n>(Likewise for other RPM installation issues, as\n> you well know ;-))\n\nAs well I know.... ;-)\n\n> But that will require some thought about separating\n> security-critical data from not-critical data. I think we ought to keep\n> pg_hba.conf and subsidiary files (especially password files!) in datadir\n> *only*. I'm not sure about the other config files; up to now no one's\n> paid any attention to security issues for those files, knowing that they\n> were all kept in the same place. We might need to reorganize their\n> contents.\n\nGood points. However, now that we have broached the issue, GUC in all reality \ndoesn't 100% unify configuration. 
There's the sticky pg_hba.conf issue, the \npassword files issue, etc.\n\n<BRAIN_STORM>\n\nSo, what you have are basically two sets of config data:\n1.)\tSitewide multiple-postmaster settings\n2.)\tPer-postmaster (and therefore per-datadir) settings.\n\nThere can be overlap here. While I might want the capability to syslog be \nsitewide, the syslog level and facility might be a per-postmaster thing. I \nmight not want any logging on certain thoroughly tested high-volume databases \n(such as those that back discussion forums on websites). Likewise with TCP \nconnections. address:port and datadir settings are of course a per-datadir \nsetting. \n\nI personally like the config file structure for the web statistics package \n'analog'. First, there are configured-in defaults set in a header file. \nThen there is the master config file. But you can then specify a secondary \nconfig file on the command line that automatically includes the master \nconfig, but can then override any setting at will. You can also specify \nwhere to find the master config.\n\nHOWEVER, while we are somewhat unique amongst databases in that there is \nbuilt-in tcp-wrappers-like functionality, maybe, just maybe, that needs to be \nlooked at a little closer.\n\nSO, if postgresql.conf is already sitting in datadir, why not include the \npg_hba.conf settings in the GUC? Passwords could be stored similarly to the \n/etc/passwd method in a postgresql.conf section. If we're going to have \nUnified Configuration, why not go whole-hog on it and really unify the \nconfiguration? The default will be to place this file in the datadir, which \nby default is mode 700, so, the default installation will be secure.\n\nOf course, currently postgresql.conf doesn't really have 'sections' except in \nstyle. The '.ini' format is well-known and easily parsed, and is common on \nmultiple platforms. But in today's climate, an XML config might be better \nreceived. 
\n\nAll that's left to do to satisfy my wish-list is an include directive (which \nmay already exist -- the similarity to C headers in that file is striking) \nand a command-line switch to postmaster (or pg_ctl) to grab this file from \nanother location, with the default being $PGDATA or the -D setting, if \nspecified. With an include directive, a master config setup is easily made \nwithout unnecessarily complicating matters. Oh, and tell initdb to not \noverwrite a postgresql.conf that already exists. :-).\n\nMaking this work with a single master config and multiple postmasters would \nbe easily accomplished by having the ability to 'name' the 'virtual' \ndatabases and specify the settings for that virtual database in its own named \nsection. You then can start postmaster with pg_ctl and specify a \n- --config=/etc/pgsql/postgresql.conf --name=client1 and get the right datadir \nand the right settings. Default settings set in the unnamed portion would be \noverridden by the named section's settings.\n\nBut, you could just as easily use individual configs that included a master \n(or not) and not use 'named' posmasters. And the config files could reside \nwherever the admin wanted them to reside.\n\nIt's not far from this to making pg_ctl have the ability to start all the \npostmasters in a given config file with one command. Directives to initdb if \nthe datadir doesn't have a database present would simplify installation and \ninitial startup, particularly for newbies.\n\n</BRAIN_STORM>\n\nAnd all of this would be optional usage. You don't use the feature? 
The \nexisting behavior would be the only reasonable default.\n\nAnd symlinks are just a Band-Aid patch, and not a solution, as trivial as \nthey may be.\n- --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.4 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7Aqck5kGGI8vV9eERAjmfAKDPY2U41TA5rvxEzG/eyo2TfjknmQCeOpmx\n1XzcsCRzQ7Eq9p3fagQJSVY=\n=SOxC\n-----END PGP SIGNATURE-----\n", "msg_date": "Wed, 16 May 2001 12:13:20 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Tuesday 15 May 2001 18:27, Peter Eisentraut wrote:\n> Lamar Owen writes:\n> > What's the opposition about?\n\n> The /etc directory is for system configuration files. The\n> $PGDATA/postgresql.conf file is a database cluster configuration file.\n\nWhich is part of the system. /etc/mail/aliases is only an email config, but \nis system-wide. I have multiple bind instances running on my main server -- \nit was relatively easy to tell bind through named.conf where to find the \nparticular zone files for the private side (I run NAT here and must maintain \nan inside global DNS as well as an inside local DNS), and it was just as easy \nto tell named to use named.conf.private for the private DNS side. And all \nthose files reside in /etc/named and /etc/named.private.\n\n> The RPM set only allows one database cluster per system, so it's\n> appropriate to think of this database cluster as \"the\" system database\n> cluster, and of the associated configuration file as \"the\" system\n> configuration file. But since the RPM set creates this situation it is\n> only fitting that the RPM set resolve this situation. For example, it\n> would be trivial to symlink the file /var/lib/pgsql/data/postgresql.conf\n> to somewhere in /etc.\n\nI can resolve the RPM issues. 
But, since talk is being made of changing the \ncore behavior, I wanted to weigh in on what I'd like to see. I may not have \nhigh expectations of what I am likely to actually see happen, but the hope IS \nthere. If the changes are shot down, I can still cope with the issue.\n\nBut symlinks aren't the fix, as this is not an RPM-only issue -- there are \nmore than just RPM users who might want an FHS-compliant installation with \nthe capacity for multiple postmasters.\n- --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.4 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7AqlW5kGGI8vV9eERAqHZAJ4lJW0ndi+0aSSu5GQu12yAPkEDvACg4w9u\nVfLdVIODenUU1GL4K4kf9OU=\n=OtDH\n-----END PGP SIGNATURE-----\n", "msg_date": "Wed, 16 May 2001 12:22:43 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "Lamar Owen writes:\n\n> I have multiple bind instances running on my main server -- it was\n> relatively easy to tell bind through named.conf where to find the\n> particular zone files for the private side (I run NAT here and must\n> maintain an inside global DNS as well as an inside local DNS), and it\n> was just as easy to tell named to use named.conf.private for the\n> private DNS side. And all those files reside in /etc/named and\n> /etc/named.private.\n\nFunny, I was going to pull this example, because my zone files are in\n/var/named.\n\n> But symlinks aren't the fix, as this is not an RPM-only issue -- there are\n> more than just RPM users who might want an FHS-compliant installation with\n> the capacity for multiple postmasters.\n\nFHS-compliancy is only going to get you so far. Where does it stop?\nNext thing somebody comes around and claims that BSD hier(7) wants it all\ndifferently. At some point you're going to have to present usability\narguments. 
And I notice that no one besides the RPM maintainer(s) have\never complained about this, presumably because the current approach is\nrather usable.\n\nI don't mind a global configuration file that sets the defaults for or\noverrides the local ones, because this adds a possibly useful feature.\nBut spreading out the local configuration files over the disk does not\nhelp anyone.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 16 May 2001 18:56:43 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "Lamar Owen writes:\n\n> SO, if postgresql.conf is already sitting in datadir, why not include the\n> pg_hba.conf settings in the GUC?\n\nBecause the two have completely different structure.\n\n> But in today's climate, an XML config might be better received.\n\nNot in this lifetime.\n\n> Oh, and tell initdb to not overwrite a postgresql.conf that already\n> exists. :-).\n\nInitdb doesn't even start if one already exists.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 16 May 2001 19:02:28 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "Lamar Owen writes:\n\n> It's not far from this to making pg_ctl have the ability to start all the\n> postmasters in a given config file with one command.\n\nI think there is merit in the idea to let pg_ctl start more than one\npostmaster. But it probably ought to be in a separate config file, since\npg_ctl is a separate program. Maybe simply have a file with one PGDATA\nvalue per line, and pg_ctl runs in a loop and starts a postmaster for each\ndirectory. 
But what if you want to run several postmasters under several\nuser names (probably a good idea, otherwise, why are you running separate\npostmasters)? Needs more thought.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 16 May 2001 19:15:47 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Wednesday 16 May 2001 13:02, Peter Eisentraut wrote:\n> Lamar Owen writes:\n> > SO, if postgresql.conf is already sitting in datadir, why not include the\n> > pg_hba.conf settings in the GUC?\n\n> Because the two have completely different structure.\n\nCurrently. :-) Didn't postmaster.opts.sample (or whatever that pg_ctl \nseparate config file for 7.0.x was called) have a different structure from \nthe current postgresql.conf? :-) So the structure needs to change to \nimplement -- there is precedent.\n\n> > But in today's climate, an XML config might be better received.\n\n> Not in this lifetime.\n\nYou mean not in your lifetime, right? :-) I'm not a fan of XML either -- \nRealServer configures that way. :-(\n\n> > Oh, and tell initdb to not overwrite a postgresql.conf that already\n> > exists. :-).\n\n> Initdb doesn't even start if one already exists.\n\nI know. Makes it more painful to package a default one. 
But that's an RPM \nissue, not a generic one.\n- --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.4 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7ArfY5kGGI8vV9eERAqU6AKDTeiLzP82V/8Ej8YLeEssECae2rwCfR5zn\naHQebNNQfL2bqn2q4jtKp1I=\n=WWQ8\n-----END PGP SIGNATURE-----\n", "msg_date": "Wed, 16 May 2001 13:24:37 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Wednesday 16 May 2001 13:15, Peter Eisentraut wrote:\n> Lamar Owen writes:\n> > It's not far from this to making pg_ctl have the ability to start all the\n> > postmasters in a given config file with one command.\n\n> I think there is merit in the idea to let pg_ctl start more than one\n> postmaster. But it probably ought to be in a separate config file, since\n> pg_ctl is a separate program.\n\nIsn't pg_ctl's separate config file in 7.0.x one of the reasons for GUC in \nthe first place? :-) Although that particular example is a little \ncontrived.....\n\n> Maybe simply have a file with one PGDATA\n> value per line, and pg_ctl runs in a loop and starts a postmaster for each\n> directory. But what if you want to run several postmasters under several\n> user names (probably a good idea, otherwise, why are you running separate\n> postmasters)? Needs more thought.\n\nThen you can have separate files. 
Flexibility!\n- --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.4 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7ArhH5kGGI8vV9eERAhjQAJ9BDjf1vmWFEPEH9dGN3ZDRuCJJRACgykmY\nwKQvcH3Le9bexMbjkkdHFA0=\n=N7yV\n-----END PGP SIGNATURE-----\n", "msg_date": "Wed, 16 May 2001 13:26:29 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Wednesday 16 May 2001 12:56, Peter Eisentraut wrote:\n> Lamar Owen writes:\n> > I have multiple bind instances running on my main server -- it was\n> > relatively easy to tell bind through named.conf where to find the\n> > particular zone files for the private side (I run NAT here and must\n> > maintain an inside global DNS as well as an inside local DNS), and it\n> > was just as easy to tell named to use named.conf.private for the\n> > private DNS side. And all those files reside in /etc/named and\n> > /etc/named.private.\n\n> Funny, I was going to pull this example, because my zone files are in\n> /var/named.\n\nWhich is arguably a better place to put them, FHS-wise. But the point is \nthis -- I can tell named where to look in a very flexible manner. The bind \ncache, OTOH, is variable data and should go on the var filesystem....\n\n> differently. At some point you're going to have to present usability\n> arguments. And I notice that no one besides the RPM maintainer(s) have\n> ever complained about this, presumably because the current approach is\n> rather usable.\n\nI'm not complaining. However, I would think Oliver Elphick would like \nsimilar things for Debian.\n\nAs I said before, I can implement an RPM-specific solution. 
But if it can \nbenefit the general userbase, it shoudn't be an RPM-specific solution.\n\n> I don't mind a global configuration file that sets the defaults for or\n> overrides the local ones, because this adds a possibly useful feature.\n\nGood.\n\n> But spreading out the local configuration files over the disk does not\n> help anyone.\n\nFlexibility! The admin should be allowed flexibility in installation, no? \nOf course, there are other directions the flexibility argument could go, but \nI'll not instigate _that_ battle...... :-)\n- --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.4 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7Arno5kGGI8vV9eERAkPSAKDBqXIeeV7D7L4PV6dhp7b3gYq8hACg0jS5\nzegguNNxir0at+WBJ9Aexa8=\n=TOrY\n-----END PGP SIGNATURE-----\n", "msg_date": "Wed, 16 May 2001 13:33:24 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "Lamar Owen writes:\n\n> > Initdb doesn't even start if one already exists.\n>\n> I know. Makes it more painful to package a default one. But that's an RPM\n> issue, not a generic one.\n\nInitdb puts the \"sample\" file from share/ in place. Change that.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 16 May 2001 19:41:13 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Configurable path to look up dynamic libraries" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Wednesday 16 May 2001 13:41, Peter Eisentraut wrote:\n> Lamar Owen writes:\n> > > Initdb doesn't even start if one already exists.\n\n> > I know. Makes it more painful to package a default one. But that's an\n> > RPM issue, not a generic one.\n\n> Initdb puts the \"sample\" file from share/ in place. 
Change that.\n\nI was hoping to leave it more standard (so that an initdb to an alternate \nlocation would work without forcing the same default config on all), but I \ncan deal with that. Hey, if I must I can always patch initdb (although I do \nNOT want to do that.....).\n- --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.4 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7Ar2K5kGGI8vV9eERApaXAJ4qf0dxbRpH/kyyVkBrmDi8e3I17wCgrGl6\neUul7QmsD0fYoFhNV59IL1M=\n=8BSz\n-----END PGP SIGNATURE-----\n", "msg_date": "Wed, 16 May 2001 13:48:55 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Configurable path to look up dynamic libraries" } ]
[ { "msg_contents": "CURRENT CVS: compiling dfmgr failed due to RTLD_LAZY missing definition.\n\nI made the following patch (The comments may not be true, but I'm not \nsure.\n\nIndex: src/backend/port/dynloader/freebsd.h\n===================================================================\nRCS file: \n/home/projects/pgsql/cvsroot/pgsql/src/backend/port/dynloader/freebsd.h,v\nretrieving revision 1.9\ndiff -c -r1.9 freebsd.h\n*** src/backend/port/dynloader/freebsd.h 2001/05/14 21:45:53 \n1.9\n--- src/backend/port/dynloader/freebsd.h 2001/05/15 16:03:04\n***************\n*** 20,25 ****\n--- 20,28 ----\n\n #include \"utils/dynamic_loader.h\"\n\n+ /***** NEED CONFIGURE CHECK *****/\n+ #include <dlfcn.h>\n+ /********************************/\n /* dynloader.c */\n /*\n * Dynamic Loader on NetBSD 1.0.\n\nLet me know how you want to handle this.\n\nLER\n", "msg_date": "Tue, 15 May 2001 16:09:51 GMT", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "FreeBSD Dynloader: needs dlfcn.h..." }, { "msg_contents": "Larry Rosenman writes:\n\n> CURRENT CVS: compiling dfmgr failed due to RTLD_LAZY missing definition.\n\nWhoops. I've fixed it. No check needed, see freebsd.c file. Thanks.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 15 May 2001 18:57:12 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: FreeBSD Dynloader: needs dlfcn.h..." } ]
[ { "msg_contents": "Oh,\nI forgot that without any contrib you can select rows whose path to the root is {1,2,3}:\n\nselect * from test where lineage[1:3] = '{1,2,3}';\n\nbut the gist indexing (intarray) provides a significant speed increase.\n\nBTW, lineage represents the edge-list of a directed graph or a tree ?\n> insert into test (id,lineage) values ('8','{1,2,3}');\n------------------------------------------------|-^\n> insert into test (id,lineage) values ('9','{1,3,7}');\nIt seems to me that node 3 can be accessed from both node 1 and node 3 directly, or it's just a mistake?\n\n----- Original Message ----- \nFrom: \"Gyozo Papp\" <pgerzson@freestart.hu>\nTo: \"PostgreSQL-General\" <pgsql-general@postgresql.org>; \"Lincoln Yeoh\" <lyeoh@pop.jaring.my>\nSent: 2001. április 15. 13:23\nSubject: Re: [GENERAL] index ops for _int4 and trees?\n\n\nHello,\n\nhave a look at the contrib/intarray directory!\nThere is pretty good index support for one-dimensional integer arrays - a solution for your 1st question.\n... and it also includes two simple operators @ (= 'contains', similar to AND) and && (= 'overlap', similar to OR) to check array values against another array.\n\nThere is a short README.intarray file telling you what to do. \nThere is another contrib in contrib/array for more support to check array values. But I don't know whether this contribution can profit from the other's index technique.\n\nAfter you've installed these contribs your query can be written:\n> select * from test where lineage like '{1,2,3,%';\nselect * from test where lineage[1:3] @ '{1,2,3}';\n\nI think these satisfy you. \nBest,\n\nPapp Gyozo \n- pgerzson@freestart.hu\n\n----- Original Message ----- \nFrom: \"Lincoln Yeoh\" <lyeoh@pop.jaring.my>\nTo: \"PostgreSQL-General\" <pgsql-general@postgresql.org>\nSent: 2001. május 15. 
10:20\nSubject: [GENERAL] index ops for _int4 and trees?\n\n\n> Hi,\n> \n> Say I have the following table:\n> \n> create table test (\n> id int,\n> lineage integer[]\n> );\n> \n> insert into test (id,lineage) values ('8','{1,2,3}');\n> insert into test (id,lineage) values ('9','{1,3,7}');\n> insert into test (id,lineage) values ('10','{1,2,3}');\n> insert into test (id,lineage) values ('11','{1,2,3,10}');\n> insert into test (id,lineage) values ('12','{1,3,7,9}');\n> \n> 1) How do I create an index on integer[] aka _int4?\n> \n> 2) Is it possible to do something similar to the following select?\n> \n> select * from test where lineage like '{1,2,3,%';\n> \n> I'm basically using this as a method of fetching rows in a particular\n> branch of a whole tree, without having to do recursion and multiple selects.\n> \n> If 1 or 2 are not possible then I'll stick with using text and converting\n> ids to zeropadded hexadecimal <sigh>.\n> \n> I'm thinking that there should be a quick way to do branches and trees,\n> after all there's a btree index type, so... ;).\n> \n> Using text works but is rather crude, any working suggestions?\n> \n> Thanks,\n> Link.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n\n\n\n", "msg_date": "Tue, 15 May 2001 18:29:26 +0200", "msg_from": "\"Gyozo Papp\" <pgerzson@freestart.hu>", "msg_from_op": true, "msg_subject": "Re: index ops for _int4 and trees?" 
}, { "msg_contents": "At 06:29 PM 15-05-2001 +0200, Gyozo Papp wrote:\n>Oh,\n>I forgot that without any contrib you can select rows whose papth to the\nroot {1,2,3}:\n>\n>select * from test where lineage[1:3] = '{1,2,3}';\n>but the gist indexing (intarray) performs a significant speed increase.\n\nThanks.\n\n>BTW, lineage represents egde-list of a directed graph or a tree ?\n>> insert into test (id,lineage) values ('8','{1,2,3}');\n>------------------------------------------------|-^\n>> insert into test (id,lineage) values ('9','{1,3,7}');\n>It seems to me that node 3 can be accessed from both node 1 and node 3\ndirectly, or it's just a mistake?\n\nIt's a mistake in my example. \n\nAside but related:\nOleg Bartunov also mentioned that subset searches are possible with gist:\n\nselect * from table <TABLE> where <array_field> @ '{1,2,3}'\n\nSo I've asked him whether his work on gist indexing int arrays can be used\nto do substring indexing on text, as a built-in to Postgresql.\n\nI think it can be done. Then subtext_ops here we come :). \n\nIf not I'll resort to converting text characters to their code values and\nstuffing them into int arrays. Ugly :). Not sure what happens when the\narrays get large.\n\nCheerio,\nLink.\n\n", "msg_date": "Wed, 16 May 2001 09:41:47 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: index ops for _int4 and trees?" } ]
[ { "msg_contents": "Hi,\n\nI want to learn, how the pl/plsql-parser/compiler works. Therefore I planned \nto implement a simple ELSIF, like oracle does.\n\nI added the following K_ELSIF branch to gram.y, in the hope that, when ELSIF \nis parsed, simply another if-structure in inserted.\n\n---------------------------------------------------------\nstmt_else\t\t:\n\t{\n\t PLpgSQL_stmts\t*new;\n\t\t\n\t\tnew = malloc(sizeof(PLpgSQL_stmts));\n\t\tmemset(new, 0, sizeof(PLpgSQL_stmts));\n\t\t$$ = new;\n\t\telog(NOTICE, \"empty ELSE detected\");\n\t}\n\t| K_ELSIF expr_until_then proc_sect stmt_else\n\t{\n\t\tPLpgSQL_stmt_if *new;\n\t\telog(NOTICE, \"ELSIF detected\");\n\t\tnew = malloc(sizeof(PLpgSQL_stmt_if));\n\t\tmemset(new, 0, sizeof(PLpgSQL_stmt_if));\n\t\tnew->cmd_type\t= PLPGSQL_STMT_IF;\n\t\t// new->lineno\t\t= $2;\n\t\tnew->cond\t\t= $2;\n\t\tnew->true_body\t= $3;\n\t\tnew->false_body = $4;\n\n\t\t$$ = (PLpgSQL_stmts *)new;\t\n\n\t\t\t\t\t\n\t}\n\t| K_ELSE proc_sect\n\t{\n \t $$ = $2;\t\t\t\t\n \t elog(NOTICE, \"ELSE detected (%s)\", \n\t\t strdup(yytext));\n\t}\n\t;\n--------------------------------------------------------------\n\nA testprocedure, which looks like that:\n--------------------------------------------------------------\n\nDECLARE\n iPar1PI ALIAS FOR $1;\n iTmp integer;\n iResult varchar;\n\nBEGIN\n\n iTmp = iPar1PI;\n \n raise notice '1.0';\n\n if iTmp IS NULL then\n raise notice '2.0';\n iResult = 'Echt NULL';\n\n else\n if iTmp = 0 then\n raise notice '2.1.0';\n iResult = 'Null (0)';\n\n elsif (iTmp < 0) THEN\n raise notice '2.1.1';\n iResult = 'Negativ';\n\n elsif (iTmp > 0) THEN\n raise notice '2.1.2';\n iResult = 'Positiv';\n\n else\n raise notice '2.1.3';\n iResult = 'Gibts nicht!';\n end if;\n end if;\n \n raise notice '3.0';\n\n return iResult;\nEND;\n--------------------------------------------------------------\n\nis dumped in this way ...\n\n--------------------------------------------------------------\nExecution tree of successfully 
compiled PL/pgSQL function kr_test:\n \nFunctions data area:\n entry 0: VAR $1 type int4 (typoid 23) atttypmod -1\n entry 1: VAR found type bool (typoid 16) atttypmod -1\n entry 2: VAR itmp type int4 (typoid 23) atttypmod -1\n entry 3: VAR iresult type varchar (typoid 1043) atttypmod -1\n \nFunctions statements:\n 8:BLOCK <<*unnamed*>>\n 10: ASSIGN var 2 := 'SELECT $1 {$1=0}'\n 12: RAISE ''1.0''\n 14: IF 'SELECT $1 IS NULL {$1=2}' THEN\n 15: RAISE ''2.0''\n 16: ASSIGN var 3 := 'SELECT 'Echt NULL''\n ELSE\n 19: IF 'SELECT $1 = 0 {$1=2}' THEN\n 20: RAISE ''2.1.0''\n 21: ASSIGN var 3 := 'SELECT 'Null (0)''\n ELSE\n ENDIF\n ENDIF\n 37: RAISE ''3.0''\n 39: RETURN 'SELECT $1 {$1=3}'\n END -- *unnamed*\n \nEnd of execution tree of function kr_test\n--------------------------------------------------------------\n\nSo my question is: why is my inserted\nPLpgSQL_stmt_if *new;\nnot executed, even though I create it the same way stmt_if does?\n\nWho can help me (Maybe Jan??)\n\nRegards, Klaus\n", "msg_date": "Wed, 16 May 2001 12:29:16 +0200", "msg_from": "Klaus Reger <K.Reger@gmx.de>", "msg_from_op": true, "msg_subject": "Grammar-problems with pl/pgsql in gram.y" }, { "msg_contents": "Klaus Reger wrote:\n> Hi,\n>\n> I want to learn, how the pl/plsql-parser/compiler works. 
Therefore I planned\n> to implement a simple ELSIF, like oracle does.\n>\n> I added the following K_ELSIF branch to gram.y, in the hope that, when ELSIF\n> is parsed, simply another if-structure in inserted.\n>\n> ---------------------------------------------------------\n> stmt_else :\n> {\n> PLpgSQL_stmts *new;\n>\n> new = malloc(sizeof(PLpgSQL_stmts));\n> memset(new, 0, sizeof(PLpgSQL_stmts));\n> $$ = new;\n> elog(NOTICE, \"empty ELSE detected\");\n> }\n> | K_ELSIF expr_until_then proc_sect stmt_else\n> {\n> PLpgSQL_stmt_if *new;\n> elog(NOTICE, \"ELSIF detected\");\n> new = malloc(sizeof(PLpgSQL_stmt_if));\n> memset(new, 0, sizeof(PLpgSQL_stmt_if));\n> new->cmd_type = PLPGSQL_STMT_IF;\n> // new->lineno = $2;\n> new->cond = $2;\n> new->true_body = $3;\n> new->false_body = $4;\n>\n> $$ = (PLpgSQL_stmts *)new;\n\n Here it is. stmt_else is defined as type <stmts>, not <stmt>.\n The PLpgSQL_stmt_if struct has a condition query and two\n statement lists (type <stmts>). You're trying to put a single\n statement into the else part instead of a list of statements.\n\n Maybe it'll work if you surround it with another\n PLpgSQL_stmts struct where your new PLpgSQL_stmt_if is the\n only statement in it's list. Since I have some bigger work\n outstanding for PL/pgSQL, send the resulting patch (if you\n get it to work) directly to me.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 16 May 2001 10:10:41 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Grammar-problems with pl/pgsql in gram.y" }, { "msg_contents": "Am Mittwoch, 16. 
Mai 2001 16:10 schrieb Jan Wieck:\n> Here it is. stmt_else is defined as type <stmts>, not <stmt>.\n> The PLpgSQL_stmt_if struct has a condition query and two\n> statement lists (type <stmts>). You're trying to put a single\n> statement into the else part instead of a list of statements.\nThank you for the hint! That was it.\n\n> Maybe it'll work if you surround it with another\n> PLpgSQL_stmts struct where your new PLpgSQL_stmt_if is the\n> only statement in it's list. Since I have some bigger work\n> outstanding for PL/pgSQL, send the resulting patch (if you\n> get it to work) directly to me.\nThe patch follows this message. May you tell me what kind of work it is, \ncause I'm so curous :-). By the way, the next thing I try is a\nEXCEPTION WHEN OTHER-clause, like in Oracle. Let's look if I'm successful.\n\nCiao, Klaus\n\n----------------------------------------------------------------------------\n\ndiff -Naurb src/gram.y src.elsif/gram.y\n--- src/gram.y\tWed May 16 18:00:53 2001\n+++ src.elsif/gram.y\tWed May 16 17:39:19 2001\n@@ -147,6 +147,7 @@\n %token\tK_DIAGNOSTICS\n %token\tK_DOTDOT\n %token\tK_ELSE\n+%token\tK_ELSIF\n %token\tK_END\n %token\tK_EXCEPTION\n %token\tK_EXECUTE\n@@ -544,6 +545,7 @@\n \t\t\t\t\t\t\t\tnew->stmts[0] = (struct PLpgSQL_stmt *)$1;\n \n \t\t\t\t\t\t\t\t$$ = new;\n+\n \t\t\t\t\t\t}\n \t\t\t\t;\n \n@@ -721,8 +723,53 @@\n \t\t\t\t\t\t\tmemset(new, 0, sizeof(PLpgSQL_stmts));\n \t\t\t\t\t\t\t$$ = new;\n \t\t\t\t\t}\n+\t\t\t\t| K_ELSIF lno expr_until_then proc_sect stmt_else\n+\t\t\t\t\t{\n+\t\t\t\t\t /*\n+\t\t\t\t\t * Translate the structure:\t into:\n+\t\t\t\t\t *\n+\t\t\t\t\t * IF c1 THEN\t\t\t\t\t IF c1 THEN\t\t \n+\t\t\t\t\t *\t ...\t\t\t\t\t\t ...\t\t\t\t \n+\t\t\t\t\t * ELSIF c2 THEN\t\t\t\t ELSE \n+\t\t\t\t\t *\t\t\t\t\t\t\t\t IF c2 THEN\t\n+\t\t\t\t\t *\t ...\t\t\t\t\t\t\t ...\t\t\t\t \n+\t\t\t\t\t * ELSE\t\t\t\t\t\t\t ELSE\t\t\t\t \n+\t\t\t\t\t *\t ...\t\t\t\t\t\t\t ...\t\t\t\t \n+\t\t\t\t\t * END IF\t\t\t\t\t\t\t END IF\t\t\t 
\n+\t\t\t\t\t *\t\t\t\t\t\t\t END IF\n+\t\t\t\t\t * \n+\t\t\t\t\t */\n+\n+\t\t\t\t\t\tPLpgSQL_stmts\t*new;\n+\t\t\t\t\t\tPLpgSQL_stmt_if *new_if;\n+\n+\t\t\t\t\t\t/* first create a new if-statement */\n+\t\t\t\t\t\tnew_if = malloc(sizeof(PLpgSQL_stmt_if));\n+\t\t\t\t\t\tmemset(new_if, 0, sizeof(PLpgSQL_stmt_if));\n+\n+\t\t\t\t\t\tnew_if->cmd_type\t= PLPGSQL_STMT_IF;\n+\t\t\t\t\t\tnew_if->lineno\t\t= $2;\n+\t\t\t\t\t\tnew_if->cond\t\t= $3;\n+\t\t\t\t\t\tnew_if->true_body\t= $4;\n+\t\t\t\t\t\tnew_if->false_body\t= $5;\n+\t\t\t\t\t\t\n+\t\t\t\t\t\t/* this is a 'container' for the if-statement */\n+\t\t\t\t\t\tnew = malloc(sizeof(PLpgSQL_stmts));\n+\t\t\t\t\t\tmemset(new, 0, sizeof(PLpgSQL_stmts));\n+\t\t\t\t\t\t\n+\t\t\t\t\t\tnew->stmts_alloc = 64;\n+\t\t\t\t\t\tnew->stmts_used\t = 1;\n+\t\t\t\t\t\tnew->stmts = malloc(sizeof(PLpgSQL_stmt *) * new->stmts_alloc);\n+\t\t\t\t\t\tnew->stmts[0] = (struct PLpgSQL_stmt *)new_if;\n+\n+\t\t\t\t\t\t$$ = new;\n+\t\t\t\t\t\t\n+\t\t\t\t\t}\n+\n \t\t\t\t| K_ELSE proc_sect\n-\t\t\t\t\t{ $$ = $2; }\n+\t\t\t\t\t{\n+\t\t\t\t\t\t$$ = $2;\t\t\t\t\n+\t\t\t\t\t}\n \t\t\t\t;\n \n stmt_loop\t\t: opt_label K_LOOP lno loop_body\n@@ -1271,7 +1318,6 @@\n \t\t\t\tbreak;\n \t\t}\n \t}\n-\n \texpr = malloc(sizeof(PLpgSQL_expr) + sizeof(int) * nparams - sizeof(int));\n \texpr->dtype\t\t\t= PLPGSQL_DTYPE_EXPR;\n \texpr->query\t\t\t= strdup(plpgsql_dstring_get(&ds));\ndiff -Naurb src/scan.l src.elsif/scan.l\n--- src/scan.l\tWed May 16 18:01:36 2001\n+++ src.elsif/scan.l\tTue May 15 12:49:43 2001\n@@ -99,6 +99,7 @@\n default\t\t\t{ return K_DEFAULT;\t\t\t}\n diagnostics\t\t{ return K_DIAGNOSTICS;\t\t}\n else\t\t\t{ return K_ELSE;\t\t\t}\n+elsif { return K_ELSIF; }\n end\t\t\t\t{ return K_END;\t\t\t\t}\n exception\t\t{ return K_EXCEPTION;\t\t}\n execute\t\t\t{ return K_EXECUTE;\t\t\t}\n", "msg_date": "Wed, 16 May 2001 18:57:13 +0200", "msg_from": "Klaus Reger <K.Reger@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Grammar-problems with pl/pgsql in 
gram.y" }, { "msg_contents": "Klaus Reger wrote:\n> Am Mittwoch, 16. Mai 2001 16:10 schrieb Jan Wieck:\n> > Here it is. stmt_else is defined as type <stmts>, not <stmt>.\n> > The PLpgSQL_stmt_if struct has a condition query and two\n> > statement lists (type <stmts>). You're trying to put a single\n> > statement into the else part instead of a list of statements.\n> Thank you for the hint! That was it.\n>\n> > Maybe it'll work if you surround it with another\n> > PLpgSQL_stmts struct where your new PLpgSQL_stmt_if is the\n> > only statement in it's list. Since I have some bigger work\n> > outstanding for PL/pgSQL, send the resulting patch (if you\n> > get it to work) directly to me.\n> The patch follows this message. May you tell me what kind of work it is,\n> cause I'm so curous :-). By the way, the next thing I try is a\n> EXCEPTION WHEN OTHER-clause, like in Oracle. Let's look if I'm successful.\n\n complete CURSOR support. With some enhancements in SPI plus a\n little fix in ProcessPortalFetch() I have up to now\n\n Explicit CURSOR:\n\n DECLARE\n <cursor_name> CURSOR [(<arg> <type> [, ...])]\n IS <select_statement>;\n\n The select statement can use any so far declared\n variable or functions arguments in addition to the\n optional cursor arguments. These will be evaluated at\n OPEN time.\n\n There is a new datatype 'refcursor'. The above\n declaration will create a local variable of that type\n with a default \"value\" of the variables name. 
This\n \"value\" will be used for the global cursors name\n during OPEN.\n\n BEGIN\n OPEN <cursor_name> [(<expression> [, ...])];\n\n FETCH <cursor_name>\n INTO {<record> | <row> | <var> [, ...]};\n\n CLOSE <cursor_name>;\n\n The FETCH command sets the global FOUND variable, so\n a typical loop over a resultset looks like\n\n LOOP\n FETCH mycur INTO myrec;\n EXIT WHEN NOT FOUND;\n -- Process the row\n END LOOP;\n\n Reference CURSOR:\n\n DECLARE\n <cursor_name> REFCURSOR [ := <string> ];\n\n BEGIN\n OPEN <cursor_name> FOR <select_statement>;\n\n OPEN <cursor_name> FOR EXECUTE <string>;\n\n The new datatype 'refcursor' can be used to pass any cursor\n around between functions and the application. Cursors used\n inside of functions only don't need transaction blocks. Of\n course, they'll not survive the current transactions end, but\n if funcA() creates a cursor and then calls funcB(refcursor)\n with it, there does not need to be a transaction block around\n it.\n\n What I need to do now is fixing some memory allocation issues\n in PL/pgSQL and move FOR loops to use implicit cursors\n internally. Your patch looks like it doesn't conflict with\n any of my work. I'll commit it soon.\n\n For the EXCEPTIONS thing, well that's another issue. We could\n of course simulate/generate some of the exceptions like \"no\n data found\" and the other one I forgot (telling that a SELECT\n INTO returned multiple results). But we cannot catch a\n duplicate key error, a division by zero or a referential\n integrity violation, because when it happens a statement is\n half way done and the only way to cleanup is rolling back the\n entire transaction (for now, Vadim is working on savepoints).\n So I suggest you don't spend much of your time before we have\n them.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 16 May 2001 15:29:40 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Grammar-problems with pl/pgsql in gram.y" }, { "msg_contents": "Am Mittwoch, 16. Mai 2001 21:29 schrieb Jan Wieck:\n> For the EXCEPTIONS thing, well that's another issue. We could\n> of course simulate/generate some of the exceptions like \"no\n> data found\" and the other one I forgot (telling that a SELECT\n> INTO returned multiple results). But we cannot catch a\n> duplicate key error, a division by zero or a referential\n> integrity violation, because when it happens a statement is\n> half way done and the only way to cleanup is rolling back the\n> entire transaction (for now, Vadim is working on savepoints).\n> So I suggest you don't spend much of your time before we have\n> them.\nOK, I understand. For the beginning I only would like to have a possibility, \nto catch any exception and create my own error handling, ignoring any \ntransaction-stuff. Because I have to port Procedures from Oracle to \nPostgreSQL, I am looking, to imitate the way Oracle takes.\n\nAs I understand with my actual knowledge, this would mean, that every(!) call \nof elog, which terminates the process, has to be caught. But this seems to \ngreat for a new Postgres-hacker, like I am. Or do you see any other \npossibility (maybe extending PLpgSQL_execstate)?\n\nCU, Klaus\n", "msg_date": "Fri, 18 May 2001 15:31:35 +0200", "msg_from": "Klaus Reger <K.Reger@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Grammar-problems with pl/pgsql in gram.y" }, { "msg_contents": "Klaus Reger wrote:\n> Am Mittwoch, 16. Mai 2001 21:29 schrieb Jan Wieck:\n> > For the EXCEPTIONS thing, well that's another issue. 
We could\n> > of course simulate/generate some of the exceptions like \"no\n> > data found\" and the other one I forgot (telling that a SELECT\n> > INTO returned multiple results). But we cannot catch a\n> > duplicate key error, a division by zero or a referential\n> > integrity violation, because when it happens a statement is\n> > half way done and the only way to cleanup is rolling back the\n> > entire transaction (for now, Vadim is working on savepoints).\n> > So I suggest you don't spend much of your time before we have\n> > them.\n> OK, I understand. For the beginning I only would like to have a possibility,\n> to catch any exception and create my own error handling, ignoring any\n> transaction-stuff. Because I have to port Procedures from Oracle to\n> PostgreSQL, I am looking, to imitate the way Oracle takes.\n>\n> As I understand with my actual knowledge, this would mean, that every(!) call\n> of elog, which terminates the process, has to be caught. But this seems to\n> great for a new Postgres-hacker, like I am. Or do you see any other\n> possibility (maybe extending PLpgSQL_execstate)?\n\n Every(!) call to elog with ERROR (or more severe) level\n causes finally a longjump() back into the tcop mainloop.\n PL/pgSQL and PL/Tcl do catch it - PL/pgSQL to tell something\n on DEBUG level and PL/Tcl mainly to unwind the Tcl\n interpreters call stack. But the backend is in an\n inconsistent state at that time, and any subsequent call to\n access methods could cause unpredictable results up to\n complete database corruption. There is no other way right now\n than to go ahead and continue with transaction abort.\n\n Imitation of the Oracle way is something many ppl around here\n would appreciate, but not at the risk of corrupting the\n entire database - that's too high a price.\n\n That said, you'll have little to no chance of getting this\n feature applied to the CVS. 
Doing EXCEPTIONS requires\n savepoints and a real \"back to statements start state\"\n functionality. The recent approach of \"simulating CURSOR\"\n just on the PL grammar level without real cursor support in\n the SPI layer failed for exactly the same reason. Before we\n know \"how\" to do it, we cannot decide on the exact\n appearance, because implementation details might \"require\" at\n least some difference to other databases. If we accept such a\n \"fake\" only approach, we introduce backward compatibility\n problems, since somebody will tell us for sure \"I used the\n syntax of 7.2 already and porting my +20K line application\n now ...\". In PostgreSQL it's easier to get new *features*\n added than existing features ripped out - and people rely on\n that.\n\n\nJan\n\nPS:\n But don't let it get you down, we'll surely find something\n for you to cut loose on :-)\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
That was it.\n>\n> > Maybe it'll work if you surround it with another\n> > PLpgSQL_stmts struct where your new PLpgSQL_stmt_if is the\n> > only statement in it's list. Since I have some bigger work\n> > outstanding for PL/pgSQL, send the resulting patch (if you\n> > get it to work) directly to me.\n> The patch follows this message. May you tell me what kind of work it is,\n> cause I'm so curous :-). By the way, the next thing I try is a\n> EXCEPTION WHEN OTHER-clause, like in Oracle. Let's look if I'm successful.\n\n Patch applied.\n\n Really a smart solution, just touching the gram.y and scan.l\n files and using the existing instruction code. Thanks for\n the contribution.\n\n What about a CASE ... WHEN ...? I think that could be\n implemented in a similar way. Don't do it right now, I might\n be ready to commit the cursor stuff early next week and it'll\n touch alot more than just the grammar and scanner.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Fri, 18 May 2001 17:28:01 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Grammar-problems with pl/pgsql in gram.y" }, { "msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> Patch applied.\n\nHow about some documentation?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 18:07:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Grammar-problems with pl/pgsql in gram.y " }, { "msg_contents": "Am Samstag, 19. 
Mai 2001 00:07 schrieb Tom Lane:\n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > Patch applied.\n>\n> How about some documentation?\n>\n> \t\t\tregards, tom lane\nGood idea! It will follow. \n\nRegards, Klaus\n", "msg_date": "Sat, 19 May 2001 10:20:36 +0200", "msg_from": "Klaus Reger <K.Reger@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Grammar-problems with pl/pgsql in gram.y" }, { "msg_contents": "Am Samstag, 19. Mai 2001 00:07 schrieben Sie:\n\n> How about some documentation?\n\nHer is some documentation. Because I don't have the tools and scripts \ninstalled, to format sgml, I never have written SGML-docs and my english is \nbad, please revisit and correct the additions.\n\nThank you\n\nKlaus\n\n-- \nTWC GmbH\nSchlossbergring 9\n79098 Freiburg i. Br.\nhttp://www.twc.de", "msg_date": "Tue, 22 May 2001 14:42:21 +0200", "msg_from": "Klaus Reger <K.Reger@twc.de>", "msg_from_op": false, "msg_subject": "Re: Grammar-problems with pl/pgsql in gram.y" }, { "msg_contents": "Here is the patch I applied. I cleaned it up a bit. Thanks a lot.\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> Am Samstag, 19. Mai 2001 00:07 schrieben Sie:\n> \n> > How about some documentation?\n> \n> Her is some documentation. Because I don't have the tools and scripts \n> installed, to format sgml, I never have written SGML-docs and my english is \n> bad, please revisit and correct the additions.\n> \n> Thank you\n> \n> Klaus\n> \n> -- \n> TWC GmbH\n> Schlossbergring 9\n> 79098 Freiburg i. Br.\n> http://www.twc.de\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: plsql.sgml\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/plsql.sgml,v\nretrieving revision 2.32\ndiff -w -b -i -B -c -r2.32 plsql.sgml\n*** plsql.sgml\t2001/05/17 21:50:16\t2.32\n--- plsql.sgml\t2001/05/22 12:38:49\n***************\n*** 880,891 ****\n <title>Conditional Control: IF statements</title>\n \n <para>\n! <function>IF</function> statements let you take action\n! according to certain conditions. PL/pgSQL has three forms of\n! IF: IF-THEN, IF-THEN-ELSE, IF-THEN-ELSE IF. NOTE: All\n! PL/pgSQL IF statements need a corresponding <function>END\n! IF</function> statement. In ELSE-IF statements you need two:\n! one for the first IF and one for the second (ELSE IF).\n </para>\n \n <variablelist>\n--- 880,890 ----\n <title>Conditional Control: IF statements</title>\n \n <para>\n! \t<function>IF</function> statements let you execute commands based on\n! certain conditions. PL/PgSQL has four forms of IF: IF-THEN, IF-THEN-ELSE,\n! IF-THEN-ELSE IF, IF-THEN-ELSIF-THEN-ELSE. NOTE: All PL/PgSQL IF statements need\n! \ta corresponding <function>END IF</function> clause. With ELSE-IF statements,\n! you need two: one for the first IF and one for the second (ELSE IF).\n </para>\n \n <variablelist>\n***************\n*** 979,984 ****\n--- 978,1018 ----\n </para>\n </listitem>\n </varlistentry>\n+ \n+ <varlistentry>\n+ <term>\n+ IF-THEN-ELSIF-ELSE\n+ </term>\n+ \n+ <listitem>\n+ <para>\n+ IF-THEN-ELSIF-ELSE allows you to test multiple conditions\n+ in one statement. Internally it is handled as nested \n+ IF-THEN-ELSE-IF-THEN commands. 
The optional ELSE\n+ branch is executed when none of the conditions are met.\n+ </para>\n+ \n+ <para>\n+ Here is an example:\n+ </para>\n+ \n+ <programlisting>\n+ IF number = 0 THEN\n+ result := ''zero'';\n+ ELSIF number &lt; 0 THEN\n+ result := ''negative'';\n+ ELSIF number &gt; 0 THEN \n+ result := ''positive'';\n+ ELSE\n+ -- now it seems to be NULL\n+ result := ''NULL'';\n+ END IF;\n+ </programlisting>\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n+ \n </variablelist>\n </sect3>", "msg_date": "Tue, 22 May 2001 09:52:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Grammar-problems with pl/pgsql in gram.y" } ]
[ { "msg_contents": "Hi all (I hope this is the correct list),\n\nUnder Oracle there is v$parameter which list ALL config varables. Under\npsql there is the SHOW command, but this only lists 1 variable. I have\nwritten a shell script (attached) that shows ALL know variables. My\nquestions is can this script get included under contrib directory and is\nthere a way to make it into a view. I believe this kind of info will\nhelp in trouble shooting problems.\n\nthanks\nJim", "msg_date": "Wed, 16 May 2001 12:53:33 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Running config vars" }, { "msg_contents": "\nI will try to get this feature into 7.2, though as backend code.\n\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> Hi all (I hope this is the correct list),\n> \n> Under Oracle there is v$parameter which list ALL config varables. Under\n> psql there is the SHOW command, but this only lists 1 variable. I have\n> written a shell script (attached) that shows ALL know variables. My\n> questions is can this script get included under contrib directory and is\n> there a way to make it into a view. I believe this kind of info will\n> help in trouble shooting problems.\n> \n> thanks\n> Jim\n> \n> \n\n[ application/octet-stream is not supported, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 18 May 2001 10:39:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Running config vars" }, { "msg_contents": "Bruce Momjian writes:\n\n> I will try to get this feature into 7.2, though as backend code.\n\nI've long been thinking that the SET/RESET/SHOW commands could be\nimplemented as stored procedures and/or functions. That way you could\ncleanly return the settings to the client application and compose new\nvalues out of arbitrary expressions. If ever functions can return\nmultiple rows you could also arrange for them to show all current\nsettings.\n\n>\n>\n> [ Charset ISO-8859-1 unsupported, converting... ]\n> > Hi all (I hope this is the correct list),\n> >\n> > Under Oracle there is v$parameter which list ALL config varables. Under\n> > psql there is the SHOW command, but this only lists 1 variable. I have\n> > written a shell script (attached) that shows ALL know variables. My\n> > questions is can this script get included under contrib directory and is\n> > there a way to make it into a view. I believe this kind of info will\n> > help in trouble shooting problems.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 18 May 2001 17:59:29 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Running config vars" }, { "msg_contents": "> Bruce Momjian writes:\n>\n> > I will try to get this feature into 7.2, though as backend code.\n>\n> I've long been thinking that the SET/RESET/SHOW commands could be\n> implemented as stored procedures and/or functions. That way you could\n> cleanly return the settings to the client application and compose new\n> values out of arbitrary expressions. 
If ever functions can return\n> multiple rows you could also arrange for them to show all current\n> settings.\n\nI added a function to the c library that I sent to patches the other day --\ncalled \"get_var\". It implements what you describe above (that is, one\nvariable at a time). I'd like to get this into contrib, or do the work\nnecessary to get it into the backend -- any interest in letting me give that\na try?\n\nThanks,\n\nJoe", "msg_date": "Fri, 18 May 2001 10:54:59 -0700", "msg_from": "\"Joe Conway\" <joe@conway-family.com>", "msg_from_op": false, "msg_subject": "Re: Running config vars" } ]
[ { "msg_contents": "Hi all (I hope this is the correct list),\n\nUnder Oracle there is v$parameter which list ALL config varables. Under\npsql there is the SHOW command, but this only lists 1 variable. I have\nwritten a shell script (attached) that shows ALL know variables. My\nquestions is can this script get included under contrib directory and is\nthere a way to make it into a view. I believe this kind of info will\nhelp in trouble shooting problems.\n\nthanks\nJim", "msg_date": "Wed, 16 May 2001 13:02:29 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@tylerdrive.org>", "msg_from_op": true, "msg_subject": "Fw: Running config vars" }, { "msg_contents": "\nI think the way to do this is for SHOW ALL to show all settings.\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> \n> Hi all (I hope this is the correct list),\n> \n> Under Oracle there is v$parameter which list ALL config varables. Under\n> psql there is the SHOW command, but this only lists 1 variable. I have\n> written a shell script (attached) that shows ALL know variables. My\n> questions is can this script get included under contrib directory and is\n> there a way to make it into a view. I believe this kind of info will\n> help in trouble shooting problems.\n> \n> thanks\n> Jim\n> \n> \n> \n> \n> \n> \n\n[ application/octet-stream is not supported, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 16 May 2001 13:17:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fw: Running config vars" } ]
[ { "msg_contents": "I was looking for some way via standard SQL (I use perl DBI) to list\nthese variables. I don't believe the SHOW command is available via DBI\n\nJim\n\n> \n> I think the way to do this is for SHOW ALL to show all settings.\n> \n> [ Charset ISO-8859-1 unsupported, converting... ]\n> > \n> > Hi all (I hope this is the correct list),\n> > \n> > Under Oracle there is v$parameter which list ALL config varables. Under\n> > psql there is the SHOW command, but this only lists 1 variable. I have\n> > written a shell script (attached) that shows ALL know variables. My\n> > questions is can this script get included under contrib directory and is\n> > there a way to make it into a view. I believe this kind of info will\n> > help in trouble shooting problems.\n> > \n> > thanks\n> > Jim\n> \n> [ application/octet-stream is not supported, skipping... ]\n> \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n", "msg_date": "Wed, 16 May 2001 13:20:40 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@tylerdrive.org>", "msg_from_op": true, "msg_subject": "Re: Fw: Running config vars" } ]
[ { "msg_contents": "Try this with current sources:\n\nAdd this line\n\nupdate pg_proc set prosrc = 'foobar' where proname = 'check_primary_key';\n\nto the end of regress/sql/create_function_1.sql. (The function is created\nearlier in this file.) This simulates the situation where the\nDynamic_library_path (under development) has become bad between creation\nand use of a function.\n\nThen run the regression tests (I ran installcheck). You will get a\nsegfault in the 'triggers' test which looks something like this:\n\nDEBUG: StartTransactionCommand\nDEBUG: query: insert into fkeys2 values (10, '1', 1);\nDEBUG: ProcessQuery\nERROR: Can't find function foobar in file /home/peter/pgsql/src/test/regress/../../../contrib/spi/refint.so\nDEBUG: AbortCurrentTransaction\nDEBUG: StartTransactionCommand\nDEBUG: query: insert into fkeys2 values (30, '3', 2);\nDEBUG: ProcessQuery\npg-install/bin/postmaster: reaping dead processes...\n\nThe core file ends like this:\n\n#0 0x7f7f7f7f in ?? ()\n#1 0x80f01ad in ExecBRInsertTriggers (estate=0x82a14e8, rel=0x4035bb34,\n trigtuple=0x82a1b0c) at trigger.c:900\n\n(I don't know what the #0 is trying to tell me.)\n\nMy best guess is that the trigger fmgr lookup in trigger.c:846\n\n\tif (trigger->tgfunc.fn_oid == InvalidOid)\n\t\tfmgr_info(trigger->tgfoid, &trigger->tgfunc);\n\nmight be reading out of an incompletely initialized trigger->tgfunc\nstructure. 
This is supported by the fact that if I move the\n\nfinfo->fn_oid = functionId;\n\nassignment in fmgr_info() to the very end of that function, this passes\ngracefully (because the elog comes before the assignment).\n\nThis might even be a workable fix, but I'm wondering whether elog(ERROR)\nshould not flush the trigger cache.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 16 May 2001 19:39:19 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Triggers might be caching broken fmgr info" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> My best guess is that the trigger fmgr lookup in trigger.c:846\n> \tif (trigger->tgfunc.fn_oid == InvalidOid)\n> \t\tfmgr_info(trigger->tgfoid, &trigger->tgfunc);\n> might be reading out of an incompletely initialized trigger->tgfunc\n> structure.\n\nYes, that's the problem.\n\n> This is supported by the fact that if I move the\n> finfo->fn_oid = functionId;\n> assignment in fmgr_info() to the very end of that function, this passes\n> gracefully (because the elog comes before the assignment).\n\nI think this is an OK fix, since utils/fmgr/README documents as valid\nthe technique trigger.c is using:\n\n\tfn_oid = InvalidOid can be used\n\tto denote a not-yet-initialized FmgrInfo struct.\n\nfmgr_info is clearly failing to live up to the implications of that\ncommitment. Please move the fn_oid assignment, and insert a note\nthat it must be last...\n\n> This might even be a workable fix, but I'm wondering whether elog(ERROR)\n> should not flush the trigger cache.\n\nThe cache in question is part of the relcache, and no we don't want to\nflush it on every elog(ERROR).\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 16 May 2001 15:07:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Triggers might be caching broken fmgr info " } ]
[ { "msg_contents": "I've been talking with SGI tech support about my problem with installing\nPostgres 7.1.1 on the SGI (IRIX 6.5.12 using the MIPSPro 7.3 compiler).\nFortunately, one of my SGI's (an octane) built PG without any problem so\nthis is just academic now (but probably useful for others wanting to\ninstall PG on the SGI). The other SGI (an o2) seems to lose definitions\nof strdup and timeval and some other structures. On the specific\nquestion of strdup, the SGI person told me this:\n\n\n> Hi Tony,\n>\n> From my research I came across this:\n>\n> strdup is not part of ISO C, either C 89, C90, C95, or the\n> latest, C99.\n>\n> As a a result, there is no strdup() prototype visible, so\n> the compiler assumes strdup returns an int.\n> An int cannot be transformed to a char * without a cast,\n> so the compiler gives a diagnostic.\n>\n>\n> I noticed in your code string.h is not included. The man page for strdup\n> specifies the\n> inclusion of this header. Please advise.\n>\n>\n\nAny comments?\n\n-Tony\n\n\n", "msg_date": "Wed, 16 May 2001 11:02:01 -0700", "msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>", "msg_from_op": true, "msg_subject": "Followup to IRIX6.5 installation of PG 7.1.1" }, { "msg_contents": "\"G. Anthony Reina\" <reina@nsi.edu> writes:\n>> I noticed in your code string.h is not included. The man page for strdup\n>> specifies the inclusion of this header. Please advise.\n\n> Any comments?\n\n<string.h> is included in every Postgres source file (via c.h).\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 16 May 2001 15:14:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Followup to IRIX6.5 installation of PG 7.1.1 " }, { "msg_contents": "Tom Lane wrote:\n\n> <string.h> is included in every Postgres source file (via c.h).\n>\n\nYep. 
That's what I expected.\n\nSGI technical support seems to think that the problem is with the POSIX flag.\n\n\n\" Have you defined any POSIX variables, such as -D_POSIX_SOURCE\n or included pthread.h? When you enable POSIX you will incur a lot of\n undefined symbols that are not POSIX compliant. You can check the symbols\n\n in the POSIX standards. \"\n\n\nI can't say that I understand this at all, but I believe she is saying that\nthe -D_POSIX_SOURCE flag has caused some function declarations to be hidden\nfrom the compiler (?).\n\nIn any case, since I have a working copy of 7.1.1 for IRIX, I'll leave all of\nit alone for now as it has gone past my comprehension. SGI has a freeware site\nwith PostgreSQL 7.0.3. I'm sure that they'll figure this out when they try to\nbuild 7.1.1 for the site.\n\n-Tony\n\n\n", "msg_date": "Wed, 16 May 2001 14:27:49 -0700", "msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>", "msg_from_op": true, "msg_subject": "Re: Followup to IRIX6.5 installation of PG 7.1.1" } ]
[ { "msg_contents": "Why doesn't pgaccess use pgtksh instead of wish? That way we don't have\nto play with the shared library path in pgaccess, and we exercise pgtksh\nsome.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 16 May 2001 20:58:58 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "pgaccess and pgtksh" } ]
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nI am loathe to even bring this up, but with two messages today about it, I am \ngoing to be short and sweet:\n\nWe don't have a reasonable upgrade path. ASCII dump->install \nnew->initdb->restore is not a reasonable upgrade. It is confusing to the \nnewbie, and should be fixed. We used to have an upgrade path in pg_upgrade \n- -- but it no longer works (but it was most definitely a fine effort for its \ntime, Bruce!). Furthermore, the dump/restore cycle is a pain in the neck when \ntables get larger than a few hundred megabytes. It's worse when the newer \nversion won't properly restore from the old dump -- and you have to edit, \npotentially by hand, a multi-gigabyte dump to get it to restore.\n\nA seamless binary upgrade utility will require Deep Knowledge -- of the kind \nthat very few pgsql-hackers have. Deeper knowledge than I have, that's for \nsure -- or I would have already done it.\n\nI am not going to beat it into the ground this time -- I have argued the \nissue at length before (some would say for far too long! :-)). But I am \ngoing to drop reminders as we get bug reports and compliants. And Iwould \nremind the group that MySQL does this easily -- it has utilities to migrate \nbetween its different table types. \n- --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.4 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7Augl5kGGI8vV9eERAgBbAKCjS5yOyYFjTYMBEf5+I3s6uvoSTQCeKhCm\nfyqa45WqVjUgZF26YZ0/M2w=\n=4TMK\n-----END PGP SIGNATURE-----\n", "msg_date": "Wed, 16 May 2001 16:50:42 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": true, "msg_subject": "Upgrade issue (again)." }, { "msg_contents": "> I am not going to beat it into the ground this time -- I have argued the \n> issue at length before (some would say for far too long! :-)). 
But I am \n> going to drop reminders as we get bug reports and compliants. And Iwould \n> remind the group that MySQL does this easily -- it has utilities to migrate \n> between its different table types. \n\nIt is my understanding that the MySQL heap format hasn't changed since\n1991. I think that helps them upgrade. We add features too quickly. :-)\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 16 May 2001 18:32:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Upgrade issue (again)." }, { "msg_contents": "On Wed, 16 May 2001, Lamar Owen wrote:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> I am loathe to even bring this up, but with two messages today about it, I am\n> going to be short and sweet:\n>\n> We don't have a reasonable upgrade path. ASCII dump->install\n> new->initdb->restore is not a reasonable upgrade. It is confusing to the\n> newbie, and should be fixed. We used to have an upgrade path in pg_upgrade\n> - -- but it no longer works (but it was most definitely a fine effort for its\n> time, Bruce!). Furthermore, the dump/restore cycle is a pain in the neck when\n> tables get larger than a few hundred megabytes. It's worse when the newer\n> version won't properly restore from the old dump -- and you have to edit,\n> potentially by hand, a multi-gigabyte dump to get it to restore.\n\nPersonally ... I just upgraded 13 gig worth of databases using\ndump/restore, didn't have to end a single file, in less then 1.5hrs from\nstart to finish ... *shrug*\n\n\n", "msg_date": "Wed, 16 May 2001 20:05:27 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Upgrade issue (again)." 
}, { "msg_contents": ">> I am loathe to even bring this up, but with two messages today about\n>> it, I am going to be short and sweet:\n>> \n>> We don't have a reasonable upgrade path.\n\nThis is one of many, many things that need work. It happens to be a\nthing that requires a *lot* of work for, well, not so much payback\n(certainly not a benefit you'd get every day you use Postgres).\nNot to mention that it's a lot of pretty boring work.\n\nSo, personally, there are many other things that I will get to before\nI worry about this. Sorry that my priorities don't square with yours,\nbut that's how it is. I'm not standing in the way of someone else\ntaking up the problem ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 16 May 2001 19:34:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Upgrade issue (again). " }, { "msg_contents": "At 04:50 PM 16-05-2001 -0400, Lamar Owen wrote:\n>-----BEGIN PGP SIGNED MESSAGE-----\n>Hash: SHA1\n>\n>I am loathe to even bring this up, but with two messages today about it, I\nam \n>going to be short and sweet:\n>\n>We don't have a reasonable upgrade path. ASCII dump->install \n>new->initdb->restore is not a reasonable upgrade. It is confusing to the \n>newbie, and should be fixed. We used to have an upgrade path in pg_upgrade \n\n>time, Bruce!). Furthermore, the dump/restore cycle is a pain in the neck\nwhen \n>tables get larger than a few hundred megabytes. It's worse when the newer \n\nI won't mind a better upgrade method. But so far I pipe the dump into gzip,\nand the resulting file is of manageable size.\n\nWhat I find annoying is that pg_dumpall doesn't support username and\npassword. 
So far I just do a pg_dump of the relevant databases, and\nrecreate the users manually when installing.\n\nAlso to reload the file I do:\nzcat gzippedusernameandpassword.gz dbfile.gz | psql\n\nThat's a bit ugly :).\n\nCheerio,\nLink.\n\n", "msg_date": "Thu, 17 May 2001 13:36:57 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Upgrade issue (again)." }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Wed, 16 May 2001, Lamar Owen wrote:\n> \n> > -----BEGIN PGP SIGNED MESSAGE-----\n> > Hash: SHA1\n> >\n> > I am loathe to even bring this up, but with two messages today about it, I am\n> > going to be short and sweet:\n> >\n> > We don't have a reasonable upgrade path. ASCII dump->install\n> > new->initdb->restore is not a reasonable upgrade. It is confusing to the\n> > newbie, and should be fixed. We used to have an upgrade path in pg_upgrade\n> > - -- but it no longer works (but it was most definitely a fine effort for its\n> > time, Bruce!). Furthermore, the dump/restore cycle is a pain in the neck when\n> > tables get larger than a few hundred megabytes. It's worse when the newer\n> > version won't properly restore from the old dump -- and you have to edit,\n> > potentially by hand, a multi-gigabyte dump to get it to restore.\n> \n> Personally ... I just upgraded 13 gig worth of databases using\n> dump/restore, didn't have to end a single file, in less then 1.5hrs from\n> start to finish ... *shrug*\n\nOn the other hand, I just had a user/administrator decide to upgrade an\ninstance of postgres on a secondary server for testing. But he was using\nX and never noticed that he was upgrading our primary production server.\nIn the end, no data was lost. But I had to pick up the pieces, and it\ntook me somewhat more than 1.5 hours to do so, as no plans for\ndowntime or transition had been made at all. I would have preferred a\nclean upgrade path.\n\nOf course, the administrator should have known better. 
And you could\nargue that the sysadmin has demonstrated that he has too many privileges.\nAt least one of those statements is true. But I still would rather have a\nseamless upgrade.\n\n-- \nKarl\n", "msg_date": "Thu, 17 May 2001 09:06:17 -0400", "msg_from": "Karl DeBisschop <kdebisschop@alert.infoplease.com>", "msg_from_op": false, "msg_subject": "Re: Upgrade issue (again)." }, { "msg_contents": "Lincoln Yeoh writes:\n\n> What I find annoying is that pg_dumpall doesn't support username and\n> password.\n\nWorking on that. Lots of merge failures...\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 17 May 2001 16:44:49 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: Upgrade issue (again)." }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Wednesday 16 May 2001 18:32, Bruce Momjian wrote:\n> > And Iwould\n> > remind the group that MySQL does this easily -- it has utilities to\n> > migrate between its different table types.\n\n> It is my understanding that the MySQL heap format hasn't changed since\n> 1991. I think that helps them upgrade. We add features too quickly. :-)\n\nWhich is a two-edged sword. :-) The features are great -- commercial grade. \nThe migration isn't so great.\n- --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.4 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7A/gD5kGGI8vV9eERAsHhAJ9mYGq2pl1gqWjFNkxwAP36xfMyQwCgrUS8\njBjLoYf4UIzPxKdAmHpSbMQ=\n=Jgar\n-----END PGP SIGNATURE-----\n", "msg_date": "Thu, 17 May 2001 12:10:40 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": true, "msg_subject": "Re: Upgrade issue (again)." 
}, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Wednesday 16 May 2001 19:34, Tom Lane wrote:\n> >> I am loathe to even bring this up, but with two messages today about\n> >> it, I am going to be short and sweet:\n> >> We don't have a reasonable upgrade path.\n\n> This is one of many, many things that need work. It happens to be a\n> thing that requires a *lot* of work for, well, not so much payback\n> (certainly not a benefit you'd get every day you use Postgres).\n> Not to mention that it's a lot of pretty boring work.\n\nAll the above are a little too true. And I wish, I really wish, I had a \nready solution to make it less work on everybody concerned.\n\n> So, personally, there are many other things that I will get to before\n> I worry about this. Sorry that my priorities don't square with yours,\n> but that's how it is. I'm not standing in the way of someone else\n> taking up the problem ...\n\nNo need to apologize -- your top-notch skills are in wide demand all across \nthe backend. :-)\n\nAs are the particular skills of each of the core and key hackers.\n\nAs I said, I was not really enjoying the thought of bringing it up, but I \nfelt I had to do my duty to the userbase.\n- --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.4 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7A/lu5kGGI8vV9eERAvRLAKDDGkLYVthOX5sCGA6DrSy2H6SxEACgqa5R\nQ7C+14jxqpNY3L4WSdopZUY=\n=ezlw\n-----END PGP SIGNATURE-----\n", "msg_date": "Thu, 17 May 2001 12:16:43 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": true, "msg_subject": "Re: Upgrade issue (again)." 
}, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Wednesday 16 May 2001 19:05, The Hermit Hacker wrote:\n> On Wed, 16 May 2001, Lamar Owen wrote:\n> > I am loathe to even bring this up, but with two messages today about it,\n> > I am going to be short and sweet:\n\n> > We don't have a reasonable upgrade path. ASCII dump->install\n> > new->initdb->restore is not a reasonable upgrade. Furthermore, the \n> > dump/restore cycle is a\n> > pain in the neck when tables get larger than a few hundred megabytes. \n\n> Personally ... I just upgraded 13 gig worth of databases using\n> dump/restore, didn't have to end a single file, in less then 1.5hrs from\n> start to finish ... *shrug*\n\nAnd 1.5 hours of downtime wasn't a problem? *raised eyebrow* :-) Or did you \nmigrate to a different box running the new version? Or were you running more \nthan one version on the one box? Some don't have that choice as easy, nor \nare they as experienced as you and I. Nor do they desire that much downtime.\n\nMy vision of an ideal upgrade:\nStop old version postmaster.\nInstall new verison.\nStart new version postmaster.\nNew version migrates in place while being able to give access to the old data \nwith the only downtime being stopping the old postmaster and starting the new.\n\nThis is hard. And, as Tom so well put it: it's a lot of boring work to get \nit to work right. 
But, IMHO, this is one of the most gifted and talented set \nof hackers in any open source project -- surely we could find both a concept \nand implementation of a way to actually do the inplace seamless upgrade, \ncouldn't we?\n- --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.4 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7A/sx5kGGI8vV9eERApiwAKDRzYZSmwpcwlsRcexuGovNA77uNACeLIOS\ng2O3Q0KP4+ODIuqjjvzu3gY=\n=WRxi\n-----END PGP SIGNATURE-----\n", "msg_date": "Thu, 17 May 2001 12:24:14 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": true, "msg_subject": "Re: Upgrade issue (again)." }, { "msg_contents": "Best way to upgrade might be to do something as simple as get the\nmaster to master replication working.\n\nOld version on one box, new version on another. Fire up for\nreplication -- done automatically -- with both boxes acting as\nmasters. Change all the code to use the new server (or new database\non same server), then remove the old one from the queue and turn off\nreplication.\n\n0 down time, no data migration issues (handled internally by postgres\nreplication), etc. Worst part is you'll use double the disk space for\nthe period of time with 2 masters, and the system will run slower but\nat least it's up.\n\nThis is long term future though. Need master to master replication\nfirst as both servers have to be able to update the other with the new\ninformation while the code that uses it is being changed around. 
It\nalso means that replication will need to be version independent.\n\nOf course, I'm not the one doing this but it sure seems like the\neasiest way to go about it since most of the features involved are on\nthe drawing board already.\n\n--\nRod Taylor\n\n----- Original Message -----\nFrom: \"Lamar Owen\" <lamar.owen@wgcr.org>\nTo: \"The Hermit Hacker\" <scrappy@hub.org>\nCc: <pgsql-hackers@postgresql.org>\nSent: Thursday, May 17, 2001 12:24 PM\nSubject: Re: [HACKERS] Upgrade issue (again).\n\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> On Wednesday 16 May 2001 19:05, The Hermit Hacker wrote:\n> > On Wed, 16 May 2001, Lamar Owen wrote:\n> > > I am loathe to even bring this up, but with two messages today\nabout it,\n> > > I am going to be short and sweet:\n>\n> > > We don't have a reasonable upgrade path. ASCII dump->install\n> > > new->initdb->restore is not a reasonable upgrade. Furthermore,\nthe\n> > > dump/restore cycle is a\n> > > pain in the neck when tables get larger than a few hundred\nmegabytes.\n>\n> > Personally ... I just upgraded 13 gig worth of databases using\n> > dump/restore, didn't have to end a single file, in less then\n1.5hrs from\n> > start to finish ... *shrug*\n>\n> And 1.5 hours of downtime wasn't a problem? *raised eyebrow* :-) Or\ndid you\n> migrate to a different box running the new version? Or were you\nrunning more\n> than one version on the one box? Some don't have that choice as\neasy, nor\n> are they as experienced as you and I. Nor do they desire that much\ndowntime.\n>\n> My vision of an ideal upgrade:\n> Stop old version postmaster.\n> Install new verison.\n> Start new version postmaster.\n> New version migrates in place while being able to give access to the\nold data\n> with the only downtime being stopping the old postmaster and\nstarting the new.\n>\n> This is hard. And, as Tom so well put it: it's a lot of boring work\nto get\n> it to work right. 
But, IMHO, this is one of the most gifted and\ntalented set\n> of hackers in any open source project -- surely we could find both a\nconcept\n> and implementation of a way to actually do the inplace seamless\nupgrade,\n> couldn't we?\n> - --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.0.4 (GNU/Linux)\n> Comment: For info see http://www.gnupg.org\n>\n> iD8DBQE7A/sx5kGGI8vV9eERApiwAKDRzYZSmwpcwlsRcexuGovNA77uNACeLIOS\n> g2O3Q0KP4+ODIuqjjvzu3gY=\n> =WRxi\n> -----END PGP SIGNATURE-----\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Thu, 17 May 2001 12:43:49 -0400", "msg_from": "\"Rod Taylor\" <rod.taylor@inquent.com>", "msg_from_op": false, "msg_subject": "Re: Upgrade issue (again)." }, { "msg_contents": "On Thu, 17 May 2001, Lamar Owen wrote:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> On Wednesday 16 May 2001 19:05, The Hermit Hacker wrote:\n> > On Wed, 16 May 2001, Lamar Owen wrote:\n> > > I am loathe to even bring this up, but with two messages today about it,\n> > > I am going to be short and sweet:\n>\n> > > We don't have a reasonable upgrade path. ASCII dump->install\n> > > new->initdb->restore is not a reasonable upgrade. Furthermore, the\n> > > dump/restore cycle is a\n> > > pain in the neck when tables get larger than a few hundred megabytes.\n>\n> > Personally ... I just upgraded 13 gig worth of databases using\n> > dump/restore, didn't have to end a single file, in less then 1.5hrs from\n> > start to finish ... *shrug*\n>\n> And 1.5 hours of downtime wasn't a problem? *raised eyebrow* :-) Or\n> did you migrate to a different box running the new version? Or were\n> you running more than one version on the one box? Some don't have\n> that choice as easy, nor are they as experienced as you and I. Nor do\n> they desire that much downtime.\n\nWasn't a problem ... 
I pre-warned all our clients, some of which make such\nheavy use of the DB that we have to run vacuum on it every few hours, and,\nto them, the benefits outweigh'd the brief bit of downtime ... *shrug*\n\n", "msg_date": "Thu, 17 May 2001 13:59:39 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Upgrade issue (again)." }, { "msg_contents": ">>>>> \"Bruce\" == Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n >> I am not going to beat it into the ground this time -- I have\n >> argued the issue at length before (some would say for far too\n >> long! :-)). But I am going to drop reminders as we get bug\n >> reports and compliants. And Iwould remind the group that MySQL\n >> does this easily -- it has utilities to migrate between its\n >> different table types.\n\n Bruce> It is my understanding that the MySQL heap format hasn't\n Bruce> changed since 1991. I think that helps them upgrade. We\n Bruce> add features too quickly. :-)\n\nJust to clear this little bit of particularly unconstructive FUD from\nthe air , no they haven't changed NISAM but created a new format :-\n\n Upgrading from a 3.22 version to 3.23\n -------------------------------------\n\n *MySQL* 3.23 supports tables of the new `MyISAM' type and the old\n `NISAM' type. You don't have to convert your old tables to use these\n with 3.23. By default, all new tables will be created with type\n `MyISAM' (unless you start `mysqld' with the\n `--default-table-type=isam' option. You can change an `ISAM' table to a\n `MyISAM' table with `ALTER TABLE' or the Perl script\n `mysql_convert_table_format'.\n\nSincerely,\n\nAdrian Phillips\n\n-- \nYour mouse has moved.\nWindows NT must be restarted for the change to take effect.\nReboot now? [OK]\n", "msg_date": "18 May 2001 15:07:14 +0200", "msg_from": "Adrian Phillips <adrianp@powertech.no>", "msg_from_op": false, "msg_subject": "Re: Upgrade issue (again)." 
}, { "msg_contents": "On Thu, May 17, 2001 at 12:43:49PM -0400, Rod Taylor wrote:\n> Best way to upgrade might bee to do something as simple as get the\n> master to master replication working.\n\nMaster-to-master replication is not simple, and (fortunately) isn't \nstrictly necessary. The minimal sequence is,\n\n1. Start a backup and a redo log at the same time.\n2. Start the new database and read the backup.\n3. Get the new database consuming the redo logs.\n4. When the new database catches up, make it a hot failover for the old.\n5. Turn off the old database and fail over.\n\nThe nice thing about this approach is that all the parts used are \nessential parts of an enterprise database anyway, regardless of their \nusefulness in upgrading. \n\nMaster-to-master replication is nice for load balancing, but not\nnecessary for failover. Its chief benefit, there, is that you wouldn't \nneed to abort the uncompleted transactions on the old database when \nyou make the switch. But master-to-master replication is *hard* to\nmake work, and intrusive besides.\n\nNathan Myers\nncm@zembu.com\n\n", "msg_date": "Fri, 18 May 2001 10:36:00 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Upgrade issue (again)." } ]
[ { "msg_contents": "I would like to bring my 2 grains of salt to the discussion.\n\nThe advantage of putting configuration files in etc is that the\nadministrator knows that all his configuration files are there. For backup\nor upgrade purpose, it is easy to review the /etc directory and make\nmodifcation if needed. It is rather a pain to look everywhere where the\nconfiguration files are...\n\nPG could look first in datadir and if nothing is found then go in /etc. This\nwould allow to run multiple instance of postifix (conf in data dir) or one\ninstance with a a standard place to put the configuration file...\n\nI would add also, that with the extension mechanism, it would be nice if PG\ncould look for .so in the standard library path.\n\nFranck Martin\nNetwork and Database Development Officer\nSOPAC South Pacific Applied Geoscience Commission\nFiji\nE-mail: franck@sopac.org <mailto:franck@sopac.org> \nWeb site: http://www.sopac.org/\n<http://www.sopac.org/> Support FMaps: http://fmaps.sourceforge.net/\n<http://fmaps.sourceforge.net/> \n\nThis e-mail is intended for its addresses only. Do not forward this e-mail\nwithout approval. The views expressed in this e-mail may not be necessarily\nthe views of SOPAC.\n\n\n", "msg_date": "Thu, 17 May 2001 10:48:13 +1200", "msg_from": "Franck Martin <Franck@sopac.org>", "msg_from_op": true, "msg_subject": "RE: Configurable path to look up dynamic libraries" }, { "msg_contents": "> The advantage of putting configuration files in etc is that the\n> administrator knows that all his configuration files are there. For backup\n> or upgrade purpose, it is easy to review the /etc directory and make\n> modifcation if needed. It is rather a pain to look everywhere where the\n> configuration files are...\n>\n> PG could look first in datadir and if nothing is found then go in\n> /etc. 
This\n> would allow running multiple instances of postgres (conf in data dir) or one\n> instance with a standard place to put the configuration file...\n\nSurely you would want config files in /usr/local/etc - definitely not /etc\n...\n\nChris\n\n", "msg_date": "Thu, 17 May 2001 10:00:14 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: Configurable path to look up dynamic libraries" } ]
[ { "msg_contents": "\nFor those of you who have missed it, here\n\nhttp://www.google.com/search?q=cache:web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf+clark+end+to+end&hl=en\n\nis the paper some of us mention, \"END-TO-END ARGUMENTS IN SYSTEM DESIGN\"\nby Saltzer, Reed, and Clark.\n\nThe abstract is:\n\n This paper presents a design principle that helps guide placement of\n functions among the modules of a distributed computer system. The\n principle, called the end-to-end argument, suggests that functions\n placed at low levels of a system may be redundant or of little value\n when compared with the cost of providing them at that low level.\n Examples discussed in the paper include bit error recovery, security\n using encryption, duplicate message suppression, recovery from\n system crashes, and delivery acknowledgement. Low level mechanisms\n to support these functions are justified only as performance\n enhancements.\n\nIt was written in 1981 and is undiminished by the subsequent decades.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Thu, 17 May 2001 00:24:28 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": true, "msg_subject": "\"End-to-end\" paper" }, { "msg_contents": "At 12:24 AM 17-05-2001 -0700, Nathan Myers wrote:\n>\n>For those of you who have missed it, here\n>\n>http://www.google.com/search?q=cache:web.mit.edu/Saltzer/www/publications/e\nndtoend/endtoend.pdf+clark+end+to+end&hl=en\n>\n>is the paper some of us mention, \"END-TO-END ARGUMENTS IN SYSTEM DESIGN\"\n>by Saltzer, Reed, and Clark.\n>\n>The abstract is:\n>\n> This paper presents a design principle that helps guide placement of\n> functions among the modules of a distributed computer system. 
The\n> principle, called the end-to-end argument, suggests that functions\n> placed at low levels of a system may be redundant or of little value\n> when compared with the cost of providing them at that low level.\n> Examples discussed in the paper include bit error recovery, security\n> using encryption, duplicate message suppression, recovery from\n> system crashes, and delivery acknowledgement. Low level mechanisms\n> to support these functions are justified only as performance\n> enhancements.\n>\n>It was written in 1981 and is undiminished by the subsequent decades.\n>\n\nMaybe I don't understand the paper.\n\nThe end-to-end argument might be true if taking the monolithic approach. I\nfind more useful ideas gleaned from the RFCs, TCP/IP and the OSI 7 layer\nmodel: modularity, \"useful standard interfaces\", \"Be liberal in what you\naccept, and conservative in what you send\" and so on.\n\nWithin a module I figure the end to end argument might hold, but the author\nkeeps talking about networks and networking.\n\nSSL and TCP are useful. The various CRC checks down the IP stack to the\ndatalink layer have their uses too.\n\nBy splitting stuff up at appropriate points, adding or substituting objects\nat various layers becomes so much easier. People can download Postgresql\nover token ring, Gigabit ethernet, X.25 and so on.\n\nSplitting stuff up does mean that the bits and pieces now do have a certain\nresponsibility. If those responsibilities involve some redundancies in\nerror checking or encryption or whatever, so be it, because if done well\npeople can use those bits and pieces in interesting ways never dreamed of\ninitially.\n\nFor example SSL over TCP over IPSEC over encrypted WAP works (even though\nIPSEC is way too complicated :)). 
There's so much redundancy there, but at\nthe same time it's not a far fetched scenario - just someone ordering\nonline on a notebook pc.\n\nBut if a low level module never bothered with error\ncorrection/detection/handling or whatever and was optimized for an\napplication specific purpose, it's harder to use it for other purposes. And\nif you do, some chap could post an article to Bugtraq on it, mentioning\nexploit, DoS or buffer overflow.\n\nCheerio,\nLink.\n\n\n\n", "msg_date": "Thu, 17 May 2001 18:04:54 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: \"End-to-end\" paper" }, { "msg_contents": "On Thu, May 17, 2001 at 06:04:54PM +0800, Lincoln Yeoh wrote:\n> At 12:24 AM 17-05-2001 -0700, Nathan Myers wrote:\n> >\n> >For those of you who have missed it, here\n> >\n> >http://www.google.com/search?q=cache:web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf+clark+end+to+end&hl=en\n> >\n> >is the paper some of us mention, \"END-TO-END ARGUMENTS IN SYSTEM DESIGN\"\n> >by Saltzer, Reed, and Clark.\n> >\n> >The abstract is:\n> >\n> > This paper presents a design principle that helps guide placement\n> > of functions among the modules of a distributed computer system.\n> > The principle, called the end-to-end argument, suggests that\n> > functions placed at low levels of a system may be redundant or\n> > of little value when compared with the cost of providing them\n> > at that low level. Examples discussed in the paper include\n> > bit error recovery, security using encryption, duplicate\n> > message suppression, recovery from system crashes, and delivery\n> > acknowledgement. Low level mechanisms to support these functions\n> > are justified only as performance enhancements.\n> >\n> >It was written in 1981 and is undiminished by the subsequent decades.\n>\n> Maybe I don't understand the paper.\n\nYes. It bears re-reading.\n\n> The end-to-end argument might be true if taking the monolithic\n> approach. 
I find more useful ideas gleaned from the RFCs, TCP/IP and\n> the OSI 7 layer model: modularity, \"useful standard interfaces\", \"Be\n> liberal in what you accept, and conservative in what you send\" and so\n> on.\n\nThe end-to-end principle has had profound effects on the design of \nInternet protocols, perhaps most importantly in keeping them simpler \nthan OSI's.\n\n> Within a module I figure the end to end argument might hold,\n\nThe end-to-end principle isn't particularly applicable within a module.\nIt's a system-design principle. Its prescription for individual modules\nis: don't imagine that anybody else gets much value from your complex\nerror recovery shenanigans; they have to do their own error recovery\nanyway. You provide more value by making a good effort.\n\n> but the author keeps talking about networks and networking.\n\nOf course networking is just an example, but it's a particularly\ngood example. Data storage (e.g. disk) is another good example; in\nthe context of the paper it may be thought of as a mechanism for\ncommunicating with other (later) times. The point there is that the CRCs\nand ECC performed by the disk are not sufficient to ensure reliability\nfor the system (e.g. database service); for that, end-to-end measures\nsuch as hot-failover, backups, redo logs, and block- or record-level\nCRCs are needed. The purpose of the disk CRCs is not reliability, a job\nthey cannot do alone, but performance: they help make the need to use\nthe backups and redo logs infrequent enough to be tolerable.\n\n> SSL and TCP are useful. The various CRC checks down the IP stack to\n> the datalink layer have their uses too.\n\nYes, of course they are useful. The authors say so in the paper, and\nthey say precisely how (and how not).\n\n> By splitting stuff up at appropriate points, adding or substituting\n> objects at various layers becomes so much easier. 
People can download\n> Postgresql over token ring, Gigabit ethernet, X.25 and so on.\n\nAs noted in the paper, the principle is most useful in helping to decide\nwhat goes in each layer.\n\n> Splitting stuff up does mean that the bits and pieces now do have\n> a certain responsibility. If those responsibilities involve some\n> redundancies in error checking or encryption or whatever, so be\n> it, because if done well people can use those bits and pieces in\n> interesting ways never dreamed of initially.\n>\n> For example SSL over TCP over IPSEC over encrypted WAP works (even\n> though IPSEC is way too complicated :)). There's so much redundancy\n> there, but at the same time it's not a far fetched scenario - just\n> someone ordering online on a notebook pc.\n\nThe authors quote a similar example in the paper, even though it was\nwritten twenty years ago.\n\n> But if a low level module never bothered with error\n> correction/detection/handling or whatever and was optimized for\n> an application specific purpose, it's harder to use it for other\n> purposes. And if you do, some chap could post an article to Bugtraq on\n> it, mentioning exploit, DoS or buffer overflow.\n\nThe point is that leaving that stuff _out_ is how you keep low-level\nmechanisms useful for a variety of purposes. Putting in complicated\nerror-recovery stuff might suit it better for a particular application,\nbut make it less suitable for others.\n\nThis is why, at the IP layer, packets get tossed at the first sign of\ncongestion. It's why TCP connections often get dropped at the first sign\nof a data-format violation. 
This is a very deep principle; understanding\nit thoroughly will make you a much better system designer.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Thu, 17 May 2001 14:52:27 -0700", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": true, "msg_subject": "Re: Re: \"End-to-end\" paper" }, { "msg_contents": "On OBSD from cvs source, clean checkout:\n\ngcc -O2 -pipe -Wall -Wmissing-prototypes -Wmissing-declarations\n-I../../../../src/include -DLIBDIR=\\\"/usr/local/pgsql/lib\\\n\" -DDLSUFFIX=\\\".so\\\" -c -o dfmgr.o dfmgr.c\ndfmgr.c: In function `load_external_function':\ndfmgr.c:118: `RTLD_GLOBAL' undeclared (first use in this function)\ndfmgr.c:118: (Each undeclared identifier is reported only once\ndfmgr.c:118: for each function it appears in.)\ngmake[4]: *** [dfmgr.o] Error 1\ngmake[4]: Leaving directory\n`/home/bpalmer/APPS/pgsql/src/backend/utils/fmgr'\n\n\n?? RTLD_GLOBAL problems?\n\n- b\n\n\nb. palmer, bpalmer@crimelabs.net\npgp: www.crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Sat, 19 May 2001 20:03:50 -0400 (EDT)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": false, "msg_subject": "cvs snapshot compile problems" }, { "msg_contents": "This seems to have been broken for a few days ~monday night\n\n- brandon\n\n\nb. 
palmer, bpalmer@crimelabs.net\npgp: www.crimelabs.net/bpalmer.pgp5\n\n\n", "msg_date": "Sat, 19 May 2001 20:05:28 -0400 (EDT)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": false, "msg_subject": "cvs snapshot compile problems" }, { "msg_contents": "On Sat, May 19, 2001 at 08:03:50PM -0400, bpalmer wrote:\n> On OBSD from cvs source, clean checkout:\n> \n> gcc -O2 -pipe -Wall -Wmissing-prototypes -Wmissing-declarations\n> -I../../../../src/include -DLIBDIR=\\\"/usr/local/pgsql/lib\\\n> \" -DDLSUFFIX=\\\".so\\\" -c -o dfmgr.o dfmgr.c\n> dfmgr.c: In function `load_external_function':\n> dfmgr.c:118: `RTLD_GLOBAL' undeclared (first use in this function)\n> dfmgr.c:118: (Each undeclared identifier is reported only once\n> dfmgr.c:118: for each function it appears in.)\n> gmake[4]: *** [dfmgr.o] Error 1\n> gmake[4]: Leaving directory\n> `/home/bpalmer/APPS/pgsql/src/backend/utils/fmgr'\n> \n> \n> ?? RTLD_GLOBAL problems?\n\nNot a solution, but a few data points: I had a successful build from cvs of\nMay 19 16:21 GMT under NetBSD/i386, and for me RTLD_GLOBAL is defined in\n/usr/include/dlfcn.h ie., system header file, not postgresql.\n\nHope that helps,\n\nPatrick\n", "msg_date": "Mon, 21 May 2001 12:08:17 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": false, "msg_subject": "Re: cvs snapshot compile problems" }, { "msg_contents": "\nThis was added by Peter E to allow PL/Perl to compile.\n\n> On Sat, May 19, 2001 at 08:03:50PM -0400, bpalmer wrote:\n> > On OBSD from cvs source, clean checkout:\n> > \n> > gcc -O2 -pipe -Wall -Wmissing-prototypes -Wmissing-declarations\n> > -I../../../../src/include -DLIBDIR=\\\"/usr/local/pgsql/lib\\\n> > \" -DDLSUFFIX=\\\".so\\\" -c -o dfmgr.o dfmgr.c\n> > dfmgr.c: In function `load_external_function':\n> > dfmgr.c:118: `RTLD_GLOBAL' undeclared (first use in this function)\n> > dfmgr.c:118: (Each undeclared identifier is reported only once\n> > dfmgr.c:118: for each function it appears in.)\n> > 
gmake[4]: *** [dfmgr.o] Error 1\n> > gmake[4]: Leaving directory\n> > `/home/bpalmer/APPS/pgsql/src/backend/utils/fmgr'\n> > \n> > \n> > ?? RTLD_GLOBAL problems?\n> \n> Not a solution, but a few data points: I had a successful build from cvs of\n> May 19 16:21 GMT under NetBSD/i386, and for me RTLD_GLOBAL is defined in\n> /usr/include/dlfcn.h ie., system header file, not postgresql.\n> \n> Hope that helps,\n> \n> Patrick\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 May 2001 00:09:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: cvs snapshot compile problems" } ]
[ { "msg_contents": "Hello Kids, i'am new here, excuseme by my english, but i'm amateur\n\ni need to know how i can increase speed on the query (any query) without use \nvacumm??\n\n\ntoo, which is the procedure for made the trigger always update the table \npg_statistic when i do any transaction.\n\nby the way, too need use SPI (Server Programming interface) for save plans \nin tables.\n\nThanks, bye\n\nPedro Pablo Figueroa Miranda.\n\n\nnote : If you want, you can write me in Spanish language, because is MY \nLANGUAGE.\n\n\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\n", "msg_date": "Thu, 17 May 2001 13:05:05 ", "msg_from": "\"Pedro Pablo Figueroa Miranda\" <pepafimi@hotmail.com>", "msg_from_op": true, "msg_subject": "Hello Kids" } ]
[ { "msg_contents": "Hello everyone:\n\n I am a novice in postgreSQL.So i want to get ODBC driver\nto connect with my program.Is there somebody can tell me\nwhere the driver can download.Or how to connect postgreSQL\nwith PHP page in linux.Thanks.\n\n\n\n\n\n--\n________________________________\n\nJACKY HSU\nMail:u8924356@cc.nkfust.ed.tw\nStudy in NKFUST\n________________________________\n\n\n", "msg_date": "Thu, 17 May 2001 23:19:06 +0800", "msg_from": "\"jacky_shu\" <u8924356@cc.nkfust.edu.tw>", "msg_from_op": true, "msg_subject": "Need Postgresql ODBC Driver" }, { "msg_contents": "pgsql-hackers is for folks who are developing pgsql...\nConsider posting to -general which is for people using pgsql..\n\n\"jacky_shu\" <u8924356@cc.nkfust.edu.tw> wrote in message\nnews:9e0q9v$q0p$1@bbs.nkfu.edu.tw...\n> Hello everyone:\n>\n> I am a novice in postgreSQL.So i want to get ODBC driver\n> to connect with my program.Is there somebody can tell me\n> where the driver can download.Or how to connect postgreSQL\n> with PHP page in linux.Thanks.\n>\n>\n>\n>\n>\n> --\n> ________________________________\n>\n> JACKY HSU\n> Mail:u8924356@cc.nkfust.ed.tw\n> Study in NKFUST\n> ________________________________\n>\n>\n\n\n", "msg_date": "Thu, 17 May 2001 12:26:08 -0400", "msg_from": "\"August Zajonc\" <augustz@bigfoot.com>", "msg_from_op": false, "msg_subject": "Re: Need Postgresql ODBC Driver" }, { "msg_contents": "\"jacky_shu\" <u8924356@cc.nkfust.edu.tw> writes:\n\n> I am a novice in postgreSQL.So i want to get ODBC driver\n> to connect with my program.\n\nhttp://www.unixodbc.org/\n\n> Is there somebody can tell me where the driver can download.Or how\n> to connect postgreSQL with PHP page in linux.\n\nYou can find PHP rpms with ODBC support in Rawhide (\nftp://ftp.redhat.com/pub/redhat/linux/rawhide/i386/RedHat/RPMS/ \n) - I tested it with unixODBC and postgresql 7.1.1 (also in that\ndirectory), and it worked fine for me.\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": 
"18 May 2001 11:08:21 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Need Postgresql ODBC Driver" }, { "msg_contents": "Hello Jacky,\n\nDo you whish to connect to PostgreSQL from Linux or Windows ?\nWindows odbc driver: ftp://ring.asahi-net.or.jp/pub/misc/db/postgresql,\nPHP 4.04 with PostgreSQL 7.1 drivers: http://rpms.arvin.dk/php/\n\nPractically, you don't need ODBC drivers to connect from PHP to PostgreSQL.\n\nAlso, try the other list: http://fts.postgresql.org/db/mw/ and choose \nINTERFACE in LIST:all.\nThis will enable you to query the INTERFACE mailing list.\n\nIf you need a good administration and development software:\nhttp://www.greatbridge.org/project/pgadmin/projdisplay.php\n\nGreetings from Jean-Michel POURE, Paris, France\n\nAt 23:19 17/05/01 +0800, you wrote:\n>Hello everyone:\n>\n>I am a novice in postgreSQL.So i want to get ODBC driver\n>to connect with my program.Is there somebody can tell me\n>where the driver can download.Or how to connect postgreSQL\n>with PHP page in linux.Thanks.\n>\n>\n>\n>\n>\n>--\n>________________________________\n>\n>JACKY HSU\n>Mail:u8924356@cc.nkfust.ed.tw\n>Study in NKFUST\n>________________________________\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n", "msg_date": "Sun, 20 May 2001 09:46:03 +0200", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": false, "msg_subject": "Re: Need Postgresql ODBC Driver" } ]
[ { "msg_contents": "Hello,\n\nI have spend some thinking about implementation of DOMAIN capability.\nHere are my ideas.\n\nWhat is a domain? It is an alias for a type with size, constraints and\ndefault values. It is like one column of a table. And this is the main\nidea of my \"implementation\". It should be possible to implement it using\nexisting system tables.\n\nNew rules for grammar can be easily created from already existing pieces\n(column definition of a table).\n\nHow to store information about a domain in system tables?\nWhen a new domain is created it will:\n- put a record into pg_type with typnam = domain name, new code for\ntyptype = 'd' and typrelid = oid of a new record in pg_class (next line)\n- put a record into pg_class to create a fictional table with a new\nrelkind ('d'?), relnatts = 1, relname can be system generated\n(pg_d_<domainname>)\n- put a records into pg_attribute and pg_attrdef with \"column\n(attribute) definition\" - real type, size, default value etc., owner\nwill the fictional table from the previous step\n\nThen it will be required to modify functions that works with types. When\ntyptype of a retrieved type is 'd' then it will perform lookups into\npg_class, pg_attribute and pg_attrdef to find the real definition of the\ndomain. These additional lookups will also create a performace penalty\nof using domains. But every feature has its costs. I know this paragraph\nabout the real implementation is very short, but I think there are\npeople that know the \"type mechanism\" better then I know. And can easier\ntell if it is possible to go this way.\n\nI hope you understand my explanation. It is also possible that I don't\nknow some aspects of the backend code that makes my idea wrong.\n\n\t\t\tDan\n\n----------------------------------------------\nIng. 
Daniel Horak\nnetwork and system administrator\ne-mail: horak@sit.plzen-city.cz\nprivat e-mail: dan.horak@email.cz ICQ:36448176\n----------------------------------------------\n", "msg_date": "Thu, 17 May 2001 17:22:05 +0200", "msg_from": "=?iso-8859-2?Q?Hor=E1k_Daniel?= <horak@sit.plzen-city.cz>", "msg_from_op": true, "msg_subject": "possible DOMAIN implementation" }, { "msg_contents": "=?iso-8859-2?Q?Hor=E1k_Daniel?= <horak@sit.plzen-city.cz> writes:\n> When a new domain is created it will:\n> - put a record into pg_type with typnam = domain name, new code for\n> typtype = 'd' and typrelid = oid of a new record in pg_class (next line)\n> - put a record into pg_class to create a fictional table with a new\n> relkind ('d'?), relnatts = 1, relname can be system generated\n> (pg_d_<domainname>)\n\nUgh. Don't overload pg_class with things that are not tables. I see no\nreason that either pg_class or pg_attribute should be involved in the\ndefinition of a domain. Make new system tables if you need to, but\ndon't confuse the semantics of critical tables.\n\n> - put a records into pg_attribute and pg_attrdef with \"column\n> (attribute) definition\" - real type, size, default value etc., owner\n> will the fictional table from the previous step\n\n> Then it will be required to modify functions that works with types. When\n> typtype of a retrieved type is 'd' then it will perform lookups into\n> pg_class, pg_attribute and pg_attrdef to find the real definition of the\n> domain. These additional lookups will also create a performace penalty\n> of using domains.\n\nWhy shouldn't this info be directly available from the pg_type row?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 May 2001 13:58:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: possible DOMAIN implementation " }, { "msg_contents": "Hi,\n\nHaven't looked at this for a while, but I think some larger issues might raise\ntheir (ugly?) 
heads here.\n\nDomains are effectively types that inherit attributes of parent type, with\nsome additional information, so should be handled at the level of pg_type.\nHowever might make sense to look at some other matters at the same time - I'm\nthinking specifically of general inheritance and abstract data types. AFAICT,\nthese are all closely related. I started looking at this a while ago, but was\nside-tracked by the winds of change ;-)\n\nTom Lane wrote:\n\n> =?iso-8859-2?Q?Hor=E1k_Daniel?= <horak@sit.plzen-city.cz> writes:\n> > When a new domain is created it will:\n> > - put a record into pg_type with typnam = domain name, new code for\n> > typtype = 'd' and typrelid = oid of a new record in pg_class (next line)\n\n> - put a record into pg_class to create a fictional table with a new\n> > relkind ('d'?), relnatts = 1, relname can be system generated\n> > (pg_d_<domainname>)\n>\n> Ugh. Don't overload pg_class with things that are not tables. I see no\n> reason that either pg_class or pg_attribute should be involved in the\n> definition of a domain. Make new system tables if you need to, but\n> don't confuse the semantics of critical tables.\n\nThis is required due to the way inheritance is currently handled?\n\n> > - put a records into pg_attribute and pg_attrdef with \"column\n> > (attribute) definition\" - real type, size, default value etc., owner\n> > will the fictional table from the previous step\n\nditto?\n\n> > Then it will be required to modify functions that works with types. When\n> > typtype of a retrieved type is 'd' then it will perform lookups into\n> > pg_class, pg_attribute and pg_attrdef to find the real definition of the\n> > domain. These additional lookups will also create a performace penalty\n> > of using domains.\n>\n> Why shouldn't this info be directly available from the pg_type row?\n\n From what I can remember inheritance works in postgresql at the class level.\nC.J. 
Date et al *strongly* argue that inheritance should be based on types,\nnot relations/classes. This is still the case in 7.1? If the inheritance\nmechanism could be changed to support types, the concept of inheritance for\nclasses should not be broken as these have entries in pg_type - possibly some\ncode might be, though :-(\n\nPlease note that I'm looking at forest scales (and also through the haze of\nmemory) - the trees might have an entirely different viewpoint ;-)\n\ncheers,\nJohn\n--\n----------------------------------------------------------------------\njohn reid e-mail john_reid@uow.edu.au\ntechnical officer room G02, building 41\nschool of geosciences phone +61 02 4221 3963\nuniversity of wollongong fax +61 02 4221 4250\n\nuproot your questions from their ground and the dangling roots will be\nseen. more questions!\n -mentat zensufi\n\napply standard disclaimers as desired...\n----------------------------------------------------------------------\n\n\n", "msg_date": "Fri, 18 May 2001 16:28:47 +1000", "msg_from": "John Reid <jgreid@uow.edu.au>", "msg_from_op": false, "msg_subject": "Re: possible DOMAIN implementation" }, { "msg_contents": "John Reid <jgreid@uow.edu.au> writes:\n>> Ugh. Don't overload pg_class with things that are not tables. I see no\n>> reason that either pg_class or pg_attribute should be involved in the\n>> definition of a domain. Make new system tables if you need to, but\n>> don't confuse the semantics of critical tables.\n\n> This is required due to the way inheritance is currently handled?\n\nNot inheritance specifically. I'm just looking at it on general design\nprinciples: all the rows of a table should be the same kind of thing.\nWe shade that a little to allow views, sequences, etc, in pg_class, but\nat least they're all things that have columns and so forth.\n\n> From what I can remember inheritance works in postgresql at the class level.\n> C.J. 
Date et al *strongly* argue that inheritance should be based on types,\n> not relations/classes. This is still the case in 7.1?\n\nPostgres doesn't really distinguish between tables and composite types\n--- there's a one-for-one relationship between 'em.  So we haven't had\nto think hard about that point.  If we did allow composite types without\nassociated tables, we probably would want tables to inherit from 'em\n(which would mean some rethinking of the inheritance representation).\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 May 2001 03:07:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: possible DOMAIN implementation " }, { "msg_contents": "Hi,\n\nTom Lane wrote:\n> \n> John Reid <jgreid@uow.edu.au> writes:\n> >> Ugh.  Don't overload pg_class with things that are not tables.  I see no\n> >> reason that either pg_class or pg_attribute should be involved in the\n> >> definition of a domain.  Make new system tables if you need to, but\n> >> don't confuse the semantics of critical tables.\n> \n> > This is required due to the way inheritance is currently handled?\n> \n> Not inheritance specifically.  I'm just looking at it on general design\n> principles: all the rows of a table should be the same kind of thing.\n> We shade that a little to allow views, sequences, etc, in pg_class, but\n> at least they're all things that have columns and so forth.\n\nThese could actually be defined in pg_type (or an inherited class\npg_class_def)?\n\n> \n> > From what I can remember inheritance works in postgresql at the class level.\n> > C.J. Date et al *strongly* argue that inheritance should be based on types,\n> > not relations/classes. This is still the case in 7.1?\n> \n> Postgres doesn't really distinguish between tables and composite types\n> --- there's a one-for-one relationship between 'em.  So we haven't had\n> to think hard about that point.
If we did allow composite types without\n> associated tables, we probably would want tables to inherit from 'em\n> (which would mean some rethinking of the inheritance representation).\n\nYes. I had a superficial look at SQL99 abstract data types a while\nback, but didn't get very far. I didn't raise any of the issues I came\nacross at the time as everyone was busy with the 7.1 release. My\ninterest is primarily in GIS data storage, which is a bit more involved\nthan most applications. The ability to define complex types without having\nto instantiate them (or else implement them as a user-defined type when\nthey are really a class) would be especially handy for GIS schemas. Not\nquite sure what else yet ;-)\n\nIMHO, it is probably worth looking at this further - it seems to me that\nthese issues will have a significant impact when dealing with\nimplementation of the SQL99 standard, so probably easier to deal with\nthem now/soon?\n\nFWIW, some *really sketchy* ideas from when I looked at this:\n1) pg_inherits should point at pg_type\n2) some (most?) of the functionality of pg_class should be moved into\npg_type ((2a) maybe using inherited class pg_class_def?)\n3) pg_class should purely contain relation-specific stuff only (type,\nindexes, owner)\n\nAnother alternative would be to introduce a new system table pg_relation\nfor relations, making pg_class the equivalent of pg_type but used for\nhandling complex types. Then again, this is effectively the same as\n(2a)? Might make sense to think about renaming the tables anyway, as to\nme pg_class seems to imply the class definition, rather than the\ninstantiation. Then we would have\n\npg_type\npg_class inherits pg_type\npg_relation\n\nI could foresee some real chicken-or-the-egg problems in system\ninitialization.
How are these handled currently?\n \ncheers,\nJohn\n", "msg_date": "Sun, 20 May 2001 14:44:32 +1000", "msg_from": "John Reid <jgreid@uow.edu.au>", "msg_from_op": false, "msg_subject": "Re: possible DOMAIN implementation" } ]
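The pg_type-only design Tom Lane argues for in this thread can be sketched as a catalog query. Note that the typbasetype, typnotnull and typdefault columns used below are assumptions of that design rather than columns that existed in the 7.1 catalogs; later PostgreSQL releases adopted essentially this representation, with typtype = 'd' rows in pg_type carrying a direct link to the base type:

```sql
-- Sketch: a domain is just a pg_type row with typtype = 'd' plus a direct
-- pointer (typbasetype) to its base type, so resolving a domain takes one
-- pg_type lookup instead of extra trips through pg_class, pg_attribute
-- and pg_attrdef as in the fictional-table proposal quoted above.
SELECT t.typname   AS domain_name,
       b.typname   AS base_type,
       t.typnotnull,       -- NOT NULL constraint carried on the domain row
       t.typdefault        -- default expression carried on the domain row
FROM pg_type t
JOIN pg_type b ON b.oid = t.typbasetype
WHERE t.typtype = 'd';
```

Under this layout the lookup penalty Horák Daniel worried about shrinks to a single pg_type (syscache) fetch per domain resolution.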