[ { "msg_contents": "CVSROOT:\t/cvsroot\nModule name:\tpgsql\nChanges by:\tmeskes@postgresql.org\t02/05/19 16:00:53\n\nModified files:\n\tsrc/interfaces/ecpg: ChangeLog \n\tsrc/interfaces/ecpg/preproc: ecpg_keywords.c keywords.c pgc.l \n\t preproc.y \n\nLog message:\n\t- Fixed reduce/reduce conflict in parser.\n\t- Synced preproc.y with gram.y.\n\t- Synced pgc.l with scan.l.\n\t- Synced keywords.c.\n\n", "msg_date": "Sun, 19 May 2002 16:00:53 -0400 (EDT)", "msg_from": "meskes@postgresql.org (Michael Meskes)", "msg_from_op": true, "msg_subject": "pgsql/src/interfaces/ecpg ChangeLog preproc/ec ..." }, { "msg_contents": "meskes@postgresql.org (Michael Meskes) writes:\n> \t- Fixed reduce/reduce conflict in parser.\n> \t- Synced preproc.y with gram.y.\n\nGood, but now I get:\n\n$ make\nbison -y -d preproc.y\npreproc.y:5330: fatal error: maximum table size (32767) exceeded\nmake: *** [preproc.h] Error 1\n\nThis is with\n\n$ bison -V\nGNU Bison version 1.28\n\nSurprised the heck out of me --- I thought GNU tools weren't supposed\nto have arbitrary limits in them. Perhaps there's some error in the\npreproc.y file that's triggering this?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 19 May 2002 16:14:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql/src/interfaces/ecpg ChangeLog preproc/ec ... " }, { "msg_contents": "[Moved to hackers}\n\nOn Sun, May 19, 2002 at 04:14:47PM -0400, Tom Lane wrote:\n> meskes@postgresql.org (Michael Meskes) writes:\n> > \t- Fixed reduce/reduce conflict in parser.\n> > \t- Synced preproc.y with gram.y.\n> \n> Good, but now I get:\n> \n> $ make\n> bison -y -d preproc.y\n> preproc.y:5330: fatal error: maximum table size (32767) exceeded\n> make: *** [preproc.h] Error 1\n> \n> This is with\n> \n> $ bison -V\n> GNU Bison version 1.28\n\nI'm using bison 1.35, but get the same error.\n\n> Surprised the heck out of me --- I thought GNU tools weren't supposed\n> to have arbitrary limits in them.
Perhaps there's some error in the\n> preproc.y file that's triggering this?\n\nI wish it was. Here's what I found using google:\n\n...\n\n>\"sqlparser.y\", line 12054: maximum table size (32767) exceeded\n>\n> After doing some research, we found out that in the source code for\n>bison v.1.25 there is a #define MAXTABLE 32767 in machine.h. We can\n>modify that value, but does anyone now what would the consequences be?\n\nI would look to see where the value is used, and be sure any tables\nlimited to that size are not addressed by short int's. The limit\nprobably reflects an assumed int size for the DOS target, making it\nsafe to change, but I would still check for short's.\n\n>Is there another way to overcome this bison 32K limitation?\n\nPush more of the work into the scanner? You must have one Hell of a\ngrammar.\n...\n\nIt seems that there are only one or two projects ever to hit that limit.\nBut it appears to be a hardcoded limit inside bison. It seems I hit that\nlimit with the latest changes.\n\nRight now I'm removing some simple rules to get under it again, but we\nwill certainly hit it again in the very near future.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Mon, 20 May 2002 11:08:50 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/src/interfaces/ecpg ChangeLog preproc/ec ..." }, { "msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> [Moved to hackers}\n> On Sun, May 19, 2002 at 04:14:47PM -0400, Tom Lane wrote:\n>> preproc.y:5330: fatal error: maximum table size (32767) exceeded\n>>\n>> This is with\n>> $ bison -V\n>> GNU Bison version 1.28\n\n> I'm using bison 1.35, but get the same error.\n\n1.35? Sounds like bison development has become active again.
It was\nstuck at 1.28 for a *long* time.\n\n> It seems that there are only one or two projects ever to hit that limit.\n> But it appears to be a hardcoded limit inside bison. It seems I hit that\n> limit with the latest changes.\n> Right now I'm removing some simple rules to get under it again, but we\n> will certainly hit it again in the very near future.\n\nYes. Maybe we should contact the Bison developers and lobby for an\nincrease in the standard value. I don't mind saying \"you must use\nBison 1.36 or newer to rebuild the Postgres grammar\" ... but having to\nsay \"you must use a nonstandardly patched Bison\" would really suck :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 20 May 2002 10:31:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/src/interfaces/ecpg ChangeLog preproc/ec ... " }, { "msg_contents": "On Mon, May 20, 2002 at 10:31:57AM -0400, Tom Lane wrote:\n> Yes. Maybe we should contact the Bison developers and lobby for an\n> increase in the standard value. I don't mind saying \"you must use\n> Bison 1.36 or newer to rebuild the Postgres grammar\" ... but having to\n> say \"you must use a nonstandardly patched Bison\" would really suck :-(\n\nI fully agree.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Tue, 21 May 2002 09:23:18 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/src/interfaces/ecpg ChangeLog preproc/ec ..." } ]
[ { "msg_contents": "I'm writing a function that accepts a table name and digs some information\nout about it. I'm developing on 7.3cvs w/schemas, and wanted my function to\nuse schemas.\n\nIf the user gives a full schema name (s.table), I find the table in pg_class\nby comparing the ns oid and relname.\n\nHowever, if the user doesn't give the full schema name, I have to find which\ntable they're looking for by examining current_schemas, iterating over each\nschema in this path, and trying it.\n\nIs there a function already in the backend to return a class oid, given a\nname, by looking up the table in the current_schemas path? Would it make\nsense for us to expose this, or write one, so that this small wheel doesn't\nhave to be re-invented everytime someone wants to find a table by just the\nname?\n\nSomething like:\n\n findtable(text) returns oid\n findtable(\"foo\") -> oid of foo (given current search path)\n findtable(\"s.foo\") -> oid of s.foo\n\nI can write something in plpgsql (iterating over the array, checking each,\netc.), however, it would be nice if something was already there.\n\nAny ideas?\n\nThanks!\n\n- J.\n\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n", "msg_date": "Sun, 19 May 2002 17:00:54 -0400", "msg_from": "\"Joel Burton\" <joel@joelburton.com>", "msg_from_op": true, "msg_subject": "Exposed function to find table in schema search list?" }, { "msg_contents": "Joel Burton wrote:\n> Is there a function already in the backend to return a class oid, given a\n> name, by looking up the table in the current_schemas path?
Would it make\n> sense for us to expose this, or write one, so that this small wheel doesn't\n> have to be re-invented everytime someone wants to find a table by just the\n> name?\n> \n> Something like:\n> \n> findtable(text) returns oid\n> findtable(\"foo\") -> oid of foo (given current search path)\n> findtable(\"s.foo\") -> oid of s.foo\n> \n> I can write something in plpgsql (iterating over the array, checking each,\n> etc.), however, it would be nice if something was already there.\n\nI think this already exists:\n\ntest=# select 'joe.foo'::regclass::oid;\n oid\n--------\n 125532\n(1 row)\n\ntest=# select 'public.foo'::regclass::oid;\n oid\n--------\n 125475\n(1 row)\n\ntest=# select 'foo'::regclass::oid;\n oid\n--------\n 125475\n(1 row)\n\ntest=# select current_schema();\n current_schema\n----------------\n public\n(1 row)\n\nJoe\n\n", "msg_date": "Sun, 19 May 2002 14:25:16 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Exposed function to find table in schema search list?" }, { "msg_contents": "> -----Original Message-----\n> From: Joe Conway [mailto:mail@joeconway.com]\n> Sent: Sunday, May 19, 2002 5:25 PM\n> To: Joel Burton\n> Cc: Pgsql-Hackers@Postgresql. Org\n> Subject: Re: [HACKERS] Exposed function to find table in schema search\n> list?\n>\n>\n> Joel Burton wrote:\n> > Is there a function already in the backend to return a class\n> oid, given a\n> > name, by looking up the table in the current_schemas path?
Would it make\n> > sense for us to expose this, or write one, so that this small\n> wheel doesn't\n> > have to be re-invented everytime someone wants to find a table\n> by just the\n> > name?\n> >\n> > Something like:\n> >\n> > findtable(text) returns oid\n> > findtable(\"foo\") -> oid of foo (given current search path)\n> > findtable(\"s.foo\") -> oid of s.foo\n> >\n> > I can write something in plpgsql (iterating over the array,\n> checking each,\n> > etc.), however, it would be nice if something was already there.\n>\n> I think this already exists:\n>\n> test=# select 'joe.foo'::regclass::oid;\n> oid\n> --------\n> 125532\n> (1 row)\n>\n> test=# select 'public.foo'::regclass::oid;\n> oid\n> --------\n> 125475\n> (1 row)\n>\n> test=# select 'foo'::regclass::oid;\n> oid\n> --------\n> 125475\n> (1 row)\n>\n> test=# select current_schema();\n> current_schema\n> ----------------\n> public\n> (1 row)\n\nPerfect! I was hoping to avoid re-creating the wheel. Thanks, Joe.\n\nIs the use of regclass going to prove to be very implementation-specific?\nWould it make sense for us to create a function that abstracts this?\n\n- J.\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n", "msg_date": "Sun, 19 May 2002 17:28:16 -0400", "msg_from": "\"Joel Burton\" <joel@joelburton.com>", "msg_from_op": true, "msg_subject": "Re: Exposed function to find table in schema search list?" }, { "msg_contents": "\"Joel Burton\" <joel@joelburton.com> writes:\n> Is the use of regclass going to prove to be very\n> implementation-specific?\n\nSure, but so would any other API for it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 19 May 2002 18:39:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Exposed function to find table in schema search list? 
" }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Sunday, May 19, 2002 6:40 PM\n> To: Joel Burton\n> Cc: Joe Conway; Pgsql-Hackers@Postgresql. Org\n> Subject: Re: [HACKERS] Exposed function to find table in schema search\n> list?\n>\n>\n> \"Joel Burton\" <joel@joelburton.com> writes:\n> > Is the use of regclass going to prove to be very\n> > implementation-specific?\n>\n> Sure, but so would any other API for it.\n\nWell, sort of, but if we had been promoting a function tableoid(text)\nreturns oid, we wouldn't have to make any change for the move to regclass,\nwould we? I mean, it's specific to PG, but a simple wrapper might outlive\nthe next under-the-hood change.\n\nOn a related note: is there an easy way to use this ::regclass conversion to\ntest if a table exists in a non-error returning way? (Can I use it in a\nselect statement, for instance, returning a true or false value for the\nexistence or non-existence of a table?)\n\n- J.\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n", "msg_date": "Fri, 24 May 2002 11:40:15 -0400", "msg_from": "\"Joel Burton\" <joel@joelburton.com>", "msg_from_op": true, "msg_subject": "Re: Exposed function to find table in schema search list? " }, { "msg_contents": "\"Joel Burton\" <joel@joelburton.com> writes:\n> Well, sort of, but if we had been promoting a function tableoid(text)\n> returns oid, we wouldn't have to make any change for the move to regclass,\n> would we? I mean, it's specific to PG, but a simple wrapper might outlive\n> the next under-the-hood change.\n\nI think you miss the point: regclass is that wrapper. tableoid(text)\nis only syntactically different --- and for that matter there's nothing\nstopping you from writing regclass('tablename').\n\n> On a related note: is there an easy way to use this ::regclass conversion to\n> test if a table exists in a non-error returning way? 
(Can I use it in a\n> select statement, for instance, returning a true or false value for the\n> existence or non-existence of a table?)\n\nAt the moment regclass conversion raises an error if the item isn't\nfound; this follows the historical behavior of regproc. We could\npossibly have it return 0 (InvalidOid) instead, but I'm not convinced\nthat that's better. In the case of regproc, not erroring out would\nlose some important error checking during initdb.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 May 2002 13:33:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Exposed function to find table in schema search list? " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Friday, May 24, 2002 1:33 PM\n> To: Joel Burton\n> Cc: Pgsql-Hackers@Postgresql. Org\n> Subject: Re: [HACKERS] Exposed function to find table in schema search\n> list?\n>\n> At the moment regclass conversion raises an error if the item isn't\n> found; this follows the historical behavior of regproc. We could\n> possibly have it return 0 (InvalidOid) instead, but I'm not convinced\n> that that's better. In the case of regproc, not erroring out would\n> lose some important error checking during initdb.\n\nFair enough. Is there any way to handle this error and return a false?\n(People frequently ask \"how can I check to see if a table exists\", and not\nall interfaces handle errors the same way, but everyone should know how to\ndeal with a table result, so that we can provide a 7.3 version of \"SELECT 1\nFROM pg_class where relname='xxx'\".\n\nThanks!\n\n- J\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n", "msg_date": "Fri, 24 May 2002 15:11:43 -0400", "msg_from": "\"Joel Burton\" <joel@joelburton.com>", "msg_from_op": true, "msg_subject": "Re: Exposed function to find table in schema search list? 
" }, { "msg_contents": "\"Joel Burton\" <joel@joelburton.com> writes:\n>> At the moment regclass conversion raises an error if the item isn't\n>> found; this follows the historical behavior of regproc. We could\n>> possibly have it return 0 (InvalidOid) instead, but I'm not convinced\n>> that that's better. In the case of regproc, not erroring out would\n>> lose some important error checking during initdb.\n\n> Fair enough. Is there any way to handle this error and return a false?\n> (People frequently ask \"how can I check to see if a table exists\", and not\n> all interfaces handle errors the same way, but everyone should know how to\n> deal with a table result, so that we can provide a 7.3 version of \"SELECT 1\n> FROM pg_class where relname='xxx'\".\n\nWell, I have no strong objection to providing an alternate API that\ndoes things that way. I was thinking the other day that we need\ntext-to-regclass, regclass-to-text, etc conversion functions (these\ndon't come for free given the I/O routines, sadly enough). Perhaps we\ncould define, say, text-to-regclass to return NULL instead of throwing\nan error on bad input.\n\nThis might be a tad awkward to use though, since you'd have to write\nsomething like\n\t'nosuchtable'::text::regclass\nor\n\tregclass('nosuchtable'::text)\nto get a literal parsed that way rather than fed directly to the\nregclass input converter (which we're assuming will still raise an\nerror).\n\nAs far as I'm concerned none of this is set in stone, so I'm open to\nbetter ideas if anyone's got one ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 May 2002 17:28:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Exposed function to find table in schema search list? " } ]
[ { "msg_contents": "Hi all,\n\nI started getting these errors today on my test database (pg\n7.2.1). I have been vacuum/reindex/analyze(ing) the table\nall day (after updating 100000+ rows) and wondering what\ncould have caused this. \n\nThanks\nJim\n\n\n2002-05-19 18:16:18 [1673] NOTICE: \nbt_getroot[billed_features_btn_idx2]: fixing root page\n2002-05-19 18:16:18 [1673] ERROR: bt_fixroot: not valid\nold root page\n\n", "msg_date": "Sun, 19 May 2002 18:24:35 -0300", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "bt_fixroot: not valid old root page" } ]
[ { "msg_contents": "As I mentioned in passing awhile ago, I'm thinking of simplifying the\nindexscan API and arranging for heapscans and indexscans to have\nessentially identical APIs. Here's the plan:\n\nheap_beginscan and index_beginscan will have near-identical signatures:\n\nHeapScanDesc heap_beginscan(Relation relation, Snapshot snapshot,\n int nkeys, ScanKey key)\nIndexScanDesc index_beginscan(Relation relation, Snapshot snapshot,\n int nkeys, ScanKey key)\n\nThis differs from the existing signatures in that the atend and\nscanFromEnd parameters are removed --- they are useless and have only\nfostered confusion. (heap_beginscan currently ignores atend entirely\nanyway. index_beginscan looks like it does something with the parameter,\nbut in point of fact all the index AMs determine scan direction on the\nbasis of the direction first passed to index_getnext.) Also, a snapshot\nwill now be passed to index_beginscan and saved in the IndexScanDesc\nstructure.\n\nSimilarly, the heap_rescan and index_rescan routines will drop their\nscanFromEnd parameters.\n\nThe getnext routines will have the signatures\n\nHeapTuple heap_getnext(HeapScanDesc scan, ScanDirection direction)\nHeapTuple index_getnext(IndexScanDesc scan, ScanDirection direction)\n\nwith semantics in both cases essentially identical to the present\nheap_getnext: return the next tuple that passes the scankeys and\nsnapshot tests, scanning in the specified direction; or return NULL\nif no more tuples remain. This will simplify callers of index_getnext\nquite a bit.\n\nI also intend to provide a lower-level routine\n\nbool index_getnext_indexitem(IndexScanDesc scan, ScanDirection direction)\n\nwhich finds the next index item passing the scankeys, but ignores time\nqualification issues. The actual TID of the index tuple and its\nreferenced heap tuple will be available from IndexScanDesc fields if\nTRUE is returned.
This routine will satisfy those few callers of\nindex_getnext who actually wanted to see the individual index entries.\n(In current code the only users of this routine will be the index AMs'\nown bulk-delete routines, so I'm not sure there's any need to export\nsuch an API at all --- but will provide it just in case.)\n\nThe interfaces from these routines to the underlying index AMs will\nhave parallel changes. In particular, the index AMs' gettuple routines\nwill change to have signatures equivalent to index_getnext_indexitem:\nrather than having to palloc and pfree an IndexRetrieveResult object\non each cycle, they'll just return TRUE or FALSE, passing the TID info\nin the IndexScanDesc.\n\nAlthough I think these changes are worth making just on grounds of\ncode beautification, the real motivation for doing this is to centralize\ntime-qual checking for indexscans into index_getnext, rather than having\nit scattered over all the callers as at present. Once that's done we\nwill have a single point of attack for addressing the problem of killing\ndead index tuples in advance of VACUUM. What I am looking to do once this\nAPI change is in place is to make index_getnext look like\n\n\tfor (;;)\n\t{\n\t\tget next index tuple;\n\t\tif (no more tuples)\n\t\t\treturn NULL;\n\t\theap_fetch corresponding heap tuple;\n\t\tif (HeapTupleSatisfies(tuple, scan->snapshot))\n\t\t\treturn tuple;\n\t\t/*\n\t\t * If we can't see it, maybe no one can.\n\t\t */\n\t\tif (HeapTupleSatisfiesVacuum(tuple) == HEAPTUPLE_DEAD)\n\t\t\tkill the index entry;\n\t}\n\nwhere \"kill the index entry\" involves informing the index AM that it can\nsomehow mark the index entry uninteresting and not to be returned at all\nduring future indexscans.
(For performance reasons this'll probably get\nmerged into the next \"get next index tuple\" operation, but that remains\nto be designed in detail.)\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 19 May 2002 18:22:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Indexscan API cleanup proposal" }, { "msg_contents": "Tom Lane wrote:\n> where \"kill the index entry\" involves informing the index AM that it can\n> somehow mark the index entry uninteresting and not to be returned at all\n> during future indexscans. (For performance reasons this'll probably get\n> merged into the next \"get next index tuple\" operation, but that remains\n> to be designed in detail.)\n> \n> Comments?\n> \n\nIs this a step toward being able to VACUUM indexes?\n\nJoe\n\n\n\n", "msg_date": "Sun, 19 May 2002 15:43:03 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Indexscan API cleanup proposal" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Tom Lane wrote:\n>> where \"kill the index entry\" involves informing the index AM that it can\n>> somehow mark the index entry uninteresting and not to be returned at all\n>> during future indexscans.\n\n> Is this a step toward being able to VACUUM indexes?\n\nYou mean collapse indexes? No, that's an entirely different issue.\nThis is about reducing scan overhead when an index contains lots of\npointers to dead-but-not-yet-vacuumed tuples.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 19 May 2002 18:52:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Indexscan API cleanup proposal " }, { "msg_contents": "Hello Mr. Lane,\n\nI am a novice postgres developer. Can you please shed some light on this\nproblem I am having.\n\nBasically, I am making a small change in executePlan() function of\nexecutor/execMain.c.
Right after a tupleslot is retrieved, I try to find\nout the oid of the tuple that has been retrieved.\n\n/* code that retrieves tupleslot */\n/* snip */\n\tif (estate->es_useEvalPlan)\n\t{\n \t\tslot = EvalPlanQualNext(estate);\n \t\tif (TupIsNull(slot))\n \t\tslot = ExecProcNode(plan, NULL);\n\t}\n\telse\n\t\tslot = ExecProcNode(plan, NULL);\n/* end of snip */\n\nRight after this, I insert my code.\ntupleOid = slot->val->t_data->t_oid;\n\nFor some reason, this assignment always results in tupleOid being 0. My\ndatabase has oid's enabled and I can see that an oid is assigned to each\ntuple.\n\n From what I understood, t_data is a pointer to the ondisk tuple so t_oid\nshould not be 0.\n\nCan you please tell me what wrong assumptions I am making. Thank you for\nyour time and help.....\n\nA budding postgresql developer,\nDruv\n\n\n", "msg_date": "Sun, 19 May 2002 19:24:28 -0400 (EDT)", "msg_from": "Dhruv Pilania <dhruv@cs.sunysb.edu>", "msg_from_op": false, "msg_subject": "getting oid of tuple in executePlan" }, { "msg_contents": "Dhruv Pilania <dhruv@cs.sunysb.edu> writes:\n> Basically, I am making a small change in executePlan() function of\n> executor/execMain.c. Right after a tupleslot is retrieved, I try to find\n> out the oid of the tuple that has been retrieved.\n\nThe retrieved tuple doesn't have an OID, because it's not a raw pointer\nto a row on disk: it's a computed tuple (the result of ExecProject).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 May 2002 13:33:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: getting oid of tuple in executePlan " } ]
[ { "msg_contents": "Another set of SSL patches have been sent to the patches list.\n(No idea when they'll get through the system.) This is a new\nbaseline set of patches that fix many of the problems identified\nearlier and also add a number of security patches.\n\nN.B., some of these changes are visible to the user, but are\ncommon practice for SSL code. The most notable is a minimal\ncertificate validation that requires that certs be current\n(no more expired certs) and that the cert's common name match\nthe hostname used with contacting the backend.\n\nThis means that a cert containing a common name such as\n'eris.example.com' *must* be accessed via\n\n psql -h eris.example.com ...\n\nnot\n\n psql -h eris ...\n\nA future patch can relax this so that the common name can\nresolve to the address returned by getpeername(2).\n\nClient certs are optional, but if they exist they are expected\nin the user's home directory, under the .postgresql directory.\nEncrypted private keys are not yet supported.\n\nBear\n", "msg_date": "Mon, 20 May 2002 00:48:48 -0600 (MDT)", "msg_from": "Bear Giles <bgiles@coyotesong.com>", "msg_from_op": true, "msg_subject": "revised SSL patches submitted" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 18 May 2002 00:01\n> To: Dave Page\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: More schema queries \n> \n> I'm not sure how to do it on Cygwin, either. On Unix you'd \n> build a profilable backend executable using\n> \tcd pgsql/src/backend\n> \tgmake clean\n> \tgmake PROFILE=\"-pg\" all\n> install same, run it, and then use gprof on the gmon.out file \n> dumped at postmaster termination. Dunno if it has to be done \n> differently on Cygwin.\n\nHmm, I tried this and got some errors, then got side tracked with #1\ndaughters birthday..\n\nTried again today with the latest snapshot from ftp.postgresql.org and\ngot the same errors.\n\nDownloaded a complete new Cygwin installation (in case my old one was\nscrewed up - it is very old and has been well hacked about) and still\nget the same error when doing a complete build (./configure --with-CXX\n--prefix=/usr/local/pgsql73 --docdir=/usr/doc/postgresql-\n--with-pgport=5433):\n\ndlltool --dllname postgres.exe --def postgres.def --output-lib\nlibpostgres.a\ndlltool --dllname postgres.exe --output-exp postgres.exp --def\npostgres.def\ngcc -g -o postgres.exe -Wl,--base-file,postgres.base postgres.exp\naccess/SUBSYS.o bootstrap/SUBSYS.o catalog/SUBSYS.o parser/SUBSYS.o\ncommands/SUBSYS.o executor /SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o\nmain/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o port/SUBSYS.o\npostmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o storage/ SUBSYS.o\ntcop/SUBSYS.o utils/SUBSYS.o -lcygipc -lcrypt\naccess/SUBSYS.o(.text+0x13):heaptuple.c: undefined reference to `mcount'\naccess/SUBSYS.o(.text+0xdf):heaptuple.c: undefined reference to `mcount'\naccess/SUBSYS.o(.text+0x319):heaptuple.c: undefined reference to\n`mcount'\naccess/SUBSYS.o(.text+0x3a3):heaptuple.c: undefined reference to\n`mcount'\naccess/SUBSYS.o(.text+0x790):heaptuple.c: undefined reference 
to\n`mcount'\nmain/SUBSYS.o(.text+0x186):main.c: undefined reference to `_monstartup'\nmain/SUBSYS.o(.text+0x190):main.c: undefined reference to `mcount'\nnodes/SUBSYS.o(.text+0x10):nodeFuncs.c: undefined reference to `mcount'\nnodes/SUBSYS.o(.text+0x44):nodeFuncs.c: undefined reference to `mcount'\nnodes/SUBSYS.o(.text+0x68):nodeFuncs.c: undefined reference to `mcount'\nnodes/SUBSYS.o(.text+0x8e):nodeFuncs.c: undefined reference to `mcount'\nnodes/SUBSYS.o(.text+0xd1):nodeFuncs.c: more undefined references to\n`mcount' follow\ncollect2: ld returned 1 exit status\nmake[2]: *** [postgres] Error 1\nmake[2]: Leaving directory\n`/usr/local/src/postgresql-snapshot/src/backend'\nmake[1]: *** [all] Error 2\nmake[1]: Leaving directory `/usr/local/src/postgresql-snapshot/src'\nmake: *** [all] Error 2\n\nAny ideas?\n\nRegards, Dave.\n", "msg_date": "Mon, 20 May 2002 14:54:58 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: More schema queries " }, { "msg_contents": "\"Dave Page\" <dpage@vale-housing.co.uk> writes:\n> gcc -g -o postgres.exe -Wl,--base-file,postgres.base postgres.exp\n> access/SUBSYS.o bootstrap/SUBSYS.o catalog/SUBSYS.o parser/SUBSYS.o\n> commands/SUBSYS.o executor /SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o\n> main/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o port/SUBSYS.o\n> postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o storage/ SUBSYS.o\n> tcop/SUBSYS.o utils/SUBSYS.o -lcygipc -lcrypt\n> access/SUBSYS.o(.text+0x13):heaptuple.c: undefined reference to `mcount'\n\nOn Unix it's necessary for the link step to include a -pg switch, just\nlike the compile steps. This is evidently not happening in the Windows\ncase.\n\nIn the Unix case, $(PROFILE) gets incorporated into $(LDFLAGS) in\nsrc/Makefile.global, and then src/backend/Makefile uses $(LDFLAGS)\nin the backend link rule (line 40 in current source).
I don't see any\ninclusion of flags at all in the Windows link rule at lines 48, 50.\nPresumably these ought to at least mention $(PROFILE), and I wonder\nwhether they should not say $(LDFLAGS).\n\nPlease check it out and submit a patch...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 20 May 2002 10:15:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More schema queries " } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 20 May 2002 15:16\n> To: Dave Page\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: More schema queries \n> \n> \n> \"Dave Page\" <dpage@vale-housing.co.uk> writes:\n> > gcc -g -o postgres.exe -Wl,--base-file,postgres.base postgres.exp \n> > access/SUBSYS.o bootstrap/SUBSYS.o catalog/SUBSYS.o parser/SUBSYS.o \n> > commands/SUBSYS.o executor /SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o \n> > main/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o port/SUBSYS.o \n> > postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o \n> storage/ SUBSYS.o \n> > tcop/SUBSYS.o utils/SUBSYS.o -lcygipc -lcrypt\n> > access/SUBSYS.o(.text+0x13):heaptuple.c: undefined reference to \n> > `mcount'\n> \n> On Unix it's necessary for the link step to include a -pg \n> switch, just like the compile steps. This is evidently not \n> happening in the Windows case.\n> \n> In the Unix case, $(PROFILE) gets incorporated into \n> $(LDFLAGS) in src/Makefile.global, and then \n> src/backend/Makefile uses $(LDFLAGS) in the backend link rule \n> (line 40 in current source). I don't see any inclusion of \n> flags at all in the Windows link rule at lines 48, 50. \n> Presumably these ought to at least mention $(PROFILE), and I \n> wonder whether they should not say $(LDFLAGS).\n>\n> Please check it out and submit a patch...\n\nAttached (& CC'd to -patches). It all built OK with this, so I'll go off\nand play with initdb & gprof now.\n\nRegards, Dave.", "msg_date": "Mon, 20 May 2002 16:11:36 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: More schema queries " }, { "msg_contents": "\"Dave Page\" <dpage@vale-housing.co.uk> writes:\n> Attached (& CC'd to -patches). It all built OK with this, so I'll go off\n> and play with initdb & gprof now.\n\nMakefile patch applied. We're done with this issue now, right?
Or were\nthere still loose ends?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 May 2002 17:47:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More schema queries " } ]
[ { "msg_contents": "With CVS tip (and float-type timestamps) I get\n\nregression=# select date('2002-02-01 00:00:00'::timestamp);\n date\n------------\n 2000-01-01\n(1 row)\n\nSeems to be the same result no matter what timestamp is put in.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 20 May 2002 11:25:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "timestamp-to-date broken in current sources" }, { "msg_contents": "> With CVS tip (and float-type timestamps) I get\n...\n> Seems to be the same result no matter what timestamp is put in.\n\nYup. Broken for double timestamps. Will patch...\n\n - Thomas\n", "msg_date": "Mon, 20 May 2002 09:16:44 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: timestamp-to-date broken in current sources" } ]
[ { "msg_contents": "I've been giving some more thought to the makecert (server)\nand pgkeygen (client) programs, and may have a clean solution\nto the problem of mapping certs to pgusers.\n\n1) makecert will have the ability to generate two different types\n   of certs: root (CA) certs and server certs. The server certs\n   are signed by the root cert and server certs can be fully\n   verified by adding the root cert to the user's ~/.postgresql\n   directory.\n\n   This tool would be run once soon after installation.\n\n2) pgkeygen will create basic self-signed certs. These certs\n   must be signed by the DBA before they're usable.\n\n   This tool would be run once by each user.\n\n3) a third tool (signcert?) is used to sign the user certs with\n   the root cert created by makecert. This allows the backend\n   to require client certs be signed by a trusted CA... and to\n   trust the pguser string within them. Clients could still use\n   SSL without client certs, but it couldn't be used for \n   authentication.\n\n   This tool would take two arguments - the client's self-signed\n   (or previously signed) cert and a pguser string. The cert\n   would be modified to include an altSubjName extension that\n   identifies the pguser (e.g., \"postgresql: pguser\") and then \n   signed.\n\n   This tool would be run as-needed by the DBA, and would be\n   required to support SSL-based authentication. \n\nThere are other issues with CRLs, renewals, etc., but they can\nbe pushed off for a while.\n\nBear\n", "msg_date": "Mon, 20 May 2002 09:46:33 -0600 (MDT)", "msg_from": "Bear Giles <bgiles@coyotesong.com>", "msg_from_op": true, "msg_subject": "more on makecert and pgkeygen" } ]
[ { "msg_contents": "Is there an implemented JDBC that supports SSL? How are you testing the SSL\npatches to postgresql server?\n\nThere is a posting\n(http://archives.postgresql.org/pgsql-jdbc/2002-02/msg00139.php) that\ndiscusses how to add SSL sockets to the current JDBC source code. We at\nnuBridges http://www.nubridges.com/ did something similar for Apache SOAP\nand would rather not do it again for postgresql JDBC.\n\nI've read Bear's various postings about SSL patches to the postgresql server\n(e.g. http://archives.postgresql.org/pgsql-hackers/2002-05/msg00793.php on\n5-20-02). It's not clear when there will be a stable version of postgresql\nwith SSL support. Any clues?\n\nthanks,\nRich Elwyn", "msg_date": "Mon, 20 May 2002 13:23:19 -0400", "msg_from": "\"Rich Elwyn\" <relwyn@charter.net>", "msg_from_op": true, "msg_subject": "JDBC with SSL for postgresql" } ]
[ { "msg_contents": "Attached is the first cut at mkcert.sh, a tool to create PostgreSQL\nserver certificates. It also sets up a directory suitable for the\nOpenSSL CA tool, something that can be used to sign client certs.\n\nThe root cert should be added to the backend SSL cert verification\ntools, and copied to user's .postgresql directory so the client\ncan verify the server cert. This one root cert can be used for\nmultiple server certs in addition to all client certs. \n\nAlso, this script sets up DSA keys/certs. With ephemeral DH keys the\nserver (and client) keys are only used to sign the ephemeral keys,\nso you can use DSA keys. Without ephemeral keys you would need to\nuse RSA keys since those keys are used for encryption in addition\nto signing.\n\nSome predictable changes:\n\n1) the root key should be encrypted, since it isn't necessary for\n   the system to boot. (Extreme case: the root key should be\n   kept off the hard disk, perhaps in a smart card.)\n\n2) the 'openssl.conf' file could be split into 'root.conf' and\n   'server.conf' files so the prompts can be a bit more suggestive.\n   There should also be a 'client.conf' file for client certs,\n   and it should be copied to /etc/postgresql and visible to clients.\n\n   (To avoid the hassles of requiring clients have the OpenSSL\n   tools bundled, pgkeygen should be a binary program instead of\n   a script.)\n\n3) there should be a sample domain-component config file in addition\n   to the geopolitical one. That gives DNs like\n\n   DC=com/DC=example/CN=eris.example.com/email=postgres@example.com\n\n   instead of\n\n   C=US/ST=Colorado/O=Snakeoil/CN=eris.example.com/email=postgres@example.com\n\nBear", "msg_date": "Mon, 20 May 2002 12:29:54 -0600 (MDT)", "msg_from": "Bear Giles <bgiles@coyotesong.com>", "msg_from_op": true, "msg_subject": "First cut at mkcert" } ]
[ { "msg_contents": "In ProcedureCreate() (backend/catalog/pg_proc.c) there are special cases\nfor the built-in languages that check whether the to-be-created function\nhas a valid body. ISTM that we could extend that for user-defined\nfunctions, as follows.\n\nWhen creating a language, the user can optionally register a \"check\"\nfunction for the language, whose responsibility is to check the supplied\nfunction body for correctness and return a Boolean result. This function\nwould be executed at the time the function is created.\n\nFor example, for PL/Perl, the check function could execute the equivalent\nof 'perl -c', or if we have a Java language in the future it could check\nwhether certain classes are loadable.\n\nComments?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 20 May 2002 21:11:28 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Language-specific initialization actions" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> In ProcedureCreate() (backend/catalog/pg_proc.c) there are special cases\n> for the built-in languages that check whether the to-be-created function\n> has a valid body. ISTM that we could extend that for user-defined\n> functions, as follows.\n\n> When creating a language, the user can optionally register a \"check\"\n> function for the language, whose responsibility is to check the supplied\n> function body for correctness and return a Boolean result. This function\n> would be executed at the time the function is created.\n\nAre you planning to also move the existing special cases out to\nfunctions called through this same interface? 
That would make pg_proc.c\na lot cleaner, I think.\n\nI don't see any value in returning a boolean; might as well let the\nthing just throw an elog --- with, one hopes, an error message somewhat\nmore specific than \"bad function body\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 20 May 2002 20:08:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Language-specific initialization actions " } ]
[ { "msg_contents": "Since we now have an official entry in /etc/services, shouldn't we be able\nto make use of it, by using getservbyname() if a nonnumeric port number is\nspecified?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 20 May 2002 21:12:57 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Use of /etc/services?" }, { "msg_contents": "Peter Eisentraut wrote:\n> Since we now have an official entry in /etc/services, shouldn't we be able\n> to make use of it, by using getservbyname() if a nonnumeric port number is\n> specified?\n\nIs any OS actually shipping us in /etc/services?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jun 2002 01:41:34 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use of /etc/services?" }, { "msg_contents": "On Fri, 7 Jun 2002, Bruce Momjian wrote:\n\n> Peter Eisentraut wrote:\n> > Since we now have an official entry in /etc/services, shouldn't we be able\n> > to make use of it, by using getservbyname() if a nonnumeric port number is\n> > specified?\n>\n> Is any OS actually shipping us in /etc/services?\n\nNetBSD will be, as of 1.7, though the 1.7 release is a while away yet.\n(Sorry, I didn't find out about this in time to get it into for 1.6,\nwhich is just about to be released.)\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Fri, 7 Jun 2002 15:01:42 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Use of /etc/services?" 
}, { "msg_contents": "Curt Sampson wrote:\n> On Fri, 7 Jun 2002, Bruce Momjian wrote:\n> \n> > Peter Eisentraut wrote:\n> > > Since we now have an official entry in /etc/services, shouldn't we be able\n> > > to make use of it, by using getservbyname() if a nonnumeric port number is\n> > > specified?\n> >\n> > Is any OS actually shipping us in /etc/services?\n> \n> NetBSD will be, as of 1.7, though the 1.7 release is a while away yet.\n> (Sorry, I didn't find out about this in time to get it into for 1.6,\n> which is just about to be released.)\n\nSure, then let's start using getservbyname(), if it works.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jun 2002 02:02:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use of /etc/services?" }, { "msg_contents": "\nFreeBSD\n\nOn Fri, 7 Jun 2002, Bruce Momjian wrote:\n\n> Peter Eisentraut wrote:\n> > Since we now have an official entry in /etc/services, shouldn't we be able\n> > to make use of it, by using getservbyname() if a nonnumeric port number is\n> > specified?\n>\n> Is any OS actually shipping us in /etc/services?\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Fri, 7 Jun 2002 09:08:48 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Use of /etc/services?" }, { "msg_contents": "I see PostgreSQL in /etc/services on an upgraded Redhat Linux 7.3 system. 
\nDon't think it was me adding it since I didn't have PG running on the \nsystem.\n\n\nRod\n-- \n Please don't tell my mother I'm a System Administrator.\n She thinks I play piano in a bordello.\n\n", "msg_date": "Fri, 7 Jun 2002 05:59:44 -0700 (PDT)", "msg_from": "\"Roderick A. Anderson\" <raanders@acm.org>", "msg_from_op": false, "msg_subject": "Re: Use of /etc/services?" }, { "msg_contents": "SuSE Linux 8.0\n\nOn Fri, 2002-06-07 at 07:41, Bruce Momjian wrote:\n> Peter Eisentraut wrote:\n> > Since we now have an official entry in /etc/services, shouldn't we be able\n> > to make use of it, by using getservbyname() if a nonnumeric port number is\n> > specified?\n> \n> Is any OS actually shipping us in /etc/services?\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \nEttore Simone\nSuSE Linux srl Cel. +39 348 4904011\n\nVia Montanara, 26 Tel. +39 059 5395 41\n41051 Castelnuovo R. (MO) Fax +39 059 5332009\n\nVia Proust, 40 Tel. +39 06 50514545\n00143 Roma\n", "msg_date": "07 Jun 2002 15:05:59 +0200", "msg_from": "Ettore Simone <esimone@suse.it>", "msg_from_op": false, "msg_subject": "Re: Use of /etc/services?" }, { "msg_contents": "\tHi!\n\n\tMandrake Linux release 8.1 also and without postgres rpms installed.\n\n[nach@golfinho ~]$ cat /etc/services | grep 5432\n#\t\t5432-5434 Unassigned\npostgres 5432/tcp # POSTGRES\npostgres 5432/udp # POSTGRES\n\n-- \n o__\t\tBem haja,\n _.>/ _\t\t\tNunoACHenriques\n (_) \\(_)\t\t\t~~~~~~~~~~~~~~~\n\t\t\t\thttp://students.fct.unl.pt/users/nuno/\n\nOn Fri, 7 Jun 2002, Roderick A. Anderson wrote:\n\n>I see PostgreSQL in /etc/services on an upgraded Redhat Linux 7.3 system. 
\n>Don't think it was me adding it since I didn't have PG running on the \n>system.\n>\n>\n>Rod\n>\n\n", "msg_date": "Fri, 7 Jun 2002 14:07:37 +0100 (WEST)", "msg_from": "NunoACHenriques <nach@fct.unl.pt>", "msg_from_op": false, "msg_subject": "Re: Use of /etc/services?" }, { "msg_contents": "Debian \"woody\" (to be 3.0 RSN . . . or something) has this in\n/etc/services:\n\npostgres 5432/tcp # POSTGRES\npostgres 5432/udp # POSTGRES\n\nA\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 7 Jun 2002 10:18:03 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Use of /etc/services?" }, { "msg_contents": "> Is any OS actually shipping us in /etc/services?\n\nIt's right here in SuSE Linux 8.0. It was not in 7.3, so maybe it's \nofficially included from now on.\n", "msg_date": "Fri, 7 Jun 2002 16:57:53 +0200", "msg_from": "Kaare Rasmussen <kar@bering.webline.dk>", "msg_from_op": false, "msg_subject": "Re: Use of /etc/services?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Peter Eisentraut wrote:\n> > Since we now have an official entry in /etc/services, shouldn't we be able\n> > to make use of it, by using getservbyname() if a nonnumeric port number is\n> > specified?\n> Is any OS actually shipping us in /etc/services?\n\nDebian GNU/Linux is, or at least will be for the imminent 3.0 release.\n\nMike.\n", "msg_date": "07 Jun 2002 11:55:14 -0400", "msg_from": "Michael Alan Dorman <mdorman@debian.org>", "msg_from_op": false, "msg_subject": "Re: Use of /etc/services?" 
}, { "msg_contents": "On Fri, 2002-06-07 at 10:55, Michael Alan Dorman wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Peter Eisentraut wrote:\n> > > Since we now have an official entry in /etc/services, shouldn't we be able\n> > > to make use of it, by using getservbyname() if a nonnumeric port number is\n> > > specified?\n> > Is any OS actually shipping us in /etc/services?\n> \n> Debian GNU/Linux is, or at least will be for the imminent 3.0 release.\nIt's in FreeBSD 4-STABLE, and definitely in 4.6-RELEASE which is due out\ntomorrow. (or shortly thereafter). \n\n\n> \n> Mike.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "07 Jun 2002 13:03:34 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Use of /etc/services?" }, { "msg_contents": "NunoACHenriques writes:\n\n> \tMandrake Linux release 8.1 also and without postgres rpms installed.\n>\n> [nach@golfinho ~]$ cat /etc/services | grep 5432\n> #\t\t5432-5434 Unassigned\n> postgres 5432/tcp # POSTGRES\n> postgres 5432/udp # POSTGRES\n\nThis is inconsistent with the official IANA assignment which reads\n\npostgresql 5432/tcp # PostgreSQL Database\npostqresql 5432/udp # PostgreSQL Database\n# Tom Lane <tgl@sss.pgh.pa.us>\n\n(The spelling might have been fixed by now.)\n\nYou should probably file a bug report for your OS.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sat, 8 Jun 2002 00:26:05 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Use of /etc/services?" 
}, { "msg_contents": "Bruce Momjian writes:\n\n> Sure, then let's start using getservbyname(), if it works.\n\nOne thing that had occurred to me is that this probably doesn't work in\nJava, so you couldn't do configure --with-pgport=postgresql. That reduces\nthe potential value a lot.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sat, 8 Jun 2002 00:26:32 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Use of /etc/services?" }, { "msg_contents": "On Fri, 7 Jun 2002, Bruce Momjian wrote:\n\n> Peter Eisentraut wrote:\n> > Since we now have an official entry in /etc/services, shouldn't we be able\n> > to make use of it, by using getservbyname() if a nonnumeric port number is\n> > specified?\n>\n> Is any OS actually shipping us in /etc/services?\n\nSuSE 8.0:\n\npostgresql\t5432/tcp\t# PostgreSQL Database\npostqresql\t5432/udp\t# PostgreSQL Database\n\nI'll check OpenBSD 3.1 when I'm done installing.\n\n-- \nJonathan Conway\t\t\t\t\t\t rise@knavery.net\nhistory is paling & my surge protection failed, & so I FRIED\n\t\t\t\t\t\t- Concrete Blonde, \"Fried\"\n\n", "msg_date": "Sat, 8 Jun 2002 01:12:24 -0600 (MDT)", "msg_from": "rise <rise@knavery.net>", "msg_from_op": false, "msg_subject": "Re: Use of /etc/services?" }, { "msg_contents": "rise <rise@knavery.net> writes:\n> On Fri, 7 Jun 2002, Bruce Momjian wrote:\n>> Is any OS actually shipping us in /etc/services?\n\n> SuSE 8.0:\n\n> postgresql\t5432/tcp\t# PostgreSQL Database\n> postqresql\t5432/udp\t# PostgreSQL Database\n\nMph, complete with the typo in the UDP entry. Hang onto that, it'll\nbe a collector's item someday ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jun 2002 10:50:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use of /etc/services? 
" }, { "msg_contents": "Tom Lane wrote:\n> rise <rise@knavery.net> writes:\n> > On Fri, 7 Jun 2002, Bruce Momjian wrote:\n> >> Is any OS actually shipping us in /etc/services?\n> \n> > SuSE 8.0:\n> \n> > postgresql\t5432/tcp\t# PostgreSQL Database\n> > postqresql\t5432/udp\t# PostgreSQL Database\n> \n> Mph, complete with the typo in the UDP entry. Hang onto that, it'll\n> be a collector's item someday ;-)\n\nIsn't Suse centralizing development for the new UnitedLinux2? Guess Red\nHat doesn't have much to worry about. ;-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n", "msg_date": "Sat, 8 Jun 2002 11:15:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use of /etc/services?" }, { "msg_contents": "On Sat, 8 Jun 2002, Peter Eisentraut wrote:\n\n> This is inconsistent with the official IANA assignment which reads\n\nThanks. I'll update my services file and check all those I come into \ncontact with. I'll check if a new install of Redhat 7.3 has the correct \nentries this weekend.\n\n> postgresql 5432/tcp # PostgreSQL Database\n> postqresql 5432/udp # PostgreSQL Database\n> # Tom Lane <tgl@sss.pgh.pa.us>\n> \n> (The spelling might have been fixed by now.)\n\nIt is corrected.\n\n> You should probably file a bug report for your OS.\n\nInteresting. I've never done this before. Most of the problems like this \nI see after someone else has reported them. Maybe I'll get my 5 minutes \nof fame.\n\n\nCheers,\nRod\n-- \n Please don't tell my mother I'm a System Administrator.\n She thinks I play piano in a bordello.\n\n", "msg_date": "Sat, 8 Jun 2002 10:25:16 -0700 (PDT)", "msg_from": "\"Roderick A. Anderson\" <raanders@acm.org>", "msg_from_op": false, "msg_subject": "Re: Use of /etc/services?" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 17 May 2002 23:24\n> To: Dave Page\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] More schema queries \n> \n> \n> > 2200 | pg_stat_all_tables\n> > 2200 | pg_stat_sys_tables\n> \n> Bizarre. It's not that way here. Would you mind updating to \n> CVS tip, rebuilding, and seeing if you can duplicate that? \n> Also, make sure you're using the right initdb script ...\n\nOK, brand new Cygwin installation, built from CVS tip and still the\nviews are in public. I checked 4 times that I'm using the correct\ninitdb, even manually installing it from the source tree. I then hacked\ninitdb and prepended 'pg_catalog.' to the view names in the CREATE\nVIEWs. Cleared my data dir, ran initdb and the views are still in\npublic. I then cleared & ran initdb with --debug. The views were *again*\nin public, and no errors were seen.\n\nI'm confused. Does the standalone backend not deal with schemas fully\nand is silently failing 'cos there's nothing technically wrong with the\npg_catalog.viewname syntax? Or do I just not know what the heck I'm\ndoing :-)\n\n> Curious. I have not noticed much of any change in postmaster \n> startup time on Unix. Can you run a profile or something to \n> see where the time is going?\n\nOn my clean cygwin installation this problem is no longer present. I\nguess something got screwed up in my old installation that - maybe\nsomething from the 7.2 installation that ran in parallel (thought that\nworked fine).\n\nRegards, Dave.\n", "msg_date": "Mon, 20 May 2002 20:22:51 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: More schema queries " }, { "msg_contents": "\"Dave Page\" <dpage@vale-housing.co.uk> writes:\n> I'm confused. 
Does the standalone backend not deal with schemas fully\n> and is silently failing 'cos there's nothing technically wrong with the\n> pg_catalog.viewname syntax?\n\nThe standalone backend does schemas just fine. What is supposed to\nensure that the views get created in pg_catalog is the bit in initdb:\n\nPGSQL_OPT=\"$PGSQL_OPT -O --search_path=pg_catalog\"\n\nThe -- parameter should do the equivalent of\n\tSET search_path = pg_catalog;\nbut apparently it's not working for you; if it weren't there then the\nviews would indeed get created in public.\n\nAny idea why it's not working?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 20 May 2002 20:00:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More schema queries " } ]
[ { "msg_contents": "Attached is the first cut at some SSL documentation for the\nPostgreSQL manual. It's in plain text, not DocBook, to make\nediting easy for the first few revisions. The documentation\nleads the code by a day or so.\n\nAlso, I'm still having problems with the patches list - none\nof my recent submissions have gotten through, and I haven't\neven gotten the confirmation note from when I tried to resubscribe\nto that list. That's why the main SSL patches haven't appeared yet.\n\nBear", "msg_date": "Mon, 20 May 2002 14:03:54 -0600 (MDT)", "msg_from": "Bear Giles <bgiles@coyotesong.com>", "msg_from_op": true, "msg_subject": "First cut at SSL documentation" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nBear Giles wrote:\n> Attached is the first cut at some SSL documentation for the\n> PostgreSQL manual. It's in plain text, not DocBook, to make\n> editing easy for the first few revisions. The documentation\n> leads the code by a day or so.\n> \n> Also, I'm still having problems with the patches list - none\n> of my recent submissions have gotten through, and I haven't\n> even gotten the confirmation note from when I tried to resubscribe\n> to that list. That's why the main SSL patches haven't appeared yet.\n> \n> Bear\n\nContent-Description: /tmp/ssldoc\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 13 Jun 2002 20:29:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] First cut at SSL documentation" }, { "msg_contents": "\nSorry, there is a newer version. I will use that one.\n\n---------------------------------------------------------------------------\n\nBear Giles wrote:\n> Attached is the first cut at some SSL documetation for the\n> PostgreSQL manual. It's in plain text, not DocBook, to make\n> editing easy for the first few revisions. The documentation\n> leads the code by a day or so.\n> \n> Also, I'm still having problems with the patches list - none\n> of my recent submissions have gotten through, and I haven't\n> even gotten the confirmation note from when I tried to resubscribe\n> to that list. That's why the main SSL patches haven't appeared yet.\n> \n> Bear\n\nContent-Description: /tmp/ssldoc\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 13 Jun 2002 20:30:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] First cut at SSL documentation" }, { "msg_contents": "> Sorry, there is a newer version. 
I will use that one.\n\nYou may want to hold off on that - I've been busy lately and haven't had\na chance to revisit the documentation or change some of the literal constants\nto numeric constants, but it's been on my \"to do\" list.\n\nThe latter didn't affect the other patches since I planned on doing a\nlatter-day patch anyway, but the documentation may need some big changes\nto emphasize the rule that it's \"use SSH tunnels if you just want\nto prevent eavesdropping, use SSL directly if you need to firmly establish\nthe identity of the server or clients.\"\n\n(And sorry about responding via the lists, but your mail server doesn't\nlike to talk to cable modem users.)\n\nBear\n", "msg_date": "Thu, 13 Jun 2002 18:58:33 -0600 (MDT)", "msg_from": "Bear Giles <bgiles@coyotesong.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] First cut at SSL documentation" }, { "msg_contents": "Bear Giles wrote:\n> > Sorry, there is a newer version. I will use that one.\n> \n> You may want to hold off on that - I've been busy lately and haven't had\n> a chance to revisit the documentation or change some of the literal constants\n> to numeric constants, but it's been on my \"to do\" list.\n\nOK, thanks. I will hold off on the docs part.\n\nSorry it has taken me so long to get to these SSL patches (my vacation).\nI am doing them now.\n\n> The latter didn't affect the other patches since I planned on doing a\n> latter-day patch anyway, but the documentation may need some big changes\n> to emphasize the rule that it's \"use SSH tunnels if you just want\n> to prevent eavesdropping, use SSL directly if you need to firmly establish\n> the identity of the server or clients.\"\n> \n> (And sorry about responding via the lists, but your mail server doesn't\n> like to talk to cable modem users.)\n\nSorry about the block. RBL+ has been much more effective lately, and it\nis because they are blocking more dialup users. This the first false\npositive I have gotten from them. 
You can use momjian@postgresql.org or\nroute your email through west.navpoint.com. I will see if I can pass\nyour IP through. I can do it in my blacklist, but I am not sure that\nworks for RBL+.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 13 Jun 2002 22:34:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] First cut at SSL documentation" }, { "msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [020613 21:49]:\n> Bear Giles wrote:\n> > > Sorry, there is a newer version. I will use that one.\n> > \n> > You may want to hold off on that - I've been busy lately and haven't had\n> > a chance to revisit the documentation or change some of the literal constants\n> > to numeric constants, but it's been on my \"to do\" list.\n> \n> OK, thanks. I will hold off on the docs part.\n> \n> Sorry it has taken me so long to get to these SSL patches (my vacation).\n> I am doing them now.\n> \n> > The latter didn't affect the other patches since I planned on doing a\n> > latter-day patch anyway, but the documentation may need some big changes\n> > to emphasize that the rule that it's \"use SSH tunnels if you just want\n> > to prevent eavesdropping, use SSL directly if you need to firmly establish\n> > the identity of the server or clients.\"\n> > \n> > (And sorry about responding via the lists, but your mail server doesn't\n> > like to talk to cable modem users.)\n> \n> Sorry about the block. RBL+ has been much more effective lately, and it\n> is because they are blocking more dialup users. This the first false\n> positive I have gotten from them. You can use momjian@postgresql.org or\n> route your email through west.navpoint.com. I will see if I can pass\n> your IP through. 
I can do it in my blacklist, but I am not sure that\n> works for RBL+.\nIf you are using sendmail, the access file overrides the RBL, if you\nset delay checks in the MC file. \n\nI can help if you are using sendmail.\n\nLER\n\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Thu, 13 Jun 2002 21:55:13 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] First cut at SSL documentation" }, { "msg_contents": "Larry Rosenman wrote:\n> > Sorry about the block. RBL+ has been much more effective lately, and it\n> > is because they are blocking more dialup users. This the first false\n> > positive I have gotten from them. You can use momjian@postgresql.org or\n> > route your email through west.navpoint.com. I will see if I can pass\n> > your IP through. I can do it in my blacklist, but I am not sure that\n> > works for RBL+.\n> If you are using sendmail, the access file overrides the RBL, if you\n> set delay checks in the MC file. \n> \n> I can help if you are using sendmail.\n\nYes, using sendmail. That is helpful info. I don't have delay checks\nenabled right now, but can easily do that. Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 13 Jun 2002 23:31:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] First cut at SSL documentation" }, { "msg_contents": "Larry Rosenman wrote:\n> > Sorry about the block. RBL+ has been much more effective lately, and it\n> > is because they are blocking more dialup users. This the first false\n> > positive I have gotten from them. You can use momjian@postgresql.org or\n> > route your email through west.navpoint.com. I will see if I can pass\n> > your IP through. I can do it in my blacklist, but I am not sure that\n> > works for RBL+.\n> If you are using sendmail, the access file overrides the RBL, if you\n> set delay checks in the MC file. \n> \n> I can help if you are using sendmail.\n\nOK, Bear, configured for 192.168.1.3. Would you shoot me a personal\nemail as a test? Send failure message to momjian@postgresql.org. \nThanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 13 Jun 2002 23:37:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] First cut at SSL documentation" }, { "msg_contents": "Bear, there is some IPv6 stuff in fe-secure.c. Is this intended? We\ndon't support IPv6 in the backend yet, do we. We are having portability\nproblems with that 'case' statement and I am considering removing it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 19 Jun 2002 02:59:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Problem with SSL and IPv6" } ]
[ { "msg_contents": "> \n> Hmm. Which file(s) were growing, exactly? How many row updates is this\n> run covering?\n> \n\nThe toast table gets about 90 percent of the growth, followed by the toast \nindex at about 9 percent. The actual table + primary key stay at about 2M each.\n\nI neglected to mention what the update statement actually was :\n\nUPDATE grow SET body = ? WHERE id = ?\n\nSo the untoasted elements are not being altered at all...\n\nA typical run has 2 threads each of which updates the entire table (20,000 \nrows) every 2000 s.\n\nThe vacuum thread manages to get 6-7 vacuums in before both threads update the \nentire table.\n\nregards\n\nMark\n\n\n", "msg_date": "Mon, 20 May 2002 21:37:17 GMT", "msg_from": "<markir@slingshot.co.nz>", "msg_from_op": true, "msg_subject": "Re: Unbounded (Possibly) Database Size Increase - Toasting " } ]
[ { "msg_contents": "Folks,\n\nFound this interesting bug:\n\njwnet=> select version();\n version\n---------------------------------------------------------------\n PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.95.3\n(1 row)\n\njwnet=> select ('2001-07-31 10:00:00 PST'::TIMESTAMP) + ('248 days'::INTERVAL) \n;\n ?column?\n------------------------\n 2002-04-05 10:00:00-08\n(1 row)\n\njwnet=> select ('2001-07-31 10:00:00 PST'::TIMESTAMP) + ('249 days'::INTERVAL) \n;\n ?column?\n------------------------\n 2002-04-06 10:00:00-08\n(1 row)\n\njwnet=> select ('2001-07-31 10:00:00 PST'::TIMESTAMP) + ('250 days'::INTERVAL) \n;\n ?column?\n------------------------\n 2002-04-07 11:00:00-07\n\njwnet=> select ('2001-04-01 10:00:00 PST'::TIMESTAMP) + ('100 days'::INTERVAL) \n;\n ?column?\n------------------------\n 2001-07-10 11:00:00-07\n\n\nIt appears that Spring Daylight Savings Time causes PostgreSQL to change my \ntime zone. Only the spring, mind you, and not the fall. This is \npotentially catastrophic for the application I'm developing; what can I do to \nsee that it's fixed? Or am I misunderstanding the behavior, here?\n\n-- \n-Josh Berkus\n\nP.S. I'm posting this here instead of the online bug form because I know that \nBruce is on vacation.\n\n\n", "msg_date": "Mon, 20 May 2002 15:34:32 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Bug with Daylight Savings Time & Interval" }, { "msg_contents": "Josh Berkus <josh@agliodbs.com> writes:\n> Found this interesting bug:\n> jwnet=> select ('2001-07-31 10:00:00 PST'::TIMESTAMP) + ('249 days'::INTERVAL) \n> ;\n> ?column?\n> ------------------------\n> 2002-04-06 10:00:00-08\n> (1 row)\n\n> jwnet=> select ('2001-07-31 10:00:00 PST'::TIMESTAMP) + ('250 days'::INTERVAL) \n> ;\n> ?column?\n> ------------------------\n> 2002-04-07 11:00:00-07\n\nThis isn't a bug per the existing definition of INTERVAL. '250 days' is\ndefined as '250*24 hours', exactly, no more no less. 
When you move\nacross a DST boundary you get behavior like the above.\n\nI've opined several times that interval should account for three\nseparate units: months, days, and seconds. But our time-meister\nTom Lockhart doesn't seem to have taken any interest in the idea.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 20 May 2002 22:47:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] Bug with Daylight Savings Time & Interval " }, { "msg_contents": "Tom and Tom,\n\n> This isn't a bug per the existing definition of INTERVAL. '250 days'\n> is\n> defined as '250*24 hours', exactly, no more no less. When you move\n> across a DST boundary you get behavior like the above.\n \n> I've opined several times that interval should account for three\n> separate units: months, days, and seconds. But our time-meister\n> Tom Lockhart doesn't seem to have taken any interest in the idea.\n\nI beg to differ with Tom L. Even if there were justification for the\naddition of an hour to a calculation involving only days, which there\nis not, there are two bugs with the existing behavior:\n\n1. You do not lose an hour with the end of DST, you just gain one with\nthe beginning of it (until you wraparound a whole year, which is really\nconfusing), which is inconsistent;\n\n2. Even if you justify gaining or losing an hour through DST in a\n'+days' operation, changing the TIMEZONE is a bizarre and confusing way\nto do it. I don't fly to Colorado on April 7th!\n\nWhile this needs to be fixed eventually, I need a quick workaround; is\nthere a way to \"turn off\" DST behavior in PostgreSQL?\n\nFurther, it seems that the whole \"Interval\" section of Postgres,\npossibly one of our greatest strengths as a database, has languished in\nthe realm of inconsistent behavior due to lack of interest. Is there\nanything I can do without learning C? 
\n\n-Josh Berkus\n", "msg_date": "Mon, 20 May 2002 22:57:46 -0700", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: [SQL] Bug with Daylight Savings Time & Interval " }, { "msg_contents": "> > I've opined several times that interval should account for three\n> > separate units: months, days, and seconds. But our time-meister\n> > Tom Lockhart doesn't seem to have taken any interest in the idea.\n\nI have taken an interest in the idea. But have not implemented it and\nhave not concluded that this is the best option. I expect that you will\ncontinue to opine and will continue to take me to task for not following\nyour advice.\n\n> I beg to differ with Tom L. Even if there were justification for the\n> addition of an hour to a calculation involving only days, which there\n> is not, there are two bugs with the existing behavior:\n> 1. You do not lose an hour with the end of DST, you just gain one with\n> the beginning of it (until you wraparound a whole year, which is really\n> confusing), which is inconsistent;\n\nNot actually true (probably due to a cut and paste error in your test\nsuite). Your example specified '2001-07-31 10:00:00 PST' which is\nactually within the PDT time of year. PostgreSQL took you at your word\non this one and evaluated the time as though it were in PST. So you\ndidn't see the 1 hour offset when adding days to another time zone.\n\n> 2. Even if you justify gaining or losing an hour through DST in a\n> '+days' operation, changing the TIMEZONE is a bizarre and confusing way\n> to do it. 
I don't fly to Colorado on April 7th!\n\nI'm not sure what you mean here.\n\n> While this needs to be fixed eventually, I need a quick workaround; is\n> there a way to \"turn off\" DST behavior in PostgreSQL?\n\nConsider using TIMESTAMP WITHOUT TIME ZONE.\n\n> Further, it seems that the whole \"Interval\" section of Postgres,\n> possibly one of our greatest strengths as a database, has languished in\n> the realm of inconsistent behavior due to lack of interest. Is there\n> anything I can do without learning C?\n\nYou can continue to explore the current behavior and to form an opinion\non what correct behavior should be. I've resisted adding fields to the\ninternal interval type for performance and design reasons. As previously\nmentioned, blind verbatim compliance with SQL9x may suggest breaking our\nINTERVAL type into a bunch of pieces corresponding to the different\ninterval ranges specified in the standard. However, the SQL standard is\nchoosing to cover a small subset of common usage to avoid dealing with\nthe implementation complexities and usage patterns which are uncovered\nwhen trying to do more.\n\n - Thomas\n", "msg_date": "Tue, 21 May 2002 08:24:06 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [SQL] Bug with Daylight Savings Time & Interval" }, { "msg_contents": "Tom L,\n\nThanks for answering my pushy opinions!\n\n> Not actually true (probably due to a cut and paste error in your test\n> suite). Your example specified '2001-07-31 10:00:00 PST' which is\n> actually within the PDT time of year. PostgreSQL took you at your\n> word\n> on this one and evaluated the time as though it were in PST. So you\n> didn't see the 1 hour offset when adding days to another time zone.\n\nAha. I understand. That's consistent, even if it doesn't work the way\nI want it (life is difficult that way). However, I would assert that\nit is not at all intuitive, and we need to have it documented\nsomewhere.\n \n> > 2. 
Even if you justify gaining or losing an hour through DST in a\n> > '+days' operation, changing the TIMEZONE is a bizarre and confusing\n> way\n> > to do it. I don't fly to Colorado on April 7th!\n> \n> I'm not sure what you mean here.\n\nMy confusion because of the default way of displaying time zones. It\nlooked to me like Postgres was changing to CST on April 7th. Once\nagain, consistent but not intuitive.\n\n> > While this needs to be fixed eventually, I need a quick workaround;\n> is\n> > there a way to \"turn off\" DST behavior in PostgreSQL?\n> \n> Consider using TIMESTAMP WITHOUT TIME ZONE.\n\nDamn. Doesn't work for me either. I do need to cast stuff into\nseveral time zones, as this is a New York/San Francisco calendar.\n Isn't there a version of GMT -8:00 I can use that doesn't involve\nDST? What does Postgresql do for Arizona (Arizona does not have DST)?\n\n> You can continue to explore the current behavior and to form an\n> opinion\n> on what correct behavior should be. \n\nOliver and I are having a lively discussion regarding Interval math on\nPGSQL-SQL. I would love to have you enter the discussion.\n\n> I've resisted adding fields to\n> the\n> internal interval type for performance and design reasons. \n\nI don't blame you. Data Subtypes is a huge can o' crawdads.\n\n> As\n> previously\n> mentioned, blind verbatim compliance with SQL9x may suggest breaking\n> our\n> INTERVAL type into a bunch of pieces corresponding to the different\n> interval ranges specified in the standard. However, the SQL standard\n> is\n> choosing to cover a small subset of common usage to avoid dealing\n> with\n> the implementation complexities and usage patterns which are\n> uncovered\n> when trying to do more.\n\nOk, so how should things work, then? While I agree that SQL92's spec\nis awkward and limited, we'd need a pretty good argument for breaking\nstandards. 
Oliver is already wearing me down in this regard.\n\n-Josh Berkus\n", "msg_date": "Tue, 21 May 2002 09:05:49 -0700", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: [SQL] Bug with Daylight Savings Time & Interval" }, { "msg_contents": "...\n> Ok, so how should things work, then? While I agree that SQL92's spec\n> is awkward and limited, we'd need a pretty good argument for breaking\n> standards. Oliver is already wearing me down in this regard.\n\nWell, the standard sucks ;)\n\nMy reference on this is Date and Darwen (I think that Date used another\nword than \"sucks\", but his meaning is clear), who reinforced my\nsuspicion that the SQL9x date/time folks were in an altered state when\nthey formulated the standard. The standard is inconsistant, incomplete,\nand does not match common and essential usage for dates and times. Other\nthan that, it does a great job with dates and times :))\n\nI won't try to defend the current PostgreSQL implementation as the way\nit should always be, but it does have hundreds of hours of work in it to\nget where it is. Backing off to \"if it isn't in the standard, then kill\nit\" is a step backwards. I see more than a few more hours of work coming\nwith the unbelievably short sighted glibc breakage recently introduced.\n\nI'm not subscribed to -sql, and would think that if the discussions have\nevolved from \"how do I do this\" to \"how *should* we do this\" then the\ndiscussion should move to -hackers. I'm not subscribed to every list,\nand really can not keep up with the ones I am on now. I recently\nsubscribed to -general because design discussions seem to erupt there,\nbut will likely unsubscribe soon. 
And expect that design happens on\n-hackers where it is intended.\n\n - Thomas\n", "msg_date": "Tue, 21 May 2002 09:19:13 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [SQL] Bug with Daylight Savings Time & Interval" }, { "msg_contents": "Switched to -hackers from -sql and -bugs.\n\nOn Tue, 2002-05-21 at 16:24, Thomas Lockhart wrote:\n> \n> You can continue to explore the current behavior and to form an opinion\n> on what correct behavior should be. I've resisted adding fields to the\n> internal interval type for performance and design reasons. As previously\n> mentioned, blind verbatim compliance with SQL9x may suggest breaking our\n> INTERVAL type into a bunch of pieces corresponding to the different\n> interval ranges specified in the standard. However, the SQL standard is\n> choosing to cover a small subset of common usage to avoid dealing with\n> the implementation complexities and usage patterns which are uncovered\n> when trying to do more.\n\nIt's worth pointing out that the same syntax is in SQL92, so I conclude\nthat no one could think how to improve it through a seven year period.\n\nI don't want to dispose of the existing INTERVAL type, but I would like\nthe functionality offered by the SQL99 types. For example, I want to be\nable to use INTERVAL HOUR(3) TO MINUTE to record the time taken by some\nindustrial process and I don't want '125 hours 15 minutes' converted\ninto '5 days 05:15'.\n\nYou talk of breaking interval into a number of pieces, but I don't see\nthe need. You have already implemented half of what is needed. The\nother part needed is to record the leading field precision, which we can\nsurely do in typmod, where you already store the fractional precision. 
\nAt present you have in AdjustIntervalForTypmod():\n\n int range = ((typmod >> 16) & 0x7FFF);\n int precision = (typmod & 0xFFFF);\n\nand since precision is limited to the range 0-6, we should certainly be\nable to fit the leading field precision into the same space:\n\n int frac_precision = (typmod & 0xFF); /* default is 6 */\n int lead_precision = ((typmod >> 8) & 0xFF); /* default is 2 */\n\nall that is left is a set of rules to validate input and to format\noutput according to the given precision, and to change the parser\nslightly to get the SQL99 syntax right..\n\nNow I'm sure I'm oversimplifying, but where?\n\n\nAs to other common usage, I can see benefits in extending the subtypes\nto include WEEK, and this is conceptually merely an extension of the\nexisting SQL99 DAY TO SECOND type. What other usage do you see that can\nreasonably be translated from fuzzy human talk into solid data? Years\nand months are already handled and can be used meaningfully. What you\ncan't do in SQL99 is translate from exact INTERVAL DAY TO SECOND to\nfuzzy INTERVAL YEAR TO MONTH. I can't see why one should want to, but\nif you do, our existing type system would let us cast INTERVAL DAY TO\nSECOND to INTERVAL, which already does this in a satisfactorily fuzzy\nway. I can even conceive of doing the conversion using a configured\nchoice out of a set of fuzzy conversion options. 
For example: configure\nyear to be 360, 365 or 365.2425 days; configure month to be year/12 or\n30 days or 4 weeks; and so on.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"We are troubled on every side, yet not distressed; we \n are perplexed, but not in despair; persecuted, but not\n forsaken; cast down, but not destroyed; Always bearing\n about in the body the dying of the Lord Jesus, that \n the life also of Jesus might be made manifest in our \n body.\" II Corinthians 4:8-10", "msg_date": "22 May 2002 15:45:39 +0100", "msg_from": "Oliver Elphick <olly@lfix.co.uk>", "msg_from_op": false, "msg_subject": "Re: [SQL] Bug with Daylight Savings Time & Interval" } ]
[ { "msg_contents": "I am still on vacation but have access to an Internet connection today\nand am going through as much email as I can.\n\nFor the curious, I took an 18-day cruise from New York City on April 31,\nthrough the Panama Canal to Seattle, then a 2-day train from Seattle to\nLos Angeles, which is where I am now. I will be visiting San Diego and\nreturning home the night of May 31.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 20 May 2002 18:45:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "I am online today" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 21 May 2002 01:00\n> To: Dave Page\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] More schema queries \n> \n> \n> \"Dave Page\" <dpage@vale-housing.co.uk> writes:\n> > I'm confused. Does the standalone backend not deal with \n> schemas fully \n> > and is silently failing 'cos there's nothing technically wrong with \n> > the pg_catalog.viewname syntax?\n> \n> The standalone backend does schemas just fine. What is \n> supposed to ensure that the views get created in pg_catalog \n> is the bit in initdb:\n> \n> PGSQL_OPT=\"$PGSQL_OPT -O --search_path=pg_catalog\"\n\nThat said, I'm still surprised that prepending 'pg_catalog.' to the view\nnames didn't force them into pg_catalog.\n\n> The -- parameter should do the equivalent of\n> \tSET search_path = pg_catalog;\n> but apparently it's not working for you; if it weren't there \n> then the views would indeed get created in public.\n> \n> Any idea why it's not working?\n\nJust to be doubly sure, I've installed a fresh Cygwin, and confirmed\nthat none of Jason's prepackaged 7.2 got in there by mistake. Built and\ninstalled from CVS tip as of about 9:30AM BST 21/5/02. The problem still\nremains. \n\nI've played with initdb, and confirmed that \n\n$PGSQL_OPT = -F -D/data -o /dev/null -O --search_path=pg_catalog \n\nimmediately prior to the views being created. 
I then tried running a\nsingle user backend in exactly the same way initdb does (bar the\nredirection of the output), and checking the search path:\n\n----\nPC9 $ postgres -F -D/data -O --search_path=pg_catalog template1\nLOG: database system was shut down at 2002-05-21 10:44:50\nLOG: checkpoint record is at 0/49D6B0\nLOG: redo record is at 0/49D6B0; undo record is at 0/0; shutdown TRUE\nLOG: next transaction id: 103; next oid: 16570\nLOG: database system is ready\n\nPOSTGRES backend interactive interface\n$Revision: 1.267 $ $Date: 2002/05/18 15:44:47 $\n\nbackend> select current_schemas();\nblank\n 1: current_schemas (typeid = 1003, len = -1, typmod = -1,\nbyval = f)\n ----\n 1: current_schemas = \"{public}\" (typeid = 1003, len =\n-1, typmod = -1, byval = f)\n ----\n----\n\nWhich makes sense because as you said previously pg_catalog is implictly\nincluded at the beginning of the search path anyway. It then struck me\nthat as that is the case, does the --search_path=pg_catalog get ignored?\nI tested this by creating a view, and then examining it's\npg_class.relnamespace:\n\n----\nbackend> create view testview as select * from pg_class;\nbackend> select relnamespace from pg_class where relname = 'testview';\nblank\n 1: relnamespace (typeid = 26, len = 4, typmod = -1,\nbyval = t)\n ----\n 1: relnamespace = \"2200\" (typeid = 26, len = 4, typmod =\n-1, byval = t)\n ----\n----\n\n2200 is the oid of 'public', so it seems to me that the\n--search_path=pg_catalog is being ignored by the standalone backend for\nsome reason. I then tried explicitly naming the schema:\n\n----\nbackend> create view pg_catalog.testview2 as select * from pg_class;\nbackend> select relnamespace from pg_class where relname = 'testview2';\nblank\n 1: relnamespace (typeid = 26, len = 4, typmod = -1,\nbyval = t)\n ----\n 1: relnamespace = \"11\" (typeid = 26, len = 4, typmod = -1,\nbyval = t)\n ----\n----\n\nThis appears to work fine, so I hacked initdb to prepend the\n'pg_catalog.' 
to the viewnames. Cleared $PGDATA, confirmed I was running\nthe correct initdb, and still, the views are in public - Arrrggghhh!\n\nAny suggestions?\n\nRegards, Dave.\n\n", "msg_date": "Tue, 21 May 2002 11:29:56 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: More schema queries " }, { "msg_contents": "\"Dave Page\" <dpage@vale-housing.co.uk> writes:\n> This appears to work fine, so I hacked initdb to prepend the\n> 'pg_catalog.' to the viewnames. Cleared $PGDATA, confirmed I was running\n> the correct initdb, and still, the views are in public - Arrrggghhh!\n\nWeird. Maybe there is more than one bug involved, because adding\npg_catalog. to the create view should definitely have worked.\nWill try to duplicate that here.\n\n> Any suggestions?\n\nTry changing the PGOPTS setting to use\n\n\t-c search_path=pg_catalog \n\nThat shouldn't make any difference but ...\n\nAlso, you could try setting a breakpoint at RangeVarGetCreationNamespace\n(in backend/catalog/namespace.c) to see what it thinks it's doing and\nwhat's in namespace_search_path at the time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 May 2002 09:17:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More schema queries " } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 21 May 2002 14:17\n> To: Dave Page\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] More schema queries \n> \n> \n> \n> Try changing the PGOPTS setting to use\n> \n> \t-c search_path=pg_catalog \n> \n> That shouldn't make any difference but ...\n\nShouldn't but does :-). Checked & double-checked, that works perfectly.\n\n> Also, you could try setting a breakpoint at \n> RangeVarGetCreationNamespace (in backend/catalog/namespace.c) \n> to see what it thinks it's doing and what's in \n> namespace_search_path at the time.\n\nI'm going to try to do this regardless of the fact it now works - this\nwill be my first play with gdb so it might take me a while but would\nprobably be a useful learning experience. I'll let you know what I find.\n\nRegards, Dave.\n", "msg_date": "Tue, 21 May 2002 14:38:34 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: More schema queries " }, { "msg_contents": "\"Dave Page\" <dpage@vale-housing.co.uk> writes:\n>> Try changing the PGOPTS setting to use\n>> -c search_path=pg_catalog \n>> That shouldn't make any difference but ...\n\n> Shouldn't but does :-). Checked & double-checked, that works perfectly.\n\nI guess your version of getopt() won't cooperate with -- switches.\nI've committed this change in CVS.\n\nI'm still interested in why explicitly saying \"create view pg_catalog.foo\"\ndidn't work ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 May 2002 15:08:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More schema queries " } ]
[ { "msg_contents": "> > They are just wanting to be standard. I know this; I just can't say how I\n> > know this. But the link to the ISO definition is\n> > http://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap04.html#tag_04_14\n...\n> FWIW, here's what I see in the C99 spec pdf for mktime and the tm\n> structure info. I don't have C90 to compare to and I'm not sure that\n> there's anywhere else to look, but.... I assume that the change is\n> over whether returning -1 from mktime is a \"successful completion\" of\n> the function.\n...\n> 3 The mktime function returns the specified calendar time encoded as a\n> value of type time_t. If the calendar time cannot be represented, the\n> function returns the value (time_t)(-1).\n\nRight. Both standards refer to what is defined, and neither specifies a\nbehavior for dates and times prior to 1970. \"Undefined\" means that the\nstandard chooses to not cover that territory.\n\nIn this case, one could fully conform to the standard by returning \"-1\"\nfrom mktime() on error. That is *all* that the standard asks. One could\nalso look at tm_isdst to distinguish if this is a real error or whether\nit happens to be a time 1 second before 1970-01-01 (-1 on error, 0 for\nno DST, 1 for DST, if initialized to -1 before the call).\n\nI'm not sure how to contact the glibc folks in a way which gives full\ndiscussion to this issue. I'm certain that this recent change was done\nin good faith but was arbitrary and capricious in formulation and\napplication. This fundamentally damages the capabilities of Linux-based\nsystems everywhere and is not in the best interests of anyone or\nanything other than its competitors.\n\n - Thomas\n", "msg_date": "Tue, 21 May 2002 06:54:51 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Psql 7.2.1 Regress tests failed on RedHat7.3" } ]
[ { "msg_contents": "<markir@slingshot.co.nz> writes:\n> The toast table gets about 90 percent of the growth, followed by the toast \n> index at about 9 percent. The actual table + primary key stay at about 2M each.\n\nOdd. I wonder whether you are looking at an unintended behavior of the\nfree space map's thresholding mechanism. The toast table will generally\nhave large tuples of consistent size (about 2K each). This will cause\nthe FSM threshold for whether to remember a page to approach 2K, which\nprobably will mean that we forget about pages that could still hold one\ntoast tuple. That might be enough to cause the growth. It may be\nworth playing around with the details of the threshold-setting policy.\n\nIn particular, I'd suggest altering the code in GetPageWithFreeSpace\nand RecordAndGetPageWithFreeSpace (in\nsrc/backend/storage/freespace/freespace.c) to make the threshold\nconverge towards something less than the average request size, perhaps\naverage/2, which you could do with\n\n-\t\tcur_avg += ((int) spaceNeeded - cur_avg) / 32;\n+\t\tcur_avg += (((int) spaceNeeded)/2 - cur_avg) / 32;\n\nPossibly the initial threshold set in create_fsm_rel also needs to be\nsmaller than it is. Not sure about that though.\n\nLet me know how that affects your results ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 May 2002 11:10:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Unbounded (Possibly) Database Size Increase - Toasting " }, { "msg_contents": "On Tue, 21 May 2002 11:10:04 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>Odd. I wonder whether you are looking at an unintended behavior of the\n>free space map's thresholding mechanism. 
The toast table will generally\n>have large tuples of consistent size (about 2K each).\n\nSo we have 4 tuples per page?\n\n>This will cause\n>the FSM threshold for whether to remember a page to approach 2K, which\n>probably will mean that we forget about pages that could still hold one\n>toast tuple.\n\nI thought I was able to follow you up to here.\n\n>That might be enough to cause the growth.\n\nHere I'm lost. The effect you mention explains growth up to a state\nwhere each toast table page holds 3 instead of 4 tuples (1.33 *\ninitial size). Now with each UPDATE we get pages with significantly\nmore free space than 2K. Even if we add a few 1.000 pages being added\nbefore the next VACUUM, we still reach a stable size. Of course this\nonly holds if there are enough FSM slots, which Mark claims to have.\n\nSo IMHO there have to be additional reasons causing *unbounded*\ngrowth. Or am I missing something?\n\nJust my 0.02.\nServus\n Manfred\n", "msg_date": "Tue, 21 May 2002 18:48:53 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: Unbounded (Possibly) Database Size Increase - Toasting " }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> Here I'm lost. The effect you mention explains growth up to a state\n> where each toast table page holds 3 instead of 4 tuples (1.33 *\n> initial size). Now with each UPDATE we get pages with significantly\n> more free space than 2K.\n\nGood point, it should still stabilize with at worst 33% overhead. So\nmaybe I'm barking up the wrong tree.\n\nStill, the FSM code is new in 7.2 and I'm quite prepared to believe that\nthe effect Mark is seeing indicates some problem in it. Anyone care to\nsit down and read through freespace.c? 
It's pretty liberally commented.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 May 2002 13:06:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Unbounded (Possibly) Database Size Increase - Toasting " } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Dave Page \n> Sent: 21 May 2002 14:39\n> To: 'Tom Lane'\n> Cc: pgsql-hackers@postgresql.org\n> Subject: RE: [HACKERS] More schema queries \n> \n> \n> > Also, you could try setting a breakpoint at\n> > RangeVarGetCreationNamespace (in backend/catalog/namespace.c) \n> > to see what it thinks it's doing and what's in \n> > namespace_search_path at the time.\n> \n> I'm going to try to do this regardless of the fact it now \n> works - this will be my first play with gdb so it might take \n> me a while but would probably be a useful learning \n> experience. I'll let you know what I find.\n> \n\nSorry Tom, I know this isn't strictly a PostgreSQL problem, but despite\nmuch time on Google I'm stuck with gdb. I can attach it to the\nstandalone backend at the relevant point in initdb, and have got it to\nbreak in RangeVarGetCreationNamespace. I can also see the call stack &\nregisters etc. \n\nWhat I cannot do is get it to show me anything useful. I only seem to be\nable to step through the assembly code (is it possible to load the C\nsource?), and more importantly, adding a watch (or print-ing)\nnamespace_search_path gives: 167839776. 
Attempting to watch or print\nnamespaceId gives 'Error: No symbol \"namespaceId\" in current context.'.\n\nI'd appreciate any pointers you can give me...\n\nRegards, Dave.\n", "msg_date": "Tue, 21 May 2002 16:26:22 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: More schema queries " }, { "msg_contents": "> What I cannot do is get it to show me anything useful.\n\nIt sounds like gdb does not have access to debugging symbol tables.\n\nFirstly, did you compile with -g (configure --enable-debug)?\n\nSecondly, did you point gdb at the postgres executable when you\nstarted it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 May 2002 11:32:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More schema queries " } ]
[ { "msg_contents": "Noticed that increasing NAMEDATALEN to 128 is still on the TODO.\n\nGiven that the addition of namespaces for 7.3 is going to require many\nclient utilities to be updated anyway, is this a reaonable time to bring\nthis increase into the standard distribution? It seems like it would be\nminor pain whenever we do this, and 7.3 could be as good a time as any.\n\n- J.\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n", "msg_date": "Tue, 21 May 2002 11:41:26 -0400", "msg_from": "\"Joel Burton\" <joel@joelburton.com>", "msg_from_op": true, "msg_subject": "Is 7.3 a good time to increase NAMEDATALEN ?" }, { "msg_contents": "\"Joel Burton\" <joel@joelburton.com> writes:\n> Noticed that increasing NAMEDATALEN to 128 is still on the TODO.\n> Given that the addition of namespaces for 7.3 is going to require many\n> client utilities to be updated anyway, is this a reaonable time to bring\n> this increase into the standard distribution?\n\nRight at the moment we are still trying to understand/eliminate the\nperformance penalty from increasing NAMEDATALEN. At last report\nsomeone had measured it as still being annoying large on pgbench.\n\nI have not done any profiling but my best theory at the moment is that\nthe remaining cost must be in lookup key matching for in-memory hash\ntables. dynahash.c treats keys as fixed-length and always does a\nmemcmp(), which is going to get slower with bigger NAMEDATALEN, even\nif the actually used names aren't getting longer.\n\nThe issue might be fixable by teaching this code to use strcmp() for\nName keys, but I haven't tried.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 May 2002 13:53:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is 7.3 a good time to increase NAMEDATALEN ? 
" }, { "msg_contents": "On Tue, 21 May 2002 11:41:26 -0400\n\"Joel Burton\" <joel@joelburton.com> wrote:\n> Noticed that increasing NAMEDATALEN to 128 is still on the TODO.\n\nThe last benchmarks I saw indicate that there's still a significant\nperformance hit when increasing NAMEDATALEN, whether to 64 or 128.\n\nGiven that only a small percentage of PostgreSQL users need long\nidentifiers, and *everyone* would suffer the performance hit, I'd\nrather that we not touch NAMEDATALEN until more work has been\ndone on attempting to reduce the performance penalty.\n\nUntil then, the people who absolutely, positively must have long\nidentifiers can just raise NAMEDATALEN themselves.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Tue, 21 May 2002 15:19:16 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: Is 7.3 a good time to increase NAMEDATALEN ?" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 21 May 2002 16:33\n> To: Dave Page\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] More schema queries \n> \n> \n> > What I cannot do is get it to show me anything useful.\n> \n> It sounds like gdb does not have access to debugging symbol tables.\n> \n> Firstly, did you compile with -g (configure --enable-debug)?\n\nYes, but when I read this I realised that I forget to 'make clean'\nbefore rebuilding. Having done that I then found that gdb eats about\n100Mb of memory and 50% of cpu without actually displaying itself until\nkilled 10 minutes later. I tried this twice - I guess that gdb under\ncygwin has trouble with large exe's as my machine should handle it\n(PIII-M 1.13GHz, 512Mb).\n\n> Secondly, did you point gdb at the postgres executable when \n> you started it?\n\nYes, I added a 60 second wait to the appropriate part of initdb (-W 60).\nI could also get a stack trace which showed that I had broken in\nRangeVarGetCreationNamespace as intended.\n\nRegards, Dave.\n", "msg_date": "Tue, 21 May 2002 17:07:22 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: More schema queries " }, { "msg_contents": "\"Dave Page\" <dpage@vale-housing.co.uk> writes:\n> Yes, but when I read this I realised that I forget to 'make clean'\n> before rebuilding. Having done that I then found that gdb eats about\n> 100Mb of memory and 50% of cpu without actually displaying itself until\n> killed 10 minutes later. I tried this twice - I guess that gdb under\n> cygwin has trouble with large exe's as my machine should handle it\n> (PIII-M 1.13GHz, 512Mb).\n\nThat's annoying. gdb is quite memory-hungry when dealing with big\nprograms, but as long as you're not running out of memory or swap it\nshould work. AFAIK anyway. 
I remember having to compile only parts\nof a big program with debug support, years ago on a machine that was\npretty small and slow by current standards.\n\nIf you can't get gdb to work then another possibility is the low-tech\napproach: add some debugging printf's to RangeVarGetCreationNamespace.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 May 2002 13:29:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More schema queries " } ]
[ { "msg_contents": "I'm planning to tackle the problem of killing index tuples for dead rows\nduring normal processing (ie, before VACUUM). We've noticed repeatedly\nthat visits to dead heap rows consume a lot of time during indexscans\nof heavily-updated tables. This problem has been discussed before,\nso the general outline of the solution is apparent, but several design\ndecisions remain to be made. Here are my present thoughts:\n\n1. The basic idea is for index_getnext, when it retrieves a tuple that\nturns out to be invisible to the current transaction, to test whether\nthe tuple is dead to *all* transactions; if so, tell the index AM to mark\nthat index tuple as killed. Subsequently the index tuple will be ignored\nuntil it's finally vacuumed. (We cannot try to remove the index tuple\nimmediately, because of concurrency issues; but not returning it out of\nthe index AM during an indexscan should largely solve the performance\nproblem.) Under normal circumstances the time window between \"dead to\nmy transaction\" and \"dead to all transactions\" should not be very large,\nso this approach should not cause very many extra tuple-visibility tests\nto be performed.\n\n2. The second visibility test is the same as VACUUM's: is the tuple\ncommitted dead (or never good) and older than any running transaction's\nxmin? To call HeapTupleSatisfiesVacuum we need an idea of the global\nxmin, but we surely don't want index_getnext calling GetOldestXmin()\nevery time it does this. (Quite aside from the speed penalty, I'm worried\nabout possible deadlocks due to having to grab SInvalLock there.) Instead\nI propose that we modify GetSnapshotData() to compute the current global\nxmin as a byproduct of its existing computation (which it can do almost\nfor free) and stash that in a global variable someplace. index_getnext\ncan then use the global variable to call HeapTupleSatisfiesVacuum. 
This\nwill effectively mean that we do index-tuple killing on the basis of the\nglobal xmin as it stood when we started the current transaction. In some\ncases that might be a little out of date, but using an old xmin cannot\ncause incorrect behavior; at worst an index entry will survive a little\nlonger than it really needs to.\n\n3. What should the API to the index AMs look like? I propose adding\ntwo fields to the IndexScanDesc data structure:\n\nbool\tkill_prior_tuple; /* true if previously returned tuple is dead */\nbool\tignore_killed_tuples; /* true to not return killed entries */\n\nkill_prior_tuple is always set false during RelationGetIndexScan and at\nthe start of index_getnext. It's set true when index_getnext detects\na dead tuple and loops around to call the index AM again. So the index\nAM may interpret it as \"kill the tuple you last returned, ie, the one\nindicated by currentItemData\". Performing this action as part of\namgetnext minimizes the extra overhead needed to kill a tuple --- we don't\nneed an extra cycle of re-locking the current index page and re-finding\nour place.\n\nignore_killed_tuples will be set true in RelationGetIndexScan, but could\nbe set false by callers that want to see the killed index tuples.\n(Offhand I think only VACUUM would want to do that.)\n\nWithin the index AMs, both kill_prior_tuple and ignore_killed_tuples would\nbe examined only by the topmost amgetnext routine. A \"killed\" entry\nbehaves normally with respect to all internal operations of the index AM;\nwe just don't return it to callers when ignore_killed_tuples is true.\nThis will minimize the risk of introducing bugs into the index AMs.\nAs long as we can loop around for the next index tuple before we've\nreleased page locks inside the AM, we should get most of the possible\nperformance benefit with just a localized change.\n\n4. How exactly should a killed index tuple be marked on-disk? 
While there\nis one free bit available in IndexTupleData.t_info, I would prefer to use\nthat bit to expand the index tuple size field to 14 bits instead of 13.\n(This would allow btree index entries to be up to 10K when BLCKSZ is 32K,\nrather than being hard-limited to 8K.) What I am thinking of doing is\nusing the LP_DELETE bit in ItemIdData.lp_flags --- this appears to be\nunused for index tuples. (I'm not sure it's ever set for heap tuples\neither, actually, but it definitely looks free for index tuples.)\n\nWhichever bit we use, the index AM can simply set it and mark the buffer\ndirty with SetBufferCommitInfoNeedsSave. We do not need to WAL-log the\naction, just as we do not WAL-log marking heap tuple commit status bits,\nbecause the action could be done over by someone else if it were lost.\n\nComments? Anyone see any flaws or better ideas?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 May 2002 12:48:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Killing dead index tuples before they get vacuumed" }, { "msg_contents": "On Tue, 21 May 2002 12:48:39 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>4. How exactly should a killed index tuple be marked on-disk? While there\n>is one free bit available in IndexTupleData.t_info, I would prefer to use\n>that bit to expand the index tuple size field to 14 bits instead of 13.\n>(This would allow btree index entries to be up to 10K when BLCKSZ is 32K,\n>rather than being hard-limited to 8K.) What I am thinking of doing is\n>using the LP_DELETE bit in ItemIdData.lp_flags --- this appears to be\n>unused for index tuples. (I'm not sure it's ever set for heap tuples\n>either, actually, but it definitely looks free for index tuples.)\n\nAFAICS LP_DELETE is not used at all. 
The only place where something\nseems to happen to it is in PageRepairFragmentation() in bufpage.c:\n if ((*lp).lp_flags & LP_DELETE) /* marked for deletion */\n (*lp).lp_flags &= ~(LP_USED | LP_DELETE);\nbut there is no place where this bit is set. There's also a macro\ndefinition in itemid.h:\n#define ItemIdDeleted(itemId) \\\n (((itemId)->lp_flags & LP_DELETE) != 0)\nwhich is *always* used in this context\n if (!ItemIdIsUsed(lp) || ItemIdDeleted(lp))\n\nSo it looks safe to use this bit for marking dead tuples. Wouldn't it\neven be possible to simply reset LP_USED instead of setting\nLP_DELETE?\n\nIf you do not use LP_DELETE I'd vote for cleaning up the source and\nremoving it completely.\n\nYet another idea: set ItemIdData.lp_len = 0 for killed index tuples.\n\nWill this free space be used by subsequent inserts?\n\nServus\n Manfred\n", "msg_date": "Wed, 22 May 2002 22:00:30 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: Killing dead index tuples before they get vacuumed" }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> So it looks safe to use this bit for marking dead tuples. Wouldn't it\n> even be possible to simply reset LP_USED instead of setting\n> LP_DELETE?\n\nMmmm ... I don't think so. That would cause the tuple to actually\ndisappear from the perspective of the index AM internals, which seems\nlike a bad idea. (For example, if another backend had an indexscan\nstopped on that same tuple, it would fail to re-find its place when it\ntried to continue the indexscan.)\n\n> Yet another idea: set ItemIdData.lp_len = 0 for killed index tuples.\n\nSee above. This is *not* a substitute for vacuuming.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 May 2002 17:56:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Killing dead index tuples before they get vacuumed " } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 21 May 2002 20:09\n> To: Dave Page\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] More schema queries \n>\n> \n> I guess your version of getopt() won't cooperate with -- \n> switches. I've committed this change in CVS.\n\nThanks.\n\n> \n> I'm still interested in why explicitly saying \"create view \n> pg_catalog.foo\" didn't work ...\n\nI've just been playing with this as you suggested, and using an initdb\nwith both 'create view foo' and 'create view pg_catalog.bar', with the\n-- style switch I get (for both types of view): \n\nnamespace_search_path = $user,public\nnewRelation->schemaname = null\nnamespaceId = 2200 (public)\n\nSo I guess the problem is a combination of the getopt() that we've\nalready found, and schemaname being null in the newRelation structure. \n\nUsing the -c style switch in PGSQL_OPTS gives namespace_search_path =\npg_catalog as expected.\n\nI am interested in learning more about this so any pointers you might\noffer would be useful (I seriously doubt I'd find the fault myself\nthough) but I do understand that you probably have better things to do\nthan help me begin to understand the internals so I won't be overly\noffended if you don't have time :-)\n\nCheers, Dave.\n", "msg_date": "Tue, 21 May 2002 20:24:46 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: More schema queries " }, { "msg_contents": "\"Dave Page\" <dpage@vale-housing.co.uk> writes:\n>> I'm still interested in why explicitly saying \"create view \n>> pg_catalog.foo\" didn't work ...\n\n> I've just been playing with this as you suggested, and using an initdb\n> with both 'create view foo' and 'create view pg_catalog.bar', with the\n> -- style switch I get (for both types of view): \n\n> namespace_search_path = $user,public\n> newRelation->schemaname = null\n> namespaceId = 2200 (public)\n\n> So I 
guess the problem is a combination of the getopt() that we've\n> already found, and schemaname being null in the newRelation structure. \n\nGiven that getopt wasn't working, I'd expect namespace_search_path to be\nthat, and since there won't be any $user view at initdb time, public\nshould be the default creation target. For \"create view foo\",\nnewRelation->schemaname *should* be null and thus public would be\nselected. But if you say \"create view pg_catalog.foo\" then\nnewRelation->schemaname should be \"pg_catalog\". Can you trace it back a\nlittle further and try to see why it's not? It works fine here AFAICT,\nso I'm wondering about portability problems ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 May 2002 15:31:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More schema queries " } ]
[ { "msg_contents": "A second cut at SSL documentation....\n\n\n\nSSL Support in PostgreSQL\n=========================\n\nWho needs it?\n=============\n\nThe sites that require SSL fall into one (or more) of several broad\ncategories.\n\n*) They have insecure networks. \n\n Examples of insecure networks are anyone in a \"corporate hotel,\"\n any network with 802.11b wireless access points (WAP) (in 2002,\n this protocol has many well-known security weaknesses and even\n 'gold' connections can be broken within 8 hours), or anyone \n accessing their database over the internet.\n\n These sites need a Virtual Private Network (VPN), and either\n SSH tunnels or direct SSL connections can be used.\n\n*) They are storing extremely sensitive information.\n\n An example of extremely sensitive information is logs from\n network intrusion detection systems. This information *must*\n be fully encrypted between front- and back-end since an attacker\n is presumably sniffing all traffic within the VPN, and if they\n learn that you know what they are doing they may attempt to\n cover their tracks with a quick 'rm -rf /' and 'dropdb'\n\n In the extreme case, the contents of the database itself may\n be encrypted with either the crypt package (which provides\n symmetrical encryption of the records) or the PKIX package\n (which provides public-key encryption of the records).\n\n*) They are storing information which is considered confidential\n by custom, law or regulation.\n\n This includes all records held by your doctor, lawyer, accountant,\n etc. 
In these cases, the motivation for using encryption is not\n a conscious evaluation of risk, but the fear of liability for \n 'failure to perform due diligence' if encryption is available but\n unused and an attacker gains unauthorized access to the harm of\n others.\n\n*) They have 'road warriors.'\n\n This includes all sites where people need to have direct access\n to the database (not through a proxy such as a secure web page)\n from changing remote addresses. Client certificates provide a\n clean way to grant this access without opening up the database\n to the world.\n\nWho does not need it?\n---------------------\n\nIt's at least as important to know who does not need SSL as it\nis to know who does. Sites that do not need SSL fall into several\nbroad categories.\n\n*) Access is limited to the Unix socket.\n\n*) Access is limited to a physically secure network.\n\n \"Physically secure\" networks are common in clusters and\n colocation sites - all database traffic is restricted to dedicated\n NIC cards and hubs, and all servers and cabling are maintained in\n locked cabinets.\n\n\nUsing SSH/OpenSSH as a Virtual Private Network (VPN)\n====================================================\n\nSSH and OpenSSH can be used to construct a Virtual Private Network\n(VPN) to provide confidentiality of PostgreSQL communications. \nThese tunnels are widely available and fairly well understood, but \ndo not provide any application-level authentication information.\n\nTo set up a SSH/OpenSSH tunnel, a shell account for each\nuser should be set up on the database server. 
It is acceptable\nfor the shell program to be bogus (e.g., /bin/false), if the\ntunnel is set up to avoid launching a remote shell.\n\nOn each client system the $HOME/.ssh/config file should contain\nan additional line similar to\n\n LocalForward 5555 psql.example.com:5432\n\n(replacing psql.example.com with the name of your database server).\nBy putting this line in the configuration file, instead of specifying\nit on the command line, the tunnel will be created whenever a \nconnection is made to the remote system.\n\nThe psql(1) client (or any client) should be wrapped with a script\nthat establishes an SSH tunnel when the program is launched:\n\n #!/bin/sh\n HOST=psql.example.com\n IDENTITY=$HOME/.ssh/identity.psql\n /usr/bin/ssh -1 -i $IDENTITY -n $HOST 'sleep 60' & \\\n\t/usr/bin/psql -h localhost -p 5555 $1\n\nAlternately, the system could run a daemon that establishes and maintains\nthe tunnel. This is preferable when multiple users need to establish\nsimilar tunnels to the same remote site.\n\nUnfortunately, there are many potential drawbacks to SSH tunnels:\n\n*) the SSH implementation or protocol may be flawed. Serious problems\n are discovered about once every 18 to 24 months.\n\n*) the systems may be misconfigured by accident.\n\n*) the database server must provide shell accounts for all users\n needing access. This can be a chore to maintain, esp. if\n all other user access should be denied.\n\n*) neither the front- nor back-end can determine the level of\n encryption provided by the SSH tunnel - or even whether an\n SSH tunnel is in use. 
This prevents security-aware clients\n from refusing any connection with unacceptably weak encryption.\n\n*) neither the front- nor back-end can get any authentication\n information pertaining to the SSH tunnel.\n\nBottom line: if you just need a VPN, SSH tunnels are a good solution.\nBut if you explicitly need a secure connection they're inadequate.\n\n\nDirect SSL Support\n==================\n\nInsecure Channel: ANONYMOUS DH Server\n-------------------------------------\n\n\"ANONYMOUS DH\" is the most basic SSL implementation. It does\nnot require a server certificate, but it is vulnerable to\n\"man-in-the-middle\" attacks.\n\nThe PostgreSQL backend does not support ANONYMOUS DH sessions.\n\n\nSecure Channel: Server Authentication\n-------------------------------------\n\nServer Authentication requires that the server authenticate itself\nto clients (via certificates), but clients can remain anonymous.\nThis protects clients from \"man-in-the-middle\" attacks (where a\nbogus server either captures all data or provides bogus data),\nbut does not protect the server from bad data injected by false\nclients.\n\nThe community has established a set of criteria for secure\ncommunications:\n\n*) the server must provide a certificate identifying itself\n via its own fully qualified domain name (FQDN) in the\n certificate's Common Name (CN) field.\n\n*) the FQDN in the server certificate must resolve to the\n IP address used in the connection.\n\n*) the certificate must be valid. 
(The current date must be\n no earlier than the 'notBefore' date, and no later than the\n 'notAfter' date.)\n\n*) the server certificate must be signed by an issuer certificate\n known to the clients.\n\n This issuer can be a known public CA (e.g., Verisign), a locally\n generated root cert installed with the database client, or the \n self-signed server cert installed with the database client.\n\n Another approach (used by SSH and most web clients) is for the\n client to prompt the user whether to accept a new root cert when\n it is encountered for the first time. psql(1) does not currently\n support this mechanism.\n\n*) the client *should* check the issuer's Certificate Revocation\n List (CRL) to verify that the server's certificate hasn't been\n revoked for some reason, but in practice this step is often\n skipped.\n\n*) the server private key must be owned by the database process\n and not world-accessible. It is recommended that the server\n key be encrypted, but it is not required if necessary for the\n operation of the system. (Primarily to allow automatic restarts\n after the system is rebooted.)\n \nThe 'mkcert.sh' script can be used to generate and install \nsuitable certificates\n\nFinally, the client library can have one or more trusted root\ncertificates compiled into it. This allows clients to verify\ncertificates without the need for local copies. To do this,\nthe source file src/interfaces/libpq/fe-ssl.c must be edited\nand the database recompiled.\n\nSecure Channel: Mutual Authentication\n-------------------------------------\n\nMutual authentication requires that servers and clients each\nauthenticate to the other. 
This protects the server from\nfalse clients in addition to protecting the clients from false\nservers.\n\nThe community has established a set of criteria for client\nauthentication similar to the list above.\n\n*) the client must provide a certificate identifying itself.\n The certificate's Common Name (CN) field should contain the\n client's usual name.\n\n*) the client certificate must be signed by a certificate known\n to the server.\n\n If a local root cert was used to sign the server's cert, the\n client certs can be signed by the issuer.\n\n*) the certificate must be valid. (The current date must be\n no earlier than the 'notBefore' date, and no later than the\n 'notAfter' date.)\n\n*) the server *should* check the issuer's Certificate Revocation\n List (CRL) to verify that the client's certificate hasn't been\n revoked for some reason, but in practice this step is often\n skipped.\n\n*) the client's private key must be owned by the client process\n and not world-accessible. It is recommended that the client\n key be encrypted, but because of technical reasons in the\n architecture of the client library this is not yet supported.\n\nPostgreSQL can generate client certificates via a four-step process.\n\n1. The \"client.conf\" file must be copied from the server. Certificates\n can be highly localizable, and this file contains information that\n will be needed later.\n\n The client.conf file is normally installed in /etc/postgresql/root.crt.\n The client should also copy the server's root.crt file to\n $HOME/.postgresql/root.crt.\n\n2. If the user has the OpenSSL applications installed, they can\n run pgkeygen.sh. (An equivalent compiled program will be available\n in the future.) They should provide a copy of the\n $HOME/.postgresql/postgresql.pem file to their DBA.\n\n3. The DBA should sign this file with the OpenSSL applications:\n\n $ openssl ca -config root.conf -ss_cert ....\n\n and return the signed cert (postgresql.crt) to the user.\n\n4. 
The user should install this file in $HOME/.postgresql/postgresql.crt.\n\nThe server will log every time a client certificate has been\nused, but there is not yet a mechanism provided for using client\ncertificates as PostgreSQL authentication at the application level.\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html\n", "msg_date": "Tue, 21 May 2002 14:27:00 -0600 (MDT)", "msg_from": "Bear Giles <bgiles@coyotesong.com>", "msg_from_op": true, "msg_subject": "[HACKERS] 2nd cut at SSL documentation" }, { "msg_contents": "On Tue, 21 May 2002 14:27:00 -0600 (MDT)\n\"Bear Giles\" <bgiles@coyotesong.com> wrote:\n> A second cut at SSL documentation....\n\nI've pointed out some minor things I noticed while reading through.\nYeah, I was bored :-)\n\n> The sites that require SSL fall into one (or more) of several broad\n> categories.\n> \n> *) They have insecure networks. \n> \n> Examples of insecure networks are anyone in a \"corporate hotel,\"\n\nWhat's a corporate hotel?\n\n> *) They have 'road warriors.'\n\nThis section title sounds confusingly similar to the 1st item.\nPerhaps \"They need to authentication clients securely\" or something\nsimilar? 
The need to use client certificates does not apply only to\n\"road warriors\" -- I've seen situations where client-certs are used for\nclients connecting to a server over a LAN.\n\n> *) Access is limited to the Unix socket.\n\n\"the\" sounds wrong, there's more than just 1 :-)\n\n> *) Access is limited to a physically secure network.\n> \n> \"Physically secure\" networks are common in the clusters and\n> colocation sites - all database traffic is restricted to dedicated\n> NIC cards and hubs, and all servers and cabling are maintained in\n> locked cabinets.\n\nPerhaps add a note on the performance hit here?\n\n> Using SSH/OpenSSH as a Virtual Private Network (VPN)\n\nI'm unsure why you're bothering to differentiate between SSH\nand OpenSSH.\n\n> SSH and OpenSSH can be used to construct a Virtual Private Network\n> (VPN)\n\nNo need to include the abbreviation for VPN here, you've explained\nthe term before.\n\n> to provide confidentiality of PostgreSQL communications. \n> These tunnels are widely available and fairly well understood, but \n> do not provide any application-level authentication information.\n\nYou might want to clarify what \"application-level authentication\ninformation\" means, or else leave out all discussion of drawbacks\nuntil later.\n\n> To set up a SSH/OpenSSH tunnel, a shell account for each\n> user should be set up on the database server. It is acceptable\n> for the shell program to be bogus (e.g., /bin/false), if the\n> tunnel is set up in to avoid launching a remote shell.\n> \n> On each client system the $HOME/.ssh/config file should contain\n> an additional line similiar to\n> \n> LocalForward 5555 psql.example.com:5432\n\n\"pgsql.example.com\" strikes me as a better example hostname (I always\nthink that psql == DB client, postgres/postmaster/pgsql == DB server).\n\n> Unfortunately, there are many potential drawbacks to SSL tunnels:\n\nI think you mean SSH tunnels.\n\n> *) the SSH implementation or protocol may be flawed. 
Serious problems\n> are discovered about once every 18- to 24- months.\n\nI'd be skeptical whether this weakness is specific to SSH -- there\ncan be security holes in OpenSSL, the SSL protocol, the SSL\nimplementation in PostgreSQL, etc.\n\n> *) the database server must provide shell accounts for all users\n> needing access. This can be a chore to maintain, esp. in if\n\nRemove the \"in\".\n\n> *) neither the front- or back-end can determine the level of\n> encryption provided by the SSH tunnel - or even whether an\n> SSH tunnel is in use. This prevents security-aware clients\n> from refusing any connection with unacceptly weak encryption.\n\nSpelling error.\n\n> Finally, the client library can have one or more trusted root\n> certificates compiled into it. This allows clients to verify\n> certificates without the need for local copies. To do this,\n> the source file src/interfaces/libpq/fe-ssl.c must be edited\n> and the database recompiled.\n\n\"PostgreSQL\" recompiled -- database versus RDBMS can be ambiguous.\n\n> Mutual authentication requires that servers and clients each\n> authenticate to the other. This protects the server from\n> false clients in addition to protecting the clients from false\n> servers.\n\n\"false\" in this context?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Tue, 21 May 2002 19:50:38 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 2nd cut at SSL documentation" }, { "msg_contents": "\n> On Tue, 21 May 2002 14:27:00 -0600 (MDT)\n> \"Bear Giles\" <bgiles@coyotesong.com> wrote:\n> > A second cut at SSL documentation....\n\n> [snip]\n\n> > To set up a SSH/OpenSSH tunnel, a shell account for each\n> > user should be set up on the database server. 
It is acceptable\n> > for the shell program to be bogus (e.g., /bin/false), if the\n> > tunnel is set up in to avoid launching a remote shell.\n> > \n> > On each client system the $HOME/.ssh/config file should contain\n> > an additional line similiar to\n> > \n> > LocalForward 5555 psql.example.com:5432\n\nI'm coming to this party a bit late in that this is the first I've read the\ndocumentation. I'm also a bit of a newbie when it comes to SSH and I've not\ninvestigated ssh3 at all yet. However, isn't this assuming ssh1 only? I know\nssh2 will fall back to ssh1 compatibility but should there be something about\nconfiguring for the later versions?\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n", "msg_date": "Thu, 23 May 2002 02:50:41 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Re: 2nd cut at SSL documentation" }, { "msg_contents": "\nWith Bear disappearing, I am now unsure of our current SSL code. He\nsubmitted a bunch of patches that seemed to improve our SSL\ncapabilities, but I am unsure if it is done, and exactly how it was\nimproved.\n\nAttached is the documentation Bear supplied. I have removed the\ninterfaces/ssl directory because I couldn't figure out what to do with\nit, but this documentation does help.\n\nThe files in interfaces/ssl were:\n\n\tclient.conf mkcert.sh* pgkeygen.sh root.conf server.conf\n\nWould someone who understands SSL coding please comment?\n\n---------------------------------------------------------------------------\n\nBear Giles wrote:\n> A second cut at SSL documentation....\n> \n> \n> \n> SSL Support in PostgreSQL\n> =========================\n> \n> Who needs it?\n> =============\n> \n> The sites that require SSL fall into one (or more) of several broad\n> categories.\n> \n> *) They have insecure networks. 
\n> \n> Examples of insecure networks are anyone in a \"corporate hotel,\"\n> any network with 802.11b wireless access points (WAP) (in 2002,\n> this protocol has many well-known security weaknesses and even\n> 'gold' connections can be broken within 8 hours), or anyone \n> accessing their database over the internet.\n> \n> These sites need a Virtual Private Network (VPN), and either\n> SSH tunnels or direct SSL connections can be used.\n> \n> *) They are storing extremely sensitive information.\n> \n> An example of extremely sensitive information is logs from\n> network intrusion detection systems. This information *must*\n> be fully encrypted between front- and back-end since an attacker\n> is presumably sniffing all traffic within the VPN, and if they\n> learn that you know what they are doing they may attempt to\n> cover their tracks with a quick 'rm -rf /' and 'dropdb'\n> \n> In the extreme case, the contents of the database itself may\n> be encrypted with either the crypt package (which provides\n> symmetrical encryption of the records) or the PKIX package\n> (which provides public-key encryption of the records).\n> \n> *) They are storing information which is considered confidential\n> by custom, law or regulation.\n> \n> This includes all records held by your doctor, lawyer, accountant,\n> etc. In these cases, the motivation for using encryption is not\n> a conscious evaulation of risk, but the fear of liability for \n> 'failure to perform due diligence' if encryption is available but\n> unused and an attacker gains unauthorized access to the harm of\n> others.\n> \n> *) They have 'road warriors.'\n> \n> This includes all sites where people need to have direct access\n> to the database (not through a proxy such as a secure web page)\n> from changing remote addresses. 
Client certificates provide a\n> clean way to grant this access without opening up the database\n> to the world.\n> \n> Who does not need it?\n> ---------------------\n> \n> It's at least as important to know who does not need SSL as it\n> is to know who does. Sites that do not need SSL fall into several\n> broad categories.\n> \n> *) Access is limited to the Unix socket.\n> \n> *) Access is limited to a physically secure network.\n> \n> \"Physically secure\" networks are common in the clusters and\n> colocation sites - all database traffic is restricted to dedicated\n> NIC cards and hubs, and all servers and cabling are maintained in\n> locked cabinets.\n> \n> \n> Using SSH/OpenSSH as a Virtual Private Network (VPN)\n> ====================================================\n> \n> SSH and OpenSSH can be used to construct a Virtual Private Network\n> (VPN) to provide confidentiality of PostgreSQL communications. \n> These tunnels are widely available and fairly well understood, but \n> do not provide any application-level authentication information.\n> \n> To set up a SSH/OpenSSH tunnel, a shell account for each\n> user should be set up on the database server. 
It is acceptable\n> for the shell program to be bogus (e.g., /bin/false), if the\n> tunnel is set up in to avoid launching a remote shell.\n> \n> On each client system the $HOME/.ssh/config file should contain\n> an additional line similiar to\n> \n> LocalForward 5555 psql.example.com:5432\n> \n> (replacing psql.example.com with the name of your database server).\n> By putting this line in the configuration file, instead of specifying\n> it on the command line, the tunnel will be created whenever a \n> connection is made to the remote system.\n> \n> The psql(1) client (or any client) should be wrapped with a script\n> that establishes an SSH tunnel when the program is launched:\n> \n> #!/bin/sh\n> HOST=psql.example.com\n> IDENTITY=$HOME/.ssh/identity.psql\n> /usr/bin/ssh -1 -i $IDENTITY -n $HOST 'sleep 60' & \\\n> \t/usr/bin/psql -h $HOST -p 5555 $1\n> \n> Alternately, the system could run a daemon that establishes and maintains\n> the tunnel. This is preferrable when multiple users need to establish\n> similar tunnels to the same remote site.\n> \n> Unfortunately, there are many potential drawbacks to SSL tunnels:\n> \n> *) the SSH implementation or protocol may be flawed. Serious problems\n> are discovered about once every 18- to 24- months.\n> \n> *) the systems may be misconfigured by accident.\n> \n> *) the database server must provide shell accounts for all users\n> needing access. This can be a chore to maintain, esp. in if\n> all other user access should be denied.\n> \n> *) neither the front- or back-end can determine the level of\n> encryption provided by the SSH tunnel - or even whether an\n> SSH tunnel is in use. 
This prevents security-aware clients\n> from refusing any connection with unacceptly weak encryption.\n> \n> *) neither the front- or back-end can get any authentication\n> information pertaining to the SSH tunnel.\n> \n> Bottom line: if you just need a VPN, SSH tunnels are a good solution.\n> But if you explicitly need a secure connection they're inadequate.\n> \n> \n> Direct SSL Support\n> ==================\n> \n> Insecure Channel: ANONYMOUS DH Server\n> -------------------------------------\n> \n> \"ANONYMOUS DH\" is the most basic SSL implementation. It does\n> not require a server certificate, but it is vulnerable to\n> \"man-in-the-middle\" attacks.\n> \n> The PostgreSQL backend does not support ANONYMOUS DH sessions.\n> \n> \n> Secure Channel: Server Authentication\n> -------------------------------------\n> \n> Server Authentication requires that the server authenticate itself\n> to clients (via certificates), but clients can remain anonymous.\n> This protects clients from \"man-in-the-middle\" attacks (where a\n> bogus server either captures all data or provides bogus data),\n> but does not protect the server from bad data injected by false\n> clients.\n> \n> The community has established a set of criteria for secure\n> communications:\n> \n> *) the server must provide a certificate identifying itself\n> via its own fully qualified domain name (FDQN) in the\n> certificate's Common Name (CN) field.\n> \n> *) the FQDN in the server certificate must resolve to the\n> IP address used in the connection.\n> \n> *) the certificate must be valid. 
(The current date must be\n> no earlier than the 'notBefore' date, and no later than the\n> 'notAfter' date.)\n> \n> *) the server certificate must be signed by an issuer certificate\n> known to the clients.\n> \n> This issuer can be a known public CA (e.g., Verisign), a locally\n> generated root cert installed with the database client, or the \n> self-signed server cert installed with the database client.\n> \n> Another approach (used by SSH and most web clients) is for the\n> client to prompt the user whether to accept a new root cert when\n> it is encountered for the first time. psql(1) does not currently\n> support this mechanism.\n> \n> *) the client *should* check the issuer's Certificate Revocation\n> List (CRL) to verify that the server's certificate hasn't been\n> revoked for some reason, but in practice this step is often\n> skipped.\n> \n> *) the server private key must be owned by the database process\n> and not world-accessible. It is recommended that the server\n> key be encrypted, but it is not required if necessary for the\n> operation of the system. (Primarily to allow automatic restarts\n> after the system is rebooted.)\n> \n> The 'mkcert.sh' script can be used to generate and install \n> suitable certificates\n> \n> Finally, the client library can have one or more trusted root\n> certificates compiled into it. This allows clients to verify\n> certificates without the need for local copies. To do this,\n> the source file src/interfaces/libpq/fe-ssl.c must be edited\n> and the database recompiled.\n> \n> Secure Channel: Mutual Authentication\n> -------------------------------------\n> \n> Mutual authentication requires that servers and clients each\n> authenticate to the other. 
This protects the server from\n> false clients in addition to protecting the clients from false\n> servers.\n> \n> The community has established a set of criteria for client\n> authentication similar to the list above.\n> \n> *) the client must provide a certificate identifying itself.\n> The certificate's Common Name (CN) field should contain the\n> client's usual name.\n> \n> *) the client certificate must be signed by a certificate known\n> to the server.\n> \n> If a local root cert was used to sign the server's cert, the\n> client certs can be signed by the issuer.\n> \n> *) the certificate must be valid. (The current date must be\n> no earlier than the 'notBefore' date, and no later than the\n> 'notAfter' date.)\n> \n> *) the server *should* check the issuer's Certificate Revocation\n> List (CRL) to verify that the clients's certificate hasn't been\n> revoked for some reason, but in practice this step is often\n> skipped.\n> \n> *) the client's private key must be owned by the client process\n> and not world-accessible. It is recommended that the client\n> key be encrypted, but because of technical reasons in the\n> architecture of the client library this is not yet supported.\n> \n> PostgreSQL can generate client certificates via a four-step process.\n> \n> 1. The \"client.conf\" file must be copied from the server. Certificates\n> can be highly localizable, and this file contains information that\n> will be needed later.\n> \n> The client.conf file is normally installed in /etc/postgresql/root.crt.\n> The client should also copy the server's root.crt file to\n> $HOME/.postgresql/root.crt.\n> \n> 2. If the user has the OpenSSL applications installed, they can\n> run pgkeygen.sh. (An equivalent compiled program will be available\n> in the future.) They should provide a copy of the\n> $HOME/.postgresql/postgresql.pem file to their DBA.\n> \n> 3. 
The DBA should sign this file the OpenSSL applications:\n> \n> $ openssl ca -config root.conf -ss_cert ....\n> \n> and return the signed cert (postgresql.crt) to the user.\n> \n> 4. The user should install this file in $HOME/.postgresql/postgresql.crt.\n> \n> The server will log every time a client certificate has been\n> used, but there is not yet a mechanism provided for using client\n> certificates as PostgreSQL authentication at the application level.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 26 Aug 2002 14:42:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 2nd cut at SSL documentation" }, { "msg_contents": "\nI have added this to backend/libpq/README.SSL to be integrated into our\nmain docs later.\n\n---------------------------------------------------------------------------\n\nBear Giles wrote:\n> A second cut at SSL documentation....\n> \n> \n> \n> SSL Support in PostgreSQL\n> =========================\n> \n> Who needs it?\n> =============\n> \n> The sites that require SSL fall into one (or more) of several broad\n> categories.\n> \n> *) They have insecure networks. 
\n> \n> Examples of insecure networks are anyone in a \"corporate hotel,\"\n> any network with 802.11b wireless access points (WAP) (in 2002,\n> this protocol has many well-known security weaknesses and even\n> 'gold' connections can be broken within 8 hours), or anyone \n> accessing their database over the internet.\n> \n> These sites need a Virtual Private Network (VPN), and either\n> SSH tunnels or direct SSL connections can be used.\n> \n> *) They are storing extremely sensitive information.\n> \n> An example of extremely sensitive information is logs from\n> network intrusion detection systems. This information *must*\n> be fully encrypted between front- and back-end since an attacker\n> is presumably sniffing all traffic within the VPN, and if they\n> learn that you know what they are doing they may attempt to\n> cover their tracks with a quick 'rm -rf /' and 'dropdb'\n> \n> In the extreme case, the contents of the database itself may\n> be encrypted with either the crypt package (which provides\n> symmetrical encryption of the records) or the PKIX package\n> (which provides public-key encryption of the records).\n> \n> *) They are storing information which is considered confidential\n> by custom, law or regulation.\n> \n> This includes all records held by your doctor, lawyer, accountant,\n> etc. In these cases, the motivation for using encryption is not\n> a conscious evaulation of risk, but the fear of liability for \n> 'failure to perform due diligence' if encryption is available but\n> unused and an attacker gains unauthorized access to the harm of\n> others.\n> \n> *) They have 'road warriors.'\n> \n> This includes all sites where people need to have direct access\n> to the database (not through a proxy such as a secure web page)\n> from changing remote addresses. 
Client certificates provide a\n> clean way to grant this access without opening up the database\n> to the world.\n> \n> Who does not need it?\n> ---------------------\n> \n> It's at least as important to know who does not need SSL as it\n> is to know who does. Sites that do not need SSL fall into several\n> broad categories.\n> \n> *) Access is limited to the Unix socket.\n> \n> *) Access is limited to a physically secure network.\n> \n> \"Physically secure\" networks are common in the clusters and\n> colocation sites - all database traffic is restricted to dedicated\n> NIC cards and hubs, and all servers and cabling are maintained in\n> locked cabinets.\n> \n> \n> Using SSH/OpenSSH as a Virtual Private Network (VPN)\n> ====================================================\n> \n> SSH and OpenSSH can be used to construct a Virtual Private Network\n> (VPN) to provide confidentiality of PostgreSQL communications. \n> These tunnels are widely available and fairly well understood, but \n> do not provide any application-level authentication information.\n> \n> To set up a SSH/OpenSSH tunnel, a shell account for each\n> user should be set up on the database server. 
It is acceptable\n> for the shell program to be bogus (e.g., /bin/false), if the\n> tunnel is set up in to avoid launching a remote shell.\n> \n> On each client system the $HOME/.ssh/config file should contain\n> an additional line similiar to\n> \n> LocalForward 5555 psql.example.com:5432\n> \n> (replacing psql.example.com with the name of your database server).\n> By putting this line in the configuration file, instead of specifying\n> it on the command line, the tunnel will be created whenever a \n> connection is made to the remote system.\n> \n> The psql(1) client (or any client) should be wrapped with a script\n> that establishes an SSH tunnel when the program is launched:\n> \n> #!/bin/sh\n> HOST=psql.example.com\n> IDENTITY=$HOME/.ssh/identity.psql\n> /usr/bin/ssh -1 -i $IDENTITY -n $HOST 'sleep 60' & \\\n> \t/usr/bin/psql -h $HOST -p 5555 $1\n> \n> Alternately, the system could run a daemon that establishes and maintains\n> the tunnel. This is preferrable when multiple users need to establish\n> similar tunnels to the same remote site.\n> \n> Unfortunately, there are many potential drawbacks to SSL tunnels:\n> \n> *) the SSH implementation or protocol may be flawed. Serious problems\n> are discovered about once every 18- to 24- months.\n> \n> *) the systems may be misconfigured by accident.\n> \n> *) the database server must provide shell accounts for all users\n> needing access. This can be a chore to maintain, esp. in if\n> all other user access should be denied.\n> \n> *) neither the front- or back-end can determine the level of\n> encryption provided by the SSH tunnel - or even whether an\n> SSH tunnel is in use. 
This prevents security-aware clients\n> from refusing any connection with unacceptly weak encryption.\n> \n> *) neither the front- or back-end can get any authentication\n> information pertaining to the SSH tunnel.\n> \n> Bottom line: if you just need a VPN, SSH tunnels are a good solution.\n> But if you explicitly need a secure connection they're inadequate.\n> \n> \n> Direct SSL Support\n> ==================\n> \n> Insecure Channel: ANONYMOUS DH Server\n> -------------------------------------\n> \n> \"ANONYMOUS DH\" is the most basic SSL implementation. It does\n> not require a server certificate, but it is vulnerable to\n> \"man-in-the-middle\" attacks.\n> \n> The PostgreSQL backend does not support ANONYMOUS DH sessions.\n> \n> \n> Secure Channel: Server Authentication\n> -------------------------------------\n> \n> Server Authentication requires that the server authenticate itself\n> to clients (via certificates), but clients can remain anonymous.\n> This protects clients from \"man-in-the-middle\" attacks (where a\n> bogus server either captures all data or provides bogus data),\n> but does not protect the server from bad data injected by false\n> clients.\n> \n> The community has established a set of criteria for secure\n> communications:\n> \n> *) the server must provide a certificate identifying itself\n> via its own fully qualified domain name (FDQN) in the\n> certificate's Common Name (CN) field.\n> \n> *) the FQDN in the server certificate must resolve to the\n> IP address used in the connection.\n> \n> *) the certificate must be valid. 
(The current date must be\n> no earlier than the 'notBefore' date, and no later than the\n> 'notAfter' date.)\n> \n> *) the server certificate must be signed by an issuer certificate\n> known to the clients.\n> \n> This issuer can be a known public CA (e.g., Verisign), a locally\n> generated root cert installed with the database client, or the \n> self-signed server cert installed with the database client.\n> \n> Another approach (used by SSH and most web clients) is for the\n> client to prompt the user whether to accept a new root cert when\n> it is encountered for the first time. psql(1) does not currently\n> support this mechanism.\n> \n> *) the client *should* check the issuer's Certificate Revocation\n> List (CRL) to verify that the server's certificate hasn't been\n> revoked for some reason, but in practice this step is often\n> skipped.\n> \n> *) the server private key must be owned by the database process\n> and not world-accessible. It is recommended that the server\n> key be encrypted, but it is not required if necessary for the\n> operation of the system. (Primarily to allow automatic restarts\n> after the system is rebooted.)\n> \n> The 'mkcert.sh' script can be used to generate and install \n> suitable certificates\n> \n> Finally, the client library can have one or more trusted root\n> certificates compiled into it. This allows clients to verify\n> certificates without the need for local copies. To do this,\n> the source file src/interfaces/libpq/fe-ssl.c must be edited\n> and the database recompiled.\n> \n> Secure Channel: Mutual Authentication\n> -------------------------------------\n> \n> Mutual authentication requires that servers and clients each\n> authenticate to the other. 
This protects the server from\n> false clients in addition to protecting the clients from false\n> servers.\n> \n> The community has established a set of criteria for client\n> authentication similar to the list above.\n> \n> *) the client must provide a certificate identifying itself.\n> The certificate's Common Name (CN) field should contain the\n> client's usual name.\n> \n> *) the client certificate must be signed by a certificate known\n> to the server.\n> \n> If a local root cert was used to sign the server's cert, the\n> client certs can be signed by the issuer.\n> \n> *) the certificate must be valid. (The current date must be\n> no earlier than the 'notBefore' date, and no later than the\n> 'notAfter' date.)\n> \n> *) the server *should* check the issuer's Certificate Revocation\n> List (CRL) to verify that the clients's certificate hasn't been\n> revoked for some reason, but in practice this step is often\n> skipped.\n> \n> *) the client's private key must be owned by the client process\n> and not world-accessible. It is recommended that the client\n> key be encrypted, but because of technical reasons in the\n> architecture of the client library this is not yet supported.\n> \n> PostgreSQL can generate client certificates via a four-step process.\n> \n> 1. The \"client.conf\" file must be copied from the server. Certificates\n> can be highly localizable, and this file contains information that\n> will be needed later.\n> \n> The client.conf file is normally installed in /etc/postgresql/root.crt.\n> The client should also copy the server's root.crt file to\n> $HOME/.postgresql/root.crt.\n> \n> 2. If the user has the OpenSSL applications installed, they can\n> run pgkeygen.sh. (An equivalent compiled program will be available\n> in the future.) They should provide a copy of the\n> $HOME/.postgresql/postgresql.pem file to their DBA.\n> \n> 3. 
The DBA should sign this file the OpenSSL applications:\n> \n> $ openssl ca -config root.conf -ss_cert ....\n> \n> and return the signed cert (postgresql.crt) to the user.\n> \n> 4. The user should install this file in $HOME/.postgresql/postgresql.crt.\n> \n> The server will log every time a client certificate has been\n> used, but there is not yet a mechanism provided for using client\n> certificates as PostgreSQL authentication at the application level.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 13:26:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 2nd cut at SSL documentation" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 21 May 2002 20:31\n> To: Dave Page\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] More schema queries \n> \n> Can you \n> trace it back a\n> little further and try to see why it's not? It works fine \n> here AFAICT, so I'm wondering about portability problems ...\n\nThis week just gets weirder. I haven't a clue what I overlooked, but\nthere must have been something - I put initdb back with the -- switch &\npg_catalog. prefixes. Ran it, same problem as expected.\n\nI then added various printf's right back to DefineRelation (iirc), ran\ninitdb again and _could_ see the schema name in every function, and, the\nviews were created in pg_catalog!!\n\nTook all the printf's back out, and it still works as expected.\n\nOh well :-)\n\nThanks for your help with this Tom, if nothing else, at least I've\nlearnt a fair bit.\n\nRegards, Dave.\n", "msg_date": "Tue, 21 May 2002 21:44:44 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: More schema queries " } ]
[ { "msg_contents": "On Wed, 2002-05-22 at 03:10, Tom Lane wrote:\n> (snippage) That might be enough to cause the growth. It may be\n> worth playing around with the details of the threshold-setting policy.\n> \n(snippage)\n> Possibly the initial threshold set in create_fsm_rel also needs to be\n> smaller than it is. Not sure about that though.\n> \n> Let me know how that affects your results ...\n> \nI will try some changes out here (and look at freespace.c in general) -\nbut don't let that stop anyone else examining it as well... :-)\n\n(I am on holiday for 10 days as of 24/05, so I may not report anything\nfor a little while)\n\nregards\n\nMark \n\n\n", "msg_date": "22 May 2002 09:08:35 +1200", "msg_from": "Mark kirkwood <markir@slingshot.co.nz>", "msg_from_op": true, "msg_subject": "Re: Unbounded (Possibly) Database Size Increase - Toasting" } ]
[ { "msg_contents": "Hello, anybody can tell me about a graphical tool that help me when I wanna \nrelate tables from a postgre database and make referential integrity between \nthem?\nThank you!\nGaston.-\n\n_________________________________________________________________\nÚnase con MSN Hotmail al servicio de correo electrónico más grande del \nmundo. http://www.hotmail.com\n\n", "msg_date": "Tue, 21 May 2002 18:24:38 -0300", "msg_from": "\"Gaston Micheri\" <ggmsistemas@hotmail.com>", "msg_from_op": true, "msg_subject": "Graphical Tool" }, { "msg_contents": "\nOn Tue, 21 May 2002, Gaston Micheri wrote:\n\n> Hello, anybody can tell me about a graphical tool that help me when I wanna \n> relate tables from a postgre database and make referential integrity between \n> them?\n> Thank you!\n> Gaston.-\n\nPgAccess will give you a graphical frontend to postgres, as will\nPgAdmin-II. PgAccess won't enable you to drag and drop columns between tables\nin some graphical representation of a design in order to create references\nthough. I can't speak for PgAdmin but I doubt that provides that functionality\nalso, although I'll happily be corrected here.\n\nPgAccess will however let you draw your design, it just doesn't generate the\ncommands to implement it. Maybe this is an enhancement to be considered. On\nthe other hand it's not terribly difficult to write the command to add a\nforeign key.\n\n[Note, you may want to avoid hiding recipient addresses in future, I've added\npgsql-general myself since this obviously came through that list but I've\nomitted -hackers since this isn't a hacker issue. I have added -interfaces\nthough because of my enhancement comment and I know some pgaccess programmers\nonly read that list.]\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n", "msg_date": "Thu, 23 May 2002 02:39:01 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Re: Graphical Tool" } ]
[ { "msg_contents": "I notice that the large-object operations in pg_largeobject.c and \ninv_api.c all use SnapshotNow to access large-object tuples. This\nmeans they are not really MVCC compliant. For example, I could be\nreading a large object that someone else is writing; if he commits\nmid-read, then I will see some old data and some updated data.\nThis seems wrong.\n\nIn particular, pg_dump cannot promise to dump a consistent snapshot\nof large objects, because what it reads will be read under SnapshotNow.\n\nI suggest that large object tuples are user data and so should be\nread using the QuerySnapshot established at start of transaction.\n\nComments anyone? Is it possible that changing this will break any\nexisting applications that depend on the current behavior?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 May 2002 18:56:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Shouldn't large objects be MVCC-aware?" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Neil Conway [mailto:nconway@klamath.dyndns.org]\n> Sent: Tuesday, May 21, 2002 12:19 PM\n> To: Joel Burton\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Is 7.3 a good time to increase NAMEDATALEN ?\n> \n> \n> On Tue, 21 May 2002 11:41:26 -0400\n> \"Joel Burton\" <joel@joelburton.com> wrote:\n> > Noticed that increasing NAMEDATALEN to 128 is still on the TODO.\n> \n> The last benchmarks I saw indicate that there's still a significant\n> performance hit when increasing NAMEDATALEN, whether to 64 or 128.\n> \n> Given that only a small percentage of PostgreSQL users need long\n> identifiers, and *everyone* would suffer the performance hit, I'd\n> rather that we not touch NAMEDATALEN until more work has been\n> done on attempting to reduce the performance penalty.\n> \n> Until then, the people who absolutely, positively must have long\n> identifiers can just raise NAMEDATALEN themselves.\n\nI'm sure that this is an idiotic thing to say, but why not just make it\nvarchar?\n\nMost of the time the database objects will be small (maybe 10 characters\non average) but sometimes you want them to be really large.\n", "msg_date": "Tue, 21 May 2002 16:30:29 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Is 7.3 a good time to increase NAMEDATALEN ?" }, { "msg_contents": "\"Dann Corbit\" <DCorbit@connx.com> writes:\n> I'm sure that this is an idiotic thing to say, but why not just make it\n> varchar?\n\nThe main reason NAME is a fixed-length datatype is that we'd have to\nrewrite (and make slower) a lot of catalog-accessing code that expects\nto be able to access other fields in catalog tuples at fixed offsets.\nI do not think it's worth it.\n\nAlso, the existing performance bottlenecks look to me to be associated\nwith assumptions that NAME is fixed-length. 
To convert to varlena NAME,\nwe'd still have to fix all that code.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 May 2002 23:49:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is 7.3 a good time to increase NAMEDATALEN ? " } ]
[ { "msg_contents": "Tom, Oliver,\n\nI haven't finished writing up my ideas for INTERVAL. However, here's \nsomething to get started:\n\nPROPOSAL FOR ADJUSTMENTS OF POSTGRESQL TIMESTAMP AND INTERVAL HANDLING\nDraft 0.1 - Part 1\n\nTimestamp\n------------------------------\nProposal #1: TIMESTAMP WITHOUT TIME ZONE as default\n\nDescription: Currently, the data type invoked when users select TIMESTAMP is \nTIMESTAMP WITH TIME ZONE. We should change this so that TIMESTAMP defaults \nto TIMESTAMP WITHOUT TIME ZONE unless WITH TIME ZONE is specified.\n\nReason: Handling time zones is tricky and non-intuitive for the beginning \nuser. TIMESTAMP WITH TIME ZONE should be reserved for DBAs who know what \nthey're doing.\n\n\nProposal #2: We need more time zones.\n\nDescription: We need to add, or be able to add, many new time zones to \nPostgresql. Ideal would be some kind of \"create time zone\" statement.\n\nReason: Current included time zones do not cover all real-world time zones, \nand the situation is likely to get worse as various governments play with \ntheir calendars. For example, there is no current time zone which would be \nappropriate for the state of Arizona, i.e. \"Central Standard Time without \nDaylight Savings Time\". \n\nFurther: A CREATE TIME ZONE statement would have the following syntax:\nCREATE TIME ZONE GMT_adjustment, abbreviation, uses_DST, DST_starts \n(optional), DST_ends (optional) \nThis would allow, to some degree, DBA creation of time zones to take into \naccount local laws and weirdnesses.\n\n-- \n-Josh Berkus\n\n\n", "msg_date": "Tue, 21 May 2002 17:22:05 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Timestamp & Interval - Part 1" }, { "msg_contents": "> Proposal #1: TIMESTAMP WITHOUT TIME ZONE as default\n\nHmm. 
Already done for 7.3 :)\n\n7.2 introduced that data type, and 7.1 did not have it, so we had one\nrelease cycle to allow dump/restore to do the right thing.\n\n> Proposal #2: We need more time zones.\n\nThe other complaint is that we have too many time zones. Certainly it is\nnot ideal (but it may be optimal from an execution time standpoint) that\nthese time zones are hardcoded into lookup tables; moving these into\nexternal files will be *much* slower, moving these into database tables\nwill be somewhat slower. But asking us to deal with Arizona may be a bit\ntoo much; those guys do things just to be different ;)\n\nbtw, on my Linux box the time zone rule is 'US/Arizona', as in\n\nlockhart=# SET TIME ZONE 'US/Arizona';\n\nMy Linux box thinks that for Arizona time input would always be in\n'MST', which is recognized by the PostgreSQL date/time parser so things\nare handled consistently (at least until I upgrade glibc :((\n\nLet's see how the glibc breakage discussion pans out. I haven't worried\nabout pre-1903 dates and times because time zones were not as\nstandardized then as they are today. But if we end up rolling our own\nthen we might consider pulling more of this into the backend and getting\nrid of our y2038 problems at the same time :))\n\n - Thomas\n", "msg_date": "Tue, 21 May 2002 18:40:57 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Timestamp & Interval - Part 1" } ]
[ { "msg_contents": "Notice that the parallel regression test runs\n\nparallel group (7 tests): create_aggregate create_operator inherit\ntriggers constraints create_misc create_index\n\ncreate_index creates an index on a table \"onek2\" which is created in\ncreate_misc. I just saw this fail because create_index got there first.\nOn the next run everything was OK.\n\nIt's interesting that no one has seen this before, so it's quite\nlow-probability. I'll just mention it here for the archives.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 22 May 2002 02:26:27 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Wrong dependency in parallel regression test" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> create_index creates an index on a table \"onek2\" which is created in\n> create_misc. I just saw this fail because create_index got there first.\n> On the next run everything was OK.\n\n> It's interesting that no one has seen this before, so it's quite\n> low-probability.\n\nWow. Has anyone tried to do an exhaustive check that the parallel\nregression test schedule is OK?\n\nI'd think that it could be done in a reasonable amount of time by\nrunning each test of each parallel group (a) first and (b) last\namong its group.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 May 2002 00:31:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Wrong dependency in parallel regression test " } ]
[ { "msg_contents": "Why not fix it completely with this stuff:\nftp://elsie.nci.nih.gov/pub/\n\nJust an idea.\n", "msg_date": "Tue, 21 May 2002 18:45:31 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Timestamp & Interval - Part 1" }, { "msg_contents": "> Why not fix it completely with this stuff:\n> ftp://elsie.nci.nih.gov/pub/\n> Just an idea.\n\nAh, the real zic implementation. afaik this is public domain or BSD or\nat least compatible with our BSD license wrt distribution.\n\nGreat idea. We may end up doing this! Though I hate for the project to\npick up the task of maintaining sync with that distro.\n\nWe already have a NO_MKTIME_BEFORE_1970 #define'd for AIX and IRIX\n(always paragons of standard behavior :/ Funny enough it doesn't\nactually guard the mktime() code, since I think that there is a good\nsignature from the exit from mktime() on those systems (independent of\nthe return value) to detect that there is a problem. glibc is sinking to\nnew depths in lack of support for this feature by brute force exiting\nearly on.\n\nIt looks like we might (easily?) get good behavior beyond y2038, since\nwe might be able to redefine time_t within our code. At the moment zic\nlooks for it from sys/types.h, but maybe we could isolate it and force\nit to be a 64-bit number on systems which support it. Hmm, need to find\nhow to translate current system time to that representation...\n\n - Thomas\n", "msg_date": "Tue, 21 May 2002 19:21:28 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Timestamp & Interval - Part 1" } ]
[ { "msg_contents": "> > \n> > But I recall a number of rounds of bug-fixes concerning quoting in\n> > the pgsql shell scripts, so I'd not be surprised in the \n> least to hear\n> > that pre-7.2 PG releases get this wrong. Or for that \n> matter, we might\n> > still have some problems in this line on some platforms with oddball\n> > shells. If you find that dropdb messes up with weird names in 7.2,\n> > please send details about the test case and your platform...\n\n\nRH 7.2; Postgres 7.2.1\n\nbash-2.05$ BLAH='ab\n> cd'\nbash-2.05$ echo $BLAH\nab cd\nbash-2.05$ echo \"$BLAH\"\nab\ncd\n\n[root@vault bin]# diff -c dropdb.orig dropdb\n*** dropdb.orig Tue May 21 22:40:33 2002\n--- dropdb Tue May 21 22:40:46 2002\n***************\n*** 131,137 ****\n fi\n\n\n! dbname=`echo $dbname | sed 's/\\\"/\\\\\\\"/g'`\n\n ${PATHNAME}psql $PSQLOPT -d template1 -c \"DROP DATABASE \\\"$dbname\\\"\"\n if [ \"$?\" -ne 0 ]; then\n--- 131,137 ----\n fi\n\n\n! dbname=`echo \"$dbname\" | sed 's/\\\"/\\\\\\\"/g'`\n\n ${PATHNAME}psql $PSQLOPT -d template1 -c \"DROP DATABASE \\\"$dbname\\\"\"\n if [ \"$?\" -ne 0 ]; then\n\n\n\n\n", "msg_date": "Tue, 21 May 2002 22:48:29 -0700", "msg_from": "Ron Snyder <snyder@roguewave.com>", "msg_from_op": true, "msg_subject": "Re: psql -l gives bad output " } ]
[ { "msg_contents": "There are certain oddities in current interval behaviour:\n\n template1=# select version();\n version \n ------------------------------------------------------------------\n PostgreSQL 7.3devel on i686-pc-linux-gnu, compiled by GCC 2.95.4\n (1 row)\n \n template1=# select '1200005.567772 seconds'::interval(12);\n ERROR: INTERVAL(12) precision must be between 0 and 6\n\nThe documentation says 0 and 13 (users' manual 3.5.1.6).\n\nThen there seem to be some problems with large numbers:\n \n template1=# select '111101.56772 seconds'::interval(6);\n interval \n ----------------------\n 1 day 06:51:41.56772\n (1 row)\n \n template1=# select '1111101.56772 seconds'::interval(6);\n interval \n -----------------------------\n 12 days 20:38:21.5677199999\n (1 row)\n \n template1=# select '111101.56772 seconds'::interval(2);\n interval \n -------------------\n 1 day 06:51:41.57\n (1 row)\n \n template1=# select '1111101.56772 seconds'::interval(2);\n interval \n -----------------------------\n 12 days 20:38:21.5700000001\n (1 row)\n\n\nI see you've started to implement the SQL99 interval type. Shouldn't\nthese give an error?\n\n lfix=# select '5 years'::interval day to second;\n interval \n ----------\n 00:00\n (1 row)\n \n lfix=# select '900 days':: interval year to month;\n interval \n ----------\n 00:00\n (1 row)\n\nand shouldn't this return '50:00' or else give an out of range error?\n\n lfix=# select '50 hours'::interval hour to minute;\n interval \n ----------\n 02:00\n (1 row)\n \nThe existing precision implements fractional seconds precision. Are you\nplanning to implement the interval leading field precision? (So that I\ncan say INTERVAL HOUR(4) TO MINUTE to allow values up to 9999 hours.) \n\nAt the moment \"interval(4) hour to second\" is valid syntax for the\nfractional precision, whereas the standard's syntax is \"interval hour to\nsecond(4)\"\n\nAre you doing any more work on intervals? 
If not, I would like to\nimplement the standard's definition.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"We are troubled on every side, yet not distressed; we \n are perplexed, but not in despair; persecuted, but not\n forsaken; cast down, but not destroyed; Always bearing\n about in the body the dying of the Lord Jesus, that \n the life also of Jesus might be made manifest in our \n body.\" II Corinthians 4:8-10", "msg_date": "22 May 2002 09:23:57 +0100", "msg_from": "Oliver Elphick <olly@lfix.co.uk>", "msg_from_op": true, "msg_subject": "Intrerval oddities" }, { "msg_contents": "> There are certain oddities in current interval behaviour:\n> template1=# select '1200005.567772 seconds'::interval(12);\n> ERROR: INTERVAL(12) precision must be between 0 and 6\n> The documentation says 0 and 13 (users' manual 3.5.1.6).\n\nOK, docs need fixing.\n\n> Then there seem to be some problems with large numbers:\n> template1=# select '1111101.56772 seconds'::interval(2);\n> 12 days 20:38:21.5700000001\n\nThis is due to floating point rounding issues on output. Try configuring\nand compiling with --enable-integer-datetimes and you should see this go\naway.\n\n> I see you've started to implement the SQL99 interval type. Shouldn't\n> these give an error?\n> lfix=# select '5 years'::interval day to second;\n> lfix=# select '900 days':: interval year to month;\n\nProbably. I apparently zero out the other fields rather than throwing an\nerror. Pretty sure that is easy to change.\n\n> and shouldn't this return '50:00' or else give an out of range error?\n> lfix=# select '50 hours'::interval hour to minute;\n> 02:00\n\nSeems like it should.\n\n> The existing precision implements fractional seconds precision. Are you\n> planning to implement the interval leading field precision? 
(So that I\n> can say INTERVAL HOUR(4) TO MINUTE to allow values up to 9999 hours.)\n\nHad not even thought about it.\n\n> At the moment \"interval(4) hour to second\" is valid syntax for the\n> fractional precision, whereas the standard's syntax is \"interval hour to\n> second(4)\"\n\nYuck. If that is the standard then we need to figure out how to monkey\naround the syntax. I haven't looked to see if it makes things easier in\nthe parser; it might...\n\n> Are you doing any more work on intervals? If not, I would like to\n> implement the standard's definition.\n\nThe standard does not allow mixing year/month intervals with\nday/hour/min/sec intervals. We need to not enforce this restriction, at\nleast for intervals without explicit unit qualification. I'd also like\nto see this data type *not* blow up into a huge footprint,\nslow-to-calculate type. Or at least have one of the interval data types\nnot blow up.\n\nOne possibility is to implement another interval type for qualified\nintervals (that is, for intervals specified with explicit YEAR, MONTH,\nDAY, ... clauses). Then the storage and enforcement overhead are present\nonly if you need it. The standard does not allow an unadorned interval\nanyway, and does not allow the units mixing that we do. I don't want to\nsee our extended capabilities go away, but we could add the restricted\n\"standard type\" in parallel.\n\n - Thomas\n", "msg_date": "Wed, 22 May 2002 07:20:15 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Interval oddities" }, { "msg_contents": "> > This is due to floating point rounding issues on output. Try configuring\n> > and compiling with --enable-integer-datetimes and you should see this go\n> > away.\n> Hey, where is this compile-time option documented? 
It may have part of the\n> functionality I need.\n\n./configure --help\n\nIsn't anywhere else yet.\n\n - Thomas\n", "msg_date": "Wed, 22 May 2002 17:31:11 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Interval oddities" }, { "msg_contents": "\nThomas,\n\n> ./configure --help\n> \n> Isn't anywhere else yet.\n\nNot seeing it. Is this a 7.3 thing? What does it do?\n\n-- \n-Josh Berkus\n\n\n", "msg_date": "Wed, 22 May 2002 17:51:12 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Interval oddities" }, { "msg_contents": "> > ./configure --help\n> > Isn't anywhere else yet.\n> Not seeing it. Is this a 7.3 thing? What does it do?\n\nSorry, yes it is a 7.3 thing.\n\n - Thomas\n", "msg_date": "Wed, 22 May 2002 18:12:25 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Interval oddities" }, { "msg_contents": "Thomas,\n\n> > > ./configure --help\n> > > Isn't anywhere else yet.\n> > Not seeing it. Is this a 7.3 thing? What does it do?\n> \n> Sorry, yes it is a 7.3 thing.\n\nWhat does --enable-interval-integers do? I don't want to bother writing up \nissues you've already taken care of.\n\n-- \n-Josh Berkus\n\n", "msg_date": "Mon, 27 May 2002 14:19:35 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Interval oddities" }, { "msg_contents": "> > > > ./configure --help\n> > > > Isn't anywhere else yet.\n> > > Not seeing it. Is this a 7.3 thing? What does it do?\n> > Sorry, yes it is a 7.3 thing.\n> What does --enable-interval-integers do? I don't want to bother writing up\n> issues you've already taken care of.\n\nNot implemented afaik. Or are you asking about\n--enable-integer-datetimes ?\n\nThat implements timestamps and intervals as 64-bit integers with\nmicrosecond precision. 
Without it (and for the last few years of\nreleases), you get a double precision float (52 bits of precision).\n\n - Thomas\n", "msg_date": "Tue, 28 May 2002 15:12:29 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Interval oddities" } ]
[ { "msg_contents": "> 4. How exactly should a killed index tuple be marked on-disk? While there\n> is one free bit available in IndexTupleData.t_info, I would prefer to use\n> that bit to expand the index tuple size field to 14 bits instead of 13.\n> (This would allow btree index entries to be up to 10K when BLCKSZ is 32K,\n> rather than being hard-limited to 8K.)\n\nWhile I agree that it might be handy to save this bit for future use,\nI do not see any value in increasing the max key length from 8k,\nespecially when the new limit is then 10k. The limit is already 32 *\nthe max key size of some other db's, and even those 256 bytes are usually \nsufficient.\n\nAndreas\n", "msg_date": "Wed, 22 May 2002 12:28:59 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Killing dead index tuples before they get vacuumed" }, { "msg_contents": "On Wed, 2002-05-22 at 12:28, Zeugswetter Andreas SB SD wrote:\n> > 4. How exactly should a killed index tuple be marked on-disk? While there\n> > is one free bit available in IndexTupleData.t_info, I would prefer to use\n> > that bit to expand the index tuple size field to 14 bits instead of 13.\n> > (This would allow btree index entries to be up to 10K when BLCKSZ is 32K,\n> > rather than being hard-limited to 8K.)\n> \n> While I agree that it might be handy to save this bit for future use,\n> I do not see any value in increasing the max key length from 8k,\n> especially when the new limit is then 10k. The limit is already 32 *\n> the max key size of some other db's, and even those 256 bytes are usually \n> sufficient.\n\nI'm not sure if it applies here, but key length for GIST indexes may\nbenefit from 2x increase (14bits = 16k). 
IIRC limited key length is one\nreason for intarray indexes being 'lossy'.\n\nAnd we can even make it bigger if we start measuring keys in words or\ndwords instead of bytes - 16k x dword = 64kb\n\n--------------\nHannu\n\n\n", "msg_date": "22 May 2002 15:18:13 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Killing dead index tuples before they get vacuumed" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> On Wed, 2002-05-22 at 12:28, Zeugswetter Andreas SB SD wrote:\n>> While I agree that it might be handy to save this bit for future use,\n>> I do not see any value in increasing the max key length from 8k,\n\n> I'm not sure if it applies here, but key length for GIST indexes may\n> benefit from 2x increase (14bits = 16k). IIRC limited key length is one\n> reason for intarray indexes being 'lossy'.\n\nSince there seems to be some dissension about that, I'll leave the\nt_info bit unused for now, instead of absorbing it into the length\nfield.\n\nSince 13 bits is sufficient for 8K, people would not see any benefit\nanyway unless they use a nonstandard BLCKSZ. So I'm not that concerned\nabout raising it --- just wanted to throw out the idea and see if people\nliked it.\n\nIn the long run it'd be possible to not store length in IndexTupleData\nat all, but rely on the length from the item header, same as we do for\nheap tuples. 
So if we ever need more bits in IndexTupleData, there's\na way out.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 May 2002 09:47:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Killing dead index tuples before they get vacuumed " }, { "msg_contents": "Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > On Wed, 2002-05-22 at 12:28, Zeugswetter Andreas SB SD wrote:\n> >> While I agree that it might be handy to save this bit for future use,\n> >> I do not see any value in increasing the max key length from 8k,\n>\n> > I'm not sure if it applies here, but key length for GIST indexes may\n> > benefit from 2x increase (14bits = 16k). IIRC limited key length is one\n> > reason for intarray indexes being 'lossy'.\n>\n> Since there seems to be some dissension about that, I'll leave the\n> t_info bit unused for now, instead of absorbing it into the length\n> field.\n>\n> Since 13 bits is sufficient for 8K, people would not see any benefit\n> anyway unless they use a nonstandard BLCKSZ. So I'm not that concerned\n> about raising it --- just wanted to throw out the idea and see if people\n> liked it.\n\n Also, in btree haven't we had some problems with index page\n splits when using entries large enought so that not at least\n 3 of them fit on a page?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Wed, 22 May 2002 11:35:42 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Killing dead index tuples before they get vacuumed" }, { "msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n> Also, in btree haven't we had some problems with index page\n> splits when using entries large enought so that not at least\n> 3 of them fit on a page?\n\nRight, that's why I said that the limit would only go up to ~10K anyway;\nbtree won't take keys > 1/3 page.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 May 2002 11:47:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Killing dead index tuples before they get vacuumed " }, { "msg_contents": "Tom Lane wrote:\n> Jan Wieck <janwieck@yahoo.com> writes:\n> > Also, in btree haven't we had some problems with index page\n> > splits when using entries large enought so that not at least\n> > 3 of them fit on a page?\n>\n> Right, that's why I said that the limit would only go up to ~10K anyway;\n> btree won't take keys > 1/3 page.\n\n What's the point then? I mean, someone who needs more than 8K\n will outgrow 10K in no time, and those cases are topics for\n comp.databases.abuse.brutal ...\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Wed, 22 May 2002 13:14:17 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Killing dead index tuples before they get vacuumed" }, { "msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n> Tom Lane wrote:\n>> Right, that's why I said that the limit would only go up to ~10K anyway;\n>> btree won't take keys > 1/3 page.\n\n> What's the point then?\n\nWell, btree's not the only index access method we have. I'm not sure\nwhether gist or rtree allow larger tuples...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 May 2002 13:27:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Killing dead index tuples before they get vacuumed " } ]
[ { "msg_contents": "There seems to be a pg_dump issue with inherited tables when columns\nare added to the parent table after creating the child table. Here is\nhow to repeat the problem.\n\nCreate a test database and run the following SQL statements:\n\ncreate table a (x int);\ncreate table b (y int) inherits (a);\nalter table a add column z int;\ninsert into b values (1, 2, 3);\nselect * from b;\n\nYou should see this:\n\ntest1=# select * from b;\n x | y | z\n---+---+---\n 1 | 2 | 3\n (1 row)\n\nNow create a second test database and dump the first into the second:\n\npg_dump test1 | psql test2\n\nNow try that last select statement in the new database and you will see:\n\ntest2=# select * from b;\n x | z | y\n---+---+---\n 1 | 2 | 3\n (1 row)\n\nIf you are lucky the restore fails because of conflicting types. Worse, as\nin this example, the types are the same and it silently messes up your data\nby reversing the names on two columns.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 22 May 2002 09:51:24 -0400", "msg_from": "\"D'Arcy J.M. Cain\" <darcy@druid.net>", "msg_from_op": true, "msg_subject": "Edge case problem with pg_dump" }, { "msg_contents": "\"D'Arcy J.M. Cain\" <darcy@druid.net> writes:\n> There seems to be a pg_dump issue with inherited tables when columns\n> are added to the parent table after creating the child table.\n\nIt's always been there --- ever tried dumping and reloading the\nregression database?\n\nRight now the only safe way to dump such a database is to use the\ninserts-with-explicit-column-names option. Someone was working on\nextending COPY to allow a column name list, and as soon as that gets\ndone I intend to change pg_dump to specify a column name list in\nCOPY commands. 
That should fix this problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 May 2002 10:28:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Edge case problem with pg_dump " }, { "msg_contents": "On May 22, 2002 10:28 am, you wrote:\n> Right now the only safe way to dump such a database is to use the\n> inserts-with-explicit-column-names option. Someone was working on\n> extending COPY to allow a column name list, and as soon as that gets\n> done I intend to change pg_dump to specify a column name list in\n> COPY commands. That should fix this problem.\n\nDo you mean issue COPY commands with fields or COPY out the fields in a \nspecific order by using the extension in pg_dump? Seems like the latter \nwould be cleaner but the former is probably a lot simpler to do.\n\nWhat would the new syntax of the COPY look like?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 23 May 2002 07:32:45 -0400", "msg_from": "\"D'Arcy J.M. Cain\" <darcy@druid.net>", "msg_from_op": true, "msg_subject": "Re: Edge case problem with pg_dump" }, { "msg_contents": "\"D'Arcy J.M. Cain\" <darcy@druid.net> writes:\n> On May 22, 2002 10:28 am, you wrote:\n>> Right now the only safe way to dump such a database is to use the\n>> inserts-with-explicit-column-names option. Someone was working on\n>> extending COPY to allow a column name list, and as soon as that gets\n>> done I intend to change pg_dump to specify a column name list in\n>> COPY commands. 
That should fix this problem.\n\n> Do you mean issue COPY commands with fields or COPY out the fields in a \n> specific order by using the extension in pg_dump?\n\nI intended that the dump scripts would say something like\n\n\tCOPY mytab(field1,field2,field3) FROM STDIN;\n\nwhich would make it absolutely clear what the dump's field order is.\nWe can't solve it by reordering the fields while we dump, which is\nwhat I think you mean by the other alternative: how is pg_dump to\nguess what schema you are going to load the data into? For example,\nit should work to do a data-only dump and then reload into the existing\ntable structure. So the dump script really needs to work for either\ncolumn ordering in the destination table, and that's why we need\nexplicit labeling of the field order in the script.\n\nIf we take this really seriously we might want to eliminate pg_dump's\n-d (simple INSERT) option, and have only two dump formats: COPY with\nfield labels, or INSERT with field labels.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 May 2002 09:50:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Edge case problem with pg_dump " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [020523 10:24]:\n> \"D'Arcy J.M. Cain\" <darcy@druid.net> writes:\n> > Do you mean issue COPY commands with fields or COPY out the fields in a \n> > specific order by using the extension in pg_dump?\n> \n> I intended that the dump scripts would say something like\n> \n> \tCOPY mytab(field1,field2,field3) FROM STDIN;\n\nCool. I assume that the \"(field1,field2,field3)\" would be optional for\nbackwards compatibility.\n\n> which would make it absolutely clear what the dump's field order is.\n> We can't solve it by reordering the fields while we dump, which is\n> what I think you mean by the other alternative: how is pg_dump to\n> guess what schema you are going to load the data into? 
For example,\n\nWell, the issue now is that it creates the schema too but it is out of\nsync with the data it spits out. I can see how figuring it out is a lot\nmore difficult though. The above works.\n\n> it should work to do a data-only dump and then reload into the existing\n> table structure. So the dump script really needs to work for either\n> column ordering in the destination table, and that's why we need\n> explicit labeling of the field order in the script.\n\nThat's nice. I have scripts that effectively do this in code now when\nI have to dump from one schema and load into another.\n\n> If we take this really seriously we might want to eliminate pg_dump's\n> -d (simple INSERT) option, and have only two dump formats: COPY with\n> field labels, or INSERT with field labels.\n\nYah, I don't think that I have ever used \"-d\". In fact, I bet I will\nhardly ever use \"-D\" any more if we make the above change. The only\nreason I ever used insert statements was to deal with loading into a\ndifferent schema.\n\nSo who was it that wanted to make this change. Perhaps I can help.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 23 May 2002 10:42:42 -0400", "msg_from": "\"D'Arcy J.M. Cain\" <darcy@druid.net>", "msg_from_op": true, "msg_subject": "Re: Edge case problem with pg_dump" }, { "msg_contents": "\"D'Arcy J.M. Cain\" <darcy@druid.net> writes:\n> So who was it that wanted to make this change. Perhaps I can help.\n\nI forget who had volunteered to work on it, but it was several months\nago and nothing's happened ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 May 2002 10:51:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Edge case problem with pg_dump " }, { "msg_contents": "At 10:51 23/05/02 -0400, Tom Lane wrote:\n>\"D'Arcy J.M. 
Cain\" <darcy@druid.net> writes:\n> > So who was it that wanted to make this change. Perhaps I can help.\n>\n>I forget who had volunteered to work on it, but it was several months\n>ago and nothing's happened ...\n\nNot sure if this is the right reference, but about 30-Apr-2001, Alfred \nPerlstein raised the problem of column names in COPY, and you poured water \non the idea:\n\n http://archives.postgresql.org/pgsql-hackers/2001-04/msg01132.php\n\nISTM that we do need *some* solution to the problem, and that based on your \ncomments there are a couple of possibilities:\n\n(a) AP: Allow COPY(OUT) to dump column info. Probably only the name of the \ncolumn. Then (i'd guess) allow COPY(IN) to map named columns to new names,\n\n(b) TL: One possibility is to fix ALTER TABLE ADD COLUMN to maintain the same\ncolumn ordering in parents and children.\n\nAt the time you stated that:\n\n COPY with specified columns may in fact be the best way to deal with\n that particular issue, if pg_dump is all we care about fixing. However\n there are a bunch of things that have a problem with it, not only\n pg_dump. See thread over in committers about functions and inheritance.\n\nI'm not sure what these issues are, but it does seem to me that some more \nportable COPY format would be desirable and that solution (b) will not \nsolve the problem if you are trying to restore a table that has had an attr \ndeleted.\n\nIn your responses you also raised the problem of COPY having to know about \ndefault values for columns if we allow subsets of columns when we load \ndata; does that mean that COPY does something more fancy than the \nequivalent of an INSERT?\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Sat, 25 May 2002 16:33:58 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Edge case problem with pg_dump " }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Not sure if this is the right reference, but about 30-Apr-2001, Alfred \n> Perlstein raised the problem of column names in COPY, and you poured water \n> on the idea:\n\nSo I did, but I've changed my mind --- it would provide a usable solution\nto this inheritance problem, which has been with us forever, and would\nhave other uses too.\n\n> (b) TL: One possibility is to fix ALTER TABLE ADD COLUMN to maintain the same\n> column ordering in parents and children.\n\nThat would be a nice solution but I do not think it'll happen in the\nforeseeable future :-(. Certainly we're no closer to making it happen\nthan we were a year ago.\n\n> In your responses you also raised the problem of COPY having to know about \n> default values for columns if we allow subsets of columns when we load \n> data; does that mean that COPY does something more fancy than the \n> equivalent of an INSERT?\n\nNo, but it would have to be equivalent to an INSERT. BTW, the\ndefault-value mechanism is cleaner than it used to be and so this\ndoesn't seem like as serious an objection anymore. Since COPY already\nhas to have enough mechanism to evaluate constraint expressions,\nevaluating defaults too doesn't seem that horrid.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 25 May 2002 11:44:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Edge case problem with pg_dump " }, { "msg_contents": "[2002-05-23 10:51] Tom Lane said:\n| \"D'Arcy J.M. 
Cain\" <darcy@druid.net> writes:\n| > So who was it that wanted to make this change. Perhaps I can help.\n| \n| I forget who had volunteered to work on it, but it was several months\n| ago and nothing's happened ...\n\nI'd be the disappearing culprit... This patch _was_ mostly done at one\npoint around 7.2 released, infact I've been running the patch on three\nproduction installs. I'll take a look at making the patch current\nand resubmitting.\n\ncheers.\n b\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sat, 25 May 2002 18:00:58 -0400", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Edge case problem with pg_dump" }, { "msg_contents": "[2002-05-25 11:44] Tom Lane said:\n| > In your responses you also raised the problem of COPY having to know about \n| > default values for columns if we allow subsets of columns when we load \n| > data; does that mean that COPY does something more fancy than the \n| > equivalent of an INSERT?\n| \n| No, but it would have to be equivalent to an INSERT. BTW, the\n| default-value mechanism is cleaner than it used to be and so this\n| doesn't seem like as serious an objection anymore. Since COPY already\n| has to have enough mechanism to evaluate constraint expressions,\n| evaluating defaults too doesn't seem that horrid.\n\nThe last version of the COPY (attlist) patch does use column defaults.\nAgain, I'll try to get this cleaned up over this (long) weekend.\n\n b\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. 
To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sat, 25 May 2002 18:05:34 -0400", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Edge case problem with pg_dump" } ]
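The behaviour discussed in the thread above -- COPY accepting a column subset and filling the remaining columns from their defaults, so that parent and child tables with different column orderings dump and reload safely -- can be sketched as follows. This is an illustration of the intended semantics only, not backend code; every name in it is invented.

```python
def copy_row(table_columns, defaults, copy_columns, values):
    """Assemble a full row from one COPY line that names a column subset.

    table_columns: ordered column names of the target table
    defaults:      per-column default values (columns may be absent)
    copy_columns:  the column list given to COPY
    values:        one line of COPY data, matching copy_columns
    """
    if len(copy_columns) != len(values):
        raise ValueError("COPY line has wrong number of fields")
    supplied = dict(zip(copy_columns, values))
    # Columns not named in the column list fall back to their default,
    # or to NULL (modelled here as None) when no default exists.
    return [supplied.get(c, defaults.get(c)) for c in table_columns]
```

Because the row is assembled by column name rather than position, the same dump loads correctly even when parent and child store the inherited columns in different physical orders.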
[ { "msg_contents": "I have looked at the cube datataype in the contrib but I''m not sure \nthat I'm on the right way\n\nI have found these functions:\n\n-- support routines for indexing\n\nCREATE FUNCTION cube_union(cube, cube) RETURNS cube\n AS 'MODULE_PATHNAME' LANGUAGE 'c' with (isstrict);\n\nCREATE FUNCTION cube_inter(cube, cube) RETURNS cube\n AS 'MODULE_PATHNAME' LANGUAGE 'c' with (isstrict);\n\nCREATE FUNCTION cube_size(cube) RETURNS float4\n AS 'MODULE_PATHNAME' LANGUAGE 'c' with (isstrict);\n\n\nand there are the same functions written in c in the file\n\n/* cube_union */\nNDBOX *\ncube_union(NDBOX * box_a, NDBOX * box_b)\n{\n int i;\n NDBOX *result;\n NDBOX *a = swap_corners(box_a);\n NDBOX *b = swap_corners(box_b);\n\n if (a->dim >= b->dim)\n {\n result = palloc(a->size);\n result->size = a->size;\n result->dim = a->dim;\n }\n else\n {\n result = palloc(b->size);\n result->size = b->size;\n result->dim = b->dim;\n }\n\n /* swap the box pointers if needed */\n if (a->dim < b->dim)\n {\n NDBOX *tmp = b;\n\n b = a;\n a = tmp;\n }\n\n /*\n * use the potentially smaller of the two boxes (b) to fill in the\n * result, padding absent dimensions with zeroes\n */\n for (i = 0; i < b->dim; i++)\n {\n result->x[i] = b->x[i];\n result->x[i + a->dim] = b->x[i + b->dim];\n }\n for (i = b->dim; i < a->dim; i++)\n {\n result->x[i] = 0;\n result->x[i + a->dim] = 0;\n }\n\n /* compute the union */\n for (i = 0; i < a->dim; i++)\n result->x[i] = min(a->x[i], result->x[i]);\n for (i = a->dim; i < a->dim * 2; i++)\n result->x[i] = max(a->x[i], result->x[i]);\n\n pfree(a);\n pfree(b);\n\n return (result);\n}\n\n\nNow my question is:\n\nIs it easy to write an indexed datatype without touching the, let's say \ninternal code, of postgresql\nAre there some problems when writing indexed datatypes?\n\nAny information and suggestaions welcome\n\nEwald\n\n\n\n\n", "msg_date": "Wed, 22 May 2002 15:53:07 +0200", "msg_from": "Ewald Geschwinde <webmaster@geschwinde.net>", "msg_from_op": 
true, "msg_subject": "index a datatype" } ]
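For boxes of equal dimension, the cube_union code quoted above reduces to taking the coordinate-wise minimum of the lower corners and the maximum of the upper corners. A Python sketch of just that core (leaving out the corner-swapping and the zero-padding of mismatched dimensions that the C version handles):

```python
def box_union(a, b):
    """Bounding-box union of two n-dimensional boxes.

    Each box is a (lower, upper) pair of equal-length coordinate
    tuples -- a simplified stand-in for contrib/cube's NDBOX. Unlike
    cube_union, this sketch requires both boxes to have the same
    dimension; the C code instead zero-pads the smaller box.
    """
    (alo, ahi), (blo, bhi) = a, b
    if not len(alo) == len(ahi) == len(blo) == len(bhi):
        raise ValueError("corner dimensions do not match")
    lower = tuple(min(x, y) for x, y in zip(alo, blo))
    upper = tuple(max(x, y) for x, y in zip(ahi, bhi))
    return lower, upper
```

Roughly speaking, a union of this kind, together with intersection and a size/penalty measure, is what the GiST index support routines above exist to provide for page splits and consistency checks.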
[ { "msg_contents": " From a recent bug report:\n\ntest72=# create sequence s;\nCREATE\ntest72=# begin;\nBEGIN\ntest72=# select nextval('s');\n nextval\n---------\n 1\n(1 row)\n\ntest72=# drop sequence s;\nDROP\ntest72=# end;\nNOTICE: LockRelease: no such lock\nCOMMIT\n\nThe NOTICE is a symptom of bad problems underneath (memory corruption\neventually leading to coredump, in the bug report's example).\n\nThe problem is that after nextval(), sequence.c is holding on to a\npointer to the open relcache entry for the sequence, which it intends\nto close at end of transaction. After the sequence is dropped, the\npointer is just a dangling pointer, and so the eventual close is working\non freed memory, with usually-unpleasant consequences.\n\nI think the best solution for this is to get rid of sequence.c's private\nstate list. We could allow the state info for a sequence to be kept\nright in the relcache entry (or more likely, in a struct owned and\npointed to by the relcache entry --- that way, the overhead added for\nnon-sequence relcache entries would only be one more pointer field).\nRelcache deletion would know to free the sequence state object along\nwith the rest of the relcache entry. As for the action of closing the\nrelcache entry, I don't see any good reason for sequence.c to hold\nrelcache entries open between calls anyway. It is good to hold the\nAccessShareLock till commit, but the lock manager can release that lock\nat commit by itself; there's no reason at all for sequence.c to do it.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 May 2002 12:19:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Internal state in sequence.c is a bad idea" } ]
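The fix proposed above -- keep the per-sequence state inside the relcache entry, so it disappears when the entry does, instead of in a private list of pointers held by sequence.c -- can be illustrated with a toy cache model. This is pure illustration with invented names, not backend code:

```python
class RelCache:
    """Toy relcache: entries live exactly as long as the relation."""

    def __init__(self):
        self._entries = {}

    def open(self, relname):
        # Each entry carries an optional slot for sequence state,
        # mirroring the proposed extra pointer field on relcache entries.
        return self._entries.setdefault(relname, {"name": relname,
                                                  "seq_state": None})

    def drop(self, relname):
        # Dropping the relation destroys the entry and, with it,
        # any sequence state stored inside it.
        self._entries.pop(relname, None)

    def lookup(self, relname):
        return self._entries.get(relname)


def nextval(cache, relname):
    entry = cache.open(relname)
    if entry["seq_state"] is None:
        entry["seq_state"] = {"last": 0}
    entry["seq_state"]["last"] += 1
    return entry["seq_state"]["last"]
```

A private list of saved entry pointers, by contrast, would keep referencing the entry after drop() -- the dangling pointer behind the "LockRelease: no such lock" notice and the eventual corruption.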
[ { "msg_contents": "It occurs to me that part of the problem with wasted and incomplete\nefforts can be fixed with a clear security policy. The part that\nI'm interested in is provided below, in a very truncated form.\n\n\nSecure Communications Channels\n------------------------------\n\nSecure communications channels can be provided with Kerberos, GSS-API,\nand SSL, and Unix sockets for local communications. The goals of the \nsecure commuications channel are:\n\n* Confidentiality\n\n Confidentiality means that the data is kept secret from all third\n parties.\n\n- Perfect Forward Security (PFS)\n\n Perfect Forward Security is the logical endpoint of confidentiality.\n It is a form of confidentiality where the data remains secret even\n if the static private keys used by the server (and client) are\n exposed.\n\n* Message Integrity\n\n Message integrity means that the message received is identical to\n the message sent. It is not possible for third parties to add, \n delete, or modify data midstream.\n\n* Endpoint Authentication\n\n Endpoint Authentication means that the identity of the other party\n can be firmly established.\n\n- Mutual Authentication\n\n Mutual Authentication is endpoint authentication by both parties.\n\n- Strong Authentication\n\n Strong Authentication means that the party has authenticated themselves\n with at least two of the following: something they know (e.g., password),\n something they have (e.g., private key, smart card), or something they\n are (e.g., biometrics).\n\nA mechanism to map channel-level authentication (Kerberos principal\nname, SSL distinguished name) to PostgreSQL userids is desirable,\nbut not required.\n\nInitial support for all new protocols shall always include, at a \nminimum, all features present in the sample client and server provided\nwith the respective toolkit. 
Any omissions must be clearly documented\nand justified.\n\nThe development team shall maintain a matrix cross-referencing each\nprotocol and the goals satisfied. Any omissions from normal practice\nfor each protocol shall be clearly documented and provided to users.\n\n | L-SSL | L-KRB | SSL | GSS-API | SASL | Unix\n------------------------+-------+-------+-----+---------+------+------\nConfidentiality | Y | N | Y | Y | Y | Y \nPFS | N | N | Y | ? | ? | Y \nMessage Integrity | N | N | Y | Y | Y | Y \nAuthentication (server) | N(1) | ?(2) | Y | Y | Y | Y \nAuthentication (mutual) | N | ?(2) | Y | Y | Y | Y \n------------------------+-------+-------+-----+---------+------+------\n\n L-SSL legacy SSL\n L-KRB legacy Kerberos 4 & 5\n SSL current SSL patches\n GSS-API GSS-API (Kerberos 5 reimplementation)\n SASL SASL with appropriate plug-ins\n Unix Unix sockets\n\n(1) a server certificate is required, but it is not verified by the\nclient.\n\n(2) PostgreSQL provides some level of authentication via Kerberos 4 and\nKerberos 5, but it may not have been properly implemented.\n\n\nAs I mentioned in an earlier post on -patches, I'm not sure that the\ncurrent Kerberos implementation is what people think it is. I may\ndevelop a GSS-API patch for evaluation purposes, but it will almost\ncertainly require a different port.\n\nBear\n", "msg_date": "Wed, 22 May 2002 11:39:58 -0600 (MDT)", "msg_from": "Bear Giles <bear@coyotesong.com>", "msg_from_op": true, "msg_subject": "Security policy" } ]
[ { "msg_contents": "The current KSQO code is currently #ifdef'ed out, and the 'ksqo' GUC\nvariable does nothing. Is there a reason for keeping this code around?\n(or conversely, what was the original justification for disabling it?)\n\nShould I just send in a patch getting rid of it?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Wed, 22 May 2002 15:11:56 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "ksqo?" }, { "msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> The current KSQO code is currently #ifdef'ed out, and the 'ksqo' GUC\n> variable does nothing. Is there a reason for keeping this code around?\n> (or conversely, what was the original justification for disabling it?)\n\nI disabled it because I didn't have time to fix it properly when it got\nbroken by the 7.1 rewrite of UNION/INTERSECT/EXCEPT. I've been waiting\nto see whether anyone notices that it's gone ;-). So far the demand for\nit has been invisible, so it hasn't gotten fixed. On the other hand\nI'm not quite convinced that it never will get fixed, so I haven't\napplied the coup de grace.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 May 2002 18:03:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ksqo? " }, { "msg_contents": "On Wed, 22 May 2002 18:03:07 -0400\n\"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n\n> Neil Conway <nconway@klamath.dyndns.org> writes:\n> > The current KSQO code is currently #ifdef'ed out, and the 'ksqo' GUC\n> > variable does nothing. Is there a reason for keeping this code around?\n> > (or conversely, what was the original justification for disabling it?)\n> \n> I disabled it because I didn't have time to fix it properly when it got\n> broken by the 7.1 rewrite of UNION/INTERSECT/EXCEPT. I've been waiting\n> to see whether anyone notices that it's gone ;-). 
So far the demand for\n> it has been invisible, so it hasn't gotten fixed. On the other hand\n> I'm not quite convinced that it never will get fixed, so I haven't\n> applied the coup de grace.\n\nHmmm... Well, I'll take a look at it, but I'll probably just leave it\nbe -- since the optimization might actually return invalid results, it\ndoesn't seem like a very valuable thing to have, IMHO.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Thu, 23 May 2002 21:10:46 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "Re: ksqo?" }, { "msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> Hmmm... Well, I'll take a look at it, but I'll probably just leave it\n> be -- since the optimization might actually return invalid results, it\n> doesn't seem like a very valuable thing to have, IMHO.\n\nYeah, I never cared for the fact that it altered the semantics of the\nquery, even if only subtly. But I'm hesitant to rip out something that\nsomeone went to the trouble of writing and contributing ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 May 2002 00:31:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ksqo? " }, { "msg_contents": "Tom Lane wrote:\n> Neil Conway <nconway@klamath.dyndns.org> writes:\n> > Hmmm... Well, I'll take a look at it, but I'll probably just leave it\n> > be -- since the optimization might actually return invalid results, it\n> > doesn't seem like a very valuable thing to have, IMHO.\n> \n> Yeah, I never cared for the fact that it altered the semantics of the\n> query, even if only subtly. But I'm hesitant to rip out something that\n> someone went to the trouble of writing and contributing ...\n\nIf it does nothing, we certainly should remove it from GUC so people\ndon't see a meaningless option. 
We can then keep it in CVS to see if we\nwant it later.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 28 May 2002 21:47:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ksqo?" }, { "msg_contents": "Bruce Momjian wrote:\n> Tom Lane wrote:\n> > Neil Conway <nconway@klamath.dyndns.org> writes:\n> > > Hmmm... Well, I'll take a look at it, but I'll probably just leave it\n> > > be -- since the optimization might actually return invalid results, it\n> > > doesn't seem like a very valuable thing to have, IMHO.\n> > \n> > Yeah, I never cared for the fact that it altered the semantics of the\n> > query, even if only subtly. But I'm hesitant to rip out something that\n> > someone went to the trouble of writing and contributing ...\n> \n> If it does nothing, we certainly should remove it from GUC so people\n> don't see a meaningless option. We can then keep it in CVS to see if we\n> want it later.\n\nHere is the email from May 28 discussing the removal of GUC.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 15 Jun 2002 20:09:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ksqo?" } ]
[ { "msg_contents": "Gents,\n\nI am looking for a more precise polygon overlap test and any comment/pointers/suggestions are appreciated. Attached is the modified poly_overlap in geoops.c. \n\nIf the polygons pass the bounding box check, the following tests will be carried out. The tests are terminated as soon as one of them returns true:\n\n1) At least one of the vertex in polygon a is inside polygon b\n2) At least one of the vertex in polygon b is inside polygon a\n3) At least one edge of polygon a intersects with an edge on polygon b\n\nAll these tests could be expensive for polygons with lots of vertices. Would anyone know where I can find information on a more efficient way of determining polygon overlap. \n\nEfficiency aside, is there anything obivious I have missed which could lead to an incorrect result?\n\nThe end game for me is to be able to test if a path enters a polygon and this is a first step as I am new to postgresql. Looks like postgresql converts the path to a polygon and call poly_overlap(), which could lead to incorrect result. At some stage, I might add an overlap operator that accepts a path and a polygon.\n\nTIA\nKenneth Chan.\n-- \n_______________________________________________\nSign-up for your own FREE Personalized E-mail at Mail.com\nhttp://www.mail.com/?sr=signup", "msg_date": "Thu, 23 May 2002 05:23:35 +1000", "msg_from": "\"Kenneth Chan\" <kkchan@technologist.com>", "msg_from_op": true, "msg_subject": "A more precise polygon_overlap()" } ]
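The three checks listed in the message above (a vertex of a inside b, a vertex of b inside a, an edge of a crossing an edge of b) can be sketched as follows. This is an illustrative Python version, not the geo_ops.c code; it ignores floating-point epsilons and the collinear/touching edge cases a production implementation has to worry about.

```python
def point_in_poly(pt, poly):
    """Ray-casting containment test: cast a ray from pt toward +x and
    count edge crossings; an odd count means the point is inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):            # edge spans the ray's y level
            xcross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xcross:
                inside = not inside
    return inside


def segs_cross(p1, p2, p3, p4):
    """True when segment p1-p2 properly crosses p3-p4 (orientation
    signs only; collinear overlaps are deliberately not handled)."""
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return (orient(p1, p2, p3) != orient(p1, p2, p4)
            and orient(p3, p4, p1) != orient(p3, p4, p2))


def polys_overlap(a, b):
    """Tests 1-3 from the message above, short-circuiting in order."""
    if any(point_in_poly(p, b) for p in a):
        return True
    if any(point_in_poly(p, a) for p in b):
        return True
    return any(segs_cross(a[i], a[(i + 1) % len(a)],
                          b[j], b[(j + 1) % len(b)])
               for i in range(len(a)) for j in range(len(b)))
```

The edge-crossing test is what catches the "plus sign" arrangement where two polygons overlap although neither contains a vertex of the other.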
[ { "msg_contents": "> -----Original Message-----\n> From: Kenneth Chan [mailto:kkchan@technologist.com]\n> Sent: Wednesday, May 22, 2002 12:24 PM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] A more precise polygon_overlap()\n> \n> \n> Gents,\n> \n> I am looking for a more precise polygon overlap test and any \n> comment/pointers/suggestions are appreciated. Attached is \n> the modified poly_overlap in geoops.c. \n> \n> If the polygons pass the bounding box check, the following \n> tests will be carried out. The tests are terminated as soon \n> as one of them returns true:\n> \n> 1) At least one of the vertex in polygon a is inside polygon b\n> 2) At least one of the vertex in polygon b is inside polygon a\n> 3) At least one edge of polygon a intersects with an edge on polygon b\n> \n> All these tests could be expensive for polygons with lots of \n> vertices. Would anyone know where I can find information on \n> a more efficient way of determining polygon overlap. \n> \n> Efficiency aside, is there anything obivious I have missed \n> which could lead to an incorrect result?\n> \n> The end game for me is to be able to test if a path enters a \n> polygon and this is a first step as I am new to postgresql. \n> Looks like postgresql converts the path to a polygon and call \n> poly_overlap(), which could lead to incorrect result. At \n> some stage, I might add an overlap operator that accepts a \n> path and a polygon.\n\nFor convex polygons, it is very simple. Unfortunately, most polygon's\ndon't fall into that category. For polygons of arbitrary shape, it is\nan incredibly complex problem. This is a very good article on the\nsubject:\n\n\"On Local Heuristics to Speed Up Polygon-Polygon Intersection Tests\"\n\nby:\n\nWael M. Badawy\nCenter for Advanced Computer Studies\nUniversity of Southwestern Louisiana\nLafayette, LA 70504\nwmb@cacs.usl.edu\n\nWalid G. 
Aref\nDepartment of Computer Sciences\nPurdue University\nWest Lafayette, IN 47907\naref@cs.purdue.edu\nA link to the above paper is here:\nhttp://www.cs.purdue.edu/homes/aref/spdb.html\n\n\nThe big problems come from:\n1. polygons which are self intersecting\n2. polygons that have holes\n\nHere is a paper one one sort of idea:\nhttp://www.me.cmu.edu/faculty1/shimada/gm97/24700b/project/venkat/projec\nt/\n\nHere is a list of links that may prove helpful:\nhttp://citeseer.nj.nec.com/aref95window.html\n\nThe most general method I know of is the Weiler-Atherton polygon-polygon\nclipping algorithm.\nHere is some stuff on it:\nhttp://www.cs.buffalo.edu/faculty/walters/cs480/NewLect10.pdf\n\n\nHere is a fun one:\n+-------------------+\n| /|\n| +--------------+ |\n| | /\\ | |\n| | / \\ | |\n| | /____\\ | |\n| | | |\n| +--------------+ |\n| |\n+-------------------+\n\nA triangle lives in a box. The upper right hand corner of the box has\nan enclave. Detail:\n\n---------------------+ +\n / /|\n / / |\n / / |\n / / |\n / / |\n / / |\n / / |\n / / |\n / / |\n / / |\n / / |\n / / |\n / / |\n-------+ + |\n | |\n | |\n | |\n\n\nThe point of the triangle on the top nearly touches one line of the\nenclosing box. To answer questions about interesection are tricky even\nwith a simple example like this. When the polygons are\nself-intersecting, it can be even more outrageous.\n", "msg_date": "Wed, 22 May 2002 13:45:50 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: A more precise polygon_overlap()" } ]
[ { "msg_contents": "This might be a possible fit:\nhttp://www.magic-software.com/Applications.html\n(see \"Boolean operations on 2D polygons. Based on BSP trees\" near the\nbottom of the page).\n\nThe license agreement might be acceptable. I'm not a lawyer, so I can't\nbe sure how free it really is to use from reading this:\nhttp://www.magic-software.com/License/free.pdf\n\nIt would probably be good to contact the author.\nhttp://www.magic-software.com/CompanyInfo.html\n", "msg_date": "Wed, 22 May 2002 14:15:49 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: A more precise polygon_overlap()" }, { "msg_contents": "See http://www.vividsolutions.com/jts/jtshome.htm for a robust set of\nalgorithms in Java which do all the spatial predicates (the 3x3\nEgenhofer matrix). Also union, difference, and buffer. It is licenced\nLGPL. Also see PostGIS (http://postgis.refractions.net) (licenced GPL)\nfor a set of PostgreSQL GIS objects. We are currently working on porting\nJTS algorithms to C++ for use in PostGIS.\n\nP.\n\nDann Corbit wrote:\n> \n> This might be a possible fit:\n> http://www.magic-software.com/Applications.html\n> (see \"Boolean operations on 2D polygons. Based on BSP trees\" near the\n> bottom of the page).\n> \n> The license agreement might be acceptable. 
I'm not a lawyer, so I can't\n> be sure how free it really is to use from reading this:\n> http://www.magic-software.com/License/free.pdf\n> \n> It would probably be good to contact the author.\n> http://www.magic-software.com/CompanyInfo.html\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n __\n /\n | Paul Ramsey\n | Refractions Research\n | Email: pramsey@refractions.net\n | Phone: (250) 885-0632\n \\_\n", "msg_date": "Wed, 22 May 2002 17:09:49 -0700", "msg_from": "Paul Ramsey <pramsey@refractions.net>", "msg_from_op": false, "msg_subject": "Re: A more precise polygon_overlap()" } ]
[ { "msg_contents": "Hi guys,\n\nJust in case anyone is around, I've recently arrived in Costa Mesa, California from Australia for a couple of weeks on business. So, if anyone's in the area - it might be cool to catch up...\n\nChris", "msg_date": "Wed, 22 May 2002 15:18:58 -0700", "msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Me in California" } ]
[ { "msg_contents": "I sent this earlier, but accidently sent it from the wrong account\nand it's been sitting in the pending spool all day.\n\nSince writing it, I've sketched in server-side GSS-API and SASL\nsupport for my prior patches. The objective isn't to immediately\nsupport either, but to ensure that future support can be added with\nminimal effort.\n\n========================================================================\n\nIt occurs to me that part of the problem with wasted and incomplete\nefforts can be fixed with a clear security policy. The part that\nI'm interested in is provided below, in a very truncated form.\n\n\nSecure Communications Channels\n------------------------------\n\nSecure communications channels can be provided with Kerberos, GSS-API,\nand SSL, and Unix sockets for local communications. The goals of the \nsecure commuications channel are:\n\n* Confidentiality\n\n Confidentiality means that the data is kept secret from all third\n parties.\n\n- Perfect Forward Security (PFS)\n\n Perfect Forward Security is the logical endpoint of confidentiality.\n It is a form of confidentiality where the data remains secret even\n if the static private keys used by the server (and client) are\n exposed.\n\n* Message Integrity\n\n Message integrity means that the message received is identical to\n the message sent. 
It is not possible for third parties to add, \n delete, or modify data midstream.\n\n* Endpoint Authentication\n\n Endpoint Authentication means that the identity of the other party\n can be firmly established.\n\n- Mutual Authentication\n\n Mutual Authentication is endpoint authentication by both parties.\n\n- Strong Authentication\n\n Strong Authentication means that the party has authenticated themselves\n with at least two of the following: something they know (e.g., password),\n something they have (e.g., private key, smart card), or something they\n are (e.g., biometrics).\n\nA mechanism to map channel-level authentication (Kerberos principal\nname, SSL distinguished name) to PostgreSQL userids is desirable,\nbut not required.\n\nInitial support for all new protocols shall always include, at a \nminimum, all features present in the sample client and server provided\nwith the respective toolkit. Any omissions must be clearly documented\nand justified.\n\nThe development team shall maintain a matrix cross-referencing each\nprotocol and the goals satisfied. Any omissions from normal practice\nfor each protocol shall be clearly documented and provided to users.\n\n | L-SSL | L-KRB | SSL | GSS-API | SASL | Unix\n------------------------+-------+-------+-----+---------+------+------\nConfidentiality | Y | N | Y | Y | Y | Y \nPFS | N | N | Y | ? | ? 
| Y \nMessage Integrity | N | N | Y | Y | Y | Y \nAuthentication (server) | N(1) | ?(2) | Y | Y | Y | Y \nAuthentication (mutual) | N | ?(2) | Y | Y | Y | Y \n------------------------+-------+-------+-----+---------+------+------\n\n L-SSL legacy SSL\n L-KRB legacy Kerberos 4 & 5\n SSL current SSL patches\n GSS-API GSS-API (Kerberos 5 reimplementation)\n SASL SASL with appropriate plug-ins\n Unix Unix sockets\n\n(1) a server certificate is required, but it is not verified by the\nclient.\n\n(2) PostgreSQL provides some level of authentication via Kerberos 4 and\nKerberos 5, but it may not have been properly implemented.\n\n\nAs I mentioned in an earlier post on -patches, I'm not sure that the\ncurrent Kerberos implementation is what people think it is. I may\ndevelop a GSS-API patch for evaluation purposes, but it will almost\ncertainly require a different port.\n\nBear\n", "msg_date": "Wed, 22 May 2002 18:26:03 -0600 (MDT)", "msg_from": "Bear Giles <bgiles@coyotesong.com>", "msg_from_op": true, "msg_subject": "Security policy" } ]
[ { "msg_contents": "Hi,\n\nI talked to one of the bison guys and he told me where to find a beta\nversion of bison 1.49. And this one translates the grammar without a\nproblem, no more table overflow. So once they will release the\nnew bison we should be able to expand our grammar.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Thu, 23 May 2002 10:18:28 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "bison" } ]
[ { "msg_contents": "Hi,\n\nI got an corrupted table,,, unfortunately with pretty important data :(\n\nVACUUM tells me:\n\nNOTICE: Rel relx: TID 2344/5704: OID IS INVALID. TUPGONE 1.\nNOTICE: Rel relx: TID 2344/5736: OID IS INVALID. TUPGONE 1.\nNOTICE: Rel relx: TID 2344/5768: OID IS INVALID. TUPGONE 1.\n\n(this, many times, then)\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\n\nI can read part (beginning?) of the relation with select or copy, but anything \nthat touches this area dies badly :(\n\nIs there any way to recover this relation? Or at least as much data as \npossible?\n\nOh, an this is 7.1.3 and I am probably running with too large oids :)\n\nDEBUG: NextTransactionId: 708172974; NextOid: 3480073772\n\nDaniel\n\n", "msg_date": "Thu, 23 May 2002 13:19:15 +0300", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": true, "msg_subject": "tuples gone?" }, { "msg_contents": "Daniel Kalchev <daniel@digsys.bg> writes:\n> VACUUM tells me:\n\n> NOTICE: Rel relx: TID 2344/5704: OID IS INVALID. TUPGONE 1.\n\nIt's physically impossible to get 2344 tuples on a page. (If you're\nusing 8k pages then the most you could have per page is less than 200.)\nSo the above TID is obviously bogus, implying that you have pages\nwith corrupted page headers --- probably pd_lower is much larger than\nit should be.\n\nYou could try dumping out the contents of page 5704, eg\n\n\tdd bs=8k skip=5704 count=1 <tablefile | od -x\n\njust to see what's there, but I bet you will find that it looks like\nit's been trashed.\n\n> Is there any way to recover this relation? Or at least as much data as \n> possible?\n\nIf you can figure out what pd_lower should be on each of the trashed\npages, you might be able to reset it to the correct value and recover\nthe tuples, if there are any un-trashed. Otherwise zero out the trashed\npage(s). 
You should not expect to get everything back --- what you want\nis to make the table readable so that you can dump the contents of the\nundamaged pages.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 May 2002 10:09:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: tuples gone? " }, { "msg_contents": "I said:\n> Daniel Kalchev <daniel@digsys.bg> writes:\n>> NOTICE: Rel relx: TID 2344/5704: OID IS INVALID. TUPGONE 1.\n\n> You could try dumping out the contents of page 5704, eg\n\nBTW, I got the ordering backwards: VACUUM prints TIDs as page number\nand then tuple number. So actually all these complaints are referencing\na single page, 2344, suggesting that you've got just one trashed page\nheader.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 May 2002 11:00:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: tuples gone? " } ]
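The impossibility argument in the reply above can be put into numbers: with 8 kB pages, each tuple costs a line pointer plus at least a heap tuple header, so even a page packed with empty tuples holds only a couple of hundred entries -- nowhere near tuple number 5704. A rough sketch (the overhead constants here are approximations for that era, not values taken from the headers):

```python
BLCKSZ = 8192        # default PostgreSQL page size
PAGE_HEADER = 20     # approximate page header size (assumption)
LINE_POINTER = 4     # size of one item pointer in the page
MIN_TUPLE = 32       # approximate minimal heap tuple header (assumption)


def max_tuples_per_page():
    """Upper bound on the number of tuples fitting on one heap page."""
    return (BLCKSZ - PAGE_HEADER) // (LINE_POINTER + MIN_TUPLE)


def tid_plausible(tupno):
    """A tuple number from a TID should never exceed that bound."""
    return 1 <= tupno <= max_tuples_per_page()


def page_byte_offset(pageno):
    """File offset of a page -- what dd's skip count addresses at bs=8k."""
    return pageno * BLCKSZ
```

Tuple number 5704 fails this check by more than an order of magnitude, which is why the notices point at a trashed page header; per the follow-up, the damaged page is 2344, so the dd command's skip count should be the page number 2344, not 5704.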
[ { "msg_contents": "\tHi!\n\n\tI have sended the message below to pgadmin-support but receive no\nanswers... I hope you can help me on this...\n\n\tIs there any server timeout that is undocumented?\n\n\tI've issued a query like the one below and the server timed out after \n180min (+/-). The query \"construct_warehouse()\" can last well above the \n180min because it fills a table with millions of tuples...\n\n----------------------------------------------------------------------------\nspid=> vacuum full analyze ; select construct_warehouse() ; vacuum analyze ;\nNOTICE: Skipping \"pg_group\" --- only table or database owner can VACUUM it\nNOTICE: Skipping \"pg_database\" --- only table or database owner can VACUUM it\nNOTICE: Skipping \"pg_shadow\" --- only table or database owner can VACUUM it\nVACUUM\nserver closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\nYou are currently not connected to a database.\n!> \\q\n----------------------------------------------------------------------------\n\n\tI've searched the archives for some documented timeout but nothing...\n\n\tI've searched the postgresql.conf file and nothing...\n\n\tCan anyone help me? Thanks in advance!\n\n\tNote: the first time I noticed a time out was using a JDBC driver and \nthen I've tested in the pgsql to confirm it.\n\n-- \n o__\t\tBem haja,\n _.>/ _\t\t\tNunoACHenriques\n (_) \\(_)\t\t\t~~~~~~~~~~~~~~~\n\t\t\t\thttp://students.fct.unl.pt/users/nuno/\n\n\n\n", "msg_date": "Thu, 23 May 2002 18:36:17 +0100 (WEST)", "msg_from": "NunoACHenriques <nach@fct.unl.pt>", "msg_from_op": true, "msg_subject": "is there any backend timeout undocumented?" 
}, { "msg_contents": "NunoACHenriques <nach@fct.unl.pt> writes:\n> \tIs there any server timeout that is undocumented?\n\nNo.\n\n> spid=> vacuum full analyze ; select construct_warehouse() ; vacuum analyze ;\n> NOTICE: Skipping \"pg_group\" --- only table or database owner can VACUUM it\n> NOTICE: Skipping \"pg_database\" --- only table or database owner can VACUUM it\n> NOTICE: Skipping \"pg_shadow\" --- only table or database owner can VACUUM it\n> VACUUM\n> server closed the connection unexpectedly\n> \tThis probably means the server terminated abnormally\n> \tbefore or while processing the request.\n\nThis looks like a crash to me, not a timeout. Can you provide us with a\nstack backtrace? Also, you'd better explain what construct_warehouse()\nis doing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 May 2002 17:18:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: is there any backend timeout undocumented? " }, { "msg_contents": "On Thu, 23 May 2002 18:36:17 +0100 (WEST), NunoACHenriques\n<nach@fct.unl.pt> wrote:\n>server closed the connection unexpectedly\n>\tThis probably means the server terminated abnormally\n>\tbefore or while processing the request.\n>The connection to the server was lost. Attempting reset: Failed.\n>You are currently not connected to a database.\n\nI've seen this before. In my case it was not a timeout, but a backend\ncrash. What version are you running? Do you find anything useful in\nthe log file?\n\nServus\n Manfred\n", "msg_date": "Mon, 27 May 2002 23:32:13 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: is there any backend timeout undocumented?" }, { "msg_contents": "On Thu, 23 May 2002 18:36:17 +0100 (WEST)\n\"NunoACHenriques\" <nach@fct.unl.pt> wrote:\n> \tIs there any server timeout that is undocumented?\n\nLooks more like a backend crash to me. 
Can you look for a core file in\n$PGDATA/base/xxx/ (where xxx is the OID of your database)? If you\ndon't have debugging already enabled, try rebuilding PostgreSQL with\ndebugging support (./configure --enable-debug, or \"-g\" CFLAGS), and\nthen getting a backtrace with gdb. Also, posting the source of\nconstruct_warehouse() might be helpful.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Mon, 27 May 2002 17:48:24 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: is there any backend timeout undocumented?" }, { "msg_contents": "NunoACHenriques <nach@fct.unl.pt> writes:\n\n> \tI've issued a query like the one below and the server timed out after \n> 180min (+/-). The query \"construct_warehouse()\" can last well above the \n> 180min because it fills a table with millions of tuples...\n> \n> ----------------------------------------------------------------------------\n> spid=> vacuum full analyze ; select construct_warehouse() ; vacuum analyze ;\n> NOTICE: Skipping \"pg_group\" --- only table or database owner can VACUUM it\n> NOTICE: Skipping \"pg_database\" --- only table or database owner can VACUUM it\n> NOTICE: Skipping \"pg_shadow\" --- only table or database owner can VACUUM it\n> VACUUM\n> server closed the connection unexpectedly\n> \tThis probably means the server terminated abnormally\n> \tbefore or while processing the request.\n> The connection to the server was lost. 
Attempting reset: Failed.\n> You are currently not connected to a database.\n\nAs the message says, the backend is not \"timing out\"; it's terminating\n*abnormally*. What does the construct_warehouse() function do? Is it\nwritten in C? Could you send the backtrace from the core file?\n\nRegards,\nManuel.\n", "msg_date": "27 May 2002 17:28:22 -0500", "msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>", "msg_from_op": false, "msg_subject": "Re: is there any backend timeout undocumented?" } ]
[ { "msg_contents": "I've been giving a lot of thought to some of the questions raised\nby my SSL patch, and have both a conclusion and a really stupid\nquestion.\n\nFirst, the conclusion is that what I'm working on is \"secure sessions.\"\nAs I mentioned before, that's not just encryption (e.g., SSH tunnels),\nbut the combination of confidentiality (encryption), message integrity\nand endpoint authentication. This is what people think you mean when\nyou say an application \"supports\" Kerberos or SSL, and it's what's\nrequired for really sensitive information.\n\n(E.g., nobody cares that the data was encrypted if the confidential\ninformation supporting a search warrant went to the bad guys instead\nof directly to the central police database. The snitch is still\ndead, and the evidence destroyed.)\n\nThe latest SSL patches will be out by this weekend, and I hope to\nadd GSS-API (which includes Kerberos 5) soon afterwards. Both will\npublish their endpoint authentication information (X509 structure\nand strings containing subject and issuer distinguished names, string\ncontaining Kerberos principal name), and the HBA code can then use\nthis information for PostgreSQL authentication.\n\n...\n\nThe really stupid question refers to some of the hardcoded fallback\nvalues in this code. The reason for having hardcoded values is to\nprevent \"downgrade\" attacks - you don't want to casually override the\nDBA, but you also don't want to make it easy for a knowledgeable\nattacker to fatally compromise the system in a way that your average\nDBA couldn't catch.\n\nBut the problem is that knowledgeable security administrators can\nreplace the common hardcoded values with their own. How do you allow\nthis to be easily done?\n\nOne possibility that occured to me was that dynamic libraries would\nhandle this nicely. 
There's even some support for dynamic libraries\nin the user-defined functions, so this wouldn't be a totally\nunprecedented idea.\n\nBut this would be a new way of using dynamic libraries. Is this\nsomething everyone is comfortable with, or is it problematic for\nsome reason? Or is this premature - maybe the first release should\njust use hardcoded values with a note to contact individuals if\nthere's an interest in a dynamic library approach?\n\nBear\n", "msg_date": "Thu, 23 May 2002 11:48:54 -0600 (MDT)", "msg_from": "Bear Giles <bgiles@coyotesong.com>", "msg_from_op": true, "msg_subject": "Really stupid question(?)" }, { "msg_contents": "Bear Giles <bgiles@coyotesong.com> writes:\n> But the problem is that knowledgeable security administrators can\n> replace the common hardcoded values with their own. How do you allow\n> this to be easily done?\n\nConfiguration parameters?\n\n> One possibility that occured to me was that dynamic libraries would\n> handle this nicely. There's even some support for dynamic libraries\n> in the user-defined functions, so this wouldn't be a totally\n> unprecedented idea.\n> But this would be a new way of using dynamic libraries.\n\nYou've lost me completely. What exactly are you suggesting?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 May 2002 16:10:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Really stupid question(?) " }, { "msg_contents": "Bear Giles wrote:\n> The really stupid question refers to some of the hardcoded fallback\n> values in this code. The reason for having hardcoded values is to\n> prevent \"downgrade\" attacks - you don't want to casually override the\n> DBA, but you also don't want to make it easy for a knowledgeable\n> attacker to fatally compromise the system in a way that your average\n> DBA couldn't catch.\n> \n> But the problem is that knowledgeable security administrators can\n> replace the common hardcoded values with their own. 
How do you allow\n> this to be easily done?\n\nWould GUC variables work? Put in sensible defaults and let the more \nknowledgeable security admins override the defaults in postgresql.conf\n\nJoe\n\n\n\n", "msg_date": "Thu, 23 May 2002 13:29:47 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Really stupid question(?)" } ]
[ { "msg_contents": "I just looked at Bruce's talk about PostgreSQL history and saw the\nflowchart slide. This would probably give a good poster too. Do we have\nthis flowchart available as eps, gif, whatever?\n\nOr Bruce, do you have a version that you could send me?\n\nThanks a lot.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Fri, 24 May 2002 16:18:11 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "Internal flowchart" }, { "msg_contents": "\nSure. src/tools/backend/flow.fig is the xfig source for the diagram. I\ncan generate a PDF if you wish, but you have to wait for me to return\nfrom vacation on May 31. I only have telnet access right now.\n\n---------------------------------------------------------------------------\n\nMichael Meskes wrote:\n> I just looked at Bruce's talk about PostgreSQL history and saw the\n> flowchart slide. This would probably give a good poster too. Do we have\n> this flowchart available as eps, gif, whatever?\n> \n> Or Bruce, do you have a version that you could send me?\n> \n> Thanks a lot.\n> \n> Michael\n> -- \n> Michael Meskes\n> Michael@Fam-Meskes.De\n> Go SF 49ers! Go Rhein Fire!\n> Use Debian GNU/Linux! Use PostgreSQL!\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 25 May 2002 17:44:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Internal flowchart" } ]
[ { "msg_contents": "Hello,\n\nCan anyone help me how to add big amount of data into database?\n\nI tried COPY, first the addition speed was good, but after each minute it\nbecame worser and worser.\nYou can see the table illustrating this below.\n\nThere were ~10,000,000 records.\nThere were no indexes created on table.\nTable was locked in exclusive mode.\n\n\nWith the best regards,\nSergei\n\n\n\n\nTable:\nFirst column is size of table in kilobytes.\nSecond - speed of COPY command.\n\nSize, Kb | Speed, Kb\n--------------------\n137416\n139856 2440\n142260 2404\n144624 2364\n146936 2312\n149220 2284\n151460 2240\n153672 2212\n155840 2168\n157988 2148\n160108 2120\n162176 2068\n164248 2072\n166284 2036\n168300 2016\n170312 2012\n172296 1984\n174260 1964\n176196 1936\n178112 1916\n180000 1888\n181868 1868\n183708 1840\n185532 1824\n187336 1804\n189128 1792\n190900 1772\n192668 1768\n194432 1764\n196152 1720\n197884 1732\n199580 1696\n201260 1680\n202952 1692\n204608 1656\n206276 1668\n207916 1640\n209564 1648\n211192 1628\n212800 1608\n214412 1612\n216004 1592\n217596 1592\n219160 1564\n220736 1576\n222312 1576\n223860 1548\n225412 1552\n226928 1516\n228496 1568\n230000 1504\n231500 1500\n232964 1464\n234276 1312\n235112 836\n236584 1472\n238040 1456\n239508 1468\n240980 1472\n242500 1520\n243568 1068\n244632 1064\n245784 1152\n246904 1120\n248340 1436\n249772 1432\n251232 1460\n252672 1440\n254096 1424\n255524 1428\n256948 1424\n258444 1496\n259912 1468\n261024 1112\n262152 1128\n263460 1308\n264860 1400\n266276 1416\n267688 1412\n269096 1408\n270488 1392\n271876 1388\n273300 1424\n274612 1312\n275684 1072\n276920 1236\n278136 1216\n279344 1208\n280356 1012\n281212 856\n281964 752\n282652 688\n283756 1104\n285104 1348\n286448 1344\n287784 1336\n289132 1348\n290460 1328\n291860 1400\n292832 972\n293712 880\n294464 752\n295424 960\n296708 1284\n297956 1248\n299156 1200\n300156 1000\n300944 788\n302240 1296\n303104 864\n304296 1192\n305540 1244\n306812 1272\n308068 
1256\n309048 980\n309792 744\n\n\n", "msg_date": "Fri, 24 May 2002 18:57:38 +0300", "msg_from": "\"Olonichev Sergei\" <olonichev@scnsoft.com>", "msg_from_op": true, "msg_subject": "How to add big amounts of data into database?" } ]
[ { "msg_contents": "Hi,\n\nI'm wondering about alternative accesses to a PostgreSQL data base\nby means other than SQL. I know one can map many things to SQL, but\nlet me think outside the box for just a moment:\n\n- Sending a parse tree in XML for processing by the optimizer.\n This circumvents the SQL language and avoids the kinds of\n syntactic ideosyncrasies of SQL (e.g., where you put commas.)\n This is fairly trivial, but of course the question is, would\n it be worth it?\n\n- Sending an execution plan in XML directly to the executor.\n This now circumvents the SQL parser and optimizer. I know this\n in in a way against the relational doxology and I don't take that\n light-heartedly. However, isn't it true that most optimizers\n cannot deal very well with more than 6 joins? I may be wrong,\n but I find myself spending quite a bit of time fighting with the\n Oracle or PostgreSQL optimizer to convince it to choose the plan\n that I want. There is so much magic to it with hints and the\n way you write SQL (where in relational theory the expressions are\n equivalent, they make huge difference in what plan is being\n generated.) So, it appears to me almost easier to just send a\n plan directly and have the system execute that plan.\n\n- These direct interfaces could be a nice way to experiment with\n new strategies without having to implement it on all three\n layers (SQL language, optimizer, and executor.)\n\nYou noticed I sneaked in XML as the interface, and that would be\nneat because with XSLT it's so easy to manipulate. But I'm also\nthinking about a Prolog binding or constraint logic programming\nbinding, that might be better optimizeable if it goes through a\nmore direct path than SQL.\n\nAm I crazy?\n-Gunther\n\n-- \nGunther Schadow, M.D., Ph.D. 
gschadow@regenstrief.org\nMedical Information Scientist Regenstrief Institute for Health Care\nAdjunct Assistant Professor Indiana University School of Medicine\ntel:1(317)630-7960 http://aurora.regenstrief.org\n\n\n", "msg_date": "Fri, 24 May 2002 12:43:36 -0500", "msg_from": "Gunther Schadow <gunther@aurora.regenstrief.org>", "msg_from_op": true, "msg_subject": "Alternatives to SQL ..." }, { "msg_contents": "\nThe SQL interface is bullet-proof because it validates tables, computes\noffsets, and stuff like that. Passing something else into the database\nand bypassing the SQL stage would require rewriting all the C logic for\nSQL to match your new language --- a lot of work for little gain.\n\n---------------------------------------------------------------------------\n\nGunther Schadow wrote:\n> Hi,\n> \n> I'm wondering about alternative accesses to a PostgreSQL data base\n> by means other than SQL. I know one can map many things to SQL, but\n> let me think outside the box for just a moment:\n> \n> - Sending a parse tree in XML for processing by the optimizer.\n> This circumvents the SQL language and avoids the kinds of\n> syntactic ideosyncrasies of SQL (e.g., where you put commas.)\n> This is fairly trivial, but of course the question is, would\n> it be worth it?\n> \n> - Sending an execution plan in XML directly to the executor.\n> This now circumvents the SQL parser and optimizer. I know this\n> in in a way against the relational doxology and I don't take that\n> light-heartedly. However, isn't it true that most optimizers\n> cannot deal very well with more than 6 joins? I may be wrong,\n> but I find myself spending quite a bit of time fighting with the\n> Oracle or PostgreSQL optimizer to convince it to choose the plan\n> that I want. There is so much magic to it with hints and the\n> way you write SQL (where in relational theory the expressions are\n> equivalent, they make huge difference in what plan is being\n> generated.) 
So, it appears to me almost easier to just send a\n> plan directly and have the system execute that plan.\n> \n> - These direct interfaces could be a nice way to experiment with\n> new strategies without having to implement it on all three\n> layers (SQL language, optimizer, and executor.)\n> \n> You noticed I sneaked in XML as the interface, and that would be\n> neat because with XSLT it's so easy to manipulate. But I'm also\n> thinking about a Prolog binding or constraint logic programming\n> binding, that might be better optimizeable if it goes through a\n> more direct path than SQL.\n> \n> Am I crazy?\n> -Gunther\n> \n> -- \n> Gunther Schadow, M.D., Ph.D. gschadow@regenstrief.org\n> Medical Information Scientist Regenstrief Institute for Health Care\n> Adjunct Assistant Professor Indiana University School of Medicine\n> tel:1(317)630-7960 http://aurora.regenstrief.org\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jun 2002 13:51:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Alternatives to SQL ..." }, { "msg_contents": "On Fri, May 24, 2002 at 12:43:36PM -0500, Gunther Schadow wrote:\n> - Sending a parse tree in XML for processing by the optimizer.\n> This circumvents the SQL language and avoids the kinds of\n> syntactic ideosyncrasies of SQL (e.g., where you put commas.)\n> This is fairly trivial, but of course the question is, would\n> it be worth it?\n\nI don't know if you can design something in XML that is expressive and\nsimple enough to compete with SQL. 
SQL is a simple language, why replace it\nwith something unless it is demonstrably better.\n\n> - Sending an execution plan in XML directly to the executor.\n> This now circumvents the SQL parser and optimizer. I know this\n> in in a way against the relational doxology and I don't take that\n> light-heartedly. However, isn't it true that most optimizers\n> cannot deal very well with more than 6 joins? I may be wrong,\n> but I find myself spending quite a bit of time fighting with the\n> Oracle or PostgreSQL optimizer to convince it to choose the plan\n> that I want. There is so much magic to it with hints and the\n> way you write SQL (where in relational theory the expressions are\n> equivalent, they make huge difference in what plan is being\n> generated.) So, it appears to me almost easier to just send a\n> plan directly and have the system execute that plan.\n\nThe detail contained in plans is quite substantial (as you can see using\nEXPLAIN VERBOSE). I doubt you can rely on programmers getting all the\ndetails right. As for the join problem, some people get good results tweaking\nthe genetic query optimiser using documented interfaces. And if you don't\nlike the way the tables are joined, the INNER/OUTER/LEFT/RIGHT JOIN syntax\nin SQL allows you to force the order.\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n", "msg_date": "Sat, 8 Jun 2002 13:31:00 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": false, "msg_subject": "Re: Alternatives to SQL ..." 
}, { "msg_contents": "The world rejoiced as kleptog@svana.org (Martijn van Oosterhout) wrote:\n> On Fri, May 24, 2002 at 12:43:36PM -0500, Gunther Schadow wrote:\n>> - Sending a parse tree in XML for processing by the optimizer.\n>> This circumvents the SQL language and avoids the kinds of\n>> syntactic ideosyncrasies of SQL (e.g., where you put commas.)\n>> This is fairly trivial, but of course the question is, would it\n>> be worth it?\n\n> I don't know if you can design something in XML that is expressive\n> and simple enough to compete with SQL. SQL is a simple language, why\n> replace it with something unless it is demonstrably better.\n\nSQL is good at providing \"linear\" queries; queries that indicate some\n\"linear\" relationship between elements.\n\nIt is not so good at representing hierarchical relationships, which is\nwhat XML is about.\n\nThe SQL: \n SELECT FIELDS FROM TABLE\nprovides you with a linear list.\n\nSQL isn't _nearly_ as nice at representing things that are naturally\nexpressed as trees. 
It's pretty easy to have a DB schema where you\nessentially have to submit an SQL query for every level of the tree.\n\nAnd I am not ignoring JOIN here; that adds _some_ ability to join\ntogether levels of trees, but not an unlimited ability.\n\nThe XML model fundamentally involves a hierarchy, and the 'query\nmethod' involves passing in a function that reshapes that hierarchy.\nI think there would be considerable value to that.\n\nIt certainly needs to be thought about before it is implemented, but\nit's worth thinking about, to be sure.\n-- \n(reverse (concatenate 'string \"gro.gultn@\" \"enworbbc\"))\nhttp://www.ntlug.org/~cbbrowne/multiplexor.html\n\"It is easier to move a problem around (for example, by moving the\nproblem to a different part of the overall network architecture) than\nit is to solve it.\" -- RFC 1925\n", "msg_date": "Sat, 08 Jun 2002 12:57:45 -0400", "msg_from": "cbbrowne@cbbrowne.com", "msg_from_op": false, "msg_subject": "Re: Alternatives to SQL ..." } ]
[ { "msg_contents": "I've altered the PL/Tcl build to use the configured compiler, not the one\nTcl wants, pursuant to earlier discussion. This leaves open the issue of\nthe export file that libpgtcl needs on AIX. That looked too confusing to\nme to touch it.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 24 May 2002 21:02:57 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Tcl build fix attempted" } ]
[ { "msg_contents": "On Sat, 2002-05-25 at 02:38, Joe Conway wrote:\n> Tom Lane wrote:\n> > The remaining degradation is actually in seqscan performance, not\n> > indexscan --- unless one uses a much larger -s setting, the planner will\n> > think it ought to use seqscans for updating the \"branches\" and \"tellers\"\n> > tables, since those nominally have just a few rows; and there's no way\n> > to avoid scanning lots of dead tuples in a seqscan. Forcing indexscans\n> > helps some in the former CVS tip:\n> > \n> \n> This may qualify as a \"way out there\" idea, or more trouble than it's \n> worth, but what about a table option which provides a bitmap index of \n> tuple status -- i.e. tuple dead t/f. If available, a seqscan in between \n> vacuums could maybe gain some of the same efficiency.\n\nI guess this would only be useful if it is a bitmap of dead _pages_ not\ntuples (page reading is most expensive, plus there is no way to know how\nmany tuples per page)\n\nbut for worst cases (small table with lots of updates) this can be a\ngreat thing that can postpone fixing optimiser to account for dead\ntuples.\n\none 8K page can hold bits for 8192*8 = 65536 pages = 512 Mbytes and if\nseqscan could skip first 500 of them it would definitely be worth it ;)\n\n> > This is the first time I have ever seen repeated pgbench runs without\n> > substantial performance degradation. Not a bad result for a Friday\n> > afternoon...\n\nReally good news!\n\n-----------\nHannu\n\n\n", "msg_date": "25 May 2002 00:49:39 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: Index tuple killing code committed" }, { "msg_contents": "Per previous discussion, I have committed changes that cause the btree\nand hash index methods to mark index tuples \"killed\" the first time they\nare fetched after becoming globally dead. 
Subsequently the killed\nentries are not returned out of indexscans, saving useless heap fetches.\n(I haven't changed rtree and gist yet; they will need some internal\nrestructuring to do this efficiently. Perhaps Oleg or Teodor would like\nto take that on.)\n\nThis seems to make a useful improvement in pgbench results. Yesterday's\nCVS tip gave me these results:\n\n(Running postmaster with \"-i -F -B 1024\", other parameters at defaults,\nand pgbench initialized with \"pgbench -i -s 10 bench\")\n\n$ time pgbench -c 5 -t 1000 -n bench\ntps = 26.428787(including connections establishing)\ntps = 26.443410(excluding connections establishing)\nreal 3:09.74\n$ time pgbench -c 5 -t 1000 -n bench\ntps = 18.838304(including connections establishing)\ntps = 18.846281(excluding connections establishing)\nreal 4:26.41\n$ time pgbench -c 5 -t 1000 -n bench\ntps = 13.541641(including connections establishing)\ntps = 13.545646(excluding connections establishing)\nreal 6:10.19\n\nNote the \"-n\" switches here to prevent vacuums between runs; the point\nis to observe the degradation as more and more dead tuples accumulate.\n\nWith the just-committed changes I get (starting from a fresh start):\n\n$ time pgbench -c 5 -t 1000 -n bench\ntps = 28.393271(including connections establishing)\ntps = 28.410117(excluding connections establishing)\nreal 2:56.53\n$ time pgbench -c 5 -t 1000 -n bench\ntps = 23.498645(including connections establishing)\ntps = 23.510134(excluding connections establishing)\nreal 3:33.89\n$ time pgbench -c 5 -t 1000 -n bench\ntps = 18.773239(including connections establishing)\ntps = 18.780936(excluding connections establishing)\nreal 4:26.84\n\nThe remaining degradation is actually in seqscan performance, not\nindexscan --- unless one uses a much larger -s setting, the planner will\nthink it ought to use seqscans for updating the \"branches\" and \"tellers\"\ntables, since those nominally have just a few rows; and there's no way\nto avoid scanning lots of dead 
tuples in a seqscan. Forcing indexscans\nhelps some in the former CVS tip:\n\n$ PGOPTIONS=\"-fs\" time pgbench -c 5 -t 1000 -n bench\ntps = 28.840678(including connections establishing)\ntps = 28.857442(excluding connections establishing)\nreal 2:53.9\n$ PGOPTIONS=\"-fs\" time pgbench -c 5 -t 1000 -n bench\ntps = 25.670674(including connections establishing)\ntps = 25.684493(excluding connections establishing)\nreal 3:15.7\n$ PGOPTIONS=\"-fs\" time pgbench -c 5 -t 1000 -n bench\ntps = 22.593429(including connections establishing)\ntps = 22.603928(excluding connections establishing)\nreal 3:42.7\n\nand with the changes I get:\n\n$ PGOPTIONS=-fs time pgbench -c 5 -t 1000 -n bench\ntps = 29.445004(including connections establishing)\ntps = 29.463948(excluding connections establishing)\nreal 2:50.3\n$ PGOPTIONS=-fs time pgbench -c 5 -t 1000 -n bench\ntps = 30.277968(including connections establishing)\ntps = 30.301363(excluding connections establishing)\nreal 2:45.6\n$ PGOPTIONS=-fs time pgbench -c 5 -t 1000 -n bench\ntps = 30.209377(including connections establishing)\ntps = 30.230646(excluding connections establishing)\nreal 2:46.0\n\n\nThis is the first time I have ever seen repeated pgbench runs without\nsubstantial performance degradation. Not a bad result for a Friday\nafternoon...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 May 2002 16:42:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Index tuple killing code committed" }, { "msg_contents": "Tom Lane wrote:\n> The remaining degradation is actually in seqscan performance, not\n> indexscan --- unless one uses a much larger -s setting, the planner will\n> think it ought to use seqscans for updating the \"branches\" and \"tellers\"\n> tables, since those nominally have just a few rows; and there's no way\n> to avoid scanning lots of dead tuples in a seqscan. 
Forcing indexscans\n> helps some in the former CVS tip:\n> \n\nThis may qualify as a \"way out there\" idea, or more trouble than it's \nworth, but what about a table option which provides a bitmap index of \ntuple status -- i.e. tuple dead t/f. If available, a seqscan in between \nvacuums could maybe gain some of the same efficiency.\n\n> This is the first time I have ever seen repeated pgbench runs without\n> substantial performance degradation. Not a bad result for a Friday\n> afternoon...\n\nNice work!\n\nJoe\n\n", "msg_date": "Fri, 24 May 2002 14:38:41 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Index tuple killing code committed" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> This may qualify as a \"way out there\" idea, or more trouble than it's \n> worth, but what about a table option which provides a bitmap index of \n> tuple status -- i.e. tuple dead t/f. If available, a seqscan in between \n> vacuums could maybe gain some of the same efficiency.\n\nHmm. I'm inclined to think that a separate bitmap index wouldn't be\nworth the trouble. Under most scenarios it'd just require extra I/O\nand not buy much.\n\nHowever ... we could potentially take over the LP_DELETED flag bit of\nheap tuples for the same use as for index tuples: set it when the tuple\nis known dead for all transactions. This would save calling\nHeapTupleSatisfiesSnapshot in the inner loop of heapgettup, while not\nadding much expense for the normal case where the tuple's not dead.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 May 2002 18:09:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index tuple killing code committed " } ]
[ { "msg_contents": "\n> This is the first time I have ever seen repeated pgbench runs without\n> substantial performance degradation. Not a bad result for a Friday\n> afternoon...\n\nCongratulations :-) This sounds great !!!\n\nAndreas\n", "msg_date": "Fri, 24 May 2002 23:03:35 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Index tuple killing code committed" } ]
[ { "msg_contents": "Hi,\n\nwe've got rather strange problem with updating and GiST indices.\nBelow is a test run:\n\ndrop table tst;\ncreate table tst ( a int[], i int );\ncopy tst from stdin;\n........\n\\.\ncreate index tsti on tst using gist (a);\nvacuum full analyze;\n\n\ntest=# update tst set i = i+10 where a && '{3,4}';\nUPDATE 3267\ntest=# set enable_indexscan=off;\nSET VARIABLE\ntest=# update tst set i = i+10 where a && '{3,4}';\nUPDATE 4060\ntest=# select count(*) from tst where a && '{3,4}';\n count\n-------\n 4060\n(1 row)\n\ntest=# select version();\n version\n---------------------------------------------------------------\n PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.95.3\n(1 row)\n\nenabling gist indices cause some rows doesn't updating !\nPlease find attached test sql script (need to install contrib/intarray module)\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sat, 25 May 2002 12:31:04 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "strange update problem with 7.2.1" }, { "msg_contents": "Sorry,\n\nforget to attach file.\n\nOleg\nOn Sat, 25 May 2002, Oleg Bartunov wrote:\n\n> Hi,\n>\n> we've got rather strange problem with updating and GiST indices.\n> Below is a test run:\n>\n> drop table tst;\n> create table tst ( a int[], i int );\n> copy tst from stdin;\n> ........\n> \\.\n> create index tsti on tst using gist (a);\n> vacuum full analyze;\n>\n>\n> test=# update tst set i = i+10 where a && '{3,4}';\n> UPDATE 3267\n> test=# set enable_indexscan=off;\n> SET VARIABLE\n> test=# update tst set i = i+10 where a && '{3,4}';\n> UPDATE 4060\n> test=# select count(*) from tst where a && '{3,4}';\n> count\n> 
-------\n> 4060\n> (1 row)\n>\n> test=# select version();\n> version\n> ---------------------------------------------------------------\n> PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.95.3\n> (1 row)\n>\n> enabling gist indices cause some rows doesn't updating !\n> Please find attached test sql script (need to install contrib/intarray module)\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83", "msg_date": "Sat, 25 May 2002 12:32:06 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Re: strange update problem with 7.2.1" }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> test=# update tst set i = i+10 where a && '{3,4}';\n> UPDATE 3267\n> test=# set enable_indexscan=off;\n> SET VARIABLE\n> test=# update tst set i = i+10 where a && '{3,4}';\n> UPDATE 4060\n\nI get the same in current sources (in fact the number of rows updated\nvaries from try to try). 
Are you sure it's not a problem with the\ngist index mechanism?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 25 May 2002 11:58:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: strange update problem with 7.2.1 " }, { "msg_contents": "On Sat, 25 May 2002, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > test=# update tst set i = i+10 where a && '{3,4}';\n> > UPDATE 3267\n> > test=# set enable_indexscan=off;\n> > SET VARIABLE\n> > test=# update tst set i = i+10 where a && '{3,4}';\n> > UPDATE 4060\n>\n> I get the same in current sources (in fact the number of rows updated\n> varies from try to try). Are you sure it's not a problem with the\n> gist index mechanism?\n>\n\nWe'll look once more, but code for select and update should be the same.\n\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sat, 25 May 2002 20:06:57 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Re: strange update problem with 7.2.1 " }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> On Sat, 25 May 2002, Tom Lane wrote:\n>> I get the same in current sources (in fact the number of rows updated\n>> varies from try to try). Are you sure it's not a problem with the\n>> gist index mechanism?\n\n> We'll look once more, but code for select and update should be the same.\n\nYeah, but the update case is inserting more entries into the index.\nI'm wondering if that causes the index scan's state to get corrupted\nso that it misses scanning some entries. 
btree has a carefully designed\nalgorithm to cope with this, but I have no idea how gist manages it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 25 May 2002 13:28:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: strange update problem with 7.2.1 " }, { "msg_contents": "> Yeah, but the update case is inserting more entries into the index.\n> I'm wondering if that causes the index scan's state to get corrupted\n> so that it misses scanning some entries. btree has a carefully designed\n> algorithm to cope with this, but I have no idea how gist manages it.\n\n\nThank you, Tom. You give me a direction for looking. Attached patch fix\nthe problem with broken state. Please apply it for 7.2.2 and current cvs \n(sorry,\nbut I'll have a possibility to check it on current cvs only tomorrow).\n\n\n\n-- \nTeodor Sigaev\nteodor@stack.net", "msg_date": "Sun, 26 May 2002 15:17:58 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": false, "msg_subject": "Re: strange update problem with 7.2.1" }, { "msg_contents": "Just tested with 7.2.1. It works. We have one more patch (for rtree_gist)\nto submit before 7.2.2 release.\n\n\tOleg\n\nOn Sun, 26 May 2002, Teodor Sigaev wrote:\n\n> > Yeah, but the update case is inserting more entries into the index.\n> > I'm wondering if that causes the index scan's state to get corrupted\n> > so that it misses scanning some entries. btree has a carefully designed\n> > algorithm to cope with this, but I have no idea how gist manages it.\n>\n>\n> Thank you, Tom. You give me a direction for looking. Attached patch fix\n> the problem with broken state. 
Please apply it for 7.2.2 and current cvs\n> (sorry,\n> but I'll have a possibility to check it on current cvs only tomorrow).\n>\n>\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sun, 26 May 2002 16:05:32 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Re: strange update problem with 7.2.1" }, { "msg_contents": "Tested it with current CVS. It works.\n\nOleg Bartunov wrote:\n> Just tested with 7.2.1. It works. We have one more patch (for rtree_gist)\n> to submit before 7.2.2 release.\n> \n> \tOleg\n> \n> On Sun, 26 May 2002, Teodor Sigaev wrote:\n> \n> \n>>>Yeah, but the update case is inserting more entries into the index.\n>>>I'm wondering if that causes the index scan's state to get corrupted\n>>>so that it misses scanning some entries. btree has a carefully designed\n>>>algorithm to cope with this, but I have no idea how gist manages it.\n>>>\n>>\n>>Thank you, Tom. You give me a direction for looking. Attached patch fix\n>>the problem with broken state.
Please apply it for 7.2.2 and current cvs\n>>(sorry,\n>>but I'll have a possibility to check it on current cvs only tomorrow).\n>>\n>>\n>>\n>>\n>>\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> \n\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n", "msg_date": "Mon, 27 May 2002 12:35:26 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": false, "msg_subject": "Re: strange update problem with 7.2.1" }, { "msg_contents": "\n\nOleg Bartunov wrote:\n> Just tested with 7.2.1. It works. We have one more patch (for rtree_gist)\n> to submit before 7.2.2 release.\n> \n\nAttached patch fix a bug with creating index. Bug was reported by Chris Hodgson \n<chodgson@refractions.net>. Please, apply it for 7.2.2 and current CVS.\n\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n", "msg_date": "Mon, 27 May 2002 12:59:31 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": false, "msg_subject": "Re: strange update problem with 7.2.1" }, { "msg_contents": "Sorry, forgot a patch...\n\nTeodor Sigaev wrote:\n> \n> \n> Oleg Bartunov wrote:\n> \n>> Just tested with 7.2.1. It works. We have one more patch (for rtree_gist)\n>> to submit before 7.2.2 release.\n>>\n> \n> Attached patch fix a bug with creating index. Bug was reported by Chris \n> Hodgson <chodgson@refractions.net>.
Please, apply it for 7.2.2 and \n> current CVS.\n> \n> \n\n\n-- \nTeodor Sigaev\nteodor@stack.net", "msg_date": "Mon, 27 May 2002 13:18:43 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": false, "msg_subject": "Re: strange update problem with 7.2.1" }, { "msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n>> Yeah, but the update case is inserting more entries into the index.\n>> I'm wondering if that causes the index scan's state to get corrupted\n>> so that it misses scanning some entries.\n\n> Thank you, Tom. You give me a direction for looking. Attached patch fix\n> the problem with broken state.\n\nHmm, is this patch really correct? Removing the gistadjscans() call\nfrom gistSplit seems wrong to me --- won't that miss reporting splits\non leaf pages? Or does this not matter for some reason?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 May 2002 17:54:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: strange update problem with 7.2.1 " }, { "msg_contents": "\n\nTom Lane wrote:\n> Teodor Sigaev <teodor@stack.net> writes:\n> \n>>>Yeah, but the update case is inserting more entries into the index.\n>>>I'm wondering if that causes the index scan's state to get corrupted\n>>>so that it misses scanning some entries.\n>>>\n> \n>>Thank you, Tom. You give me a direction for looking. Attached patch fix\n>>the problem with broken state.\n>>\n> \n> Hmm, is this patch really correct? Removing the gistadjscans() call\n> from gistSplit seems wrong to me --- won't that miss reporting splits\n> on leaf pages? Or does this not matter for some reason?\n> \ngistadjscans() is moving to gistlayerinsert. gistadjscans() must be called for \nparent of splitted page, but gistSplit doesn't know parent of current page and\ngistlayerinsert return status of its action: inserted and (may be) splitted.
So\nwe can call gistadjscans(GIST_SPLIT) in gistlayerinsert when it's need.\n\n\n\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n", "msg_date": "Tue, 28 May 2002 11:37:29 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": false, "msg_subject": "Re: strange update problem with 7.2.1" }, { "msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n>> Hmm, is this patch really correct? Removing the gistadjscans() call\n>> from gistSplit seems wrong to me --- won't that miss reporting splits\n>> on leaf pages? Or does this not matter for some reason?\n\n> gistadjscans() is moving to gistlayerinsert. gistadjscans() must be\n> called for parent of splitted page, but gistSplit doesn't know parent\n> of current page and gistlayerinsert return status of its action:\n> inserted and (may be) splitted. So we can call\n> gistadjscans(GIST_SPLIT) in gistlayerinsert when it's need.\n\nBut gistSplit is recursive. Is there no need to worry about the\nadditional splits it might do internally?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 May 2002 09:33:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: strange update problem with 7.2.1 " }, { "msg_contents": "\n\nTom Lane wrote:\n> Teodor Sigaev <teodor@stack.net> writes:\n> \n>>>Hmm, is this patch really correct? Removing the gistadjscans() call\n>>>from gistSplit seems wrong to me --- won't that miss reporting splits\n>>>on leaf pages? Or does this not matter for some reason?\n>>\n> \n>>gistadjscans() is moving to gistlayerinsert. gistadjscans() must be\n>>called for parent of splitted page, but gistSplit doesn't know parent\n>>of current page and gistlayerinsert return status of its action:\n>>inserted and (may be) splitted. So we can call\n>>gistadjscans(GIST_SPLIT) in gistlayerinsert when it's need.\n> \n> \n> But gistSplit is recursive.
Is there no need to worry about the\n> additional splits it might do internally?\n\nInternally splits are doing before calling gistadjscans. All pages \ncreated by gistSplit will be inserted in the end of parent page.\nGiST's indexes aren't a concurrent there for one call of gistadjscans \nwill be sufficiant.\n\n\n\n\n", "msg_date": "Tue, 28 May 2002 19:09:02 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": false, "msg_subject": "Re: strange update problem with 7.2.1" }, { "msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n> Internally splits are doing before calling gistadjscans. All pages \n> created by gistSplit will be inserted in the end of parent page.\n> GiST's indexes aren't a concurrent there for one call of gistadjscans \n> will be sufficiant.\n\nOh, I see. Thanks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 May 2002 11:10:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: strange update problem with 7.2.1 " }, { "msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n> Thank you, Tom. You give me a direction for looking. Attached patch fix\n> the problem with broken state. Please apply it for 7.2.2 and current cvs \n\nPatch applied to current and REL7_2 branch.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 May 2002 11:26:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: strange update problem with 7.2.1 " }, { "msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n>> Attached patch fix a bug with creating index. Bug was reported by Chris \n>> Hodgson <chodgson@refractions.net>.
Please, apply it for 7.2.2 and \n>> current CVS.\n\nPatch applied to both branches.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 May 2002 11:26:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: strange update problem with 7.2.1 " }, { "msg_contents": "On Tue, 28 May 2002, Tom Lane wrote:\n\n> Teodor Sigaev <teodor@stack.net> writes:\n> > Thank you, Tom. You give me a direction for looking. Attached patch fix\n> > the problem with broken state. Please apply it for 7.2.2 and current cvs\n>\n> Patch applied to current and REL7_2 branch.\n\nIs't time for 7.2.2 ?\n\n>\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 28 May 2002 20:45:32 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Re: strange update problem with 7.2.1 " }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> Is't time for 7.2.2 ?\n\nI think we had agreed start of June for 7.2.2.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 May 2002 15:01:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: strange update problem with 7.2.1 " } ]
[ { "msg_contents": "Here is a new contrib function called \"pgstatindex\", similar to\npgstattuple but different in that it returns the percentage of the\ndead tuples of an index. I am posting this for review purpose.\n\nInstallation of pgstatindex is pretty easy:\n\nunpack the tar package in contrib directory.\ncd into pgstatindex directory.\nmake\nmake install\npsql -f /usr/local/pgsql/share/contrib/pgstatindex.sql your_database\n\nNote:\n\n(1) I think I have adopted to the recent Tom's changes to index access\n    routines, but if you find anything is wrong, plese let me know.\n\n(2) pgstatindex probably does not work with rtree and gist indexes.\n--\nTatsuo Ishii", "msg_date": "Sat, 25 May 2002 23:47:46 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "pgstatindex" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Here is a new contrib function called \"pgstatindex\", similar to\n> pgstattuple but different in that it returns the percentage of the\n> dead tuples of an index. I am posting this for review purpose.\n\nUm ... what's the point? Isn't this always the same as the percentage\nfor the underlying table?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 26 May 2002 15:42:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgstatindex " }, { "msg_contents": "> Um ... what's the point? Isn't this always the same as the percentage\n> for the underlying table?\n\nSure. In my understanding, unlike tables \"free/reusable space\" is\nactually not reused in index.
pgstatindex would be usefull to judge if\nREINDEX is needed by showing the growth of physical length and\n\"free/reusable space\".\n\nMaybe \"free/reusable space\" is not appropriate wording, \"dead space\"\nis better?\n--\nTatsuo Ishii\n", "msg_date": "Mon, 27 May 2002 10:11:40 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: pgstatindex " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Sure. In my understanding, unlike tables \"free/reusable space\" is\n> actually not reused in index. pgstatindex would be usefull to judge if\n> REINDEX is needed by showing the growth of physical length and\n> \"free/reusable space\".\n\nOh. Hmm, if that's what you want then I do not think an indexscan is\nthe way to go about it. The indexscan will only visit leaf pages\n(and not, for example, internal nodes of a btree). Also the\nfree-space-counting code you're using seems pretty unworkable since the\nindexscan is unlikely to visit leaf pages in anything like sequential\norder.\n\nI think the only reasonable way to get useful statistics would be to\nread the index directly --- page by page, no indexscan, distinguishing\nleaf pages, internal pages, and overhead pages for yourself. This would\nrequire index-AM-specific knowledge about how to tell which type each\npage is, but I believe all the index AMs make that possible.\n\nAlso, I'd suggest that visiting the heap is just useless overhead. A\nperson who wants to know whether the heap needs to be vacuumed can get\nthat data from pgstattuple. Reading the heap to check tuple state will\nmake this function orders of magnitude slower, while not producing much\nuseful info that I can see.\n\nSomething else to think about is how to present the results.
As soon\nas you release this we will have people bleating about how come their\nbtrees always show at least 1/3rd free space :-( unless we can think\nof a way to highlight the fact that that's the expected loading factor\nfor a btree...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 May 2002 13:17:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgstatindex " }, { "msg_contents": "> Oh. Hmm, if that's what you want then I do not think an indexscan is\n> the way to go about it. The indexscan will only visit leaf pages\n> (and not, for example, internal nodes of a btree). Also the\n> free-space-counting code you're using seems pretty unworkable since the\n> indexscan is unlikely to visit leaf pages in anything like sequential\n> order.\n\nOh I was not aware of this.\n\n> I think the only reasonable way to get useful statistics would be to\n> read the index directly --- page by page, no indexscan, distinguishing\n> leaf pages, internal pages, and overhead pages for yourself. This would\n> require index-AM-specific knowledge about how to tell which type each\n> page is, but I believe all the index AMs make that possible.\n\nThat's what I'm afraid of. \n\n> Also, I'd suggest that visiting the heap is just useless overhead. A\n> person who wants to know whether the heap needs to be vacuumed can get\n> that data from pgstattuple. Reading the heap to check tuple state will\n> make this function orders of magnitude slower, while not producing much\n> useful info that I can see.\n\nOk let me think about this. Thank you for the suggestion!\n--\nTatsuo Ishii\n", "msg_date": "Tue, 28 May 2002 10:07:46 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: pgstatindex " } ]
[ { "msg_contents": "It occurs to me that we could get rid of the\nReferentialIntegritySnapshotOverride flag (which I consider both ugly\nand dangerous) if we tweaked ExecutorStart to accept the snapshot-to-use\nas a parameter. Then RI queries could pass in SnapshotNow instead of\nthe normal query snapshot, and we'd not need a low-level hack anymore.\n\nSince the RI triggers actually go through SPI, this'd mean offering\nan alternate version of SPI_execp that allows specifying the snapshot,\nbut that seems no big problem.\n\nComments?\n\n(I'm also wondering if we couldn't get rid of AMI_OVERRIDE, possibly\nwith a little bit of cleanup of bootstrapping...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 25 May 2002 13:07:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Getting rid of ReferentialIntegritySnapshotOverride" } ]
[ { "msg_contents": "If a VACUUM running concurrently with someone else's indexscan were to\ndelete the index tuple that the indexscan is currently stopped on, then\nwe'd get a failure when the indexscan resumes and tries to re-find its\nplace. (This is the infamous \"my bits moved right off the end of the\nworld\" error condition.) What is supposed to prevent that from\nhappening is that the indexscan retains a buffer pin (but not a read\nlock) on the index page containing the tuple it's stopped on. VACUUM\nwill not delete any tuple until it can get a \"super exclusive\" lock on\nthe page (cf. LockBufferForCleanup), and the pin prevents it from doing\nso.\n\nHowever: suppose that some other activity causes the index page to be\nsplit while the indexscan is stopped, and that the tuple it's stopped\non gets relocated into the new righthand page of the pair. Then the\nindexscan is holding a pin on the wrong page --- not the one its tuple\nis in. It would then be possible for the VACUUM to arrive at the tuple\nand delete it before the indexscan is resumed.\n\nThis is a pretty low-probability scenario, especially given the new\nindex-tuple-killing mechanism (which renders it less likely that an\nindexscan will stop on a vacuum-able tuple). But it could happen.\n\nThe only solution I've thought of is to make btbulkdelete acquire\n\"super exclusive\" lock on *every* leaf page of the index as it scans,\nrather than only locking the pages it actually needs to delete something\nfrom. And we'd need to tweak _bt_restscan to chain its pins (pin the\nnext page to the right before releasing pin on the previous page).\nThis would prevent a btbulkdelete scan from overtaking ordinary\nindexscans, and thereby ensure that it couldn't arrive at the tuple\non which an indexscan is stopped, even with splitting.\n\nI'm somewhat concerned that the more stringent locking will slow down\nVACUUM a good deal when there's lots of concurrent activity, but I don't\nsee another answer.
Ideas anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 25 May 2002 14:21:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Think I see a btree vacuuming bug" }, { "msg_contents": "Well, given that vacuum does its work in the background now - I think you'll\nbe hard pressed to find a sys admin who'll vote for leaving it as is, no\nmatter how small the chance of corruption.\n\nHowever - this isn't my area of expertise...\n\nChris\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: <pgsql-hackers@postgresql.org>\nSent: Saturday, May 25, 2002 11:21 AM\nSubject: [HACKERS] Think I see a btree vacuuming bug\n\n\n> If a VACUUM running concurrently with someone else's indexscan were to\n> delete the index tuple that the indexscan is currently stopped on, then\n> we'd get a failure when the indexscan resumes and tries to re-find its\n> place. (This is the infamous \"my bits moved right off the end of the\n> world\" error condition.) What is supposed to prevent that from\n> happening is that the indexscan retains a buffer pin (but not a read\n> lock) on the index page containing the tuple it's stopped on. VACUUM\n> will not delete any tuple until it can get a \"super exclusive\" lock on\n> the page (cf. LockBufferForCleanup), and the pin prevents it from doing\n> so.\n>\n> However: suppose that some other activity causes the index page to be\n> split while the indexscan is stopped, and that the tuple it's stopped\n> on gets relocated into the new righthand page of the pair. Then the\n> indexscan is holding a pin on the wrong page --- not the one its tuple\n> is in. It would then be possible for the VACUUM to arrive at the tuple\n> and delete it before the indexscan is resumed.\n>\n> This is a pretty low-probability scenario, especially given the new\n> index-tuple-killing mechanism (which renders it less likely that an\n> indexscan will stop on a vacuum-able tuple).
But it could happen.\n>\n> The only solution I've thought of is to make btbulkdelete acquire\n> \"super exclusive\" lock on *every* leaf page of the index as it scans,\n> rather than only locking the pages it actually needs to delete something\n> from. And we'd need to tweak _bt_restscan to chain its pins (pin the\n> next page to the right before releasing pin on the previous page).\n> This would prevent a btbulkdelete scan from overtaking ordinary\n> indexscans, and thereby ensure that it couldn't arrive at the tuple\n> on which an indexscan is stopped, even with splitting.\n>\n> I'm somewhat concerned that the more stringent locking will slow down\n> VACUUM a good deal when there's lots of concurrent activity, but I don't\n> see another answer. Ideas anyone?\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Sat, 25 May 2002 18:24:33 -0700", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Think I see a btree vacuuming bug" }, { "msg_contents": "On Sat, 25 May 2002 14:21:52 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>I'm somewhat concerned that the more stringent locking will slow down\n>VACUUM a good deal when there's lots of concurrent activity, but I don't\n>see another answer. Ideas anyone?\n\nIdeas? Always! :-) Don't know if this one is so bright, but at least\nwe have something to vote on:\n\nOn leaf pages order index tuples by heap item pointer, if otherwise\nequal. In IndexScanDescData remember the whole index tuple (including\nthe heap item pointer) instead of ItemPointerData.
Then depending on\nscan direction _bt_next() would look for the first index tuple greater\nor less than currentItem respectively.\n\nImplications:\n(+) higher concurrency: normal write locks\n(+) robust: can always start from the root, if nothing else helps\n(though I can't think of a case making this necesary)\n(-) need heap item pointer in internal nodes (could partly be\ncompensated by omitting unused(?) t_tid.ip_posid)\n(+) btinsert knows, where to insert a new tuple, even if there are\nlots of duplicates (no random())\n(-) this could result in more half-empty leaf pages?\n(+) dead index tuples can be removed on the fly\n(?) ...\n\nServus\n Manfred\n", "msg_date": "Mon, 27 May 2002 12:16:51 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: Think I see a btree vacuuming bug" }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> On leaf pages order index tuples by heap item pointer, if otherwise\n> equal. In IndexScanDescData remember the whole index tuple (including\n> the heap item pointer) instead of ItemPointerData. Then depending on\n> scan direction _bt_next() would look for the first index tuple greater\n> or less than currentItem respectively.\n\nDoesn't help, I fear. Finding your place again is only one part\nof the problem. The other part is being sure that VACUUM won't delete\nthe heap tuple before you get to it. The interlock at the index stage\nis partly a proxy to protect heap tuples that are about to be visited\nby indexscans (ie, indexscan has read an index tuple but hasn't yet\nacquired pin on the referenced heap page).\n\n> (+) btinsert knows, where to insert a new tuple, even if there are\n> lots of duplicates (no random())\n\nThis is not a (+) but a (-), I think. Given the current CVS tip\nbehavior it is better for a new tuple to be inserted at the front of\nthe series of matching keys --- in unique indexes this allows repeated\nupdates without degrading search time.
We are not currently exploiting\nthat as much as we should --- I suspect btree insertion should be more\nwilling to split pages than it now is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 May 2002 13:48:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Think I see a btree vacuuming bug " }, { "msg_contents": "On Mon, 27 May 2002 13:48:43 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>Manfred Koizar <mkoi-pg@aon.at> writes:\n>> On leaf pages order index tuples by heap item pointer, if otherwise\n>> equal. [blah, blah, ...]\n>\n>Doesn't help, I fear. Finding your place again is only one part\n>of the problem. The other part is being sure that VACUUM won't delete\n>the heap tuple before you get to it.\n\nOk, throwing away this one.\n\nAnother idea: As far as I understand, the problem arises from\n\"recent\" page splits. Let's store a transaction id in each leaf page.\nOn a page split the currently highest active xid (*not* the backend's\ncurrent xid) is stored into the new right page. btbulkdelete has a\nnotion of \"old\" and \"young\" pages: If the page xid is less than the\noldest active transaction, then the index page is considered old,\notherwise young.\n\n\"Old\" pages can savely be treated like up to yesterday: get a \"super\nexclusive\" lock just on this page, do the changes, release the lock.\n\nWhenever btbulkdelete is about to change a \"young\" index page, it has\nto get \"super exclusive\" locks on all pages from the last preceding\n\"old\" page (*) up to the current page. It does not have to hold all\nthese locks at the same time, it just has to get the locks in\n\"non-overtaking\" mode.\n\nTo avoid deadlocks it might be necessary to release the read lock held\non the current page, while approaching it with \"super exclusive\" locks\nfrom the left.
Then the current page has to be rescanned.\n\n(*) or the page following the last page we already had a \"super\nexclusive\" lock on\n\nServus\n Manfred\n", "msg_date": "Tue, 28 May 2002 11:12:06 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: Think I see a btree vacuuming bug " }, { "msg_contents": "\nIs this fixed, and if not, can I have some TODO text?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> If a VACUUM running concurrently with someone else's indexscan were to\n> delete the index tuple that the indexscan is currently stopped on, then\n> we'd get a failure when the indexscan resumes and tries to re-find its\n> place. (This is the infamous \"my bits moved right off the end of the\n> world\" error condition.) What is supposed to prevent that from\n> happening is that the indexscan retains a buffer pin (but not a read\n> lock) on the index page containing the tuple it's stopped on. VACUUM\n> will not delete any tuple until it can get a \"super exclusive\" lock on\n> the page (cf. LockBufferForCleanup), and the pin prevents it from doing\n> so.\n> \n> However: suppose that some other activity causes the index page to be\n> split while the indexscan is stopped, and that the tuple it's stopped\n> on gets relocated into the new righthand page of the pair. Then the\n> indexscan is holding a pin on the wrong page --- not the one its tuple\n> is in. It would then be possible for the VACUUM to arrive at the tuple\n> and delete it before the indexscan is resumed.\n> \n> This is a pretty low-probability scenario, especially given the new\n> index-tuple-killing mechanism (which renders it less likely that an\n> indexscan will stop on a vacuum-able tuple).
But it could happen.\n> \n> The only solution I've thought of is to make btbulkdelete acquire\n> \"super exclusive\" lock on *every* leaf page of the index as it scans,\n> rather than only locking the pages it actually needs to delete something\n> from. And we'd need to tweak _bt_restscan to chain its pins (pin the\n> next page to the right before releasing pin on the previous page).\n> This would prevent a btbulkdelete scan from overtaking ordinary\n> indexscans, and thereby ensure that it couldn't arrive at the tuple\n> on which an indexscan is stopped, even with splitting.\n> \n> I'm somewhat concerned that the more stringent locking will slow down\n> VACUUM a good deal when there's lots of concurrent activity, but I don't\n> see another answer. Ideas anyone?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 359-1001\n  +  If your life is a hard drive,     |  13 Roberts Road\n  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 26 Aug 2002 16:14:40 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Think I see a btree vacuuming bug" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Is this fixed, and if not, can I have some TODO text?\n\nIt's not fixed. I'd like to fix it for 7.3, but I was hoping someone\nwould think of a better way to fix it than I did ...\n\n\t\t\tregards, tom lane\n\n> ---------------------------------------------------------------------------\n\n> Tom Lane wrote:\n>> If a VACUUM running concurrently with someone else's indexscan were to\n>> delete the index tuple that the indexscan is currently stopped on, then\n>> we'd get a failure when the indexscan resumes and tries to re-find its\n>> place.
(This is the infamous \"my bits moved right off the end of the\n>> world\" error condition.) What is supposed to prevent that from\n>> happening is that the indexscan retains a buffer pin (but not a read\n>> lock) on the index page containing the tuple it's stopped on. VACUUM\n>> will not delete any tuple until it can get a \"super exclusive\" lock on\n>> the page (cf. LockBufferForCleanup), and the pin prevents it from doing\n>> so.\n>> \n>> However: suppose that some other activity causes the index page to be\n>> split while the indexscan is stopped, and that the tuple it's stopped\n>> on gets relocated into the new righthand page of the pair. Then the\n>> indexscan is holding a pin on the wrong page --- not the one its tuple\n>> is in. It would then be possible for the VACUUM to arrive at the tuple\n>> and delete it before the indexscan is resumed.\n>> \n>> This is a pretty low-probability scenario, especially given the new\n>> index-tuple-killing mechanism (which renders it less likely that an\n>> indexscan will stop on a vacuum-able tuple). But it could happen.\n>> \n>> The only solution I've thought of is to make btbulkdelete acquire\n>> \"super exclusive\" lock on *every* leaf page of the index as it scans,\n>> rather than only locking the pages it actually needs to delete something\n>> from. And we'd need to tweak _bt_restscan to chain its pins (pin the\n>> next page to the right before releasing pin on the previous page).\n>> This would prevent a btbulkdelete scan from overtaking ordinary\n>> indexscans, and thereby ensure that it couldn't arrive at the tuple\n>> on which an indexscan is stopped, even with splitting.\n>> \n>> I'm somewhat concerned that the more stringent locking will slow down\n>> VACUUM a good deal when there's lots of concurrent activity, but I don't\n>> see another answer.
Ideas anyone?\n>> \n>> regards, tom lane\n", "msg_date": "Mon, 26 Aug 2002 16:25:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Think I see a btree vacuuming bug " }, { "msg_contents": "Tom Lane wrote:\n> If a VACUUM running concurrently with someone else's indexscan were to\n> delete the index tuple that the indexscan is currently stopped on, then\n> we'd get a failure when the indexscan resumes and tries to re-find its\n> place. (This is the infamous \"my bits moved right off the end of the\n> world\" error condition.) What is supposed to prevent that from\n\nCertainly anything that makes this error message less likely is bad. \nPerhaps we need to hook up a random() call to output the error message\nto balance this. :-)\n\n> happening is that the indexscan retains a buffer pin (but not a read\n> lock) on the index page containing the tuple it's stopped on. VACUUM\n> will not delete any tuple until it can get a \"super exclusive\" lock on\n> the page (cf. LockBufferForCleanup), and the pin prevents it from doing\n> so.\n> \n> However: suppose that some other activity causes the index page to be\n> split while the indexscan is stopped, and that the tuple it's stopped\n> on gets relocated into the new righthand page of the pair. Then the\n> indexscan is holding a pin on the wrong page --- not the one its tuple\n> is in. It would then be possible for the VACUUM to arrive at the tuple\n> and delete it before the indexscan is resumed.\n\nOK, let me see if I can summarize:\n\n\tIndex scan stops on dead index tuple, holds pin\n\tSomeone inserts/updates and the page splits\n\tVacuum comes along and removes the index tuple\n\nAnd the problem is that after the split, the index scan doesn't hold a\npin on the newly split page.\n\nThis seems more like a problem that the index scan pin doesn't prevent\nit from losing the pin after a split. Could we just block splits of\npages containing pins?
If the page splits, how does the index scan\nfind the new page to start again? Could the index scan be made to\nhandle cases where the index tuple it was stopped on is gone?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 26 Aug 2002 17:20:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Think I see a btree vacuuming bug" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Could we just block splits of\n> pages containing pins?\n\nThat's not an improvement IMHO. The objection to the fix I suggested is\nthat it makes it harder for VACUUM to make progress in the presence of\ncontention. Replacing that with an approach that blocks foreground\nprocesses from making progress is not better.\n\n> If the page splits, how does the index scan\n> find the new page to start again?\n\nIt moves right until it finds the tuple it was on. That will either be\nin the pinned page, or some page to its right.\n\n> Could the index scan be made to\n> handle cases where the index tuple it was stopped on is gone?\n\nDon't see how. With no equal keys, you could test each tuple you scan\nover to see if it's > the expected key; but that would slow things down\ntremendously I fear. In any case it fails completely when there are\nequal keys, since you could not tell where in a run of equal keys to\nresume scanning. 
You really have to find the exact index tuple you\nstopped on, AFAICS.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Aug 2002 17:36:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Think I see a btree vacuuming bug " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Could we just block splits of\n> > pages containing pins?\n> \n> That's not an improvement IMHO. The objection to the fix I suggested is\n> that it makes it harder for VACUUM to make progress in the presence of\n> contention. Replacing that with an approach that blocks foreground\n> processes from making progress is not better.\n\nYes. Considering there are splits going on where backends are losing\ntheir pins, it seems you have to either prevent the backends from losing\ntheir pins, prevent the splits, or prevent vacuum from removing tuples\non split pages that hold pins.\n\nRather than having vacuum pin all the pages, could vacuum block in cases\nwhere pins exist in pages that _could_ contain tuples caused by a recent\nsplit, meaning there are pins in pre-split locations?\n\n\n> > If the page splits, how does the index scan\n> > find the new page to start again?\n> \n> It moves right until it finds the tuple it was on. That will either be\n> in the pinned page, or some page to its right.\n> \n> > Could the index scan be made to\n> > handle cases where the index tuple it was stopped on is gone?\n> \n> Don't see how. With no equal keys, you could test each tuple you scan\n> over to see if it's > the expected key; but that would slow things down\n> tremendously I fear. In any case it fails completely when there are\n> equal keys, since you could not tell where in a run of equal keys to\n> resume scanning. You really have to find the exact index tuple you\n> stopped on, AFAICS.\n\nSo it uses the tid to find the old spot. 
Got it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 26 Aug 2002 17:48:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Think I see a btree vacuuming bug" }, { "msg_contents": "\nAny status on this? I know we talked about it but never came to a\ngood solution. Is it TODO?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is this fixed, and if not, can I have some TODO text?\n> \n> It's not fixed. I'd like to fix it for 7.3, but I was hoping someone\n> would think of a better way to fix it than I did ...\n> \n> \t\t\tregards, tom lane\n> \n> > ---------------------------------------------------------------------------\n> \n> > Tom Lane wrote:\n> >> If a VACUUM running concurrently with someone else's indexscan were to\n> >> delete the index tuple that the indexscan is currently stopped on, then\n> >> we'd get a failure when the indexscan resumes and tries to re-find its\n> >> place. (This is the infamous \"my bits moved right off the end of the\n> >> world\" error condition.) What is supposed to prevent that from\n> >> happening is that the indexscan retains a buffer pin (but not a read\n> >> lock) on the index page containing the tuple it's stopped on. VACUUM\n> >> will not delete any tuple until it can get a \"super exclusive\" lock on\n> >> the page (cf. LockBufferForCleanup), and the pin prevents it from doing\n> >> so.\n> >> \n> >> However: suppose that some other activity causes the index page to be\n> >> split while the indexscan is stopped, and that the tuple it's stopped\n> >> on gets relocated into the new righthand page of the pair. 
Then the\n> >> indexscan is holding a pin on the wrong page --- not the one its tuple\n> >> is in. It would then be possible for the VACUUM to arrive at the tuple\n> >> and delete it before the indexscan is resumed.\n> >> \n> >> This is a pretty low-probability scenario, especially given the new\n> >> index-tuple-killing mechanism (which renders it less likely that an\n> >> indexscan will stop on a vacuum-able tuple). But it could happen.\n> >> \n> >> The only solution I've thought of is to make btbulkdelete acquire\n> >> \"super exclusive\" lock on *every* leaf page of the index as it scans,\n> >> rather than only locking the pages it actually needs to delete something\n> >> from. And we'd need to tweak _bt_restscan to chain its pins (pin the\n> >> next page to the right before releasing pin on the previous page).\n> >> This would prevent a btbulkdelete scan from overtaking ordinary\n> >> indexscans, and thereby ensure that it couldn't arrive at the tuple\n> >> on which an indexscan is stopped, even with splitting.\n> >> \n> >> I'm somewhat concerned that the more stringent locking will slow down\n> >> VACUUM a good deal when there's lots of concurrent activity, but I don't\n> >> see another answer. Ideas anyone?\n> >> \n> >> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 1 Sep 2002 22:51:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Think I see a btree vacuuming bug" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Any status on this? 
I know we talked about it but never came to a\n> good solution. Is it TODO?\n\nI think it's more like MUSTFIX ... but it's a bug not a feature\naddition, so it'd be fair game to work on in beta, no?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Sep 2002 00:08:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Think I see a btree vacuuming bug " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Any status on this? I know we talked about it but never came to a\n> > good solution. Is it TODO?\n> \n> I think it's more like MUSTFIX ... but it's a bug not a feature\n> addition, so it'd be fair game to work on in beta, no?\n\nYes. I will add it to the open items list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 00:21:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Think I see a btree vacuuming bug" }, { "msg_contents": "Bruce Momjian wrote:\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Any status on this? I know we talked about it but never came to a\n> > > good solution. Is it TODO?\n> > \n> > I think it's more like MUSTFIX ... but it's a bug not a feature\n> > addition, so it'd be fair game to work on in beta, no?\n> \n> Yes. I will add it to the open items list.\n\nActually, the open items list is pretty small now:\n\n---------------------------------------------------------------------------\n\n\n P O S T G R E S Q L\n\n 7 . 3 O P E N I T E M S\n\n\nCurrent at ftp://candle.pha.pa.us/pub/postgresql/open_items.\n\nSource Code Changes\n-------------------\nSchema handling - ready? interfaces? 
client apps?\nDrop column handling - ready for all clients, apps?\nAllow easy display of usernames in a group (pg_hba.conf uses groups now)\nFix BeOS and QNX4 ports\nGet bison upgrade on postgresql.org\nFix vacuum btree bug (Tom)\n\nOn Hold\n-------\nPoint-in-time recovery\nWin32 port\nSecurity audit\n\nDocumentation Changes\n---------------------\nDocument need to add permissions to loaded functions and languages\nMove documentation to gborg for moved projects\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 00:23:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Think I see a btree vacuuming bug" }, { "msg_contents": "Bruce Momjian wrote:\n> Allow easy display of usernames in a group (pg_hba.conf uses groups now)\n\nHows this:\n\nparts=# select * from pg_group ;\n groname | grosysid | grolist\n---------+----------+---------------\n grp | 100 | {100,101,102}\n grp2 | 101 | {102}\n(2 rows)\n\nparts=# select usename,usesysid from pg_user;\n usename | usesysid\n----------+----------\n postgres | 1\n user1 | 100\n user2 | 101\n user3 | 102\n(4 rows)\n\nCREATE FUNCTION show_group(text) RETURNS SETOF text AS '\nDECLARE\n loginname text;\n low int;\n high int;\nBEGIN\n SELECT INTO low\n replace(split(array_dims(grolist),'':'',1),''['','''')::int\n FROM pg_group WHERE groname = $1;\n SELECT INTO high\n replace(split(array_dims(grolist),'':'',2),'']'','''')::int\n FROM pg_group WHERE groname = $1;\n\n FOR i IN low..high LOOP\n SELECT INTO loginname s.usename\n FROM pg_shadow s join pg_group g on s.usesysid = g.grolist[i];\n RETURN NEXT loginname;\n END LOOP;\n RETURN;\nEND;\n' LANGUAGE 'plpgsql';\n\nparts=# select * from show_group('grp');\n show_group\n------------\n user1\n user2\n user3\n(3 rows)\n\nparts=# select * 
from show_group('grp2');\n show_group\n------------\n user1\n(1 row)\n\n\n--Joe\n\n", "msg_date": "Sun, 01 Sep 2002 22:45:43 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Think I see a btree vacuuming bug" }, { "msg_contents": "\nYep, that's it!\n\n---------------------------------------------------------------------------\n\nJoe Conway wrote:\n> Bruce Momjian wrote:\n> > Allow easy display of usernames in a group (pg_hba.conf uses groups now)\n> \n> Hows this:\n> \n> parts=# select * from pg_group ;\n> groname | grosysid | grolist\n> ---------+----------+---------------\n> grp | 100 | {100,101,102}\n> grp2 | 101 | {102}\n> (2 rows)\n> \n> parts=# select usename,usesysid from pg_user;\n> usename | usesysid\n> ----------+----------\n> postgres | 1\n> user1 | 100\n> user2 | 101\n> user3 | 102\n> (4 rows)\n> \n> CREATE FUNCTION show_group(text) RETURNS SETOF text AS '\n> DECLARE\n> loginname text;\n> low int;\n> high int;\n> BEGIN\n> SELECT INTO low\n> replace(split(array_dims(grolist),'':'',1),''['','''')::int\n> FROM pg_group WHERE groname = $1;\n> SELECT INTO high\n> replace(split(array_dims(grolist),'':'',2),'']'','''')::int\n> FROM pg_group WHERE groname = $1;\n> \n> FOR i IN low..high LOOP\n> SELECT INTO loginname s.usename\n> FROM pg_shadow s join pg_group g on s.usesysid = g.grolist[i];\n> RETURN NEXT loginname;\n> END LOOP;\n> RETURN;\n> END;\n> ' LANGUAGE 'plpgsql';\n> \n> parts=# select * from show_group('grp');\n> show_group\n> ------------\n> user1\n> user2\n> user3\n> (3 rows)\n> \n> parts=# select * from show_group('grp2');\n> show_group\n> ------------\n> user1\n> (1 row)\n> \n> \n> --Joe\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ 
can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 01:48:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Think I see a btree vacuuming bug" } ]
[ { "msg_contents": "The attached patch is my first pass at a sample C function returning \nsetof composite. It is a clone of SHOW ALL as an SRF. For the moment, \nthe function is implemented as contrib/showguc, although a few minor \nchanges to guc.c and guc.h were required to support it.\n\nI wanted to post it to HACKERS because several people asked for such an \nexample. I also wanted to see if there was any interest in having this \nfolded into the backend, either in addition to, or in place of, the \ncurrent SHOW ALL functionality.\n\nComments?\n\nThanks,\n\nJoe", "msg_date": "Sat, 25 May 2002 23:24:21 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "sample SRF: SHOW ALL equiv C function returning setof composite" } ]
[ { "msg_contents": "\nUsing a recent build (22.5) from CVS, if I create a set returning\nfunction in SQL like this:\n\nfunc_test=# CREATE TABLE foo (id INT, txt1 TEXT, txt2 TEXT);\nCREATE TABLE\nfunc_test=# INSERT INTO foo VALUES(1, 'Hello','World');\nINSERT 24819 1\nfunc_test=# \nfunc_test=# CREATE OR REPLACE FUNCTION bar(int)\nfunc_test-# RETURNS SETOF foo\nfunc_test-# AS 'SELECT * FROM foo WHERE id = $1'\nfunc_test-# LANGUAGE 'sql';\nCREATE FUNCTION\n\nI can do this (expected result):\n\nfunc_test=# SELECT txt1, txt2 FROM bar(1);\n txt1 | txt2 \n-------+-------\n Hello | World\n(1 row)\n\nbut also this:\n\nfunc_test=# select bar(1);\n bar \n-----------\n 139059784\n(1 row)\n\nWhat is this number? It often varies from query to query.\nPossibly an error-in-disguise because of something to do\nwith the calling context?\n\n\nJust curious ;-)\n\n\nIan Barwick\n\n", "msg_date": "Sun, 26 May 2002 08:28:12 +0200", "msg_from": "Ian Barwick <barwick@gmx.net>", "msg_from_op": true, "msg_subject": "Q: unexpected result from SRF in SQL" }, { "msg_contents": "Ian Barwick wrote:\n> but also this:\n> \n> func_test=# select bar(1);\n> bar \n> -----------\n> 139059784\n> (1 row)\n> \n> What is this number? It often varies from query to query.\n> Possibly an error-in-disguise because of something to do\n> with the calling context?\n\nThis is an illustration of why the expression SRF API isn't very useful \nfor returning composite types ;)\n\nThe number is actually a pointer to the result row. There is no way \nunder the expression API to get at the individual columns directly. 
If \nyou're really curious, see contrib/dblink in 7.2.x for an example of a \n(ugly) workaround.\n\nJoe\n\n\n", "msg_date": "Sun, 26 May 2002 08:11:35 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Q: unexpected result from SRF in SQL" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> This is an illustration of why the expression SRF API isn't very useful \n> for returning composite types ;)\n> The number is actually a pointer to the result row. There is no way \n> under the expression API to get at the individual columns directly.\n\nYou can get at one column --- as of 7.3 it is possible to do\n\n\tSELECT (bar(1)).field2;\n\n(the parens are required to avoid syntax conflicts). However SELECT is\nnot bright enough to do anything useful with a composite value directly.\n\nLong ago (ie, in Postquel days) there seems to have been support for\nbreaking apart a composite result into multiple output columns.\n(I *think* that was what the \"fjoin\" variant of targetlists was for.)\nBut it's been dead code for a long time --- probably Yu and Chen broke\nit while converting the system to use SQL-spec syntax for SELECTs.\n\nI am thinking that in 7.3 we might admit that that code's never gonna\nget fixed, and alter SELECT so that a composite result appearing in a\nSELECT targetlist draws an error.\n\nIf anyone does someday resurrect fjoin-like functionality, a reasonable\nSQL-style syntax for invoking it would be\n\n\tSELECT (bar(1)).*;\n\nwhich would still leave us wanting to raise an error if you just write\n\"SELECT bar(1)\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 26 May 2002 11:58:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Q: unexpected result from SRF in SQL " }, { "msg_contents": "On Sunday 26 May 2002 17:58, Tom Lane wrote:\n(...)\n> If anyone does someday resurrect fjoin-like functionality, a reasonable\n> SQL-style syntax for invoking it 
would be\n>\n> \tSELECT (bar(1)).*;\n>\n> which would still leave us wanting to raise an error if you just write\n> \"SELECT bar(1)\".\n\nThe reason why I posted the question is that I had defined a function\nthat should have worked, but kept giving me back strange numbers,\nso I spent a whole five minutes trying to debug the function before\nI realised I was calling it in the wrong way (doh). An error here would\nbe a Good Idea, IMHO.\n\nIan Barwick\n\n\n", "msg_date": "Sun, 26 May 2002 19:04:04 +0200", "msg_from": "Ian Barwick <barwick@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Q: unexpected result from SRF in SQL" } ]
[ { "msg_contents": "Is there a reason for the following behavior?\n\nnconway=# create table a (col1 int);\nCREATE TABLE\nnconway=# insert into a default values;\nINSERT 1883513 1\nnconway=# copy a to '/tmp/output';\nCOPY\nnconway=# create view myview as select * from a;\nCREATE VIEW\nnconway=# copy myview to '/tmp/output';\nERROR: You cannot copy view myview\n\nI can understand not allowing COPY FROM to target a view\n(or at least, a view without an insertion rule defined) --\nbut is there a similar reason for disallowing copying data\nout of a view?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Sun, 26 May 2002 02:32:01 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "COPY and views" }, { "msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> I can understand not allowing COPY FROM to target a view\n> (or at least, a view without an insertion rule defined) --\n> but is there a similar reason for disallowing copying data\n> out of a view?\n\nAllowing either would take COPY out of the realm of utility statements\nand into the realm of plannable queries --- in particular, COPY from a\nview would have to have full SELECT capability, with only a slightly\ndifferent user interface for delivering the tuples.\n\nThis would not necessarily be a bad idea ... but it would be a major\nrewrite of COPY.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 26 May 2002 11:37:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: COPY and views " } ]
[ { "msg_contents": "Hi everyone.\n\nI just moved (at last!) to 7.2.1. Works like a charm...\nI'm surprised though by the number of WAL files.\n\nI have 8 files where postgresql.conf says WAL_FILES=4.\n\nWhat did I miss ? (I have no outstanding transaction)\n\nFWIW, it's on UW711.\n\nRegards,\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Sun, 26 May 2002 18:55:21 +0200 (MET DST)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "WAL FILES" }, { "msg_contents": "Olivier PRENANT wrote:\n> Hi everyone.\n> \n> I just moved (at last!) to 7.2.1. Works like a charm...\n> I'm surprised though by the number of WAL files.\n> \n> I have 8 files where postgresql.conf says WAL_FILES=4.\n> \n> What did I miss ? (I have no outstanding transaction)\n> \n> FWIW, it's on UW711.\n\nNo, you are fine. The current GUC params are confusing. I did update\nthe documentation for 7.3, but I plan to reorganize those params to be\nmore meaningful.\n\nActually, I have in TODO:\n\n Remove wal_files postgresql.conf option because WAL files are now\n recycled \n\nbecause the param no longer controls what you think it controls. In 7.1\nWAL files were not recycled, so WAL_FILES was used to pre-allocate\nfiles so there wasn't as much happening during checkpoint. Now, with\nrecycling, there is no need.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 May 2002 17:17:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL FILES" }, { "msg_contents": "Hi Bruce,\n\nThank you for your reply. It makes a lot of sense!\nHowever I don't really understand why we can't control the NUMBER of\nfiles.\nAre the 8 files I see a maximum usage when I reloaded the databases on the\nnew system or is it some sort of \"plugged in value\"?\n\nThank you for your explanation.\nOn Mon, 27 May 2002, Bruce Momjian wrote:\n\n> Date: Mon, 27 May 2002 17:17:58 -0400 (EDT)\n> From: Bruce Momjian <pgman@candle.pha.pa.us>\n> To: ohp@pyrenet.fr\n> Cc: pgsql-hackers list <pgsql-hackers@postgresql.org>\n> Subject: Re: [HACKERS] WAL FILES\n> \n> Olivier PRENANT wrote:\n> > Hi everyone.\n> > \n> > I just moved (at last!) to 7.2.1. Works like a charm...\n> > I'm surprised though by the number of WAL files.\n> > \n> > I have 8 files where postgresql.conf says WAL_FILES=4.\n> > \n> > What did I miss ? (I have no outstanding transaction)\n> > \n> > FWIW, it's on UW711.\n> \n> No, you are fine. The current GUC params are confusing. I did update\n> the documentation for 7.3, but I plan to reorganize those params to be\n> more meaningful.\n> \n> Actually, I have in TODO:\n> \n> Remove wal_files postgresql.conf option because WAL files are now\n> recycled \n> \n> because the param no longer controls what you think it controls. In 7.1\n> WAL files were not recycled, so WAL_FILES was used to pre-allocate\n> files so there wasn't as much happening during checkpoint. Now, with\n> recycling, there is no need.\n> \n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. 
(St Exupery)\n\n", "msg_date": "Tue, 28 May 2002 14:28:13 +0200 (MET DST)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "Re: WAL FILES" }, { "msg_contents": "\n8 is the maximum unless WAL files have to be created _while_ the\ncheckpoint is taking place.\n\nCurrent CVS SGML has:\n\n The number of 16MB segment files will always be at least\n <varname>WAL_FILES</varname> + 1, and will normally not exceed\n <varname>WAL_FILES</varname> + MAX(<varname>WAL_FILES</varname>,\n <varname>CHECKPOINT_SEGMENTS</varname>) + 1.\n\nThe real driver here is CHECKPOINT_SEGMENTS because WAL_FILES is going\naway in 7.3 and will just be dynamically used. The typical setup is\ncheckpoint_segments files. I will also add better reporting so you can\nknow if your checkpoint_segments is too small, causing checkpoints too\nfrequently.\n\n---------------------------------------------------------------------------\n\nOlivier PRENANT wrote:\n> Hi Bruce,\n> \n> Thank you for your reply. It makes a lot of sense!\n> However I don't really understand why we can't control the NUMBER of\n> files.\n> Are the 8 files I see a maximum usage when I reloaded the databases on the\n> new system or is it some sort of \"plugged in value\"?\n> \n> Thank you for your explanation.\n> On Mon, 27 May 2002, Bruce Momjian wrote:\n> \n> > Date: Mon, 27 May 2002 17:17:58 -0400 (EDT)\n> > From: Bruce Momjian <pgman@candle.pha.pa.us>\n> > To: ohp@pyrenet.fr\n> > Cc: pgsql-hackers list <pgsql-hackers@postgresql.org>\n> > Subject: Re: [HACKERS] WAL FILES\n> > \n> > Olivier PRENANT wrote:\n> > > Hi everyone.\n> > > \n> > > I just moved (at last!) to 7.2.1. Works like a charm...\n> > > I'm surprised though by the number of WAL files.\n> > > \n> > > I have 8 files where postgresql.conf says WAL_FILES=4.\n> > > \n> > > What did I miss ? (I have no outstanding transaction)\n> > > \n> > > FWIW, it's on UW711.\n> > \n> > No, you are fine. The current GUC params are confusing. 
I did update\n> > the documentation for 7.3, but I plan to reorganize those params to be\n> > more meaningful.\n> > \n> > Actually, I have in TODO:\n> > \n> > Remove wal_files postgresql.conf option because WAL files are now\n> > recycled \n> > \n> > because the param no longer controls what you think it controls. In 7.1\n> > WAL files were not recycled, so WAL_FILES was used to pre-allocate\n> > files so there wasn't as much happening during checkpoint. Now, with\n> > recycling, there is no need.\n> > \n> > \n> \n> -- \n> Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n> Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> FRANCE Email: ohp@pyrenet.fr\n> ------------------------------------------------------------------------------\n> Make your life a dream, make your dream a reality. (St Exupery)\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 28 May 2002 21:02:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL FILES" } ]
[ { "msg_contents": "Here is a revised patch for a sample C function returning setof \ncomposite. (Same comments as last time -- It is a clone of SHOW ALL as \nan SRF. For the moment, the function is implemented as contrib/showguc, \nalthough a few minor changes to guc.c and guc.h were required to support \nit.)\n\nThis version includes pieces that may be appropriate for fmgr.c and \nfmgr.h, to hide some of the ugliness and facilitate writing C functions \nwhich return setof composite. The API is something like this:\n\nAn SRF written in C must define a variable of type (FuncCallContext *). \nThe structure of FuncCallContext looks like:\n\ntypedef struct\n{\n /* Number of times we've been called before */\n uint call_cntr;\n\n /* Maximum number of calls */\n uint max_calls;\n\n /* pointer to result slot */\n TupleTableSlot *slot;\n\n /* pointer to misc context info */\n void\t*fctx;\n\n /* pointer to array of attribute \"type\"in finfo */\n FmgrInfo *att_in_funcinfo;\n\n /* pointer to array of attribute type typelem */\n Oid *elements;\n\n /* memory context used to initialize structure */\n MemoryContext fmctx;\n} FuncCallContext;\n\nThe first line after the function declarations should be:\n\n FUNC_MULTIPLE_RESULT(funcctx, relname, max_calls, misc_ctx);\n\nwhere\n funcctx is the pointer to FuncCallContext. This is required.\n relname is the relation name for the composite type the function\n returns. This is required.\n max_calls is the maximum number of times the function is expected\n to return results. You don't have to provide or use this.\n misc_ctx is a pointer available for the user to store anything needed\n to retain context from call to call (i.e. this is what you\n previously might have assigned to fcinfo->flinfo->fn_extra).\n You don't have to provide or use this.\n\nNext, use funcctx->call_cntr and funcctx->max_calls (or whatever method \nyou want) to determine whether or not the function is done returning \nresults. 
If not, prepare an array of C strings representing the \nattribute values of your return tuple, and call:\n\n FUNC_BUILD_SLOT(values, funcctx);\n\nThis applies the attribute \"in\" functions to your values, and stores the \nresults in a TupleTableSlot.\n\nNext, clean up as appropriate, and call:\n\n FUNC_RETURN_NEXT(funcctx);\n\nThis increments funcctx->call_cntr in preparation for the next call, \nsets rsi->isDone = ExprMultipleResult to let the caller know we're not \ndone yet, and returns the slot.\n\nFinally, when funcctx->call_cntr = funcctx->max_calls, call:\n\n FUNC_RETURN_DONE(funcctx);\n\nThis does some cleanup, sets rsi->isDone = ExprEndResult, and returns a \nNULL slot.\n\nI made some changes to pull as much into the first call initialization \nas possible to save redundant work on subsequent calls. For the 96 rows \nreturned by this function, EXPLAIN ANALYZE time went from ~1.6 msec \nusing the first patch, to ~0.9 msec using this one.\n\nComments? Should this (maybe with a few tweaks) go into fmgr.c and \nfmgr.h as the SRF API?\n\nThanks,\n\nJoe", "msg_date": "Sun, 26 May 2002 18:30:32 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "revised sample SRF C function; proposed SRF API" }, { "msg_contents": "Joe Conway wrote:\n> Here is a revised patch for a sample C function returning setof \n> composite. (Same comments as last time -- It is a clone of SHOW ALL as \n> an SRF. For the moment, the function is implemented as contrib/showguc, \n> although a few minor changes to guc.c and guc.h were required to support \n> it.)\n> \n> This version includes pieces that may be appropriate for fmgr.c and \n> fmgr.h, to hide some of the ugliness and facilitate writing C functions \n> which return setof composite. The API is something like this:\n> \n\nSorry -- I was a bit too quick with the last patch :-(. It generates a \n\"Cache reference leak\" warning. 
This one is better.\n\nJoe", "msg_date": "Sun, 26 May 2002 19:26:36 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: revised sample SRF C function; proposed SRF API" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> If not, prepare an array of C strings representing the \n> attribute values of your return tuple, and call:\n> FUNC_BUILD_SLOT(values, funcctx);\n\nI think that's a poor choice of abstraction, as it forces the user into\nthe least-efficient-possible way of building a return tuple. What if\nhe's already got a tuple (eg, he read it off disk), or at any rate has\ndatums already in internal format? I'd say make it\n\n\tFUNC_RETURN_NEXT(funcctx, HeapTuple)\n\nand let the caller worry about calling heap_formtuple or otherwise\nconstructing the tuple.\n\nFor similar reasons I think the initial call ought to provide a\nTupleDesc structure, not a relation name (which is at least two lookups\nremoved from the information you actually need).\n\nThe max_calls thing doesn't seem quite right either; at least not as\nsomething that has to be provided in the \"first line after the function\ndeclarations\". It might be quite expensive to derive, and you don't\nneed to do so on every call. 
Perhaps better have the macro return a\nboolean indicating whether this is the first call or not, and then\npeople can do\n\n\tif (FUNC_MULTIPLE_RESULT(funcctx))\n\t{\n\t\t// do one-time setup here,\n\t\t// including possibly computing a max_calls value;\n\t\t// also find or make a TupleDesc to be stored into the\n\t\t// funcctx.\n\t}\n\nSimilarly I'm confused about the usefulness of misc_ctx if it has to be\nre-provided on every call.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 May 2002 16:18:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: revised sample SRF C function; proposed SRF API " }, { "msg_contents": "Joe Conway writes:\n\n> Here is a revised patch for a sample C function returning setof\n> composite. (Same comments as last time -- It is a clone of SHOW ALL as\n> an SRF. For the moment, the function is implemented as contrib/showguc,\n> although a few minor changes to guc.c and guc.h were required to support\n> it.)\n\nWe need a function like this in the main line. The \"show all\" variety\nisn't top priority, but we need something that gets you the \"show\" result\nas a query output. The original idea was to make SHOW return a query\nresult directly, but a function is fine with me too.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 27 May 2002 22:24:34 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: revised sample SRF C function; proposed SRF API" }, { "msg_contents": "Peter Eisentraut wrote:\n> We need a function like this in the main line. The \"show all\" variety\n> isn't top priority, but we need something that gets you the \"show\" result\n> as a query output. The original idea was to make SHOW return a query\n> result directly, but a function is fine with me too.\n> \n\nOriginally I wrote this as \"showvars(varname)\" and accepted 'all' in a \nsimilar fashion to SHOW ALL. 
But it seemed redundant since you can still do:\n\ntest=# select * from showvars() where varname = 'wal_sync_method';\n varname | varval\n-----------------+-----------\n wal_sync_method | fdatasync\n(1 row)\n\nbut you can also do:\n\ntest=# select * from showvars() where varname like 'show%';\n varname | varval\n---------------------+--------\n show_executor_stats | off\n show_parser_stats | off\n show_planner_stats | off\n show_query_stats | off\n show_source_port | off\n(5 rows)\n\nwhich also seemed useful.\n\nI was thinking that if we wanted to replace SHOW X with this, it could \nbe done in the parser by rewriting it as \"SELECT * FROM showvars() WHERE \nvarname = 'X'\", or for SHOW ALL just \"SELECT * FROM showvars()\".\n\nIn any case, I'll fit the showvars() function into the backend and \nsubmit a patch.\n\nThanks,\n\nJoe\n\n", "msg_date": "Mon, 27 May 2002 14:56:07 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: revised sample SRF C function; proposed SRF API" }, { "msg_contents": "Tom Lane wrote:\n > Joe Conway <mail@joeconway.com> writes:\n >\n >> If not, prepare an array of C strings representing the attribute\n >> values of your return tuple, and call: FUNC_BUILD_SLOT(values,\n >> funcctx);\n >\n > I think that's a poor choice of abstraction, as it forces the user\n > into the least-efficient-possible way of building a return tuple.\n > What if he's already got a tuple (eg, he read it off disk), or at\n > any rate has datums already in internal format? 
I'd say make it\n >\n > FUNC_RETURN_NEXT(funcctx, HeapTuple)\n >\n > and let the caller worry about calling heap_formtuple or otherwise\n > constructing the tuple.\n\nHmmm - well, I agree that FUNC_RETURN_NEXT(funcctx, HeapTuple) is a\nbetter abstraction, particularly for experienced backend hackers ;)\nbut I was trying to also make this accessible to someone writing a\ncustom C function that isn't necessarily very familiar with forming\ntheir own HeapTuples manually. What if we also had something like:\n\n FUNC_BUILD_TUPLE(values, funcctx);\n\nwhich returns a tuple for the less experienced folks (or people like me\nwhen I'm being lazy :)) It could be used when desired, or skipped\nentirely if a HeapTuple is already easily available.\n\n\n >\n > For similar reasons I think the initial call ought to provide a TupleDesc\n > structure, not a relation name (which is at least two lookups removed\n > from the information you actually need).\n\nSame comments. How about:\n FUNC_BUILD_TUPDESC(_relname)\nand\n FUNC_MULTIPLE_RESULT(_funcctx, _tupdesc, _max_calls, _fctx)\n?\n\nPower hackers could skip FUNC_BUILD_TUPDESC if they wanted to or already\nhad a TupleDesc available.\n\nOf course you would only want to build your tupdesc during the first\npass, so maybe we'd need\n FUNC_IS_FIRSTPASS()\nwhich would just check for (fcinfo->flinfo->fn_extra == NULL)\n\n >\n > The max_calls thing doesn't seem quite right either; at least not as\n > something that has to be provided in the \"first line after the\n > function declarations\". It might be quite expensive to derive, and\n > you don't need to do so on every call.\n\nI thought about that, but the value is not required at all, and you can\neasily set it later when more convenient. 
Perhaps it should be taken out\nof the initialization and we just document how it might be used?\n\n\n > Perhaps better have the macro return a boolean indicating whether\n > this is the first call or not, and then people can do\n >\n > if (FUNC_MULTIPLE_RESULT(funcctx)) { // do one-time setup here, //\n > including possibly computing a max_calls value; // also find or make\n > a TupleDesc to be stored into the // funcctx. }\n\nhmm - see complete new example below.\n\n >\n > Similarly I'm confused about the usefulness of misc_ctx if it has to\n > be re-provided on every call.\n\nLike max_calls, maybe it should be taken out of the initialization and\nits potential use documented.\n\nOn second thought, I think maybe I tried to do too much with \nFUNC_MULTIPLE_RESULT. It does initialization during the first pass, and \nthen does per call setup for subsequent calls. Maybe there should be:\n\n FUNC_FIRSTCALL_INIT\nand\n FUNC_PERCALL_SETUP\n\nThen the whole API looks something like:\n\nDatum\nmy_Set_Returning_Function(PG_FUNCTION_ARGS)\n{\n FuncCallContext *funcctx;\n <user defined declarations>\n\n /*\n * Optional - user defined code needed to be called\n * on every pass\n */\n <user defined code>\n\n if(FUNC_IS_FIRSTPASS())\n {\n /*\n * Optional - user defined initialization which is only\n * required during the first pass through the function\n */\n <user defined code>\n\n /*\n * Optional - if desired, use this to get a TupleDesc\n * based on the function's return type relation\n */\n FUNC_BUILD_TUPDESC(_relname);\n\n /*\n * Required - memory allocation and initialization\n * which is only required during the first pass through\n * the function\n */\n FUNC_FIRSTCALL_INIT(funcctx, tupdesc);\n\n /*\n * optional - total number of tuples to be returned.\n *\n */\n funcctx->max_calls = my_max_calls;\n\n /*\n * optional - pointer to structure containing\n * user defined context\n */\n funcctx->fctx = my_func_context_pointer;\n }\n\n /*\n * Required - per call setup\n */\n 
FUNC_PERCALL_SETUP(funcctx)\n\n /*\n * Here we need to test whether or not we're all out\n * of tuples to return. The test does not have to be\n * this one, but in many cases this is probably what\n * you'll want.\n */\n if (call_cntr < max_calls)\n {\n /*\n * user code to derive data to be returned\n */\n <user defined code>\n\n /*\n * Optional - build a HeapTuple given user data\n * in C string form\n * values is an array of C strings, one for each\n * attribute of the return tuple\n */\n tuple = FUNC_BUILD_TUPLE(values, funcctx);\n\n /*\n * Required - returns the tuple and notifies\n * the caller that we still have more to do\n */\n FUNC_RETURN_NEXT(funcctx, HeapTuple);\n }\n else\n {\n /*\n * Required - returns NULL tuple and notifies\n * caller that we're all done now.\n */\n FUNC_RETURN_DONE(funcctx);\n }\n}\n\nHow's this look? Any better?\n\nThanks,\n\nJoe\n\n", "msg_date": "Mon, 27 May 2002 22:00:25 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: revised sample SRF C function; proposed SRF API" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> What if we also had something like:\n> FUNC_BUILD_TUPLE(values, funcctx);\n> which returns a tuple for the less experienced folks\n\nSure, as long as it's not getting in the way when you don't want it.\nFor that matter the FUNC stuff shouldn't get in the way of using it\nin other contexts, so I'd suggest decoupling it from funcctx. 
Why\nnot\n\tHeapTuple BuildTupleFromStrings(TupDesc, char**)\n(better choice of name welcome).\n\n\n> Then the whole API looks something like:\n\n> Datum\n> my_Set_Returning_Function(PG_FUNCTION_ARGS)\n> {\n> FuncCallContext *funcctx;\n> <user defined declarations>\n\n> /*\n> * Optional - user defined code needed to be called\n> * on every pass\n> */\n> <user defined code>\n\n> if(FUNC_IS_FIRSTPASS())\n> {\n> /*\n> * Optional - user defined initialization which is only\n> * required during the first pass through the function\n> */\n> <user defined code>\n\n> /*\n> * Optional - if desired, use this to get a TupleDesc\n> * based on the function's return type relation\n> */\n> FUNC_BUILD_TUPDESC(_relname);\n\n> /*\n> * Required - memory allocation and initialization\n> * which is only required during the first pass through\n> * the function\n> */\n> FUNC_FIRSTCALL_INIT(funcctx, tupdesc);\n\nI think this should be\n\n\tfuncctx = FUNC_FIRSTCALL_INIT(tupdesc);\n\nto make it clearer that it is initializing funcctx. Similarly\nFUNC_BUILD_TUPDESC should be more like\n\n\ttupdesc = RelationNameGetTupleDesc(relname);\n\nsince it's not particularly tied to this usage.\n\n> /*\n> * optional - total number of tuples to be returned.\n> *\n> */\n> funcctx->max_calls = my_max_calls;\n\n> /*\n> * optional - pointer to structure containing\n> * user defined context\n> */\n> funcctx->fctx = my_func_context_pointer;\n> }\n\n> /*\n> * Required - per call setup\n> */\n> FUNC_PERCALL_SETUP(funcctx)\n\nAgain I'd prefer\n\n\tfuncctx = FUNC_PERCALL_SETUP();\n\nI think this is easier for both humans and compilers to recognize\nas an initialization of funcctx.\n\n> How's this look? Any better?\n\nDefinitely better. I'd suggest also thinking about whether the\nsame/similar macros can support functions that return a set of a\nscalar (non-tuple) datatype. 
In my mind, the cleanest design would\nbe some base macros that support functions-returning-set (of anything),\nand if you want to return a set of scalar then you just use these\ndirectly (handing a Datum to FUNC_RETURN_NEXT). If you want to return\na set of tuples then there are a couple extra steps that you need to\ndo to build a tupdesc, build a tuple, and convert the tuple to Datum\n(which at the moment you do by putting it into a slot, but I think we\nought to change that soon). If it were really clean then the macros\nsupporting these extra steps would also work without the SRF macros,\nso that you could use 'em in a function returning a single tuple.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 May 2002 10:50:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: revised sample SRF C function; proposed SRF API " }, { "msg_contents": "Tom Lane wrote:\n> Definitely better. I'd suggest also thinking about whether the\n> same/similar macros can support functions that return a set of a\n> scalar (non-tuple) datatype. In my mind, the cleanest design would\n> be some base macros that support functions-returning-set (of anything),\n> and if you want to return a set of scalar then you just use these\n> directly (handing a Datum to FUNC_RETURN_NEXT). If you want to return\n> a set of tuples then there are a couple extra steps that you need to\n> do to build a tupdesc, build a tuple, and convert the tuple to Datum\n> (which at the moment you do by putting it into a slot, but I think we\n> ought to change that soon). If it were really clean then the macros\n> supporting these extra steps would also work without the SRF macros,\n> so that you could use 'em in a function returning a single tuple.\n> \n\nSorry for the long delay. I just got back to this today, and I've run \ninto an interesting question.\n\nI have a proposal and patch almost ready which I think pretty much meets \nthe above design requirements. 
I also wanted to incorporate a built-in \nfunction for returning guc variables (varname text, varval text), \nconsistent with previous posts. This is both useful and a good test of \nthe Composite & SRF function API (the API includes functions/macros to \nfacilitate returning composite types, and an independent set of \nfunctions/macros for returning sets, whether composite or scalar).\n\nThe question is how to best bootstrap this new function. In order to \ncreate the pg_proc entry I need the return type oid. If I understand \ncorrectly, in order to get a composite return type, with a known oid, I \nwould need to create a bootstrapped relation and the corresponding \nbootstrapped pg_type entry.\n\nIs there any alternative? It seems ugly to bootstrap so many objects for \nevery (future) builtin function which returns a composite type.\n\nThanks,\n\nJoe\n\n\n\n", "msg_date": "Thu, 06 Jun 2002 22:18:14 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: revised sample SRF C function; proposed SRF API" }, { "msg_contents": "Tom Lane wrote:\n> Definitely better. I'd suggest also thinking about whether the\n> same/similar macros can support functions that return a set of a\n> scalar (non-tuple) datatype. In my mind, the cleanest design would\n> be some base macros that support functions-returning-set (of anything),\n> and if you want to return a set of scalar then you just use these\n> directly (handing a Datum to FUNC_RETURN_NEXT). If you want to return\n> a set of tuples then there are a couple extra steps that you need to\n> do to build a tupdesc, build a tuple, and convert the tuple to Datum\n> (which at the moment you do by putting it into a slot, but I think we\n> ought to change that soon). 
If it were really clean then the macros\n> supporting these extra steps would also work without the SRF macros,\n> so that you could use 'em in a function returning a single tuple.\n\nI have a patch ready now which I think meets the design requirements \nabove for the most part. The API is in two pieces: one which aids in \ncreation of functions which return composite types; the other helps with \nSRFs. The comments in funcapi.h summarize the API:\n\n/*-------------------------------------------------------------------------\n *\tSupport to ease writing Functions returning composite types\n *\n * External declarations:\n * TupleDesc RelationNameGetTupleDesc(char *relname) - Use to get a\n * TupleDesc based on the function's return type relation.\n * TupleDesc TypeGetTupleDesc(Oid typeoid, List *colaliases) - Use to\n * get a TupleDesc based on the function's type oid. This can be\n * used to get a TupleDesc for a base (scalar), or composite type.\n * TupleTableSlot *TupleDescGetSlot(TupleDesc tupdesc) - Initialize a\n * slot given a TupleDesc.\n * AttInMetadata *TupleDescGetAttInMetadata(TupleDesc tupdesc) - Get a\n * pointer to AttInMetadata based on the function's TupleDesc.\n * AttInMetadata can be used in conjunction with C strings to\n * produce a properly formed tuple. Store the metadata here for\n * use across calls to avoid redundant work.\n * HeapTuple BuildTupleFromCStrings(AttInMetadata *attinmeta,\n * char **values) -\n * build a HeapTuple given user data in C string form. 
values is an\n * array of C strings, one for each attribute of the return tuple.\n *\n * Macro declarations:\n * TupleGetDatum(TupleTableSlot *slot, HeapTuple tuple) - get a Datum\n * given a tuple and a slot.\n */\n\n\n/*-------------------------------------------------------------------------\n *\t\tSupport for Set Returning Functions (SRFs)\n *\n * The basic API for SRFs looks something like:\n *\n * Datum\n * my_Set_Returning_Function(PG_FUNCTION_ARGS)\n * {\n * \tFuncCallContext\t *funcctx;\n * \tDatum\t\tresult;\n * \t<user defined declarations>\n *\n * \tif(SRF_IS_FIRSTPASS())\n * \t{\n * \t\t<user defined code>\n * \t\t<obtain slot>\n * \t\tfuncctx = SRF_FIRSTCALL_INIT(slot);\n * \t\t<user defined code>\n * \t}\n * \t<user defined code>\n * \tfuncctx = SRF_PERCALL_SETUP(funcctx);\n * \t<user defined code>\n *\n * \tif (funcctx->call_cntr < funcctx->max_calls)\n * \t{\n * \t\t<user defined code>\n * \t\t<obtain result Datum>\n * \t\tSRF_RETURN_NEXT(funcctx, result);\n * \t}\n * \telse\n * \t{\n * \t\tSRF_RETURN_DONE(funcctx);\n * \t}\n * }\n *\n */\n\nIf interested in more details, the patch will be sent shortly to \nPATCHES. Included in the patch is a reference implementation of the API, \nshow_all_vars() (showguc_all() in guc.c). This returns the same \ninformation as SHOW ALL, but as an SRF, allowing, for example, the \nfollowing:\n\ntest=# select * from show_all_vars() where varname like 'cpu%';\n varname | varval\n----------------------+--------\n cpu_index_tuple_cost | 0.001\n cpu_operator_cost | 0.0025\n cpu_tuple_cost | 0.01\n(3 rows)\n\nComments/thoughts?\n\nThanks,\n\nJoe\n\n", "msg_date": "Fri, 07 Jun 2002 16:05:19 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: revised sample SRF C function; proposed SRF API" }, { "msg_contents": "Tom Lane wrote:\n> Definitely better. 
I'd suggest also thinking about whether the\n> same/similar macros can support functions that return a set of a\n> scalar (non-tuple) datatype. In my mind, the cleanest design would\n> be some base macros that support functions-returning-set (of anything),\n> and if you want to return a set of scalar then you just use these\n> directly (handing a Datum to FUNC_RETURN_NEXT). If you want to return\n> a set of tuples then there are a couple extra steps that you need to\n> do to build a tupdesc, build a tuple, and convert the tuple to Datum\n> (which at the moment you do by putting it into a slot, but I think we\n> ought to change that soon). If it were really clean then the macros\n> supporting these extra steps would also work without the SRF macros,\n> so that you could use 'em in a function returning a single tuple.\n\nAs promised on HACKERS, here's the Composite and SRF function API patch. \nIncluded is a new builtin guc function, exported as show_all_vars(). In \norder to avoid creating a new bootstrapped relation, I made the pg_proc \nentry specify 0 as the function return type, and then fixed it in \ninitdb.sh as follows:\n\n$ECHO_N \"setting return type for composite returning functions... \"$ECHO_C\n\n\"$PGPATH\"/postgres $PGSQL_OPT template1 >/dev/null <<EOF\nCREATE VIEW pg__guc AS \\\n SELECT \\\n ''::text as varname, \\\n ''::text as varval;\n\nUPDATE pg_proc SET \\\n prorettype = (SELECT oid FROM pg_type WHERE typname = 'pg__guc') \\\n WHERE \\\n proname = 'show_all_vars';\n\nEOF\nif [ \"$?\" -ne 0 ]; then\n exit_nicely\nfi\necho \"ok\"\n\nAny concerns with this approach? 
Is it too much of a hack, or \npreferable to adding new bootstrapped relations solely for this purpose?\n\nThanks,\n\nJoe", "msg_date": "Fri, 07 Jun 2002 16:11:20 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] revised sample SRF C function; proposed SRF API" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> The question is how to best bootstrap this new function. In order to \n> create the pg_proc entry I need the return type oid. If I understand \n> correctly, in order to get a composite return type, with a known oid, I \n> would need to create a bootstrapped relation and the corresponding \n> bootstrapped pg_type entry.\n\nWell, we're not doing that; and I see no good reason to make the thing\nbe a builtin function at all. Since it's just an example, it can very\nwell be a contrib item with a creation script. Probably *should* be,\nin fact, because dynamically created functions are what other people are\ngoing to be building; an example of how to do it as a builtin function\nisn't as helpful.\n\nFurther down the road it may be that we'll get around to allowing\nfreestanding composite types (ie, ones with no associated table).\nThat would make it less painful to have builtin functions returning\ntuples --- though not by a lot, since you'd still have to manufacture\npg_type and pg_attribute rows for 'em by hand. I'm not in a hurry to do\nthat in any case, because of the extent of restructuring of pg_class,\npg_type, and pg_attribute that would be needed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jun 2002 21:00:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: revised sample SRF C function; proposed SRF API " }, { "msg_contents": "Tom Lane wrote:\n> Well, we're not doing that; and I see no good reason to make the thing\n> be a builtin function at all. 
Since it's just an example, it can very\n> well be a contrib item with a creation script. Probably *should* be,\n> in fact, because dynamically created functions are what other people are\n> going to be building; an example of how to do it as a builtin function\n> isn't as helpful.\n\nTrue enough, although I could always create another example for contrib. \nReturning GUC variable \"SHOW ALL\" results as a query result has been \ndiscussed before, and I thought there was agreement that it was a \ndesirable backend feature.\n\nIs the approach in my patch still too ugly to allow a builtin SRF (set \nthe function return type to 0 in pg_proc.h, create a view and fix the \npg_proc entry during initdb)? If so, I'll rework the patch into two \npatches: one for the composite/set returning function api, and one for \nshow_all_vars() as a contrib/SRF example. If not, I'll just come up with \nanother function for contrib to serve as a reference implementation for \nothers.\n\nThanks,\n\nJoe\n\n", "msg_date": "Sat, 08 Jun 2002 19:32:04 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: revised sample SRF C function; proposed SRF API" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Returning GUC variable \"SHOW ALL\" results as a query result has been \n> discussed before, and I thought there was agreement that it was a \n> desirable backend feature.\n\nSo it is, but I had expected it to be implemented by changing the\nbehavior of SHOW, same as we did for EXPLAIN.\n\n> Is the approach in my patch still too ugly to allow a builtin SRF (set \n> the function return type to 0 in pg_proc.h, create a view and fix the \n> pg_proc entry during initdb)?\n\nToo ugly for my taste anyway ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jun 2002 11:59:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: revised sample SRF C function; proposed SRF API " }, { "msg_contents": "Tom 
Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>>Is the approach in my patch still too ugly to allow a builtin SRF (set \n>>the function return type to 0 in pg_proc.h, create a view and fix the \n>>pg_proc entry during initdb)?\n> \n> Too ugly for my taste anyway ...\n\nOK.\n\nHere is a patch for Composite and Set returning function support. I made \ntwo small changes to the API since last patch, which hopefully completes \nthe decoupling of composite function support from SRF specific support. \nIf there are no (further ;-)) objections, please apply. I'll send \nanother post with a patch for contrib/showguc.\n\nThanks,\n\nJoe", "msg_date": "Sun, 09 Jun 2002 18:19:24 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "C&SRF API patch (was Re: [HACKERS] revised sample SRF C function;\n\tproposed SRF API)" }, { "msg_contents": "Tom Lane wrote:\n> Well, we're not doing that; and I see no good reason to make the thing\n> be a builtin function at all. Since it's just an example, it can very\n> well be a contrib item with a creation script. Probably *should* be,\n> in fact, because dynamically created functions are what other people are\n> going to be building; an example of how to do it as a builtin function\n> isn't as helpful.\n\nHere is a patch for contrib/showguc. It can serve as a reference \nimplementation for a C function which returns setof composite. It \nrequired some small changes in guc.c and guc.h so that the number of GUC \nvariables, and their values, could be accessed. Example usage as shown \nbelow:\n\ntest=# select * from show_all_vars() where varname = 'wal_sync_method';\n varname | varval\n-----------------+-----------\n wal_sync_method | fdatasync\n(1 row)\n\ntest=# select show_var('wal_sync_method');\n show_var\n-----------\n fdatasync\n(1 row)\n\n\nshow_var() is neither composite nor set returning, but it seemed like a \nworthwhile addition. 
Please apply if there are no objections.\n\nThanks,\n\nJoe", "msg_date": "Sun, 09 Jun 2002 18:27:39 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "contrib/showguc (was Re: [HACKERS] revised sample SRF C function;\n\tproposed SRF API)" }, { "msg_contents": "> Tom Lane wrote:\n> > Well, we're not doing that; and I see no good reason to make the thing\n> > be a builtin function at all. Since it's just an example, it can very\n> > well be a contrib item with a creation script. Probably *should* be,\n> > in fact, because dynamically created functions are what other people are\n> > going to be building; an example of how to do it as a builtin function\n> > isn't as helpful.\n>\n> True enough, although I could always create another example for contrib.\n> Returning GUC variable \"SHOW ALL\" results as a query result has been\n> discussed before, and I thought there was agreement that it was a\n> desirable backend feature.\n\nSure would be. Means we can show config variables nicely in phpPgAdmin like\nphpMyAdmin does...\n\nChris\n\n", "msg_date": "Mon, 10 Jun 2002 14:21:56 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: revised sample SRF C function; proposed SRF API" } ]
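The value-per-call protocol this thread converges on (first-call initialization, per-call setup, return-next / return-done) can be sketched as a standalone C translation unit. This is not the real PostgreSQL fmgr/funcapi API: the types and names below (CallInfo, FuncCallContext fields, squares_srf) are simplified stand-ins that only mimic its shape, so the control flow can be seen outside the backend.

```c
#include <stdlib.h>

/* Simplified stand-in for the per-call state the thread discusses.
 * In the real API this lives in fcinfo->flinfo->fn_extra. */
typedef struct FuncCallContext {
    int   call_cntr;   /* number of completed calls so far */
    int   max_calls;   /* total number of values to return */
    void *fctx;        /* optional user-defined state */
} FuncCallContext;

typedef struct CallInfo {
    FuncCallContext *fn_extra; /* persists across calls; NULL on first */
    int              is_done;  /* analogue of rsi->isDone = ExprEndResult */
} CallInfo;

/* Returns 0, 1, 4, 9, 16 -- one value per call, the way an SRF
 * returns one tuple (or scalar Datum) per call. */
int squares_srf(CallInfo *fcinfo)
{
    FuncCallContext *funcctx;

    if (fcinfo->fn_extra == NULL) {      /* "SRF_IS_FIRSTPASS()" */
        funcctx = calloc(1, sizeof(*funcctx));
        funcctx->max_calls = 5;          /* one-time setup */
        fcinfo->fn_extra = funcctx;
    }
    funcctx = fcinfo->fn_extra;          /* "SRF_PERCALL_SETUP()" */

    if (funcctx->call_cntr < funcctx->max_calls) {
        int n = funcctx->call_cntr++;    /* "SRF_RETURN_NEXT()" */
        return n * n;
    }

    /* "SRF_RETURN_DONE()": clean up, tell the caller to stop. */
    free(funcctx);
    fcinfo->fn_extra = NULL;
    fcinfo->is_done = 1;
    return 0;
}
```

A caller simply re-invokes the function until is_done is set, the same way the executor keeps calling an SRF while it reports ExprMultipleResult.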
[ { "msg_contents": "Hi,\n\nwe ( me and Teodor) are looking for some postgresql, Web short-time contracts.\nIf somebody have some offering, please contact for details.\nI estimate we'll have financial problem till autumn.\nOur experience:\n\n1. Search engines - small and medium scale for dynamic sites\n (customized OpenFTS - openfts.sourceforge.net)\n2. Full scale search engine for indexing web\n3. Customized data types and indexed access\n4. Dynamic web sites (mod_perl + Mason), distributive CMS with\n role-based authorization, versioning, staging. Proved working\n under high load (we did rather big informational web sites)\n\nSome information is available from http://www.xware.ru/,\nhttp://www.sai.msu.su/~megera/postgres/gist/\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 27 May 2002 15:35:05 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Two smart guys are looking for contracts :-)" } ]
[ { "msg_contents": "Hi,\n\ncould anyone please enlighten me about the status of replication? I do\nexpect lots of questions about this, and I'm not really sure if I can\npromise it for 7.3. :-)\n\nYes, I know it's marked urgent in the TODO list, but no one seems to be\nlisted as tackling this topic.\n\nThanks a lot.\n\nMichael\n--\nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Mon, 27 May 2002 11:40:20 -0400 (EDT)", "msg_from": "meskes@postgresql.org (Michael Meskes)", "msg_from_op": true, "msg_subject": "Replication status" }, { "msg_contents": "Sorry for sending this twice, but my ssh session broke while I was\nwriting and I didn't know it went out.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Mon, 27 May 2002 18:00:12 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Replication status" }, { "msg_contents": "On Mon, May 27, 2002 at 11:40:20AM -0400, Michael Meskes wrote:\n\n> could anyone please enlighten me about the status of replication? I do\n> expect lots of questions about this, and I'm not really sure if I can\n> promise it for 7.3. :-)\n\n 8.0 ;-) (?)\n\n I add the other question: what is the current status of on-line backup log\n based on WAL? The enterprise usage requires it maybe more than\n replication.\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Tue, 28 May 2002 09:53:53 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Replication status" }, { "msg_contents": "On Tue, May 28, 2002 at 09:53:53AM +0200, Karel Zak wrote:\n> On Mon, May 27, 2002 at 11:40:20AM -0400, Michael Meskes wrote:\n> \n> > could anyone please enlighten me about the status of replication? 
I do\n> expect lots of questions about this, and I'm not really sure if I can\n> promise it for 7.3. :-)\n\n> I add the other quesion: how is current status of on-line backup log\n> based on WAL? \n\nI've cc:d the replication list, in order to cast the net wider.\n\nIt's funny that this should come up just now. I've been fishing\naround for some answers to these questions as well.\n\nDarren Johnson tells me that 2 developers, working on the replication\ncode full time, and sufficiently familiar with the PostgreSQL\ninternals, could probably finish that project (really finished,\nclustering and all) in 18-24 months. I'm pretty happy with the\neRserver code we're using, but it's not a competitor to ORAC, and I'm\ngetting some flack about that these days. For me, then, the\nmaster-slave replication problem is solved; but I need something\nwhich will go beyond it and do the sort of cluster that will make\nmarketing guys go \"ooh-ahh\". Sorry as I am to say it, that's the\ntruth.\n\nAlso, I'm getting some flack about the lack of point-in-time\nrecovery, which _might_ be possible to get by playing with WAL.\n\nThose are the two big technical reasons I can get beat up with the\nOracle stick. Happily for me, Oracle costs a mint, and producing\ncost line-items with fewer zeros after the one is something that\ncarries a lot of weight.\n\nSo I'm thinking of making a sales pitch, internally, to get some cash\nto sponsor some development. I have no authorisation to spend\nanything -- I'm just trying to get the info I need to propose to\nspend money. Think of this post as pre-speculation. (Gee, I hope\nthe corporate guys don't read -hackers. 
Well, not likely, is it?)\n\nI need to know the following in order to make my pitch:\n\n1.\tWho's interested (in which project) and how soon available.\n2.\tDegree of comfort with the internals (yes, in some cases, I\nguess I'll know perfectly well).\n3.\tRates.\n\nPlease understand that I am (as one of the corp guys called me) just\none of the official geeks around here (which is depressing,\nconsidering how many meetings I have to go to). So I am Not Allowed\nto Touch the Money. But I might be able to get them to open the\npurse strings a little, especially when they contemplate the cost of\nmoving to Oracle with ORAC.\n\nIf people respond in confidence to me, I'll post something here to\nthe effect of additional interest I've heard expressed (assuming such\nposts don't also go here). I can't guarantee any actual results, of\ncourse (have I said that enough?). Also, if they decide to spend\nmoney, I won't be allowed to announce it -- the marketing guys will\nmake some splashy thing (possibly which annoys everyone at the same\ntime), I suppose. But I'll try to tell everyone as much as I'm\nallowed.\n\nThanks.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 28 May 2002 10:50:25 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Replication status & point-in-time recovery" }, { "msg_contents": "Karel Zak wrote:\n> On Mon, May 27, 2002 at 11:40:20AM -0400, Michael Meskes wrote:\n> \n> > could anyone please enlighten me about the status of replication? I do\n> > expect lots of questions about this, and I'm not really sure if I can\n> > promise it for 7.3. :-)\n> \n> 8.0 ;-) (?)\n> \n> I add the other quesion: how is current status of on-line backup log\n> based on WAL? The enterprise usage require it maybe more than\n> replication.\n\nYes! 
Point-in-time recovery and replication are our only two \"urgent\"\nitems on the TODO list.\n\nJan's idea of implementing point-in-time recovery as a playback of the\nreplication logs seems like a great idea, so I think replication may\nsolve both issues.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 28 May 2002 20:48:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replication status" } ]
[ { "msg_contents": "Hi,\n\ncould anyone please enlighten me about the status of replication? I do\nexpect lots of questions about this, and I'm not really sure if I can\npromise it for 7.3. :-)\n\nYes, I know it's marked urgent in the TODO list, but no one seems to be\nlisted as tackling this topic.\n\nThanks a lot.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Mon, 27 May 2002 17:46:05 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "Replication status" }, { "msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> could anyone please enlighten me about the status of replication? I do\n> expect lots of questions about this, and I'm not really sure if I can\n> promise it for 7.3. :-)\n\nUnless 7.3 slips drastically from our current intended schedule\n(beta in late August), I think it's pretty safe to say there will\nbe no replication in 7.3, beyond what's already available (rserv\nand so forth).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 May 2002 12:35:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replication status " }, { "msg_contents": ">\n>\n>\n>Unless 7.3 slips drastically from our current intended schedule\n>(beta in late August), I think it's pretty safe to say there will\n>be no replication in 7.3, beyond what's already available (rserv\n>and so forth).\n>\n\nI can't speak for any of the other replication projects, but \npgreplication won't be\nready for 7.3. 
If all goes according to plan, I should have some free \ntime over\nthe summer months to put a good dent in the first phase, but at best it \nwould\nbe a very limited experimental patch.\n\nMore information on pgreplication can be found @\n\nhttp://gborg.postgresql.org/project/pgreplication/projdisplay.php\n\n\nDarren\n\n\n", "msg_date": "Mon, 27 May 2002 14:44:17 -0400", "msg_from": "Darren Johnson <darren@up.hrcoxmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication status" }, { "msg_contents": "Tom Lane wrote:\n> Michael Meskes <meskes@postgresql.org> writes:\n> > could anyone please enlighten me about the status of replication? I do\n> > expect lots of questions about this, and I'm not really sure if I can\n> > promise it for 7.3. :-)\n> \n> Unless 7.3 slips drastically from our current intended schedule\n> (beta in late August), I think it's pretty safe to say there will\n> be no replication in 7.3, beyond what's already available (rserv\n> and so forth).\n\nLast I talked to Darren, the replication code was modified to merge into\nour 7.2 tree. There are still pieces missing so it will not be\nfunctional when applied. It is remotely possible there could be\nmaster-slave in 7.3, but I doubt it.\n\nI was hoping to spend major time on it myself (and SRA/Japan has\nencouraged me to get involved), but have been too busy to dive in. I\nthink once it is in CVS, it will be easier to grasp what is going on,\nand perhaps to move it forward.\n\nI saw a message (I think for Darrren) saying he hoped to restart on it\nin two weeks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 May 2002 17:12:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replication status" }, { "msg_contents": "On Mon, May 27, 2002 at 05:12:33PM -0400, Bruce Momjian wrote:\n> Last I talked to Darren, the replication code was modified to merge into\n> our 7.2 tree. There are still pieces missing so it will not be\n> functional when applied. It is remotely possible there could be\n> master-slave in 7.3, but I doubt it.\n\nThis is about pgreplication I think. Is this the replication project of\nchoice for pgsql? IIRC there are quite some projects for this topic:\n\nPostgreSQL replicator\nRserver\nUsogres\ndbbalancer\n\nWhat about these? We seem to have some proof-of-concept code of rserver\nin contrib. Dbbalancer seems to be more focussed on balancing access and\nnot replication, but can do this too.\n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Tue, 28 May 2002 09:40:03 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: Replication status" }, { "msg_contents": "On Tue, 28 May 2002, Michael Meskes wrote:\n\n> \n> This is about pgreplication I think. Is the the replication project of\n> choice for pgsql? IIRC there quite some projects for this topic:\n> \n> PostgreSQL replicator\n> Rserver\n> Usogres\n> dbbalancer\n\n\nThere's also DBMirror which I submitted to the contrib directory just \nafter the 7.2 release. I got an email last month saying that it had been \napplied against the 7.3 tree but I don't see it there.\n\nIt's a trigger-based lazy replication system and has all the associated \ndrawbacks but works for master-slave. 
I've been working on adding \nselective replication to it and hope to be able to release another version \nof that in June.\n\n\n> \n> Michael\n> \n> \n\n-- \nSteven Singer ssinger@navtechinc.com\nAircraft Performance Systems Phone: 519-747-1170 ext 282\nNavtech Systems Support Inc. AFTN: CYYZXNSX SITA: YYZNSCR\nWaterloo, Ontario ARINC: YKFNSCR\n\n\n", "msg_date": "Tue, 28 May 2002 17:43:48 +0000 (GMT)", "msg_from": "Steven Singer <ssinger@navtechinc.com>", "msg_from_op": false, "msg_subject": "Re: Replication status" }, { "msg_contents": "Michael Meskes wrote:\n> On Mon, May 27, 2002 at 05:12:33PM -0400, Bruce Momjian wrote:\n> > Last I talked to Darren, the replication code was modified to merge into\n> > our 7.2 tree. There are still pieces missing so it will not be\n> > functional when applied. It is remotely possible there could be\n> > master-slave in 7.3, but I doubt it.\n> \n> This is about pgreplication I think. Is the the replication project of\n> choice for pgsql? IIRC there quite some projects for this topic:\n> \n> PostgreSQL replicator\n> Rserver\n> Usogres\n> dbbalancer\n> \n> What about these? We seem to have some proof-of-concept code of rserver\n> in contrib. Dbbalancer seems to be more focussed on balancing access and\n> not replication, but can do this too.\n\nrserver only does single-master, while most people want multi-master. \nUsogres is more of a load balancer/replication, where the query is sent\nto both servers. Not sure about the others.\n\nThe only multi-master solution proposed is pgreplication. I think there\nis a PDF on that web site that describes the various replication\noptions. 
I should probably write up a little replication FAQ.\n\nJan is doing a replication talk at O'Reilly in July and hopefully we can\nget a PDF of that.\n\npgreplication is not good for nodes over slow links or nodes that are\nintermittently connected, so it is not going to solve all cases either.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 28 May 2002 20:54:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replication status" }, { "msg_contents": "...\n> rserver only does single-master, while most people want multi-master.\n\nAs you probably know, rserv is not limited to only a single instance of\na single master. Many replication problems can be described as a \"single\nsource\" problem (or should be described as such; moving to a fully\ndistributed database brings a host of other issues). So any problem\nwhich can be decomposed to having single sources of subsets of\ninformation can be handled with this system.\n\nThe contrib/rserv code has received no contributions from the community\nbeyond our original submission, which of course pushes all of the\ndevelopment and recurring costs back onto PostgreSQL Inc and their\nclients. We have been very low-key (imho) in representing this solution\nto the developer community, but it should be considered for applications\nmatching its capabilities. Full transactional integrity across primary\nand secondary servers is not easy to come by and not offered by most\nother solutions. fwiw we have demonstrated well over 2000 updates per\nsecond flowing through rserv systems.\n\n - Thomas\n", "msg_date": "Tue, 28 May 2002 18:08:21 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Replication status" }, { "msg_contents": "\nAgreed. 
It would be nice to see both a single-master and multi-master\nserver included in our main tree and a clear description of when to use\neach. The confusion over the various replication solutions and their\nstrengths/weaknesses is a major problem.\n\nI always felt a clearer README for rserv would help greatly. We do get\nlots of questions about how to get it working. README.rserv goes over\nthe major 'toolset' items and describes a demo, but that is it. (I\ndon't even know what the 'toolset' items are or how to access them, at\nleast from reading the README.) I thought of doing the README\nimprovements myself, but because I didn't write it, I left it alone.\n\n---------------------------------------------------------------------------\n\nThomas Lockhart wrote:\n> ...\n> > rserver only does single-master, while most people want multi-master.\n> \n> As you probably know, rserv is not limited to only a single instance of\n> a single master. Many replication problems can be described as a \"single\n> source\" problem (or should be described as such; moving to a fully\n> distributed database brings a host of other issues). So any problem\n> which can be decomposed to having single sources of subsets of\n> information can be handled with this system.\n> \n> The contrib/rserv code has received no contributions from the community\n> beyond our original submission, which of course pushes all of the\n> development and recurring costs back onto PostgreSQL Inc and their\n> clients. We have been very low-key (imho) in representing this solution\n> to the developer community, but it should be considered for applications\n> matching its capabilities. Full transactional integrity across primary\n> and secondary servers is not easy to come by and not offered by most\n> other solutions. 
fwiw we have demonstrated well over 2000 updates per\n> second flowing through rserv systems.\n> \n> - Thomas\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 28 May 2002 21:33:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replication status" }, { "msg_contents": "On Tue, May 28, 2002 at 06:08:21PM -0700, Thomas Lockhart wrote:\n\n> clients. We have been very low-key (imho) in representing this solution\n> to the developer community, but it should be considered for applications\n> matching its capabilities. \n\nI should like to emphasise that I have no desire to run down rserv --\nI think it's pretty good, and I'm more than happy with its\nperformance. That I'm now facing a feature-lust argument for ORAC is\na political, and not technical problem. \n\n> Full transactional integrity across primary\n> and secondary servers is not easy to come by and not offered by most\n> other solutions. \n\nExactly, plus there appears to be a big price to be paid for that\nfull integrity.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 30 May 2002 08:22:59 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Replication status" } ]
[ { "msg_contents": "Please,\nI saw some articles about the implementation of some security rules like\nNO CREATE TABLE and the possibility of the implementation in version 7.2\nof PostgreSQL.\nCould you confirm this information? Is there this implementation in 7.2?\n\nIf not, what could I do to create a user without the privilege CREATE\nTABLE?", "msg_date": "Mon, 27 May 2002 12:59:30 -0300", "msg_from": "Marcia Abade <mabade@metrosp.com.br>", "msg_from_op": true, "msg_subject": "NO CREATE TABLE" }, { "msg_contents": "Marcia Abade wrote:\n> Please,\n> I saw some articles about de implementation off some security rules like\n> NO CREATE TABLE and the possibility of the implementation in version 7.2\n> of PostgreSQL.\n> Could you confirm this information? Is there this implementation in 7.2?\n> \n> If not, what coul I do to create a user without the privilege CREATE\n> TABLE?\n\nFeature will be in 7.3 only.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jun 2002 18:54:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: NO CREATE TABLE" } ]
[ { "msg_contents": "\nHello hackers!\n\nDoes anyone know what the message \"invalid length of startup packet\"\nin /var/log/messages means? It says it's \"fatal\" - so what is the reason\nfor this message, what does it mean and what can I do against it?\n\nI use the latest postgresql-release on a heavily loaded dedicated pentium iv\nmachine (redhat linux).\n\nAny help or information appreciated,\n\nthanks\n\n(this has been posted on general-list earlier today)\n\n\nMit freundlichem Gruß\n\nHenrik Steffen\nGeschäftsführer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: steffen@topconcepts.com Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n\n", "msg_date": "Mon, 27 May 2002 18:28:59 +0200", "msg_from": "\"Henrik Steffen\" <steffen@city-map.de>", "msg_from_op": true, "msg_subject": "Invalid length of startup packet" }, { "msg_contents": "\"Henrik Steffen\" <steffen@city-map.de> writes:\n> Does anyone know what the message \"invalid length of startup packet\"\n> in /var/log/messages means?\n\nSomething is connecting to your postmaster and sending invalid data.\n\n> It says it's \"fatal\" - so what is the reason\n> for this message, what does it mean and what can I do against it?\n\nIn this context \"fatal\" just means that that connection will be dropped.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 May 2002 13:02:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Invalid length of startup packet 
" }, { "msg_contents": "\nDear Tom,\n\nI have just been talking to Hans-Juergen Schoening from the hackers-list\non the telephone. I found out, that I was really using postgres 7.2-1.72,\n(I took this as 7.2.1 :(( ) - so I updated the server, and the webserver\nthat's connecting to the database to the latest current rpm-release.\n\nUnfortunately I still receive the same messages...\n\nThe Webserver is using latest mod_perl and Pg.pm for connecting.\n\nCould this be a problem?\n\nThanks again for your help!\n\nMit freundlichem Gruß\n\nHenrik Steffen\nGeschäftsführer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: steffen@topconcepts.com Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Henrik Steffen\" <steffen@city-map.de>\nCc: <pgsql-hackers@postgresql.org>\nSent: Monday, May 27, 2002 7:02 PM\nSubject: Re: [HACKERS] Invalid length of startup packet\n\n\n> \"Henrik Steffen\" <steffen@city-map.de> writes:\n> > Does anyone know what the message \"invalid length of startup packet\"\n> > in /var/log/messages means?\n>\n> Something is connecting to your postmaster and sending invalid data.\n>\n> > It says it's \"fatal\" - so what is the reason\n> > for this message, what does it mean and what can I do against it?\n>\n> In this context \"fatal\" just means that that connection will be dropped.\n>\n> regards, tom lane\n>\n> ---------------------------(end of 
broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n", "msg_date": "Mon, 27 May 2002 19:27:05 +0200", "msg_from": "\"Henrik Steffen\" <steffen@city-map.de>", "msg_from_op": true, "msg_subject": "Re: Invalid length of startup packet " } ]
[ { "msg_contents": "\nHi,\n\nI just noticed plpgsql evaluates all AND'ed conditions even if the first\none fails. Example:\n\n\telsif TG_OP = ''UPDATE'' and old.type_reponse = ''abandon''\n\nThis will break stuff if the trigger is used on INSERT as\n\"old.type_reponse\" will be substituted and return an error.\n\nShouldn't plpgsql shortcut AND conditions when a previous one fails, as\nperl does?\n\n-- \n OENONE: Quoi ?\n PHEDRE: Je te l'ai prédit, mais tu n'as pas voulu.\n (Phèdre, J-B Racine, acte 3, scène 3)\n", "msg_date": "Tue, 28 May 2002 09:20:42 +0200", "msg_from": "Louis-David Mitterrand <vindex@apartia.org>", "msg_from_op": true, "msg_subject": "wierd AND condition evaluation for plpgsql" }, { "msg_contents": "Actually, at least in some cases, PG does short-circuit logic:\n\ncreate function seeme() returns bool as '\n begin\n raise notice ''seeme'';\n return true;\n end'\nlanguage plpgsql;\n\njoel@joel=# select false and seeme();\n ?column?\n----------\n f\n(1 row)\n\njoel@joel=# select true and seeme();\nNOTICE: seeme\n ?column?\n----------\n t\n(1 row)\n\n\nIn your case, the problem isn't short-circuiting a test, it's that the full\nstatement must be parsed and prepared, and it's probably in this stage that\nthe illegal use of old. in an insert jumps up.\n\nHTH.\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Louis-David\n> Mitterrand\n> Sent: Tuesday, May 28, 2002 3:21 AM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] wierd AND condition evaluation for plpgsql\n>\n>\n>\n> Hi,\n>\n> I just noticed plpgsql evaluates all AND'ed conditions even if the first\n> one fails. 
Example:\n>\n> \telsif TG_OP = ''UPDATE'' and old.type_reponse = ''abandon''\n>\n> This will break stuff if the trigger is used on INSERT as\n> \"old.type_reponse\" will be substituted and return an error.\n>\n> Shouldn't plpgsql shortcut AND conditions when a previous one fails, as\n> perl does?\n>\n> --\n> OENONE: Quoi ?\n> PHEDRE: Je te l'ai prédit, mais tu n'as pas voulu.\n> (Phèdre, J-B Racine,\n> acte 3, scène 3)\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n", "msg_date": "Tue, 28 May 2002 09:09:04 -0400", "msg_from": "\"Joel Burton\" <joel@joelburton.com>", "msg_from_op": false, "msg_subject": "Re: wierd AND condition evaluation for plpgsql" }, { "msg_contents": "Louis-David Mitterrand <vindex@apartia.org> writes:\n> I just noticed plpgsql evaluates all AND'ed conditions even if the first\n> one fails. 
Example:\n\n> \telsif TG_OP = ''UPDATE'' and old.type_reponse = ''abandon''\n\n> This will break stuff if the trigger is used on INSERT as\n> \"old.type_reponse\" will be substituted and return an error.\n\nI think you are confusing \"evaluation\" with \"syntax checking\".\n\nTry putting the reference to OLD inside a nested IF command.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 May 2002 10:06:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: wierd AND condition evaluation for plpgsql " }, { "msg_contents": "Louis-David Mitterrand writes:\n\n> Shouldn't plpgsql shortcut AND conditions when a previous one fails, as\n> perl does?\n\nShouldn't perl evaluate all operands unconditionally, like plpgsql does?\n\nSeriously, if you want to change this you have to complain to the SQL\nstandards committee.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 28 May 2002 18:52:44 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: wierd AND condition evaluation for plpgsql" }, { "msg_contents": "On Tue, 2002-05-28 at 21:52, Peter Eisentraut wrote:\n> Louis-David Mitterrand writes:\n> \n> > Shouldn't plpgsql shortcut AND conditions when a previous one fails, as\n> > perl does?\n> \n> Shouldn't perl evaluate all operands unconditionally, like plpgsql does?\n> \n> Seriously, if you want to change this you have to complain to the SQL\n> standards committee.\n\nIs plpgsl a SQL standards committee standard ?\n\n\nand is the following non-standard ?\n\n(itest is a 16k row test table with i in 1-16k)\n\nhannu=# create sequence itest_seq;\nCREATE\nhannu=# select nextval('itest_seq');\n nextval \n---------\n 1\n(1 row)\n\nhannu=# select count(*) from itest where false and true;\n count \n-------\n 0\n(1 row)\n\nhannu=# select count(*) from itest where false and i =\nnextval('itest_seq');\n count \n-------\n 0\n(1 row)\n\nhannu=# select nextval('itest_seq');\n nextval 
\n---------\n 2\n(1 row)\n\nhannu=# select count(*) from itest where i = nextval('itest_seq');\n count \n-------\n 0\n(1 row)\n\nhannu=# select nextval('itest_seq');\n nextval \n---------\n 16387\n(1 row)\n\n---------------------\nHannu\n\n\n", "msg_date": "29 May 2002 00:16:07 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: wierd AND condition evaluation for plpgsql" }, { "msg_contents": "On Wed, 2002-05-29 at 02:36, Joel Burton wrote:\n> > -----Original Message-----\n> joel@joel=# select true and seeme();\n> NOTICE: seeme\n> ?column?\n> ----------\n> t\n> (1 row)\n> \n> \n> It certainly appears to be short circuiting for \"select false and seeme()\",\n> for instance.\n> \n> It appears that this isn't short-circuiting by order of expressions, however\n> (as Perl and other languages do); for example, \"select seeme() or true\"\n> doesn't ever get to seeme(). I assume PG can simply see that the statement\n> \"true\" will evaluate to true (clever, that PG!), and therefore it doesn't\n> have to evaluate seeme() ?\n\nAre these intricacies of SQL standardised anywhere ?\n\nI know that gcc and other ccs can achieve different results depending on\noptimisation level - usually this is considered a bug.\n\nBut as PG runs always (?) 
at the maximum optimisation, should there be\nsuch guarantees ?\n\nOr is it something that should be in docs/FAQs (- don't rely on side\neffects) ?\n\n------------------------\nHannu\n\n\n", "msg_date": "29 May 2002 00:48:07 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: wierd AND condition evaluation for plpgsql" }, { "msg_contents": "> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Peter Eisentraut\n> Sent: Tuesday, May 28, 2002 12:53 PM\n> To: Louis-David Mitterrand\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] wierd AND condition evaluation for plpgsql\n>\n>\n> Louis-David Mitterrand writes:\n>\n> > Shouldn't plpgsql shortcut AND conditions when a previous one fails, as\n> > perl does?\n>\n> Shouldn't perl evaluate all operands unconditionally, like plpgsql does?\n>\n> Seriously, if you want to change this you have to complain to the SQL\n> standards committee.\n\nPeter --\n\nBut PG does short-circuit for evaluation, doesn't it? His question was\nconfusing evaluation versus syntax checking and statement preparation.\n\ncreate function seeme() returns bool as '\n begin\n raise notice ''seeme'';\n return true;\n end'\nlanguage plpgsql;\n\njoel@joel=# select false and seeme();\n ?column?\n----------\n f\n(1 row)\n\njoel@joel=# select true and seeme();\nNOTICE: seeme\n ?column?\n----------\n t\n(1 row)\n\n\nIt certainly appears to be short circuiting for \"select false and seeme()\",\nfor instance.\n\nIt appears that this isn't short-circuiting by order of expressions, however\n(as Perl and other languages do); for example, \"select seeme() or true\"\ndoesn't ever get to seeme(). 
I assume PG can simply see that the statement\n\"true\" will evaluate to true (clever, that PG!), and therefore it doesn't\nhave to evaluate seeme() ?\n\n- J.\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n", "msg_date": "Tue, 28 May 2002 17:36:19 -0400", "msg_from": "\"Joel Burton\" <joel@joelburton.com>", "msg_from_op": false, "msg_subject": "Re: wierd AND condition evaluation for plpgsql" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Are these intricacies of SQL standardised anywhere ?\n\nSQL92 section 3.3.4.4, \"rule evaluation order\" appears to sanction PG's\nbehavior. In particular note the part that says syntax rules and access\nrules are \"effectively applied at the same time\" (ie, this checking is\ndone before execution starts --- that legitimizes the error originally\ncomplained of) and the parts that say that inessential portions of\nexpressions need not be evaluated and that implementations are not\nrequired to perform evaluations strictly left-to-right.\n\n 3.3.4.4 Rule evaluation order\n\n A conforming implementation is not required to perform the exact\n sequence of actions defined in the General Rules, but shall achieve\n the same effect on SQL-data and schemas as that sequence. The term\n effectively is used to emphasize actions whose effect might be\n achieved in other ways by an implementation.\n\n The Syntax Rules and Access Rules for contained syntactic elements\n are effectively applied at the same time as the Syntax Rules and\n Access Rules for the containing syntactic elements. The General\n Rules for contained syntactic elements are effectively applied be-\n fore the General Rules for the containing syntactic elements. Where\n the precedence of operators is determined by the Formats of this\n International Standard or by parentheses, those operators are ef-\n fectively applied in the order specified by that precedence. 
Where\n the precedence is not determined by the Formats or by parentheses,\n effective evaluation of expressions is generally performed from\n left to right. However, it is implementation-dependent whether ex-\n pressions are actually evaluated left to right, particularly when\n operands or operators might cause conditions to be raised or if\n the results of the expressions can be determined without completely\n evaluating all parts of the expression. In general, if some syn-\n tactic element contains more than one other syntactic element, then\n the General Rules for contained elements that appear earlier in the\n production for the containing syntactic element are applied before\n the General Rules for contained elements that appear later.\n\n For example, in the production:\n\n <A> ::= <B> <C>\n\n the Syntax Rules and Access Rules for <A>, <B>, and <C> are ef-\n fectively applied simultaneously. The General Rules for <B> are\n applied before the General Rules for <C>, and the General Rules for\n <A> are applied after the General Rules for both <B> and <C>.\n\n If the result of an expression or search condition can be deter-\n mined without completely evaluating all parts of the expression or\n search condition, then the parts of the expression or search condi-\n tion whose evaluation is not necessary are called the inessential\n parts. If the Access Rules pertaining to inessential parts are not\n satisfied, then the syntax error or access rule violation exception\n condition is raised regardless of whether or not the inessential\n parts are actually evaluated. 
If evaluation of the inessential\n parts would cause an exception condition to be raised, then it is\n implementation-dependent whether or not that exception condition is\n raised.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 May 2002 18:56:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: wierd AND condition evaluation for plpgsql " }, { "msg_contents": "On Tue, 2002-05-28 at 16:09, Joel Burton wrote:\n\n> Actually, at least in some cases, PG does short-circuit logic:\n\n> joel@joel=# select false and seeme();\n\n> joel@joel=# select true and seeme();\n\nIf seeme() returns NULL, shouldn't both SELECTs return NULL, and\ntherefore not be short-circuit-able?\n\nSorry, I am a little confused.\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-22-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n\n", "msg_date": "30 May 2002 16:04:19 +0300", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": false, "msg_subject": "Re: wierd AND condition evaluation for plpgsql" }, { "msg_contents": "> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Alessio\n> Bragadini\n> Sent: Thursday, May 30, 2002 9:04 AM\n> To: PostgreSQL Hackers\n> Subject: Re: [HACKERS] wierd AND condition evaluation for plpgsql\n>\n>\n> On Tue, 2002-05-28 at 16:09, Joel Burton wrote:\n>\n> > Actually, at least in some cases, PG does short-circuit logic:\n>\n> > joel@joel=# select false and seeme();\n>\n> > joel@joel=# select true and seeme();\n>\n> If seeme() returns NULL, shouldn't both SELECTs return NULL, and\n> therefore not be short-circuit-able?\n>\n> Sorry, I am a little confused.\n\nIn my example, seeme() returns true, not NULL. 
However, the short-circuiting\ncame from the other part (the simple true or false) being evaluated first.\nSo, regardless of the returned value of seeme(), \"SELECT FALSE AND seeme()\"\nwould short-circuit, since \"FALSE AND ___\" can never be true. Of course, if\nseemme() returns NULL, then the end result would be false.\n\n- J.\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n", "msg_date": "Thu, 30 May 2002 09:21:17 -0400", "msg_from": "\"Joel Burton\" <joel@joelburton.com>", "msg_from_op": false, "msg_subject": "Re: wierd AND condition evaluation for plpgsql" }, { "msg_contents": "\"Joel Burton\" <joel@joelburton.com> writes:\n>>> Actually, at least in some cases, PG does short-circuit logic:\n>>> joel@joel=# select false and seeme();\n>>> joel@joel=# select true and seeme();\n\n>> If seeme() returns NULL, shouldn't both SELECTs return NULL, and\n>> therefore not be short-circuit-able?\n\n> In my example, seeme() returns true, not NULL. However, the short-circuiting\n> came from the other part (the simple true or false) being evaluated first.\n> So, regardless of the returned value of seeme(), \"SELECT FALSE AND seeme()\"\n> would short-circuit, since \"FALSE AND ___\" can never be true.\n\nYes. Per the SQL standard, some cases involving AND and OR can be\nsimplified without evaluating all the arguments, and PG uses this\nflexibility to the hilt. You might care to read eval_const_expressions()\nin src/backend/optimizer/util/clauses.c. Some relevant tidbits:\n\n * Reduce any recognizably constant subexpressions of the given\n * expression tree, for example \"2 + 2\" => \"4\". More interestingly,\n * we can reduce certain boolean expressions even when they contain\n * non-constant subexpressions: \"x OR true\" => \"true\" no matter what\n * the subexpression x is. 
(XXX We assume that no such subexpression\n * will have important side-effects, which is not necessarily a good\n * assumption in the presence of user-defined functions; do we need a\n * pg_proc flag that prevents discarding the execution of a function?)\n\n * We do understand that certain functions may deliver non-constant\n * results even with constant inputs, \"nextval()\" being the classic\n * example. Functions that are not marked \"immutable\" in pg_proc\n * will not be pre-evaluated here, although we will reduce their\n * arguments as far as possible.\n\n * OR arguments are handled as follows:\n * non constant: keep\n * FALSE: drop (does not affect result)\n * TRUE: force result to TRUE\n * NULL: keep only one\n * We keep one NULL input because ExecEvalOr returns NULL\n * when no input is TRUE and at least one is NULL.\n\n * AND arguments are handled as follows:\n * non constant: keep\n * TRUE: drop (does not affect result)\n * FALSE: force result to FALSE\n * NULL: keep only one\n * We keep one NULL input because ExecEvalAnd returns NULL\n * when no input is FALSE and at least one is NULL.\n\nOther relevant manipulations include canonicalize_qual() in\nsrc/backend/optimizer/prep/prepqual.c (tries to convert boolean\nWHERE expressions to normal form by application of DeMorgan's laws)\nand for that matter the entire planner --- the fact that we have\na choice of execution plans at all really comes from the fact that\nwe are allowed to evaluate WHERE clauses in any order. So there's\nnot likely to be much support for any proposal that we constrain the\nevaluation order or guarantee the evaluation or non-evaluation of\nspecific clauses in WHERE. 
(The XXX comment above is an idle aside,\nnot something that is likely to really happen.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 May 2002 10:44:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: wierd AND condition evaluation for plpgsql " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Thursday, May 30, 2002 10:44 AM\n> To: Joel Burton\n> Cc: Alessio Bragadini; PostgreSQL Hackers\n> Subject: Re: [HACKERS] wierd AND condition evaluation for plpgsql\n>\n>\n> \"Joel Burton\" <joel@joelburton.com> writes:\n> >>> Actually, at least in some cases, PG does short-circuit logic:\n> >>> joel@joel=# select false and seeme();\n> >>> joel@joel=# select true and seeme();\n>\n> >> If seeme() returns NULL, shouldn't both SELECTs return NULL, and\n> >> therefore not be short-circuit-able?\n>\n> > In my example, seeme() returns true, not NULL. However, the\n> short-circuiting\n> > came from the other part (the simple true or false) being\n> evaluated first.\n> > So, regardless of the returned value of seeme(), \"SELECT FALSE\n> AND seeme()\"\n> > would short-circuit, since \"FALSE AND ___\" can never be true.\n>\n> Yes. Per the SQL standard, some cases involving AND and OR can be\n> simplified without evaluating all the arguments, and PG uses this\n> flexibility to the hilt. You might care to read eval_const_expressions()\n> in src/backend/optimizer/util/clauses.c. Some relevant tidbits:\n>\n> * Reduce any recognizably constant subexpressions of the given\n> * expression tree, for example \"2 + 2\" => \"4\". More interestingly,\n> * we can reduce certain boolean expressions even when they contain\n> * non-constant subexpressions: \"x OR true\" => \"true\" no matter what\n> * the subexpression x is. 
(XXX We assume that no such subexpression\n> * will have important side-effects, which is not necessarily a good\n> * assumption in the presence of user-defined functions; do we need a\n> * pg_proc flag that prevents discarding the execution of a function?)\n>\n> * We do understand that certain functions may deliver non-constant\n> * results even with constant inputs, \"nextval()\" being the classic\n> * example. Functions that are not marked \"immutable\" in pg_proc\n> * will not be pre-evaluated here, although we will reduce their\n> * arguments as far as possible.\n>\n> ...\n>\n> Other relevant manipulations include canonicalize_qual() in\n> src/backend/optimizer/prep/prepqual.c (tries to convert boolean\n> WHERE expressions to normal form by application of DeMorgan's laws)\n> and for that matter the entire planner --- the fact that we have\n> a choice of execution plans at all really comes from the fact that\n> we are allowed to evaluate WHERE clauses in any order. So there's\n> not likely to be much support for any proposal that we constrain the\n> evaluation order or guarantee the evaluation or non-evaluation of\n> specific clauses in WHERE. (The XXX comment above is an idle aside,\n> not something that is likely to really happen.)\n\nThanks, Tom, for the pointers to the full story.\n\nIs there any generalizable help would could offer to people who write\nfunctions that have side effects? Don't use them in WHERE (or ON or HAVING)\nclauses? Evaluate the function in a earlier db call, then plug the resolved\nresults into the SQL WHERE statement?\n\nI've lived without having this bite me; I'd think that side-effect functions\nwould be unusual in a WHERE clause. I'm just wondering if we should work\nthis into the docs somewhere. (Or is it? 
I took a look, but didn't see\nanything).\n\n- J.\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n", "msg_date": "Thu, 30 May 2002 10:57:35 -0400", "msg_from": "\"Joel Burton\" <joel@joelburton.com>", "msg_from_op": false, "msg_subject": "Re: wierd AND condition evaluation for plpgsql " }, { "msg_contents": "\"Joel Burton\" <joel@joelburton.com> writes:\n> Is there any generalizable help would could offer to people who write\n> functions that have side effects? Don't use them in WHERE (or ON or HAVING)\n> clauses? Evaluate the function in a earlier db call, then plug the resolved\n> results into the SQL WHERE statement?\n\nCertainly putting side-effects into WHERE clauses is a recipe for\ntrouble, and it'd not be a bad idea to point that out in the docs.\n(I don't think it is mentioned at the moment.)\n\nWhen you really need to control order of evaluation, you can do it\nusing CASE or by pushing the whole expression into a function. But\nthese defeat optimization so should be avoided if possible.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 May 2002 12:14:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: wierd AND condition evaluation for plpgsql " }, { "msg_contents": "Joel Burton writes:\n\n> I've lived without having this bite me; I'd think that side-effect functions\n> would be unusual in a WHERE clause. I'm just wondering if we should work\n> this into the docs somewhere. (Or is it? I took a look, but didn't see\n> anything).\n\nI've written up a section about it which I'll check in momentarily.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sat, 1 Jun 2002 22:58:15 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: wierd AND condition evaluation for plpgsql " } ]
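The OR/AND argument-reduction rules Tom quotes from `eval_const_expressions()` can be sketched as a toy Python model. This is only an illustration of the rules as quoted, not PostgreSQL's actual planner code; the function names and the encoding (True/False/None for constants, strings for non-constant subexpressions) are my own.

```python
# Toy model of the OR/AND reduction rules quoted above from
# eval_const_expressions() (src/backend/optimizer/util/clauses.c).
# Constants are True, False, or None (SQL NULL); any other value
# (here, a string) stands for a non-constant subexpression.

def simplify_or(args):
    """Reduce OR arguments; return a constant or the list of kept args."""
    kept, saw_null = [], False
    for a in args:
        if a is True:
            return True          # "x OR true" => true, no matter what x is
        if a is False:
            continue             # FALSE: drop, does not affect result
        if a is None:
            saw_null = True      # NULL: keep only one
        else:
            kept.append(a)       # non-constant: keep
    if saw_null:
        kept.append(None)        # OR is NULL when no TRUE and some NULL
    if not kept:
        return False             # every input was FALSE
    if kept == [None]:
        return None
    return kept

def simplify_and(args):
    """Dual of simplify_or, per the AND rules quoted above."""
    kept, saw_null = [], False
    for a in args:
        if a is False:
            return False         # "x AND false" => false
        if a is True:
            continue             # TRUE: drop, does not affect result
        if a is None:
            saw_null = True      # NULL: keep only one
        else:
            kept.append(a)       # non-constant: keep
    if saw_null:
        kept.append(None)        # AND is NULL when no FALSE and some NULL
    if not kept:
        return True              # every input was TRUE
    if kept == [None]:
        return None
    return kept
```

Under this model, `simplify_and([False, "seeme()"])` collapses straight to `False`, which is exactly why Joel's `SELECT FALSE AND seeme()` never has to evaluate the function at all.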
[ { "msg_contents": "uff, I am ashamed....\n\nTracing the problem I found out that the invalid startup packets were not\ntriggered by the webserver either... looking more precisely I found out\nthat the messages appeared regularly every 180 seconds in /var/log/messages\n\nThis led me to the thought, that this has got to be the monitoring software\nwhich checks if the daemon is still there on a regular basis.\n\nOh, what stupid I am !\n\nPS. An ip-address in the log-file would have helped me - maybe this could be\nadded\nin a future release?\n\nSorry, if I bothered you - thanks for your help anyway\n\nMit freundlichem Gru�\n\nHenrik Steffen\nGesch�ftsf�hrer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: steffen@topconcepts.com Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n\n", "msg_date": "Tue, 28 May 2002 09:53:58 +0200", "msg_from": "\"Henrik Steffen\" <steffen@city-map.de>", "msg_from_op": true, "msg_subject": "Invalid length of startup packet - solved!" } ]
[ { "msg_contents": "\n> FOR row IN select_query LOOP\n> statements\n> RETURN NEXT row;\n> END LOOP;\n\nInformix has \n\tRETURN x1, x2, x3 WITH RESUME;\n\nThis seems reasonable to me. PostgreSQL could also allow\nreturn x with resume, where x is already a composite type.\n\nAndreas\n", "msg_date": "Tue, 28 May 2002 14:06:10 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: SRF rescan testing" } ]
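The control flow being discussed — loop over a query and hand each row back with RETURN NEXT, or Informix's RETURN ... WITH RESUME — is essentially a coroutine. As a loose analogy only (this is not plpgsql or ecpg internals, and the function name is hypothetical), a Python generator shows the same resumable-return shape:

```python
# Analogy for "FOR row IN query LOOP ... RETURN NEXT row; END LOOP":
# each yield hands one row to the caller and then resumes where it
# left off, like RETURN ... WITH RESUME.
def filter_rows(select_query_result, predicate):
    """Hypothetical set-returning function: emit each qualifying row."""
    for row in select_query_result:
        if predicate(row):
            yield row            # analogue of RETURN NEXT row

rows = [(1, 'a'), (2, 'b'), (3, 'c')]
result = list(filter_rows(rows, lambda r: r[0] > 1))
```

The caller drives the iteration, pulling rows one at a time rather than receiving a fully materialized result set.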
[ { "msg_contents": "I'm trying to implement some code to recreate tables as we discussed\nformerly. But it's not so easy... :-) My first blind alley is that\ndropping a function which is occured in a CHECK constraint or\na DEFAULT constraint, I get \"fmgr_info: function 12345678: cache\nlookup failed\" or \"Function OID 12345678 does not exist\" (this one\nonly in 7.1.3; I tested this also in 7.2.1).\n\nFor me the appropriate hack seems to update some table about pg_*,\nis this the proper way? Or shall I forget to hack the pg_* tables,\njust drop the table and recreate it...?\n\nHere comes my test:\n\ncreate function tmp_f (integer) returns bool as\n'select $1 > 0;' language 'sql';\ncreate table tmp_t (x integer check (tmp_f(x)));\ninsert into tmp_t values (5);\ndrop function tmp_f(integer);\ncreate function tmp_f (integer) returns bool as\n'select $1 > 0;' language 'sql';\ninsert into tmp_t values (5);\ndrop function tmp_f(integer);\ndrop table tmp_t;\n\ncreate function tmp_f (integer) returns bool as\n'select $1 > 0;' language 'sql';\ncreate table tmp_t (x integer, y bool default tmp_f(5));\ninsert into tmp_t values (5);\ndrop function tmp_f(integer);\ncreate function tmp_f (integer) returns bool as\n'select $1 > 0;' language 'sql';\ninsert into tmp_t values (5);\ndrop function tmp_f(integer);\ndrop table tmp_t;\n\n\nTIA, Zoltan\n\n-- \n Kov\\'acs, Zolt\\'an\n kovacsz@pc10.radnoti-szeged.sulinet.hu\n http://www.math.u-szeged.hu/~kovzol\n ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz\n\n", "msg_date": "Tue, 28 May 2002 15:53:34 +0200 (CEST)", "msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>", "msg_from_op": true, "msg_subject": "cache lookup failed: hack pg_* tables?" }, { "msg_contents": "On Tue, 28 May 2002, Tom Lane wrote:\n\n> Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> > I'm trying to implement some code to recreate tables as we discussed\n> > formerly. But it's not so easy... 
:-) My first blind alley is that\n> > dropping a function which is occured in a CHECK constraint or\n> > a DEFAULT constraint, I get \"fmgr_info: function 12345678: cache\n> > lookup failed\" or \"Function OID 12345678 does not exist\" (this one\n> > only in 7.1.3; I tested this also in 7.2.1).\n> \n> Use CREATE OR REPLACE FUNCTION to modify an existing function ...\n\nWell... it works... :-) Thanks and sorry for this silly question... :-)\n\nZoltan\n\n", "msg_date": "Tue, 28 May 2002 16:26:09 +0200 (CEST)", "msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>", "msg_from_op": true, "msg_subject": "Re: cache lookup failed: hack pg_* tables? " }, { "msg_contents": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> I'm trying to implement some code to recreate tables as we discussed\n> formerly. But it's not so easy... :-) My first blind alley is that\n> dropping a function which is occured in a CHECK constraint or\n> a DEFAULT constraint, I get \"fmgr_info: function 12345678: cache\n> lookup failed\" or \"Function OID 12345678 does not exist\" (this one\n> only in 7.1.3; I tested this also in 7.2.1).\n\nUse CREATE OR REPLACE FUNCTION to modify an existing function ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 May 2002 11:01:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: cache lookup failed: hack pg_* tables? " } ]
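Why DROP FUNCTION followed by CREATE FUNCTION breaks the CHECK constraint while CREATE OR REPLACE does not: the constraint stores the function's OID, not its name, and a recreated function gets a fresh OID. A toy registry makes that concrete — this is a sketch of the idea only, not PostgreSQL's catalog code; the class and method names are hypothetical.

```python
# Toy model: a CHECK constraint remembers the function's OID (think
# pg_proc), so dropping and recreating the function leaves the
# constraint pointing at a dead OID -- "cache lookup failed".
import itertools

class Catalog:
    _oids = itertools.count(16384)   # user OIDs start past the builtins

    def __init__(self):
        self.by_oid = {}             # oid -> callable
        self.by_name = {}            # name -> oid

    def create(self, name, fn):
        oid = next(self._oids)
        self.by_oid[oid] = fn
        self.by_name[name] = oid
        return oid

    def drop(self, name):
        self.by_oid.pop(self.by_name.pop(name))

    def replace(self, name, fn):
        """CREATE OR REPLACE: swap the body but keep the existing OID."""
        if name in self.by_name:
            self.by_oid[self.by_name[name]] = fn
            return self.by_name[name]
        return self.create(name, fn)

cat = Catalog()
check_oid = cat.create('tmp_f', lambda x: x > 0)  # CHECK (tmp_f(x)) stores this OID
cat.drop('tmp_f')
cat.create('tmp_f', lambda x: x > 0)              # same name, new OID
stale = check_oid in cat.by_oid                   # the constraint's OID is gone
```

With `replace()` instead of drop-and-create, the stored OID stays valid, which is why Tom's one-line suggestion fixes Zoltan's test case.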
[ { "msg_contents": "The Perl build (PL/Perl and the Pg interface) now use the configured\ncompiler and flags and none of the MakeMaker stuff. (I've kept the\ninterfaces/perl5/Makefile.PL file in case someone wants to resurrect it\nfor a Win32 build, for instance.) Since doing Perl builds without\nMakeMaker is poorly documented I've reverse-engineered much of this from\nthe MakeMaker source code. It works here, but if it doesn't work\nsomewhere, please let me know.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 28 May 2002 18:57:08 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Perl build fix attempted" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The Perl build (PL/Perl and the Pg interface) now use the configured\n> compiler and flags and none of the MakeMaker stuff. (I've kept the\n> interfaces/perl5/Makefile.PL file in case someone wants to resurrect it\n> for a Win32 build, for instance.) Since doing Perl builds without\n> MakeMaker is poorly documented I've reverse-engineered much of this from\n> the MakeMaker source code. It works here, but if it doesn't work\n> somewhere, please let me know.\n\nOn HPUX 10.20, using perl 5.6.1, plperl builds without complaint but\nSIGSEGV's upon use. AFAIR this worked last time I tried it; any idea\nwhat you might have changed?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 May 2002 19:55:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Perl build fix attempted " }, { "msg_contents": "Tom Lane writes:\n\n> On HPUX 10.20, using perl 5.6.1, plperl builds without complaint but\n> SIGSEGV's upon use. AFAIR this worked last time I tried it; any idea\n> what you might have changed?\n\nI have written it so that the commands that are executed during the build\nshould be the same. 
Can you send me the build output from current and\nfrom before the change (7.2 should work), and the generated Makefile from\nbefore the change?\n\nI suspect that the linkage is wrong now, possibly a PIC problem.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sat, 1 Jun 2002 17:57:50 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Perl build fix attempted " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> On HPUX 10.20, using perl 5.6.1, plperl builds without complaint but\n>> SIGSEGV's upon use. AFAIR this worked last time I tried it; any idea\n>> what you might have changed?\n\n> I have written it so that the commands that are executed during the build\n> should be the same. Can you send me the build output from current and\n> from before the change (7.2 should work), and the generated Makefile from\n> before the change?\n\nAttached. However, I have just found that in fact the 7.2 build does\nnot work either :-( ... which is peculiar, because it worked fine in\nJanuary. (I think. It could be that I tested the changes I made in\nJanuary on a different machine --- can't recall for sure. I am pretty\nsure that I tested plperl on this machine last June.)\n\nI am suspicious of the -D symbols that were used before and are not\nbeing supplied now. (A less appetizing prospect is that gcc code just\nplain won't interoperate with cc-generated code.) The perl installation\nis vanilla 5.6.1 configuration except for requesting that a shared\nlibperl be built.\n\n\t\t\tregards, tom lane\n\n\nCurrent sources \"make\" log:\n\nmake[3]: Entering directory `/home/postgres/pgsql/src/pl/plperl'\ngcc -O1 -g -fPIC -I. -I/opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE -I../../../src/include -c -o plperl.o plperl.c\ngcc -O1 -g -fPIC -I. 
-I/opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE -I../../../src/include -c -o eloglvl.o eloglvl.c\n/opt/perl5.6.1/bin/perl /opt/perl5.6.1/lib/5.6.1/ExtUtils/xsubpp -typemap /opt/perl5.6.1/lib/5.6.1/ExtUtils/typemap SPI.xs >SPI.c\ngcc -O1 -g -fPIC -I. -I/opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE -I../../../src/include -c -o SPI.o SPI.c\nar crs libplperl.a `lorder plperl.o eloglvl.o SPI.o | tsort`\nranlib libplperl.a\n/usr/ccs/bin/ld -b +b /home/postgres/testversion/lib plperl.o eloglvl.o SPI.o -L/usr/local/lib /opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/auto/DynaLoader/DynaLoader.a -L/opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE -lperl -lnsl_s -ldld -lm -lc -lndir -lcrypt -lsec -o libplperl.sl\n/usr/ccs/bin/ld: (Warning) At least one PA 2.0 object file (/opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/auto/DynaLoader/DynaLoader.a(DynaLoader.o)) was detected. The linked output may not run on a PA 1.x system.\nrm -f libplperl.sl.0\nln -s libplperl.sl libplperl.sl.0\nmake[3]: Leaving directory `/home/postgres/pgsql/src/pl/plperl'\n\nREL7_2 \"make\" log:\n\nmake[3]: Entering directory `/home/postgres/REL7_2/pgsql/src/pl/plperl'\nplperl_installdir='$(DESTDIR)/home/postgres/version72/lib' \\\n/opt/perl5.6.1/bin/perl Makefile.PL INC='-I. -I../../../src/include'\nWriting Makefile for plperl\nmake -f Makefile all VPATH=\nmake[4]: Entering directory `/home/postgres/REL7_2/pgsql/src/pl/plperl'\ncc -c -I. -I../../../src/include -D_HPUX_SOURCE -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -Ae -O -DVERSION=\\\"0.10\\\" -DXS_VERSION=\\\"0.10\\\" +z -I/opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE plperl.c\ncpp: \"perl.h\", line 2155: warning 2001: Redefinition of macro DEBUG.\ncc: \"plperl.c\", line 244: warning 558: Empty declaration.\ncc -c -I. 
-I../../../src/include -D_HPUX_SOURCE -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -Ae -O -DVERSION=\\\"0.10\\\" -DXS_VERSION=\\\"0.10\\\" +z -I/opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE eloglvl.c\n/opt/perl5.6.1/bin/perl -I/opt/perl5.6.1/lib/5.6.1/PA-RISC2.0 -I/opt/perl5.6.1/lib/5.6.1 /opt/perl5.6.1/lib/5.6.1/ExtUtils/xsubpp -typemap /opt/perl5.6.1/lib/5.6.1/ExtUtils/typemap SPI.xs > SPI.c\ncc -c -I. -I../../../src/include -D_HPUX_SOURCE -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -Ae -O -DVERSION=\\\"0.10\\\" -DXS_VERSION=\\\"0.10\\\" +z -I/opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE SPI.c\ncpp: \"perl.h\", line 2155: warning 2001: Redefinition of macro DEBUG.\nRunning Mkbootstrap for plperl ()\nchmod 644 plperl.bs\nrm -f blib/arch/auto/plperl/plperl.sl\nLD_RUN_PATH=\"\" ld -b +vnocompatwarnings -L/usr/local/lib plperl.o eloglvl.o SPI.o -L/usr/local/lib /opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/auto/DynaLoader/DynaLoader.a -L/opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE -lperl -lnsl_s -ldld -lm -lc -lndir -lcrypt -lsec -o blib/arch/auto/plperl/plperl.sl \nchmod 755 blib/arch/auto/plperl/plperl.sl\ncp plperl.bs blib/arch/auto/plperl/plperl.bs\nchmod 644 blib/arch/auto/plperl/plperl.bs\nmake[4]: Leaving directory `/home/postgres/REL7_2/pgsql/src/pl/plperl'\nmake[3]: Leaving directory `/home/postgres/REL7_2/pgsql/src/pl/plperl'\n\nREL7_2 generated Makefile:\n\n# This Makefile is for the plperl extension to perl.\n#\n# It was generated automatically by MakeMaker version\n# 5.45 (Revision: 1.222) from the contents of\n# Makefile.PL. Don't edit this file, edit Makefile.PL instead.\n#\n#\tANY CHANGES MADE HERE WILL BE LOST!\n#\n# MakeMaker ARGV: (q[INC=-I. 
-I../../../src/include])\n#\n# MakeMaker Parameters:\n\n#\tNAME => q[plperl]\n#\tOBJECT => q[plperl.o eloglvl.o SPI.o]\n#\tXS => { SPI.xs=>q[SPI.c] }\n#\tdynamic_lib => { OTHERLDFLAGS=>q[ -L/usr/local/lib /opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/auto/DynaLoader/DynaLoader.a -L/opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE -lperl -lnsl_s -ldld -lm -lc -lndir -lcrypt -lsec ] }\n\n# --- MakeMaker post_initialize section:\n\n\n# --- MakeMaker const_config section:\n\n# These definitions are from config.sh (via /opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/Config.pm)\n\n# They may have been overridden via Makefile.PL or on the command line\nAR = ar\nCC = cc\nCCCDLFLAGS = +z\nCCDLFLAGS = -Wl,-E -Wl,-B,deferred \nDLEXT = sl\nDLSRC = dl_hpux.xs\nLD = ld\nLDDLFLAGS = -b +vnocompatwarnings -L/usr/local/lib\nLDFLAGS = -L/usr/local/lib\nLIBC = /lib/libc.sl\nLIB_EXT = .a\nOBJ_EXT = .o\nOSNAME = hpux\nOSVERS = 10.20\nRANLIB = :\nSO = sl\nEXE_EXT = \nFULL_AR = /usr/bin/ar\n\n\n# --- MakeMaker constants section:\nAR_STATIC_ARGS = cr\nNAME = plperl\nDISTNAME = plperl\nNAME_SYM = plperl\nVERSION = 0.10\nVERSION_SYM = 0_10\nXS_VERSION = 0.10\nINST_BIN = blib/bin\nINST_EXE = blib/script\nINST_LIB = blib/lib\nINST_ARCHLIB = blib/arch\nINST_SCRIPT = blib/script\nPREFIX = /opt/perl5.6.1\nINSTALLDIRS = site\nINSTALLPRIVLIB = $(PREFIX)/lib/5.6.1\nINSTALLARCHLIB = $(PREFIX)/lib/5.6.1/PA-RISC2.0\nINSTALLSITELIB = $(PREFIX)/lib/site_perl/5.6.1\nINSTALLSITEARCH = $(PREFIX)/lib/site_perl/5.6.1/PA-RISC2.0\nINSTALLBIN = $(PREFIX)/bin\nINSTALLSCRIPT = $(PREFIX)/bin\nPERL_LIB = /opt/perl5.6.1/lib/5.6.1\nPERL_ARCHLIB = /opt/perl5.6.1/lib/5.6.1/PA-RISC2.0\nSITELIBEXP = /opt/perl5.6.1/lib/site_perl/5.6.1\nSITEARCHEXP = /opt/perl5.6.1/lib/site_perl/5.6.1/PA-RISC2.0\nLIBPERL_A = libperl.a\nFIRST_MAKEFILE = Makefile\nMAKE_APERL_FILE = Makefile.aperl\nPERLMAINCC = $(CC)\nPERL_INC = /opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE\nPERL = /opt/perl5.6.1/bin/perl\nFULLPERL = /opt/perl5.6.1/bin/perl\nFULL_AR = 
/usr/bin/ar\n\nVERSION_MACRO = VERSION\nDEFINE_VERSION = -D$(VERSION_MACRO)=\\\"$(VERSION)\\\"\nXS_VERSION_MACRO = XS_VERSION\nXS_DEFINE_VERSION = -D$(XS_VERSION_MACRO)=\\\"$(XS_VERSION)\\\"\nPERL_MALLOC_DEF = -DPERL_EXTMALLOC_DEF -Dmalloc=Perl_malloc -Dfree=Perl_mfree -Drealloc=Perl_realloc -Dcalloc=Perl_calloc\n\nMAKEMAKER = /opt/perl5.6.1/lib/5.6.1/ExtUtils/MakeMaker.pm\nMM_VERSION = 5.45\n\n# FULLEXT = Pathname for extension directory (eg Foo/Bar/Oracle).\n# BASEEXT = Basename part of FULLEXT. May be just equal FULLEXT. (eg Oracle)\n# ROOTEXT = Directory part of FULLEXT with leading slash (eg /DBD) !!! Deprecated from MM 5.32 !!!\n# PARENT_NAME = NAME without BASEEXT and no trailing :: (eg Foo::Bar)\n# DLBASE = Basename part of dynamic library. May be just equal BASEEXT.\nFULLEXT = plperl\nBASEEXT = plperl\nDLBASE = $(BASEEXT)\nINC = -I. -I../../../src/include\nOBJECT = plperl$(OBJ_EXT) eloglvl$(OBJ_EXT) SPI$(OBJ_EXT)\nLDFROM = $(OBJECT)\nLINKTYPE = dynamic\n\n# Handy lists of source code files:\nXS_FILES= SPI.xs\nC_FILES = SPI.c \\\n\teloglvl.c \\\n\tplperl.c\nO_FILES = SPI.o \\\n\teloglvl.o \\\n\tplperl.o\nH_FILES = eloglvl.h \\\n\tppport.h\nHTMLLIBPODS = \nHTMLSCRIPTPODS = \nMAN1PODS = \nMAN3PODS = \nHTMLEXT = html\nINST_MAN1DIR = blib/man1\nINSTALLMAN1DIR = $(PREFIX)/man/man1\nMAN1EXT = 1\nINST_MAN3DIR = blib/man3\nINSTALLMAN3DIR = $(PREFIX)/man/man3\nMAN3EXT = 3\nPERM_RW = 644\nPERM_RWX = 755\n\n# work around a famous dec-osf make(1) feature(?):\nmakemakerdflt: all\n\n.SUFFIXES: .xs .c .C .cpp .cxx .cc $(OBJ_EXT)\n\n# Nick wanted to get rid of .PRECIOUS. I don't remember why. I seem to recall, that\n# some make implementations will delete the Makefile when we rebuild it. Because\n# we call false(1) when we rebuild it. So make(1) is not completely wrong when it\n# does so. 
Our milage may vary.\n# .PRECIOUS: Makefile # seems to be not necessary anymore\n\n.PHONY: all config static dynamic test linkext manifest\n\n# Where is the Config information that we are using/depend on\nCONFIGDEP = $(PERL_ARCHLIB)/Config.pm $(PERL_INC)/config.h\n\n# Where to put things:\nINST_LIBDIR = $(INST_LIB)\nINST_ARCHLIBDIR = $(INST_ARCHLIB)\n\nINST_AUTODIR = $(INST_LIB)/auto/$(FULLEXT)\nINST_ARCHAUTODIR = $(INST_ARCHLIB)/auto/$(FULLEXT)\n\nINST_STATIC = $(INST_ARCHAUTODIR)/$(BASEEXT)$(LIB_EXT)\nINST_DYNAMIC = $(INST_ARCHAUTODIR)/$(DLBASE).$(DLEXT)\nINST_BOOT = $(INST_ARCHAUTODIR)/$(BASEEXT).bs\n\nEXPORT_LIST = \n\nPERL_ARCHIVE = \n\nPERL_ARCHIVE_AFTER = \n\nTO_INST_PM = \n\nPM_TO_BLIB = \n\n\n# --- MakeMaker tool_autosplit section:\n\n# Usage: $(AUTOSPLITFILE) FileToSplit AutoDirToSplitInto\nAUTOSPLITFILE = $(PERL) \"-I$(PERL_ARCHLIB)\" \"-I$(PERL_LIB)\" -e 'use AutoSplit;autosplit($$ARGV[0], $$ARGV[1], 0, 1, 1) ;'\n\n\n# --- MakeMaker tool_xsubpp section:\n\nXSUBPPDIR = /opt/perl5.6.1/lib/5.6.1/ExtUtils\nXSUBPP = $(XSUBPPDIR)/xsubpp\nXSPROTOARG = \nXSUBPPDEPS = $(XSUBPPDIR)/typemap $(XSUBPP)\nXSUBPPARGS = -typemap $(XSUBPPDIR)/typemap\n\n\n# --- MakeMaker tools_other section:\n\nSHELL = /bin/sh\nCHMOD = chmod\nCP = cp\nLD = ld\nMV = mv\nNOOP = $(SHELL) -c true\nRM_F = rm -f\nRM_RF = rm -rf\nTEST_F = test -f\nTOUCH = touch\nUMASK_NULL = umask 0\nDEV_NULL = > /dev/null 2>&1\n\n# The following is a portable way to say mkdir -p\n# To see which directories are created, change the if 0 to if 1\nMKPATH = $(PERL) -I$(PERL_ARCHLIB) -I$(PERL_LIB) -MExtUtils::Command -e mkpath\n\n# This helps us to minimize the effect of the .exists files A yet\n# better solution would be to have a stable file in the perl\n# distribution with a timestamp of zero. 
But this solution doesn't\n# need any changes to the core distribution and works with older perls\nEQUALIZE_TIMESTAMP = $(PERL) -I$(PERL_ARCHLIB) -I$(PERL_LIB) -MExtUtils::Command -e eqtime\n\n# Here we warn users that an old packlist file was found somewhere,\n# and that they should call some uninstall routine\nWARN_IF_OLD_PACKLIST = $(PERL) -we 'exit unless -f $$ARGV[0];' \\\n-e 'print \"WARNING: I have found an old package in\\n\";' \\\n-e 'print \"\\t$$ARGV[0].\\n\";' \\\n-e 'print \"Please make sure the two installations are not conflicting\\n\";'\n\nUNINST=0\nVERBINST=0\n\nMOD_INSTALL = $(PERL) -I$(INST_LIB) -I$(PERL_LIB) -MExtUtils::Install \\\n-e \"install({@ARGV},'$(VERBINST)',0,'$(UNINST)');\"\n\nDOC_INSTALL = $(PERL) -e '$$\\=\"\\n\\n\";' \\\n-e 'print \"=head2 \", scalar(localtime), \": C<\", shift, \">\", \" L<\", $$arg=shift, \"|\", $$arg, \">\";' \\\n-e 'print \"=over 4\";' \\\n-e 'while (defined($$key = shift) and defined($$val = shift)){print \"=item *\";print \"C<$$key: $$val>\";}' \\\n-e 'print \"=back\";'\n\nUNINSTALL = $(PERL) -MExtUtils::Install \\\n-e 'uninstall($$ARGV[0],1,1); print \"\\nUninstall is deprecated. Please check the\";' \\\n-e 'print \" packlist above carefully.\\n There may be errors. 
Remove the\";' \\\n-e 'print \" appropriate files manually.\\n Sorry for the inconveniences.\\n\"'\n\n\n# --- MakeMaker dist section:\n\nDISTVNAME = $(DISTNAME)-$(VERSION)\nTAR = tar\nTARFLAGS = cvf\nZIP = zip\nZIPFLAGS = -r\nCOMPRESS = gzip --best\nSUFFIX = .gz\nSHAR = shar\nPREOP = @$(NOOP)\nPOSTOP = @$(NOOP)\nTO_UNIX = @$(NOOP)\nCI = ci -u\nRCS_LABEL = rcs -Nv$(VERSION_SYM): -q\nDIST_CP = best\nDIST_DEFAULT = tardist\n\n\n# --- MakeMaker macro section:\n\n\n# --- MakeMaker depend section:\n\n\n# --- MakeMaker cflags section:\n\nCCFLAGS = -D_HPUX_SOURCE -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -Ae\nOPTIMIZE = -O\nPERLTYPE = \nMPOLLUTE = \n\n\n# --- MakeMaker const_loadlibs section:\n\n# plperl might depend on some other libraries:\n# See ExtUtils::Liblist for details\n#\nLD_RUN_PATH = \n\n\n# --- MakeMaker const_cccmd section:\nCCCMD = $(CC) -c $(INC) $(CCFLAGS) $(OPTIMIZE) \\\n\t$(PERLTYPE) $(MPOLLUTE) $(DEFINE_VERSION) \\\n\t$(XS_DEFINE_VERSION)\n\n# --- MakeMaker post_constants section:\n\n\n# --- MakeMaker pasthru section:\n\nPASTHRU = LIB=\"$(LIB)\"\\\n\tLIBPERL_A=\"$(LIBPERL_A)\"\\\n\tLINKTYPE=\"$(LINKTYPE)\"\\\n\tPREFIX=\"$(PREFIX)\"\\\n\tOPTIMIZE=\"$(OPTIMIZE)\"\n\n\n# --- MakeMaker c_o section:\n\n.c$(OBJ_EXT):\n\t$(CCCMD) $(CCCDLFLAGS) -I$(PERL_INC) $(DEFINE) $<\n\n.C$(OBJ_EXT):\n\t$(CCCMD) $(CCCDLFLAGS) -I$(PERL_INC) $(DEFINE) $<\n\n.cpp$(OBJ_EXT):\n\t$(CCCMD) $(CCCDLFLAGS) -I$(PERL_INC) $(DEFINE) $<\n\n.cxx$(OBJ_EXT):\n\t$(CCCMD) $(CCCDLFLAGS) -I$(PERL_INC) $(DEFINE) $<\n\n.cc$(OBJ_EXT):\n\t$(CCCMD) $(CCCDLFLAGS) -I$(PERL_INC) $(DEFINE) $<\n\n\n# --- MakeMaker xs_c section:\n\n.xs.c:\n\t$(PERL) -I$(PERL_ARCHLIB) -I$(PERL_LIB) $(XSUBPP) $(XSPROTOARG) $(XSUBPPARGS) $< > $@\n\n\n# --- MakeMaker xs_o section:\n\n\n# --- MakeMaker top_targets section:\n\n#all ::\tconfig $(INST_PM) subdirs linkext manifypods\n\nall :: pure_all htmlifypods manifypods\n\t@$(NOOP)\n\npure_all :: config pm_to_blib subdirs linkext\n\t@$(NOOP)\n\nsubdirs :: 
$(MYEXTLIB)\n\t@$(NOOP)\n\nconfig :: Makefile $(INST_LIBDIR)/.exists\n\t@$(NOOP)\n\nconfig :: $(INST_ARCHAUTODIR)/.exists\n\t@$(NOOP)\n\nconfig :: $(INST_AUTODIR)/.exists\n\t@$(NOOP)\n\n$(INST_AUTODIR)/.exists :: /opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE/perl.h\n\t@$(MKPATH) $(INST_AUTODIR)\n\t@$(EQUALIZE_TIMESTAMP) /opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE/perl.h $(INST_AUTODIR)/.exists\n\n\t-@$(CHMOD) $(PERM_RWX) $(INST_AUTODIR)\n\n$(INST_LIBDIR)/.exists :: /opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE/perl.h\n\t@$(MKPATH) $(INST_LIBDIR)\n\t@$(EQUALIZE_TIMESTAMP) /opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE/perl.h $(INST_LIBDIR)/.exists\n\n\t-@$(CHMOD) $(PERM_RWX) $(INST_LIBDIR)\n\n$(INST_ARCHAUTODIR)/.exists :: /opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE/perl.h\n\t@$(MKPATH) $(INST_ARCHAUTODIR)\n\t@$(EQUALIZE_TIMESTAMP) /opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE/perl.h $(INST_ARCHAUTODIR)/.exists\n\n\t-@$(CHMOD) $(PERM_RWX) $(INST_ARCHAUTODIR)\n\n$(O_FILES): $(H_FILES)\n\nhelp:\n\tperldoc ExtUtils::MakeMaker\n\nVersion_check:\n\t@$(PERL) -I$(PERL_ARCHLIB) -I$(PERL_LIB) \\\n\t\t-MExtUtils::MakeMaker=Version_check \\\n\t\t-e \"Version_check('$(MM_VERSION)')\"\n\n\n# --- MakeMaker linkext section:\n\nlinkext :: $(LINKTYPE)\n\t@$(NOOP)\n\n\n# --- MakeMaker dlsyms section:\n\n\n# --- MakeMaker dynamic section:\n\n## $(INST_PM) has been moved to the all: target.\n## It remains here for awhile to allow for old usage: \"make dynamic\"\n#dynamic :: Makefile $(INST_DYNAMIC) $(INST_BOOT) $(INST_PM)\ndynamic :: Makefile $(INST_DYNAMIC) $(INST_BOOT)\n\t@$(NOOP)\n\n\n# --- MakeMaker dynamic_bs section:\n\nBOOTSTRAP = plperl.bs\n\n# As Mkbootstrap might not write a file (if none is required)\n# we use touch to prevent make continually trying to remake it.\n# The DynaLoader only reads a non-empty file.\n$(BOOTSTRAP): Makefile $(INST_ARCHAUTODIR)/.exists\n\t@echo \"Running Mkbootstrap for $(NAME) ($(BSLOADLIBS))\"\n\t@$(PERL) \"-I$(PERL_ARCHLIB)\" \"-I$(PERL_LIB)\" 
\\\n\t\t-MExtUtils::Mkbootstrap \\\n\t\t-e \"Mkbootstrap('$(BASEEXT)','$(BSLOADLIBS)');\"\n\t@$(TOUCH) $(BOOTSTRAP)\n\t$(CHMOD) $(PERM_RW) $@\n\n$(INST_BOOT): $(BOOTSTRAP) $(INST_ARCHAUTODIR)/.exists\n\t@rm -rf $(INST_BOOT)\n\t-cp $(BOOTSTRAP) $(INST_BOOT)\n\t$(CHMOD) $(PERM_RW) $@\n\n\n# --- MakeMaker dynamic_lib section:\n\n# This section creates the dynamically loadable $(INST_DYNAMIC)\n# from $(OBJECT) and possibly $(MYEXTLIB).\nARMAYBE = :\nOTHERLDFLAGS = -L/usr/local/lib /opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/auto/DynaLoader/DynaLoader.a -L/opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE -lperl -lnsl_s -ldld -lm -lc -lndir -lcrypt -lsec\n\nINST_DYNAMIC_DEP = \n\n$(INST_DYNAMIC): $(OBJECT) $(MYEXTLIB) $(BOOTSTRAP) $(INST_ARCHAUTODIR)/.exists $(EXPORT_LIST) $(PERL_ARCHIVE) $(PERL_ARCHIVE_AFTER) $(INST_DYNAMIC_DEP)\n\t$(RM_F) $@\n\tLD_RUN_PATH=\"$(LD_RUN_PATH)\" $(LD) $(LDDLFLAGS) $(LDFROM) $(OTHERLDFLAGS) -o $@ $(MYEXTLIB) $(PERL_ARCHIVE) $(LDLOADLIBS) $(PERL_ARCHIVE_AFTER) $(EXPORT_LIST)\n\t$(CHMOD) $(PERM_RWX) $@\n\n\n# --- MakeMaker static section:\n\n## $(INST_PM) has been moved to the all: target.\n## It remains here for awhile to allow for old usage: \"make static\"\n#static :: Makefile $(INST_STATIC) $(INST_PM)\nstatic :: Makefile $(INST_STATIC)\n\t@$(NOOP)\n\n\n# --- MakeMaker static_lib section:\n\n$(INST_STATIC): $(OBJECT) $(MYEXTLIB) $(INST_ARCHAUTODIR)/.exists\n\t$(RM_RF) $@\n\t$(FULL_AR) $(AR_STATIC_ARGS) $@ $(OBJECT) && $(RANLIB) $@\n\t$(CHMOD) $(PERM_RWX) $@\n\t@echo \"$(EXTRALIBS)\" > $(INST_ARCHAUTODIR)/extralibs.ld\n\n\n\n# --- MakeMaker htmlifypods section:\n\nhtmlifypods : pure_all\n\t@$(NOOP)\n\n\n# --- MakeMaker manifypods section:\n\nmanifypods : pure_all\n\t@$(NOOP)\n\n\n# --- MakeMaker processPL section:\n\n\n# --- MakeMaker installbin section:\n\n\n# --- MakeMaker subdirs section:\n\n# none\n\n# --- MakeMaker clean section:\n\n# Delete temporary files but do not touch installed files. 
We don't delete\n# the Makefile here so a later make realclean still has a makefile to use.\n\nclean ::\n\t-rm -rf SPI.c ./blib $(MAKE_APERL_FILE) $(INST_ARCHAUTODIR)/extralibs.all perlmain.c mon.out core core.*perl.*.? *perl.core so_locations pm_to_blib *$(OBJ_EXT) *$(LIB_EXT) perl.exe $(BOOTSTRAP) $(BASEEXT).bso $(BASEEXT).def $(BASEEXT).exp\n\t-mv Makefile Makefile.old $(DEV_NULL)\n\n\n# --- MakeMaker realclean section:\n\n# Delete temporary files (via clean) and also delete installed files\nrealclean purge :: clean\n\trm -rf $(INST_AUTODIR) $(INST_ARCHAUTODIR)\n\trm -f $(INST_DYNAMIC) $(INST_BOOT)\n\trm -f $(INST_STATIC)\n\trm -rf Makefile Makefile.old\n\n\n# --- MakeMaker dist_basics section:\n\ndistclean :: realclean distcheck\n\ndistcheck :\n\t$(PERL) -I$(PERL_ARCHLIB) -I$(PERL_LIB) -MExtUtils::Manifest=fullcheck \\\n\t\t-e fullcheck\n\nskipcheck :\n\t$(PERL) -I$(PERL_ARCHLIB) -I$(PERL_LIB) -MExtUtils::Manifest=skipcheck \\\n\t\t-e skipcheck\n\nmanifest :\n\t$(PERL) -I$(PERL_ARCHLIB) -I$(PERL_LIB) -MExtUtils::Manifest=mkmanifest \\\n\t\t-e mkmanifest\n\nveryclean : realclean\n\t$(RM_F) *~ *.orig */*~ */*.orig\n\n\n# --- MakeMaker dist_core section:\n\ndist : $(DIST_DEFAULT)\n\t@$(PERL) -le 'print \"Warning: Makefile possibly out of date with $$vf\" if ' \\\n\t -e '-e ($$vf=\"$(VERSION_FROM)\") and -M $$vf < -M \"Makefile\";'\n\ntardist : $(DISTVNAME).tar$(SUFFIX)\n\nzipdist : $(DISTVNAME).zip\n\n$(DISTVNAME).tar$(SUFFIX) : distdir\n\t$(PREOP)\n\t$(TO_UNIX)\n\t$(TAR) $(TARFLAGS) $(DISTVNAME).tar $(DISTVNAME)\n\t$(RM_RF) $(DISTVNAME)\n\t$(COMPRESS) $(DISTVNAME).tar\n\t$(POSTOP)\n\n$(DISTVNAME).zip : distdir\n\t$(PREOP)\n\t$(ZIP) $(ZIPFLAGS) $(DISTVNAME).zip $(DISTVNAME)\n\t$(RM_RF) $(DISTVNAME)\n\t$(POSTOP)\n\nuutardist : $(DISTVNAME).tar$(SUFFIX)\n\tuuencode $(DISTVNAME).tar$(SUFFIX) \\\n\t\t$(DISTVNAME).tar$(SUFFIX) > \\\n\t\t$(DISTVNAME).tar$(SUFFIX)_uu\n\nshdist : distdir\n\t$(PREOP)\n\t$(SHAR) $(DISTVNAME) > $(DISTVNAME).shar\n\t$(RM_RF) 
$(DISTVNAME)\n\t$(POSTOP)\n\n\n# --- MakeMaker dist_dir section:\n\ndistdir :\n\t$(RM_RF) $(DISTVNAME)\n\t$(PERL) -I$(PERL_ARCHLIB) -I$(PERL_LIB) -MExtUtils::Manifest=manicopy,maniread \\\n\t\t-e \"manicopy(maniread(),'$(DISTVNAME)', '$(DIST_CP)');\"\n\n\n# --- MakeMaker dist_test section:\n\ndisttest : distdir\n\tcd $(DISTVNAME) && $(PERL) -I$(PERL_ARCHLIB) -I$(PERL_LIB) Makefile.PL\n\tcd $(DISTVNAME) && $(MAKE)\n\tcd $(DISTVNAME) && $(MAKE) test\n\n\n# --- MakeMaker dist_ci section:\n\nci :\n\t$(PERL) -I$(PERL_ARCHLIB) -I$(PERL_LIB) -MExtUtils::Manifest=maniread \\\n\t\t-e \"@all = keys %{ maniread() };\" \\\n\t\t-e 'print(\"Executing $(CI) @all\\n\"); system(\"$(CI) @all\");' \\\n\t\t-e 'print(\"Executing $(RCS_LABEL) ...\\n\"); system(\"$(RCS_LABEL) @all\");'\n\n\n# --- MakeMaker install section:\n\ninstall :: all\n\tcp $(INST_DYNAMIC) $(DESTDIR)/home/postgres/version72/lib\n\n\n# --- MakeMaker force section:\n# Phony target to force checking subdirectories.\nFORCE:\n\t@$(NOOP)\n\n\n# --- MakeMaker perldepend section:\n\nPERL_HDRS = 
\\\n\t$(PERL_INC)/EXTERN.h\t\t\\\n\t$(PERL_INC)/INTERN.h\t\t\\\n\t$(PERL_INC)/XSUB.h\t\t\\\n\t$(PERL_INC)/av.h\t\t\\\n\t$(PERL_INC)/cc_runtime.h\t\\\n\t$(PERL_INC)/config.h\t\t\\\n\t$(PERL_INC)/cop.h\t\t\\\n\t$(PERL_INC)/cv.h\t\t\\\n\t$(PERL_INC)/dosish.h\t\t\\\n\t$(PERL_INC)/embed.h\t\t\\\n\t$(PERL_INC)/embedvar.h\t\t\\\n\t$(PERL_INC)/fakethr.h\t\t\\\n\t$(PERL_INC)/form.h\t\t\\\n\t$(PERL_INC)/gv.h\t\t\\\n\t$(PERL_INC)/handy.h\t\t\\\n\t$(PERL_INC)/hv.h\t\t\\\n\t$(PERL_INC)/intrpvar.h\t\t\\\n\t$(PERL_INC)/iperlsys.h\t\t\\\n\t$(PERL_INC)/keywords.h\t\t\\\n\t$(PERL_INC)/mg.h\t\t\\\n\t$(PERL_INC)/nostdio.h\t\t\\\n\t$(PERL_INC)/objXSUB.h\t\t\\\n\t$(PERL_INC)/op.h\t\t\\\n\t$(PERL_INC)/opcode.h\t\t\\\n\t$(PERL_INC)/opnames.h\t\t\\\n\t$(PERL_INC)/patchlevel.h\t\\\n\t$(PERL_INC)/perl.h\t\t\\\n\t$(PERL_INC)/perlapi.h\t\t\\\n\t$(PERL_INC)/perlio.h\t\t\\\n\t$(PERL_INC)/perlsdio.h\t\t\\\n\t$(PERL_INC)/perlsfio.h\t\t\\\n\t$(PERL_INC)/perlvars.h\t\t\\\n\t$(PERL_INC)/perly.h\t\t\\\n\t$(PERL_INC)/pp.h\t\t\\\n\t$(PERL_INC)/pp_proto.h\t\t\\\n\t$(PERL_INC)/proto.h\t\t\\\n\t$(PERL_INC)/regcomp.h\t\t\\\n\t$(PERL_INC)/regexp.h\t\t\\\n\t$(PERL_INC)/regnodes.h\t\t\\\n\t$(PERL_INC)/scope.h\t\t\\\n\t$(PERL_INC)/sv.h\t\t\\\n\t$(PERL_INC)/thrdvar.h\t\t\\\n\t$(PERL_INC)/thread.h\t\t\\\n\t$(PERL_INC)/unixish.h\t\t\\\n\t$(PERL_INC)/utf8.h\t\t\\\n\t$(PERL_INC)/util.h\t\t\\\n\t$(PERL_INC)/warnings.h\n\n$(OBJECT) : $(PERL_HDRS)\n\nSPI.c : $(XSUBPPDEPS)\n\n\n# --- MakeMaker makefile section:\n\n\n# --- MakeMaker staticmake section:\n\n# --- MakeMaker makeaperl section ---\nMAP_TARGET = perl\nFULLPERL = /opt/perl5.6.1/bin/perl\n\n$(MAP_TARGET) :: static $(MAKE_APERL_FILE)\n\t$(MAKE) -f $(MAKE_APERL_FILE) $@\n\n$(MAKE_APERL_FILE) : $(FIRST_MAKEFILE)\n\t@echo Writing \\\"$(MAKE_APERL_FILE)\\\" for this $(MAP_TARGET)\n\t@$(PERL) -I$(INST_ARCHLIB) -I$(INST_LIB) -I$(PERL_ARCHLIB) -I$(PERL_LIB) \\\n\t\tMakefile.PL DIR= \\\n\t\tMAKEFILE=$(MAKE_APERL_FILE) LINKTYPE=static \\\n\t\tMAKEAPERL=1 NORECURS=1 
CCCDLFLAGS= \\\n\t\tINC='-I. -I../../../src/include'\n\n\n# --- MakeMaker test section:\n\nTEST_VERBOSE=0\nTEST_TYPE=test_$(LINKTYPE)\nTEST_FILE = test.pl\nTEST_FILES = \nTESTDB_SW = -d\n\ntestdb :: testdb_$(LINKTYPE)\n\ntest :: $(TEST_TYPE)\n\t@echo 'No tests defined for $(NAME) extension.'\n\ntest_dynamic :: pure_all\n\ntestdb_dynamic :: pure_all\n\tPERL_DL_NONLAZY=1 $(FULLPERL) $(TESTDB_SW) -I$(INST_ARCHLIB) -I$(INST_LIB) -I$(PERL_ARCHLIB) -I$(PERL_LIB) $(TEST_FILE)\n\ntest_ : test_dynamic\n\ntest_static :: pure_all $(MAP_TARGET)\n\ntestdb_static :: pure_all $(MAP_TARGET)\n\tPERL_DL_NONLAZY=1 ./$(MAP_TARGET) $(TESTDB_SW) -I$(INST_ARCHLIB) -I$(INST_LIB) -I$(PERL_ARCHLIB) -I$(PERL_LIB) $(TEST_FILE)\n\n\n\n# --- MakeMaker ppd section:\n# Creates a PPD (Perl Package Description) for a binary distribution.\nppd:\n\t@$(PERL) -e \"print qq{<SOFTPKG NAME=\\\"plperl\\\" VERSION=\\\"0,10,0,0\\\">\\n}. qq{\\t<TITLE>plperl</TITLE>\\n}. qq{\\t<ABSTRACT></ABSTRACT>\\n}. qq{\\t<AUTHOR></AUTHOR>\\n}. qq{\\t<IMPLEMENTATION>\\n}. qq{\\t\\t<OS NAME=\\\"$(OSNAME)\\\" />\\n}. qq{\\t\\t<ARCHITECTURE NAME=\\\"PA-RISC2.0\\\" />\\n}. qq{\\t\\t<CODEBASE HREF=\\\"\\\" />\\n}. qq{\\t</IMPLEMENTATION>\\n}. qq{</SOFTPKG>\\n}\" > plperl.ppd\n\n# --- MakeMaker pm_to_blib section:\n\npm_to_blib: $(TO_INST_PM)\n\t@$(PERL) \"-I$(INST_ARCHLIB)\" \"-I$(INST_LIB)\" \\\n\t\"-I$(PERL_ARCHLIB)\" \"-I$(PERL_LIB)\" -MExtUtils::Install \\\n -e \"pm_to_blib({qw{$(PM_TO_BLIB)}},'$(INST_LIB)/auto','$(PM_FILTER)')\"\n\t@$(TOUCH) $@\n\n\n# --- MakeMaker selfdocument section:\n\n\n# --- MakeMaker postamble section:\n\n\n# End.\n", "msg_date": "Sun, 09 Jun 2002 23:45:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Perl build fix attempted " }, { "msg_contents": "Further digging: the backtrace from the SIGSEGV looks like\n\n#0 0xc00a02fc in ?? () from /usr/lib/libc.1\n\tmalloc + 1132\n#1 0xc267cbb4 in ?? 
()\n\tPerl_sv_grow + 244\n from /opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE/libperl.sl\n#2 0xc26815b0 in ?? ()\n\tPerl_sv_setpv + 312\n from /opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE/libperl.sl\n#3 0xc26152c0 in ?? ()\n\tS_incpush + 288\n from /opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE/libperl.sl\n#4 0xc2615110 in ?? ()\n\tS_init_perllib + 136\n from /opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE/libperl.sl\n#5 0xc261092c in ?? ()\n\tS_parse_body + 2396\n from /opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE/libperl.sl\n#6 0xc260fc88 in ?? ()\n\tperl_parse + 272\n from /opt/perl5.6.1/lib/5.6.1/PA-RISC2.0/CORE/libperl.sl\n#7 0xc09ec5d0 in plperl_init_interp () at plperl.c:207\n#8 0xc09ec4a4 in plperl_init_all () at plperl.c:175\n#9 0xc09ec684 in plperl_call_handler (fcinfo=0x7b011418) at plperl.c:240\n#10 0x132808 in ExecMakeFunctionResult (fcache=0x402cf108, arguments=0xf000,\n econtext=0x1, isNull=0x3a <Address 0x3a out of bounds>, isDone=0x7b03ba30)\n at execQual.c:825\n\nSince gdb doesn't really know anything about code generated by HP's cc,\nit's difficult to get any more info than this. However, the fact that\nthe crash is inside malloc() suggests a memory clobber very strongly.\nMy best theory at this point is that there's some conflict of ideas\nabout the size of data structures, perhaps triggered by those missing\n-D symbols ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jun 2002 01:37:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Perl build fix attempted " } ]
[ { "msg_contents": "This �feature� does not affect the original version of poly_overlap as only a bounding box test is preformed. I modified poly_overlap in an attempt to improve the preciseness of poly_overlap. The function works when the column is not indexed or when the column is indexed using rtree_gist from the contrib section, but fails when the column is indexed using rtree. Turned out that npts of the polygon retrieved from the table is 0 (the other polygon is a constant and its attributes are correct). I suspect the �feature� might affect other functions that uses polygons->npts like poly_contain. Would anyone happens to know the identity of the �offending� function might be?\n\nTIA \n\nKenneth Chan\n-- \n_______________________________________________\nSign-up for your own FREE Personalized E-mail at Mail.com\nhttp://www.mail.com/?sr=signup\n\n", "msg_date": "Tue, 28 May 2002 13:06:58 -0500", "msg_from": "\"Kenneth Chan\" <kkchan@technologist.com>", "msg_from_op": true, "msg_subject": "Polygons passed to poly_overlap have 0 pts when column is indexed\n\tusing rtree" }, { "msg_contents": "\"Kenneth Chan\" <kkchan@technologist.com> writes:\n> ... Turned out that npts of the\n> polygon retrieved from the table is 0 (the other polygon is a constant\n> and its attributes are correct). I suspect the �feature� might\n> affect other functions that uses polygons->npts like poly_contain.\n> Would anyone happens to know the identity of the �offending�\n> function might be? TIA\n\nNo, but if you can post an example demonstrating the problem, I'm sure\nwe can find it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 May 2002 15:06:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Polygons passed to poly_overlap have 0 pts when column is indexed\n\tusing rtree" } ]
[ { "msg_contents": "With 7.1.3, large indexes with null values allowed in one or more of the\ncolumns would cause crashes. (I have definitely seen this happen).\n\nHere is a project that mentions repairs:\nhttp://postgis.refractions.net/news/index.php?file=20020425.data\n\nHave repairs been effected in 7.2? Are they delayed until 7.3?\n", "msg_date": "Tue, 28 May 2002 12:06:49 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Null values in indexes" }, { "msg_contents": "Not really a followup,but this has been on my mind for some time :\n\n\nHow hard would it be to _not_ include nulls in indexes \nas they are not used anyway. \n\n(IIRC postgres initially did not include nulls, but it\n was added for multi-key btree indexes)\n\nThis would be a rough approximation of partial indexes \nif used together with functions, i.e. the optimiser \nwould immediately realize that\n\nWHERE my_ifunc(partfield) = 'header'\n\ncan use index on my_ifunc(partfield)\n\nbut my_ifunc() has an easy way of skipping indexing \noverhaed for non-interesting fields by returning NULL for them.\n\nThe following seems to prove thet there is currently \nno use of putting NULLS in a single-field index:\n\n--------------------------\n\nhannu=# create table itest (i int, n int);\nCREATE\nhannu=# create index itest_n_idx on itest(n);\nCREATE\n\nthen I inserted 16k tuples\n\nhannu=# insert into itest(i) select i+2 from itest;\nINSERT 0 2\nhannu=# insert into itest(i) select i+4 from itest;\nINSERT 0 4\nhannu=# insert into itest(i) select i+8 from itest;\nINSERT 0 1024\n...\nhannu=# insert into itest(i) select i+2048 from itest;\nINSERT 0 2048\nhannu=# insert into itest(i) select i+4096 from itest;\nINSERT 0 4096\nhannu=# insert into itest(i) select i+8192 from itest;\nUPDATE 16380\n\nset most of n's to is but left 4 as NULLs\n\nhannu=# update itest set n=1 where i>1;\nUPDATE 16383\n\nand vacuumed just in case \n\nhannu=# vacuum analyze itest;\nVACUUM\n\nnow 
selects for real value do use index\n\nhannu=# explain select * from itest where n = 7;\nNOTICE: QUERY PLAN:\n\nIndex Scan using itest_n_idx on itest (cost=0.00..2.01 rows=1 width=8)\n\nbut IS NULL does not.\n\nhannu=# explain select * from itest where n is null;\nNOTICE: QUERY PLAN:\n\nSeq Scan on itest (cost=0.00..341.84 rows=16 width=8)\n\nEXPLAIN\n\n------------------------\nHannu\n\n\n\n", "msg_date": "29 May 2002 00:10:15 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Null values in indexes" }, { "msg_contents": "\"Dann Corbit\" <DCorbit@connx.com> writes:\n> With 7.1.3, large indexes with null values allowed in one or more of the\n> columns would cause crashes. (I have definitely seen this happen).\n> Have repairs been effected in 7.2?\n\nSubmit a test case and we'll tell you ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 May 2002 15:24:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Null values in indexes " }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> How hard would it be to _not_ include nulls in indexes \n> as they are not used anyway. \n\nSeems to me that would be a step backwards.\n\nWhat should someday happen is to make IS NULL an indexable operator.\nThe fact that we haven't got around to doing so is not a reason to\nrip out the underpinnings for it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 May 2002 18:32:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Null values in indexes " }, { "msg_contents": "Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > How hard would it be to _not_ include nulls in indexes\n> > as they are not used anyway.\n>\n> Seems to me that would be a step backwards.\n\n It would cause multi-key indexes being unusable for partial\n key lookup. Imagine you have a key over (a, b, c) and query\n with WHERE a = 1 AND b = 2. 
This query cannot use the index\n if a NULL value in c would cause the index entry to be\n suppressed.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Wed, 29 May 2002 10:38:37 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Null values in indexes" }, { "msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n> Tom Lane wrote:\n>> Hannu Krosing <hannu@tm.ee> writes:\n> How hard would it be to _not_ include nulls in indexes\n> as they are not used anyway.\n>> \n>> Seems to me that would be a step backwards.\n\n> It would cause multi-key indexes beeing unusable for partial\n> key lookup. Imagine you have a key over (a, b, c) and query\n> with WHERE a = 1 AND b = 2. This query cannot use the index\n> if a NULL value in c would cause the index entry to be\n> suppressed.\n\nUrgh ... that means GiST indexing is actually broken, because GiST\ncurrently handles multicolumns but not nulls. AFAIR the planner\nwill try to use partial qualification on any multicolumn index...\nit had better avoid doing so for non-null-capable AMs.\n\nAlternatively, we could fix GiST to support nulls. Oleg, Teodor:\nhow far away might that be?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 May 2002 12:07:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Null values in indexes " } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Tuesday, May 28, 2002 5:37 PM\n> To: Andrew Sullivan\n> Cc: pgsql-general@postgresql.org; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] [GENERAL] Re : Solaris Performance - 64 bit\n> puzzle\n> \n> \n> Andrew Sullivan wrote:\n> > On Mon, May 27, 2002 at 09:00:43PM -0400, Bruce Momjian wrote:\n> > > \n> > > TODO updated:\n> > > \n> > > \tAdd BSD-licensed qsort() for 32-bit Solaris \n> > \n> > I've received an email noting that someone else ran a test program\n> > with the 64 bit library, and had just as bad performance as the 32\n> > bit one. I haven't had a chance to look at it yet, but it suggests\n> > that the result is still inconclusive. Maybe, if just one more fire\n> > goes out here, I can look at it this week.\n> \n> TODO reverted to be:\n> \n> \tAdd BSD-licensed qsort() for Solaris\n> \n> My guess is that your test case didn't tickle the bug.\n\nI am the author of several special sort functions [I wrote the sorting\nchapter in this book: http://users.powernet.co.uk/eton/unleashed/].\n\nI would be happy to contribute sort routines to the project under the\nBerkeley style license.\n\nLikely (if C++ is allowed) a large efficiency gain can be had through\nthe use of templates.\n", "msg_date": "Tue, 28 May 2002 17:50:21 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Re : Solaris Performance - 64 bit puzzle" }, { "msg_contents": "Dann Corbit wrote:\n> > TODO reverted to be:\n> > \n> > \tAdd BSD-licensed qsort() for Solaris\n> > \n> > My guess is that your test case didn't tickle the bug.\n> \n> I am the author of several special sort functions [I wrote the sorting\n> chapter in this book: http://users.powernet.co.uk/eton/unleashed/].\n> \n> I would be happy to contribute sort routines to the project under the\n> Berkeley style license.\n> \n> Likely (if C++ is allowed) a large 
efficiency gain can be had through\n> the use of templates.\n\nThanks. I think we will go with Free/NetBSD code because they are\nalready tested. Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jun 2002 13:35:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Re : Solaris Performance - 64 bit puzzle" } ]
[ { "msg_contents": "> ... Turned out that npts of the\n> polygon retrieved from the table is 0 (the other polygon is a constant\n> and its attributes are correct). I suspect the �feature� might\n> affect other functions that uses polygons->npts like poly_contain.\n> Would anyone happens to know the identity of the �offending�\n> function might be? TIA\n\nIt appears that the issue is not rtree itself, but the rt_poly_union\nand rt_poly_inter functions, which produce \"polygons\" that have only\nbounding boxes. Not sure whether that should be considered erroneous\nor not. The dummy polygons are evidently used as internal node keys\nin the rtree.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 May 2002 23:29:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Polygons passed to poly_overlap have 0 pts when column is indexed\n\tusing rtree" } ]
[ { "msg_contents": "\nHello all,\n\nsometimes I have got a problem with one of my tables.\n\nIt has got 25 columns, one unique key and an additional index.\nThere are about 368.000 rows in this particular table.\n\nSometimes after running for several days the table is corrupted.\n\nIf the tablename is \"tablename\" and the column-names are \"a\"-\"z\"\nI can for example do a \"select a,b,c from tablename where a like '0101%'\"\nwithout problem. But I can't do a \"select a,b,c,x\" from tablename where a\nlike '0101%'\"\n\nRunning a \"vacuum tablename\" didn't help. Dropping and recreating the\nindexes didn't help,\nthen finally a \"vacuum full tablename\" did the job and I could do all\nselects as usual\nagain.\n\nI had this the last time about three days ago, using 7.2-1-72.\nI updated to 7.2.1 only two days ago. Do you know if this problem is solved\nin the\nlatest release allready?\n\nBTW: Using postgresql for several years I noticed that it was worth dropping\nand recreating\nthe indexes once per week. Do you still recommend this? Before, I had a\nmajor performance-\nloss if I didn't recreate the indexes once per week.\n\n\n\nMit freundlichem Gru�\n\nHenrik Steffen\nGesch�ftsf�hrer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: steffen@topconcepts.com Fax. 
+49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n\n", "msg_date": "Wed, 29 May 2002 10:56:56 +0200", "msg_from": "\"Henrik Steffen\" <steffen@city-map.de>", "msg_from_op": true, "msg_subject": "Backend died abnormally" }, { "msg_contents": "\"Henrik Steffen\" <steffen@city-map.de> writes:\n> sometimes I have got a problem with one of my tables.\n\nIf you are getting a backend coredump, please provide a stack backtrace.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 May 2002 08:47:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Backend died abnormally " } ]
[ { "msg_contents": "Hmmm, the very simple example didn't expose the issue probably because it has only two rows of data and the index is not being used. \n\nFor poly_contain, I guess one alternative is to use the result of the box_contain test and not proceed to the point_inside test if one of the poly->npts == 0.\n\nKen\n\n----- Original Message -----\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Tue, 28 May 2002 23:29:08 -0400\nTo: \"Kenneth Chan\" <kkchan@technologist.com>\nSubject: Re: [HACKERS] Polygons passed to poly_overlap have 0 pts when column is indexed using rtree \n\n> ... Turned out that npts of the\n> polygon retrieved from the table is 0 (the other polygon is a constant\n> and its attributes are correct). I suspect the �feature� might\n> affect other functions that uses polygons->npts like poly_contain.\n> Would anyone happens to know the identity of the �offending�\n> function might be? TIA\n\nIt appears that the issue is not rtree itself, but the rt_poly_union\nand rt_poly_inter functions, which produce \"polygons\" that have only\nbounding boxes. Not sure whether that should be considered erroneous\nor not. The dummy polygons are evidently used as internal node keys\nin the rtree.\n\n\t\t\tregards, tom lane\n\n\n-- \n_______________________________________________\nSign-up for your own FREE Personalized E-mail at Mail.com\nhttp://www.mail.com/?sr=signup\n\n", "msg_date": "Wed, 29 May 2002 09:44:53 -0500", "msg_from": "\"Kenneth Chan\" <kkchan@technologist.com>", "msg_from_op": true, "msg_subject": "Re: Polygons passed to poly_overlap have 0 pts when" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Wednesday, May 29, 2002 9:07 AM\n> To: Jan Wieck\n> Cc: Oleg Bartunov; Teodor Sigaev; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Null values in indexes \n> \n> \n> Jan Wieck <janwieck@yahoo.com> writes:\n> > Tom Lane wrote:\n> >> Hannu Krosing <hannu@tm.ee> writes:\n> > How hard would it be to _not_ include nulls in indexes\n> > as they are not used anyway.\n> >> \n> >> Seems to me that would be a step backwards.\n> \n> > It would cause multi-key indexes beeing unusable for partial\n> > key lookup. Imagine you have a key over (a, b, c) and query\n> > with WHERE a = 1 AND b = 2. This query cannot use the index\n> > if a NULL value in c would cause the index entry to be\n> > suppressed.\n> \n> Urgh ... that means GiST indexing is actually broken, because GiST\n> currently handles multicolumns but not nulls. AFAIR the planner\n> will try to use partial qualification on any multicolumn index...\n> it had better avoid doing so for non-null-capable AMs.\n> \n> Alternatively, we could fix GiST to support nulls. Oleg, Teodor:\n> how far away might that be?\n\nThe PostGIS people have already fixed it. However, they may not be\nwilling to contribute the patch. On the other hand, I think it would be\nin their interest, since the source code trees will fork if they don't\nand they will have trouble staying in synch with PostgreSQL\ndevelopments. 
(See the 7.2 index project here:\nhttp://postgis.refractions.net/ \nhttp://postgis.refractions.net/news/index.php?file=20020425.data\n)\n\nIf they are not willing to commit a patch, I suspect that they will at\nleast tell you what they had to do to fix it and it could be performed\ninternally.\n", "msg_date": "Wed, 29 May 2002 10:00:03 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Null values in indexes " }, { "msg_contents": "Just to clarify:\n\n- With respect to null safety. My understanding is the Oleg and Teodor\nput support for nulls into the GiST indexing prior to the 7.2 release,\nso 7.2 GiST should already be null-safe. Our project was just to take\nour GiST bindings in PostGIS and update them to the new 7.2 GiST API, we\ndid no work on null-safety, null-safety was just one of the side\nbenefits we received as a result of updating our code to the 7.2 GiST\nindexes.\n\n- With respect to code contribution. If we find ourselves making changes\nto the mainline PgSQL distribution we will always submit back. End of\nstory. All our changes have been to PostGIS itself, with the aim of\nsupporting 7.2. 7.2 rocks, we love it. :)\n\n- There is one outstanding bug which we identified and Oleg and Teodor\nfixed, but it is to the code in contrib/rtree, not in the mainline, and\nOleg and Teodor have already submitted that patch to Bruce. 
I believe\nthere was some unresolved discussion regarding whether to cut a 7.2.2\nrelease including that patch and a few other housekeeping items.\n\nPaul\n\n\nDann Corbit wrote:\n> \n> > -----Original Message-----\n> > From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> > Sent: Wednesday, May 29, 2002 9:07 AM\n> > To: Jan Wieck\n> > Cc: Oleg Bartunov; Teodor Sigaev; pgsql-hackers@postgresql.org\n> > Subject: Re: [HACKERS] Null values in indexes\n> >\n> >\n> > Jan Wieck <janwieck@yahoo.com> writes:\n> > > Tom Lane wrote:\n> > >> Hannu Krosing <hannu@tm.ee> writes:\n> > > How hard would it be to _not_ include nulls in indexes\n> > > as they are not used anyway.\n> > >>\n> > >> Seems to me that would be a step backwards.\n> >\n> > > It would cause multi-key indexes beeing unusable for partial\n> > > key lookup. Imagine you have a key over (a, b, c) and query\n> > > with WHERE a = 1 AND b = 2. This query cannot use the index\n> > > if a NULL value in c would cause the index entry to be\n> > > suppressed.\n> >\n> > Urgh ... that means GiST indexing is actually broken, because GiST\n> > currently handles multicolumns but not nulls. AFAIR the planner\n> > will try to use partial qualification on any multicolumn index...\n> > it had better avoid doing so for non-null-capable AMs.\n> >\n> > Alternatively, we could fix GiST to support nulls. Oleg, Teodor:\n> > how far away might that be?\n> \n> The PostGIS people have already fixed it. However, they may not be\n> willing to contribute the patch. On the other hand, I think it would be\n> in their interest, since the source code trees will fork if they don't\n> and they will have trouble staying in synch with PostgreSQL\n> developments. 
(See the 7.2 index project here:\n> http://postgis.refractions.net/\n> http://postgis.refractions.net/news/index.php?file=20020425.data\n> )\n> \n> If they are not willing to commit a patch, I suspect that they will at\n> least tell you what they had to do to fix it and it could be performed\n> internally.\n\n-- \n __\n /\n | Paul Ramsey\n | Refractions Research\n | Email: pramsey@refractions.net\n | Phone: (250) 885-0632\n \\_\n", "msg_date": "Wed, 29 May 2002 10:16:06 -0700", "msg_from": "Paul Ramsey <pramsey@refractions.net>", "msg_from_op": false, "msg_subject": "Re: Null values in indexes" }, { "msg_contents": ">> Urgh ... that means GiST indexing is actually broken, because GiST\n>> currently handles multicolumns but not nulls.\n\nActually, it appears that 7.2 GiST does handle NULLs in columns after\nthe first one, which I think is enough to avoid the problem Jan\nmentioned. The boolean column pg_am.amindexnulls is not really\nsufficient to describe this behavior accurately. Looking at current\nuses it seems correct to leave it set FALSE for GiST.\n\nIn short: false alarm; the 7.2 code is okay as-is, at least on this\nparticular point.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 May 2002 13:21:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Null values in indexes " }, { "msg_contents": "Hmm... I think there is some confusion here.\n\nOleg and Teodor updated the GiST indexing to be null safe for postgresql 7.2. 
\nThe changes we made to PostGIS were just to allow our spacial indexing \nsupport functions to work with the changes made in the actual GiST indexing \ncode (the GiST interface changed somewhat from postgresql 7.1 -> 7.2).\n\nAnd for the record, I'm confident that we would submit a patch for postgresql \nif something like this did come up.\n\nChris Hodgson\n\nDann Corbit <DCorbit@connx.com> said:\n\n> > -----Original Message-----\n> > From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> > Sent: Wednesday, May 29, 2002 9:07 AM\n> > To: Jan Wieck\n> > Cc: Oleg Bartunov; Teodor Sigaev; pgsql-hackers@postgresql.org\n> > Subject: Re: [HACKERS] Null values in indexes \n> > \n> > \n> > Jan Wieck <janwieck@yahoo.com> writes:\n> > > Tom Lane wrote:\n> > >> Hannu Krosing <hannu@tm.ee> writes:\n> > > How hard would it be to _not_ include nulls in indexes\n> > > as they are not used anyway.\n> > >> \n> > >> Seems to me that would be a step backwards.\n> > \n> > > It would cause multi-key indexes beeing unusable for partial\n> > > key lookup. Imagine you have a key over (a, b, c) and query\n> > > with WHERE a = 1 AND b = 2. This query cannot use the index\n> > > if a NULL value in c would cause the index entry to be\n> > > suppressed.\n> > \n> > Urgh ... that means GiST indexing is actually broken, because GiST\n> > currently handles multicolumns but not nulls. AFAIR the planner\n> > will try to use partial qualification on any multicolumn index...\n> > it had better avoid doing so for non-null-capable AMs.\n> > \n> > Alternatively, we could fix GiST to support nulls. Oleg, Teodor:\n> > how far away might that be?\n> \n> The PostGIS people have already fixed it. However, they may not be\n> willing to contribute the patch. On the other hand, I think it would be\n> in their interest, since the source code trees will fork if they don't\n> and they will have trouble staying in synch with PostgreSQL\n> developments. 
(See the 7.2 index project here:\n> http://postgis.refractions.net/ \n> http://postgis.refractions.net/news/index.php?file=20020425.data\n> )\n> \n> If they are not willing to commit a patch, I suspect that they will at\n> least tell you what they had to do to fix it and it could be performed\n> internally.\n> \n> \n\n\n\n-- \n\n\n\n", "msg_date": "Wed, 29 May 2002 17:22:16 -0000", "msg_from": "\"Chris Hodgson\" <chodgson@refractions.net>", "msg_from_op": false, "msg_subject": "Re: Null values in indexes " }, { "msg_contents": "Glad to hear GiST in 7.2 isn't broken :-)\nWe miss the topic, what was the problem ?\nDo we need to fix GiST code for 7.3 ?\n\nproposal for null-safe GiST interface is available\nhttp://fts.postgresql.org/db/mw/msg.html?mid=1028327\nand discussion\nhttp://fts.postgresql.org/db/mw/msg.html?mid=1025848\n\n\n\tRegards,\n\t\tOleg\nOn Wed, 29 May 2002, Tom Lane wrote:\n\n> >> Urgh ... that means GiST indexing is actually broken, because GiST\n> >> currently handles multicolumns but not nulls.\n>\n> Actually, it appears that 7.2 GiST does handle NULLs in columns after\n> the first one, which I think is enough to avoid the problem Jan\n> mentioned. The boolean column pg_am.amindexnulls is not really\n> sufficient to describe this behavior accurately. 
Looking at current\n> uses it seems correct to leave it set FALSE for GiST.\n>\n> In short: false alarm; the 7.2 code is okay as-is, at least on this\n> particular point.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Thu, 30 May 2002 13:13:13 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Null values in indexes " }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> Do we need to fix GiST code for 7.3 ?\n\nNo, I think it's fine. I had forgotten that old discussion ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 May 2002 09:57:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Null values in indexes " } ]
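Jan's partial-key argument quoted above can be demonstrated with a toy model. The sketch below is a hedged illustration in plain Python (a list of (a, b, c) tuples standing in for a composite index, with None modeling SQL NULL); it is not how the backend stores index entries, and the helper names are invented for the example:

```python
# Rows of a table with a composite index on (a, b, c); None models SQL NULL.
rows = [(1, 2, None), (1, 2, 7), (3, 4, 5)]

def build_index(rows, include_nulls=True):
    """Build a toy 'index'; optionally suppress entries containing NULL."""
    entries = [r for r in rows if include_nulls or None not in r]
    # Order None after real values, mimicking an index that sorts NULLs last.
    return sorted(entries,
                  key=lambda r: [(v is None, 0 if v is None else v) for v in r])

def partial_lookup(index, a, b):
    """WHERE a = ? AND b = ?  (a partial-key probe that ignores column c)."""
    return [r for r in index if r[0] == a and r[1] == b]
```

With NULLs kept, the probe for (1, 2) returns both matching rows; if entries with a NULL in c were suppressed, the same probe would silently miss (1, 2, NULL), which is exactly Jan's objection to dropping NULLs from multi-key indexes.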
[ { "msg_contents": "Hello.\n\nWhatever happened to that patch that Paul Vixie sent in, that was supposed\nto be applied? Why is it still in the TODO list? I believe Bruce was the\none who handled it.\n\nOla\n\n-- \nOla Sundell\nola@miranda.org - olas@wiw.org\nhttp://miranda.org/~ola\n\n", "msg_date": "Thu, 30 May 2002 02:45:20 +0200 (CEST)", "msg_from": "Ola Sundell <ola@miranda.org>", "msg_from_op": true, "msg_subject": "ipv6" }, { "msg_contents": "Ola Sundell <ola@miranda.org> writes:\n> Whatever happened to that patch that Paul Vixie sent in, that was supposed\n> to be applied? Why is it still in the TODO list?\n\nBecause Paul hasn't fixed the outstanding problems with it: as\nsubmitted, it reverted the painfully-agreed-to formatting behavior\nfor the inet datatypes.\n\nThis could easily be dealt with (IMHO) by re-patching as we'd done\nfor 7.1. But Paul had an additional agenda item: he wanted to see\nour code restructured as a pure wrapper around the published libbind\nsubroutines. I have no problem with that --- as long as someone\nelse does the legwork. So far, neither Paul nor anyone else has.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 May 2002 23:58:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ipv6 " }, { "msg_contents": "Tom Lane wrote:\n> Ola Sundell <ola@miranda.org> writes:\n> > Whatever happened to that patch that Paul Vixie sent in, that was supposed\n> > to be applied? Why is it still in the TODO list?\n> \n> Because Paul hasn't fixed the outstanding problems with it: as\n> submitted, it reverted the painfully-agreed-to formatting behavior\n> for the inet datatypes.\n> \n> This could easily be dealt with (IMHO) by re-patching as we'd done\n> for 7.1. But Paul had an additional agenda item: he wanted to see\n> our code restructured as a pure wrapper around the published libbind\n> subroutines. I have no problem with that --- as long as someone\n> else does the legwork. 
So far, neither Paul nor anyone else has.\n\nYes, it is still in my review mailbox. Not sure how to handle it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jun 2002 19:48:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ipv6" } ]
[ { "msg_contents": "Hmmm, the really simple example didn't work probably because there are not enough rows in the test tables for the system to use the index instead of doing a sequential scan.\n\nFor poly_contained and other functions that use npts, maybe the code can take into account that npts may be 0 by doing the bounding box test and proceed to more accurate tests only if npts > 0. \n\nKen.\n\n\n----- Original Message -----\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Tue, 28 May 2002 23:29:08 -0400\nTo: \"Kenneth Chan\" <kkchan@technologist.com>\nSubject: Re: [HACKERS] Polygons passed to poly_overlap have 0 pts when column is indexed using rtree \n\n> ... Turned out that npts of the\n> polygon retrieved from the table is 0 (the other polygon is a constant\n> and its attributes are correct). I suspect the \"feature\" might\n> affect other functions that use polygons->npts like poly_contain.\n> Would anyone happen to know the identity of the \"offending\"\n> function might be? TIA\n\nIt appears that the issue is not rtree itself, but the rt_poly_union\nand rt_poly_inter functions, which produce \"polygons\" that have only\nbounding boxes. Not sure whether that should be considered erroneous\nor not. The dummy polygons are evidently used as internal node keys\nin the rtree.\n\n\n-- \n_______________________________________________\nSign-up for your own FREE Personalized E-mail at Mail.com\nhttp://www.mail.com/?sr=signup\n\n", "msg_date": "Wed, 29 May 2002 20:40:15 -0500", "msg_from": "\"Kenneth Chan\" <kkchan@technologist.com>", "msg_from_op": true, "msg_subject": "Re: Polygons passed to poly_overlap have 0 pts when" } ]
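Ken's proposed guard, running the cheap bounding-box test first and letting it stand as the final answer whenever a bbox-only dummy polygon (npts == 0) is involved, can be sketched outside the server. This is a hedged illustration in plain Python rather than the backend's C geometry code; the Box and Polygon types and the exact_overlap placeholder are simplifications invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Box:
    x1: float
    y1: float
    x2: float
    y2: float  # lower-left (x1, y1) and upper-right (x2, y2) corners

@dataclass
class Polygon:
    boundbox: Box
    points: list = field(default_factory=list)  # empty for bbox-only dummy keys

def box_overlap(a, b):
    """True when the two axis-aligned boxes intersect."""
    return a.x1 <= b.x2 and b.x1 <= a.x2 and a.y1 <= b.y2 and b.y1 <= a.y2

def exact_overlap(p1, p2):
    # Placeholder for the precise point-based test; a real implementation
    # would check edge intersections and containment.  Here it reuses boxes.
    return box_overlap(p1.boundbox, p2.boundbox)

def poly_overlap(p1, p2):
    if not box_overlap(p1.boundbox, p2.boundbox):
        return False              # boxes disjoint: definitely no overlap
    if not p1.points or not p2.points:
        return True               # bbox-only dummy involved: bbox answer is all we have
    return exact_overlap(p1, p2)  # both carry points: do the accurate test
```

The point of the sketch is the middle branch: when one argument is an index-internal dummy with no points, the function falls back to the bounding-box answer instead of dereferencing an empty point array.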
[ { "msg_contents": "What does everyone think about adding self-tuning histograms\nto PostgreSQL?\n\nBriefly, a self-tuning histogram is one that is constructed without\nlooking at the data in the attribute; it uses the information provided\nby the query executor to adjust a default set of histograms that are\ncreated when the table is defined. Thus, the histograms automatically\nadapt to the data that is stored in the table -- as the data\ndistribution changes, so do the histograms.\n\nHistogram refinement can take place in two possible ways: online\n(as queries are executed, the histograms are updated immediately),\nor offline (the necessary data is written to a log after every\nquery, which is processed on a regular basis to refine the\nhistograms).\n\nThe paper I've looked at on this topic is \"Self-tuning\nHistograms: Building Histograms Without Looking at Data\", by\nAboulnaga and Shaudhuri (1999), which you can find here:\nhttp://citeseer.nj.nec.com/255752.html -- please refer to\nit for lots more information on this technique.\n\nI think that ST histograms would be useful because:\n\n(1) It would make it easier for us to implement multi-dimensional\n histograms (for more info, see the Aboulnaga and Shaudhuri).\n Since no commercial system currently implements them, I\n think this would be a neat thing to have.\n\n(2) I'm unsure of the accuracy of building histograms through\n statistical sampling. My guess would be that ST histograms\n would achieve better accuracy when it matters most -- i.e.\n on those tables accessed the most often (since those are\n the tables for which the most histogram refinement is done).\n\n(3) The need for manual DB maintenance through VACUUM and\n ANALYZE is problematic. This technique would be a step in\n the direction of removing that requirement. 
Self-tuning\n databases are something a lot of industry players (IBM,\n Microsoft, others) are working toward.\n\n(4) They scale well -- refining histograms on a 100 million\n tuple table is no different than on a 100 tuple table.\n\nThere are some disadvantages, however:\n\n(1) Reproducibility: At the moment, the system's performance\n only changes when the data is changed, or the DBA makes a\n configuration change. With this (and other \"self-tuning\"\n techniques, which are becoming very popular among\n commercial databases), the system can change the state of\n the system without the intervention of the DBA. While I'd\n hope that those changes are for the better (i.e. histograms\n eventually converging toward \"perfect\" accuracy), that\n won't always be the case. I don't really see a way around\n this, other than letting the DBA disable ST histograms\n when debugging problems.\n\n(2) Performance: As Aboulnaga and Shaudhuri point out, online\n histogram refinement can become a point of contention.\n Obviously, we want to avoid that. 
I think online refinement\n is still possible as long as we:\n\n (a) don't block waiting for locks: try to acquire the\n necessary locks to refine the histograms,\n immediately give up if not possible\n\n (b) delay histogram refinement so it doesn't interfere\n with the user: for example, store histogram data\n locally and only update the system catalogs when\n the backend is idle\n\n (c) only update the histogram when major changes can\n be applied: skip trivial refinements (or store those\n in the offline log for later processing)\n\n (d) allow the DBA to choose between offline and online\n histogram refinement (assuming we choose to implement\n both)\n\nAny comments?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Wed, 29 May 2002 23:05:18 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "self-tuning histograms" }, { "msg_contents": "Neil,\n\nI've also been thinking about this but haven't had time to collect my\nthoughts.\n\nOn Wed, 29 May 2002, Neil Conway wrote:\n\n> Histogram refinement can take place in two possible ways: online\n> (as queries are executed, the histograms are updated immediately),\n> or offline (the necessary data is written to a log after every\n> query, which is processed on a regular basis to refine the\n> histograms).\n\nI would have thought that offline would have meant that the histogram\nrefinement could be run at the DBA's leisure.\n\n> There are some disadvantages, however:\n> \n> (1) Reproduceability: At the moment, the system's performance\n> only changes when the data is changed, or the DBA makes a\n> configuration change. With this (and other \"self-tuning\"\n> techniques, which are becoming very popular among\n> commercial databases), the system can change the state of\n> the system without the intervention of the DBA. While I'd\n> hope that those changes are for the better (i.e. 
histograms\n> eventually converging toward \"perfect\" accuracy), that\n> won't always be the case. I don't really see a way around\n> this, other than letting the DBA disable ST histograms\n> when debugging problems.\n\nSelf-tuning would have to be optional.\n\n> (2) Performance: As Aboulnaga and Shaudhuri point out, online\n> histogram refinement can become a point of contention.\n> Obviously, we want to avoid that. I think online refinement\n> is still possible as long as we:\n> \n> (a) don't block waiting for locks: try to acquire the\n> necessary locks to refine the histograms,\n> immediately give up if not possible\n> \n> (b) delay histogram refinement so it doesn't interfere\n> with the user: for example, store histogram data\n> locally and only update the system catalogs when\n> the backend is idle\n\nThis should be fine as long as the refinement system works through MVCC.\n\nThere is another consideration. If a database is using histogram\nrefinement then the 'base' data it works on must be accurate. If not,\nrefinement would compound the inaccuracy of the histogram. 
As such,\nANALYZE would have to scan the whole table (if/when run), COPY would have\nto update the statistics, etc.\n\nGavin\n\n\n", "msg_date": "Thu, 30 May 2002 13:52:08 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: self-tuning histograms" }, { "msg_contents": "On Thu, 30 May 2002 13:52:08 +1000 (EST)\n\"Gavin Sherry\" <swm@linuxworld.com.au> wrote:\n> On Wed, 29 May 2002, Neil Conway wrote:\n> > Histogram refinement can take place in two possible ways: online\n> > (as queries are executed, the histograms are updated immediately),\n> > or offline (the necessary data is written to a log after every\n> > query, which is processed on a regular basis to refine the\n> > histograms).\n> \n> I would have thought that offline would have meant that the histogram\n> refinement could be run at the DBA's leisure.\n\nYeah -- that makes more sense.\n\n> > (2) Performance: As Aboulnaga and Shaudhuri point out, online\n> > histogram refinement can become a point of contention.\n> > Obviously, we want to avoid that.\n> \n> This should be fine as long as the refinement system works through MVCC.\n\nGood point -- the current pg_statistic routines are MVCC aware, so\nthere's no reason to change that. In that case, my concerns about\ncontention over histogram refinement may be unfounded. I still think we\nshould avoid redundant histogram refinements (i.e. don't update the\nhistograms on every single query), but I'm glad that MVCC solves most\nof this problem for us.\n\n> There is another consideration. If a database is using histogram\n> refinement then the 'base' data it works on must be accurate. If not,\n> refinement would compound the inaccuracy of the histogram.\n\nAboulnaga and Shaudhuri doesn't require this. The initial assumption\nis that the attribute is uniformly distributed over the initial\nbuckets of the histogram. 
Naturally, this is incorrect: as queries\nare executed, the initial histogram is modified by \"refinement\"\n(the frequencies of individual buckets are adjusted) and\n\"restructuring\" (bucket boundaries are adjusted). For more\ninformation (and the exact algorithms used), see sections 3.2\nand 3.3 of the paper.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Thu, 30 May 2002 00:42:10 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "Re: self-tuning histograms" }, { "msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> What does everyone think about adding self-tuning histograms\n> to PostgreSQL?\n> [ snip ]\n> I think that ST histograms would be useful because:\n\n> (1) It would make it easier for us to implement multi-dimensional\n> histograms (for more info, see the Aboulnaga and Shaudhuri).\n\nThis seems potentially useful, although I think the paper seriously\nunderstates the difficulty of drawing meaningful deductions from real\nqueries. A complex query is likely to contain other constraints besides\nthe ones relevant to a particular histogram, which will make it\ndifficult to extract the needed selectivity data --- the final tuple\ncount certainly isn't what you need to know. Internal instrumentation\n(a la EXPLAIN ANALYZE) might give you the right numbers, but it depends\na lot on what the plan is.\n\nAn example: one of the main things you'd like multidimensional\nhistograms for is to estimate join selectivities more accurately (this\nrequires cross-table histograms, obviously). But in any join plan,\nyou are going to push down any available single-table restriction\nclauses to the individual scan subplans, whereupon counting the join\nplan's output tuples will *not* give you an unskewed estimate of the\noverall distribution of the joined variables.\n\n> (2) I'm unsure of the accuracy of building histograms through\n> statistical sampling. 
My guess would be that ST histograms\n> would achieve better accuracy when it matters most -- i.e.\n\nI think not. The paper says that ST histograms are at best in the same\nleague as traditional histograms, and in cases of high skew much worse.\nUnfortunately, high skew is exactly where you *need* a histogram; with\nlow-skew data you can get away with assuming uniform distribution. So\nI thought they were being a bit overoptimistic about the usefulness of\nthe technique.\n\n> (3) The need for manual DB maintainence through VACUUM and\n> ANALYZE is problematic. This technique would be a step in\n> the direction of removing that requirement. Self-tuning\n> databases are something a lot of industry players (IBM,\n> Microsoft, others) are working toward.\n\n\"Self tuning\" does not equate to \"get rid of VACUUM and ANALYZE\" in my\nview. I'd prefer to see those maintenance processes scheduled\nautomatically, but that doesn't mean we don't need them.\n\n\nI think it'd probably be premature to think about self-tuning histograms\nas such. They look useful for multivariable histograms, and for\nestimating queries involving remote data sources, but we are nowhere\nnear being able to make use of such histograms if we had them. I'd\ncounsel working first on the planner to see how we could make use of\nmultivariable histograms built using a more traditional method. If that\nflies, it'd be time enough to look at ST methods for collecting the\nhistograms.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 May 2002 13:33:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: self-tuning histograms " } ]
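The refinement step Neil and Gavin discuss can be made concrete with a small sketch. This is a hedged, simplified illustration in Python: an equi-width histogram whose bucket counts are nudged by query feedback. The class name, damping factor, and proportional update rule are choices made for the example, not code taken from the paper or from PostgreSQL:

```python
class STHistogram:
    """Toy self-tuning equi-width histogram over a numeric attribute."""

    def __init__(self, lo, hi, nbuckets, total_tuples, alpha=0.5):
        width = (hi - lo) / nbuckets
        self.bounds = [(lo + i * width, lo + (i + 1) * width)
                       for i in range(nbuckets)]
        # Initial assumption: the attribute is uniformly distributed.
        self.freq = [total_tuples / nbuckets] * nbuckets
        self.alpha = alpha  # damping: how much of each observed error to absorb

    def _overlap(self, i, lo, hi):
        """Fraction of bucket i covered by the query range [lo, hi)."""
        blo, bhi = self.bounds[i]
        return max(0.0, min(hi, bhi) - max(lo, blo)) / (bhi - blo)

    def estimate(self, lo, hi):
        """Estimated tuple count for a range predicate, uniform within buckets."""
        return sum(self._overlap(i, lo, hi) * f for i, f in enumerate(self.freq))

    def refine(self, lo, hi, actual):
        """Online refinement: after the executor reports the actual row count,
        push the (damped) error back into the overlapping buckets, in
        proportion to their current contributions."""
        parts = [self._overlap(i, lo, hi) * f for i, f in enumerate(self.freq)]
        est = sum(parts)
        if est <= 0:
            return  # nothing to scale against in this simplified sketch
        err = self.alpha * (actual - est)
        for i, p in enumerate(parts):
            self.freq[i] = max(0.0, self.freq[i] + err * p / est)
```

With alpha = 1, a single piece of feedback makes the estimate match the observation exactly; a damping factor below 1 absorbs noisy feedback more cautiously.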
[ { "msg_contents": "Hi hackers!\n\nI would like to add configurability to my language handler function, now it is compiled in. Is there any api in the server to do this?\n\nthanx:\nLaszlo Hornyak\n", "msg_date": "Thu, 30 May 2002 16:25:00 +0200", "msg_from": "Laszlo Hornyak <hornyakl@rootshell.be>", "msg_from_op": true, "msg_subject": "configuration api" } ]
[ { "msg_contents": "On Fri, 2002-05-31 at 01:16, Josh Burdick wrote:\n> BUG: this isn't properly set up to deal with multiple users.\n> For example, if A computes a median, then B could read the data\n> from the median_tmp table. Possibly you could fiddle with\n> transaction isolation levels, or add a user field to median_tmp,\n> or something else complicated, to prevent this, but for now I'm\n> not worrying about this.\n\nYou could just use temp tables and indexes - they are local to\nconnection .\n\ncreate TEMP sequence median_id;\ncreate TEMP table median_tmp (\n median_id int,\n x float4\n);\n\n---------------\nHannu\n\n\n", "msg_date": "31 May 2002 01:16:41 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: finding medians" }, { "msg_contents": " At the end of this message is some code I used to find medians.\n It's kind of a hack, but approximately works, and is intended as a \nsomewhat awkward stopgap for people who need to use medians. It \nillustrates the limitations of the current aggregate function setup, \nwhich works so nicely for avg() and stddev().\n I don't have any good solutions. I tried using a float4[] to store \neach element as it's added, but I couldn't get array updates working in \nPL/PgSQL, so that didn't help.\n Perhaps aggregate functions could be passed an array? Or a cursor, \npointing at the first line? I'm not sure.\n\n Anyways, perhaps it'll be helpful.\n Josh\n\n-- \nJosh Burdick\njburdick@gradient.cis.upenn.edu\nhttp://www.cis.upenn.edu/~jburdick\n\n\n\n/* Implementing median-finding in \"pure Postgres.\" Does this by\ncopying data to a temporary table.\n\n A weakness of this code is that it uses sorting, instead of\nHoare's linear-time median algorithm. Presumably sorting is\nimplemented so efficiently that it'll be faster than anything\nwritten in PL/PgSQL. 
(Although Hoare's algorithm implemented\nin C would be faster than either.)\n\n BUG: this isn't properly set up to deal with multiple users.\nFor example, if A computes a median, then B could read the data\nfrom the median_tmp table. Possibly you could fiddle with\ntransaction isolation levels, or add a user field to median_tmp,\nor something else complicated, to prevent this, but for now I'm\nnot worrying about this.\n\n Written by Josh Burdick (jburdick@gradient.cis.upenn.edu).\nAnyone can use this under the same license as Postgres.\n\n 20020524, jtb: started. */\n\ndrop aggregate median(float4);\ndrop table median_tmp;\ndrop sequence median_id;\ndrop index median_tmp_median_id;\ndrop function median_sfunc_float4(bigint, float4);\ndrop function median_finalfunc_float4(bigint);\n\ncreate sequence median_id;\ncreate table median_tmp (\n median_id int,\n x float4\n);\ncreate index median_tmp_median_id on median_tmp(median_id);\n\ncreate function median_sfunc_float4\n(bigint, float4) returns bigint as '\n\ninsert into median_tmp\nvalues (case when $1 = 0 then nextval(''median_id'') else $1 end, $2);\n\nselect currval(''median_id'');\n\n' language 'SQL';\n\ncreate function median_finalfunc_float4\n(bigint) returns float4 as '\ndeclare\n\ni bigint;\nn bigint;\nc refcursor;\nm float4;\nm1 float4;\n\nbegin\n\nn := (select count(*) from median_tmp where median_id = $1);\n\nopen c for select x from median_tmp where median_id = $1 order by x;\n\nfor i in 1..((n+1)/2) loop\n fetch c into m;\nend loop;\n\n/* if n is even, fetch the next value, and average the two */\nif (n % int8(2) = int8(0)) then\n fetch c into m1;\n m := (m + m1) / 2;\nend if;\n\ndelete from median_tmp where median_id = $1;\n\nreturn m;\n\nend\n' language 'plpgsql';\n\ncreate aggregate median (\n basetype = float4,\n stype = bigint,\n initcond = 0,\n sfunc = median_sfunc_float4,\n finalfunc = median_finalfunc_float4\n);\n\n\n\n\n\n\n", "msg_date": "Thu, 30 May 2002 16:16:43 -0400", "msg_from": "Josh 
Burdick <jburdick@gradient.cis.upenn.edu>", "msg_from_op": false, "msg_subject": "finding medians" }, { "msg_contents": "Josh,\n\n> At the end of this message is some code I used to find medians.\n> It's kind of a hack, but approximately works, and is intended as a \n> somewhat awkward stopgap for people who need to use medians. It \n> illustrates the limitations of the current aggregate function setup, \n> which works so nicely for avg() and stddev().\n\nActually, finding the median is one of the classic SQL problems. You can't do \nit without 2 passes through the data set, disallowing the use of traditional \naggregate functions. Joe Celko has half a chapter devoted to various methods \nof finding the median.\n\nCan I talk you into submitting your code to Techdocs? I'd love to have it \nsomewhere where it won't get buried in the mailing list archives.\n\n-- \n-Josh Berkus\n\n", "msg_date": "Thu, 30 May 2002 13:30:09 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: finding medians" }, { "msg_contents": "Josh Burdick <jburdick@gradient.cis.upenn.edu> writes:\n> illustrates the limitations of the current aggregate function setup, \n> which works so nicely for avg() and stddev().\n> I don't have any good solutions. I tried using a float4[] to store \n> each element as it's added, but I couldn't get array updates working in \n> PL/PgSQL, so that didn't help.\n> Perhaps aggregate functions could be passed an array? Or a cursor, \n> pointing at the first line? I'm not sure.\n\nI don't think that would help. The real problem here is the amount of\ninternal storage needed. 
AFAIK there are no exact algorithms for finding\nthe median that require less than O(N) workspace for N input items.\nYour \"hack\" with a temporary table is not a bad approach if you want to\nwork for large N.\n\nThere are algorithms out there for finding approximate medians using\nlimited workspace; it might be reasonable to transform one of these into\na Postgres aggregate function.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 May 2002 17:45:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: finding medians " } ]
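Outside the database, the sfunc/finalfunc shape of Josh's aggregate reduces to a few lines. The sketch below is a hedged Python paraphrase of the same approach (not PL/pgSQL, and with a plain list in place of the median_tmp table); it also makes Tom's point visible, since the transition state accumulates every input value and exact computation therefore needs O(N) workspace:

```python
def median_sfunc(state, x):
    """Transition function: accumulate each non-null input into the state."""
    if x is not None:
        state.append(x)
    return state

def median_final(state):
    """Final function: sort the accumulated values and take the middle one(s)."""
    if not state:
        return None
    s = sorted(state)
    n = len(s)
    if n % 2:
        return s[n // 2]
    return (s[n // 2 - 1] + s[n // 2]) / 2  # even n: average the two middle values

def median(values):
    """Drive the aggregate by hand, the way the executor feeds rows to an sfunc."""
    state = []
    for v in values:
        state = median_sfunc(state, v)
    return median_final(state)
```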
[ { "msg_contents": "ACK! Sorting to find a median is criminal.\n\n\"Introduction to Algorithms\" by Thomas H. Cormen, Charles E. Leiserson,\nRonald L. Rivest\nISBN: 0262031418\nexplains the better algorithm very well.\n\nHere is a freely available C++ template (written by me) for a bunch of\nstatistics (everything *but* the selection problem):\nftp://cap.connx.com/pub/tournament_software/STATS.HPP\n\nIt uses this template for improved summation accuracy:\nftp://cap.connx.com/pub/tournament_software/Kahan.Hpp\n\nHere is an outline for selection. I wrote it in C++, but a rewrite to C\nis trivial:\n\n// Quickselect: find Kth smallest of first N items in array A\n// recursive routine finds Kth smallest in A[Low..High]\n// Etype: must have copy constructor, operator=, and operator<\n// Nonrecursive driver is omitted.\n\ntemplate < class Etype >\nvoid\nQuickSelect (Etype A[], int Low, int High, int k)\n{\n if (Low + Cutoff > High)\n InsertionSort (&A[Low], High - Low + 1);\n else\n {\n// Sort Low, Middle, High\n int Middle = (Low + High) / 2;\n\n if (A[Middle] < A[Low])\n Swap (A[Low], A[Middle]);\n if (A[High] < A[Low])\n Swap (A[Low], A[High]);\n if (A[High] < A[Middle])\n Swap (A[Middle], A[High]);\n\n// Place pivot at Position High-1\n Etype Pivot = A[Middle];\n Swap (A[Middle], A[High - 1]);\n\n// Begin partitioning\n int i, j;\n for (i = Low, j = High - 1;;)\n {\n while (A[++i] < Pivot);\n while (Pivot < A[--j]);\n if (i < j)\n Swap (A[i], A[j]);\n else\n break;\n }\n\n// Restore pivot\n Swap (A[i], A[High - 1]);\n\n// Recurse: only this part changes\n if (k < i)\n QuickSelect (A, Low, i - 1, k);\n else if (k > i)\n QuickSelect (A, i + 1, High, k);\n }\n}\n\n\ntemplate < class Etype >\nvoid\nQuickSelect (Etype A[], int N, int k)\n{\n QuickSelect (A, 0, N - 1, k - 1);\n}\n\nIf you want to use this stuff to improve statistics with vacuum, be my\nguest.\n", "msg_date": "Thu, 30 May 2002 13:48:28 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", 
"msg_from_op": true, "msg_subject": "Re: finding medians" } ]
[ { "msg_contents": "Here is a program written in C that demonstrates 2 median/selection\ncomputation techniques:\nACM Algorithm 727 (implementation by Sherif Hashem) \nQuickSelect (implemented by me).\n\nSince it is written in C, it would be useful to the PostgreSQL project\nwithout any fanfare.\nftp://cap.connx.com/pub/chess-engines/new-approach/727.c\n\nThe ACM algorithm 727 is an approximation.\nThe QuickSelect result is exact.\n\n", "msg_date": "Thu, 30 May 2002 14:03:26 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: finding medians" } ]
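The C++ QuickSelect outline above ports directly to other languages. The sketch below is a hedged illustration, not a translation of the STATS.HPP or 727.c code: it uses a random-pivot Lomuto partition instead of the median-of-three and insertion-sort machinery, trading some constant-factor speed for brevity while keeping the expected O(n) behavior being advocated in this thread:

```python
import random

def quickselect(a, k):
    """Return the k-th smallest (1-based) element of a non-empty sequence,
    in expected O(n) time versus O(n log n) for a full sort."""
    a = list(a)  # work on a copy; the C++ version above partitions in place
    lo, hi, k = 0, len(a) - 1, k - 1
    while lo < hi:
        p = random.randint(lo, hi)  # random pivot guards against sorted input
        a[p], a[hi] = a[hi], a[p]
        pivot, i = a[hi], lo
        for j in range(lo, hi):  # Lomuto partition: smaller elements move left
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]  # pivot lands at its final sorted position i
        if k == i:
            return a[i]
        if k < i:
            hi = i - 1
        else:
            lo = i + 1
    return a[lo]

def median(a):
    """Median via selection: one pick for odd n, the mean of two picks for even n."""
    n = len(a)
    if n % 2:
        return quickselect(a, n // 2 + 1)
    return (quickselect(a, n // 2) + quickselect(a, n // 2 + 1)) / 2
```

The even-length case costs two selections here; a fancier version could find both middle values from a single partition pass.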
[ { "msg_contents": "Hi there. I'm yet another developer working full-time on a native windows\nport. I'm also working closely with Jan Wieck (next office). I know there is\na reluctance to modify the code base to support native win32, and I realize\nthat no decision has yet been made. However, ...\n\nA few of the identifier names used in postgres collide with WIN32 or MFC names.\n To keep my working copy of the code as close to the released source as\npossible, I do have some superficial changes that I would like to put in the\ncode base early:\n\n1. Rename to avoid structures/functions with same name:\n\ta. PROC => PGPROC\n\tb. GetUserName() => GetUserNameFromId()\n\tc. GetCurrentTime() => GetCurrentDateTime()\n\n2. Add _P to the following lex/yacc tokens to avoid collisions\n\tCONST, CHAR, DELETE, FLOAT, GROUP, IN, OUT\n\n3. Rename two local macros\n\ta. MEM_FREE => MEM_FREE_IT in backend/utils/hash/dynahash.c\n\tb. IGNORE => IGNORE_TOK in include/utils/datetime.h &\nbackend/utils/adt/datetime.c\n\nThanks,\nKatie Ward\nkward6@yahoo.com\n\n\n__________________________________________________\nDo You Yahoo!?\nYahoo! - Official partner of 2002 FIFA World Cup\nhttp://fifaworldcup.yahoo.com\n", "msg_date": "Thu, 30 May 2002 14:33:50 -0700 (PDT)", "msg_from": "Katherine Ward <kward6@yahoo.com>", "msg_from_op": true, "msg_subject": "Small changes to facilitate Win32 port" }, { "msg_contents": "It's more likely that your changes will go through if you just submit a\npatch!\n\ncvs diff -c\n\nChris\n\n----- Original Message -----\nFrom: \"Katherine Ward\" <kward6@yahoo.com>\nTo: <pgsql-hackers@postgresql.org>\nSent: Thursday, May 30, 2002 2:33 PM\nSubject: [HACKERS] Small changes to facilitate Win32 port\n\n\n> Hi there. I'm yet another developer working full-time on a native windows\n> port. I'm also working closely with Jan Wieck (next office). 
I know\nthere is\n> a reluctance to modify the code base to support native win32, and I\nrealize\n> that no decision has yet been made. However, ...\n>\n> A few of the identifier names used in postgres collide with WIN32 or MFC\nnames.\n> To keep my working copy of the code as close to the released source as\n> possible, I do have some superficial changes that I would like to put in\nthe\n> code base early:\n>\n> 1. Rename to avoid structures/functions with same name:\n> a. PROC => PGPROC\n> b. GetUserName() => GetUserNameFromId()\n> c. GetCurrentTime() => GetCurrentDateTime()\n>\n> 2. Add _P to the following lex/yacc tokens to avoid collisions\n> CONST, CHAR, DELETE, FLOAT, GROUP, IN, OUT\n>\n> 3. Rename two local macros\n> a. MEM_FREE => MEM_FREE_IT in backend/utils/hash/dynahash.c\n> b. IGNORE => IGNORE_TOK in include/utils/datetime.h &\n> backend/utils/adt/datetime.c\n>\n> Thanks,\n> Katie Ward\n> kward6@yahoo.com\n>\n>\n> __________________________________________________\n> Do You Yahoo!?\n> Yahoo! - Official partner of 2002 FIFA World Cup\n> http://fifaworldcup.yahoo.com\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Thu, 30 May 2002 15:06:44 -0700", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Small changes to facilitate Win32 port" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> It's more likely that your changes will go through if you just submit a\n> patch!\n\nI think the question was more directed at \"do we like these names?\",\nwhich should certainly be asked before going to the trouble of making a\npatch.\n\n>> 2. 
Add _P to the following lex/yacc tokens to avoid collisions\n>> CONST, CHAR, DELETE, FLOAT, GROUP, IN, OUT\n\nI'm tempted to suggest that we should stick _P on *all* the lexer token\nsymbols, rather than having an inconsistent set of names where some of\nthem have _P and some do not. Or perhaps _T (for token) would be a more\nsensible convention; I'm not sure why _P was used in the first place.\n\n>> 3. Rename two local macros\n>> a. MEM_FREE => MEM_FREE_IT in backend/utils/hash/dynahash.c\n>> b. IGNORE => IGNORE_TOK in include/utils/datetime.h &\n>> backend/utils/adt/datetime.c\n\nIt's fairly amazing that IGNORE is the only one of the datetime.h field\nnames that's bitten anyone (so far). Macros named TZ, YEAR, MONTH, DAY,\nHOUR, MINUTE, SECOND, UNITS all look like trouble waiting to happen\n(and UNKNOWN_FIELD looks like someone already had to beat a retreat from\ncalling it UNKNOWN ;-)). I'm inclined to suggest that these names\nshould be uniformly changed to DTF_FOO (DTF for \"datetime field\").\nThe macro names appearing before the field name list look like trouble\nas well --- anyone have an interest in changing them? Thomas, this is\npretty much your turf; what do you think?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 May 2002 18:25:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Small changes to facilitate Win32 port " }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> It's more likely that your changes will go through if you just submit a\n> patch!\n\n I suggested to discuss it first, since it's IMHO more likely\n that the changes go through if they are commonly accepted in\n the first place.\n\n\nJan\n\n> cvs diff -c\n>\n> Chris\n>\n> ----- Original Message -----\n> From: \"Katherine Ward\" <kward6@yahoo.com>\n> To: <pgsql-hackers@postgresql.org>\n> Sent: Thursday, May 30, 2002 2:33 PM\n> Subject: [HACKERS] Small changes to facilitate Win32 port\n>\n>\n> > Hi there. 
I'm yet another developer working full-time on a native windows\n> > port. I'm also working closely with Jan Wieck (next office). I know\n> there is\n> > a reluctance to modify the code base to support native win32, and I\n> realize\n> > that no decision has yet been made. However, ...\n> >\n> > A few of the identifier names used in postgres collide with WIN32 or MFC\n> names.\n> > To keep my working copy of the code as close to the released source as\n> > possible, I do have some superficial changes that I would like to put in\n> the\n> > code base early:\n> >\n> > 1. Rename to avoid structures/functions with same name:\n> > a. PROC => PGPROC\n> > b. GetUserName() => GetUserNameFromId()\n> > c. GetCurrentTime() => GetCurrentDateTime()\n> >\n> > 2. Add _P to the following lex/yacc tokens to avoid collisions\n> > CONST, CHAR, DELETE, FLOAT, GROUP, IN, OUT\n> >\n> > 3. Rename two local macros\n> > a. MEM_FREE => MEM_FREE_IT in backend/utils/hash/dynahash.c\n> > b. IGNORE => IGNORE_TOK in include/utils/datetime.h &\n> > backend/utils/adt/datetime.c\n> >\n> > Thanks,\n> > Katie Ward\n> > kward6@yahoo.com\n> >\n> >\n> > __________________________________________________\n> > Do You Yahoo!?\n> > Yahoo! - Official partner of 2002 FIFA World Cup\n> > http://fifaworldcup.yahoo.com\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Fri, 31 May 2002 09:08:37 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Small changes to facilitate Win32 port" }, { "msg_contents": "> >> 2. Add _P to the following lex/yacc tokens to avoid collisions\n> >> CONST, CHAR, DELETE, FLOAT, GROUP, IN, OUT\n> I'm tempted to suggest that we should stick _P on *all* the lexer token\n> symbols, rather than having an inconsistent set of names where some of\n> them have _P and some do not. Or perhaps _T (for token) would be a more\n> sensible convention; I'm not sure why _P was used in the first place.\n\n\"P\" for \"Parser\". The symbols are used past the lexer, but are isolated\nto other places in the parser, and are (or should be) stripped out\nbeyond there.\n\n> >> 3. Rename two local macros\n> >> a. MEM_FREE => MEM_FREE_IT in backend/utils/hash/dynahash.c\n> >> b. IGNORE => IGNORE_TOK in include/utils/datetime.h &\n> >> backend/utils/adt/datetime.c\n> It's fairly amazing that IGNORE is the only one of the datetime.h field\n> names that's bitten anyone (so far). Macros named TZ, YEAR, MONTH, DAY,\n> HOUR, MINUTE, SECOND, UNITS all look like trouble waiting to happen\n> (and UNKNOWN_FIELD looks like someone already had to beat a retreat from\n> calling it UNKNOWN ;-)). I'm inclined to suggest that these names\n> should be uniformly changed to DTF_FOO (DTF for \"datetime field\").\n> The macro names appearing before the field name list look like trouble\n> as well --- anyone have an interest in changing them? 
Thomas, this is\n> pretty much your turf; what do you think?\n\nIf the lexer/parser should have postfix qualifiers, let's use postfix\nfor other naming conventions too (or switch everything to prefix, but be\nconsistant in the conventions).\n\nNo problem with qualifying the names, though you have likely overstated\nthe case for it; \"fairly amazing\" after 6 years of use on over a dozen\nplatforms probably qualifies as a good test of reality and we aren't\nquite to the point of having to invoke miracles and magic to explain why\nit works ;)\n\nIn any case, we would certainly be open to accepting patches for the\nlimited number of cases Katherine has identified, and would welcome\npatches which are more comprehensive if they were available.\n\n - Thomas\n", "msg_date": "Fri, 31 May 2002 06:43:09 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Small changes to facilitate Win32 port" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> I'm tempted to suggest that we should stick _P on *all* the lexer token\n>> symbols, rather than having an inconsistent set of names where some of\n>> them have _P and some do not. Or perhaps _T (for token) would be a more\n>> sensible convention; I'm not sure why _P was used in the first place.\n\n> \"P\" for \"Parser\".\n\nOh, okay. I'm not intent on changing it, just was wondering what the\nmotivation was. What do you think of changing all the token symbols to\nbe FOO_P? (Or P_FOO, per your comment, but I'd just as soon leave alone\nthe ones that already have a suffix.)\n\n> The symbols are used past the lexer, but are isolated\n> to other places in the parser, and are (or should be) stripped out\n> beyond there.\n\nRight at the moment we have half a dozen cases where they leak past the\nparser, e.g. TransactionStmt. 
I've been intending to clean that up.\nI concur that we don't want anything past parse analysis to depend on\ntoken values, since they change anytime the keyword set changes.\n\n> If the lexer/parser should have postfix qualifiers, let's use postfix\n> for other naming conventions too (or switch everything to prefix, but be\n> consistant in the conventions).\n\nI'd settle for local consistency: if we need prefixes/suffixes on some\nof the datetime field names, let's make all of them have one. But I\ndon't feel compelled to cause a flag day over the whole source tree ;-).\nAt least not all at once.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 31 May 2002 11:13:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Small changes to facilitate Win32 port " }, { "msg_contents": "> > \"P\" for \"Parser\".\n> Oh, okay. I'm not intent on changing it, just was wondering what the\n> motivation was. What do you think of changing all the token symbols to\n> be FOO_P? (Or P_FOO, per your comment, but I'd just as soon leave alone\n> the ones that already have a suffix.)\n\nNo problem here. I have a mild preference for suffix notation, and the\n\"P_FOO\" was *your* idea (or at least the \"DTF_FOO\" one was). Anyway,\nsuffixes are my preference, but not I'm not enthused enough about it to\nargue hard one way or the other.\n\n> > The symbols are used past the lexer, but are isolated\n> > to other places in the parser, and are (or should be) stripped out\n> > beyond there.\n> Right at the moment we have half a dozen cases where they leak past the\n> parser, e.g. TransactionStmt. 
I've been intending to clean that up.\n> I concur that we don't want anything past parse analysis to depend on\n> token values, since they change anytime the keyword set changes.\n\nRight.\n\n> > If the lexer/parser should have postfix qualifiers, let's use postfix\n> > for other naming conventions too (or switch everything to prefix, but be\n> > consistant in the conventions).\n> I'd settle for local consistency: if we need prefixes/suffixes on some\n> of the datetime field names, let's make all of them have one. But I\n> don't feel compelled to cause a flag day over the whole source tree ;-).\n> At least not all at once.\n\nWell, this doesn't need to be an issue or argument. We have little or no\nprecedent for prefix notation that I can recall, we have postfix\nnotation in the parser, and we don't have a \"namespace convention\" for\nother areas afaik. So if we make changes, let's do it with a convention,\nand at least extend one local convention to another local area.\n\nQuestion to all: Any objection to postfix? If so, why?\n\nAnd to answer Katherine's original questions:\n\n1) OK. (function renaming)\n\n2) OK. (\"_P\" suffix on a few more parser tokens)\n\n3) MEM_FREE_IT - OK, but is it a macro specific to something in dynhash?\nIf so, how about using something more specific than \"it\"?\nIGNORE_TOK - How about \"IGNORE_DTF\" or \"IGNORE_D\"? 
Let's make it a bit\nspecific to date/time stuff.\n\n - Thomas\n", "msg_date": "Fri, 31 May 2002 09:06:29 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Small changes to facilitate Win32 port" }, { "msg_contents": "> Christopher Kings-Lynne wrote:\n> > It's more likely that your changes will go through if you just submit a\n> > patch!\n> \n> I suggested to discuss it first, since it's IMHO more likely\n> that the changes go through if they are commonly accepted in\n> the first place.\n\nYep - sorry, didn't pick up on that...\n\nChris\n\n\n", "msg_date": "Fri, 31 May 2002 10:21:05 -0700", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Small changes to facilitate Win32 port" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Question to all: Any objection to postfix? If so, why?\n\nWell, I suggested DTF_FOO by analogy to the DTK_FOO name set that appears\nelsewhere in that same header. If you want to rename those to FOO_DTK\nin parallel, I have no objection.\n\n> IGNORE_TOK - How about \"IGNORE_DTF\" or \"IGNORE_D\"? Let's make it a bit\n> specific to date/time stuff.\n\nAgreed. That thought was what motivated me to gripe in the first place.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 31 May 2002 13:22:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Small changes to facilitate Win32 port " }, { "msg_contents": "Katherine Ward writes:\n\n> A few of the identifier names used in postgres collide with WIN32 or MFC names.\n\nDoes Windows and/or MFC make any kind of statement about what kinds of\nidentifiers it reserves for its own use?\n\n> \tb. GetUserName() => GetUserNameFromId()\n> \tc. GetCurrentTime() => GetCurrentDateTime()\n\n> \ta. MEM_FREE => MEM_FREE_IT in backend/utils/hash/dynahash.c\n> \tb. 
IGNORE => IGNORE_TOK in include/utils/datetime.h & backend/utils/adt/datetime.c\n\nIt might be better to add PG prefixes consistently if there are clashes.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sat, 1 Jun 2002 17:55:58 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Small changes to facilitate Win32 port" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Hannu Krosing [mailto:hannu@tm.ee]\n> Sent: Thursday, May 30, 2002 1:17 PM\n> To: Josh Burdick\n> Cc: pgsql-hackers@postgresql.org; josh@agliodbs.com\n> Subject: Re: [HACKERS] finding medians\n> \n> \n> On Fri, 2002-05-31 at 01:16, Josh Burdick wrote:\n> > BUG: this isn't properly set up to deal with multiple users.\n> > For example, if A computes a median, then B could read the data\n> > from the median_tmp table. Possibly you could fiddle with\n> > transaction isolation levels, or add a user field to median_tmp,\n> > or something else complicated, to prevent this, but for now I'm\n> > not worrying about this.\n> \n> You could just use temp tables and indexes - they are local to\n> connection .\n> \n> create TEMP sequence median_id;\n> create TEMP table median_tmp (\n> median_id int,\n> x float4\n> );\n\nAnother pure SQL solution would be to perform an order by on the column\nof interest.\n\nUse a cursor to seek to the middle element. If the data set is odd,\nthen the median is the center element. If the data set is even, the\nmedian is the average of the two center elements.\n\nA SQL function would be pretty easy. Of course, it is not the most\nefficient way to do it. A nice thing about having the data ordered is\nthat you can then extract the statistic for any kth partition. Hence,\nyou could generate a function quantile() and call quantile (0.5) to get\nthe median. If you have a query:\nselect quantile(.1, col), quantile(.2, col), quantile(.3,col), ...\nquantile(.9, col) from sometable\nyou only have to do the sort once and then operate on the sorted data.\nFor queries like that, sorting is probably just fine, since the\nselection algorithm is only approximately linear for each single\ninstance.\n", "msg_date": "Thu, 30 May 2002 15:31:45 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: finding medians" } ]
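The sort-once-then-read-off-quantiles approach described above can be sketched outside SQL as well. The sketch below is a generic illustration of the algorithm, not ConnX or PostgreSQL code; the function names are invented, and linear interpolation between neighboring elements (which reproduces the average-of-two-center-elements rule for an even-sized set at q = 0.5) is one of several common quantile conventions:

```c
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Return the q-th quantile (0 <= q <= 1) of an already sorted array,
 * interpolating between the two neighboring elements.  At q = 0.5 on
 * an even-sized set this averages the two center elements, as the
 * message above describes.  One sort serves any number of reads, so
 * quantile(.1), quantile(.2), ... quantile(.9) cost one sort total. */
double quantile_sorted(const double *v, size_t n, double q)
{
    if (n == 0)
        return 0.0;             /* degenerate; a real version would error */
    double pos = q * (double)(n - 1);
    size_t lo = (size_t)pos;
    size_t hi = (lo + 1 < n) ? lo + 1 : lo;
    double frac = pos - (double)lo;
    return v[lo] + frac * (v[hi] - v[lo]);
}

double median(double *v, size_t n)
{
    qsort(v, n, sizeof(double), cmp_double);  /* sort once... */
    return quantile_sorted(v, n, 0.5);        /* ...then read off */
}
```

For a single median, a selection algorithm would be asymptotically cheaper, but as the message notes, sorting amortizes well when several quantiles of the same column are wanted.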
[ { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Thursday, May 30, 2002 3:25 PM\n> To: Christopher Kings-Lynne\n> Cc: Katherine Ward; Thomas Lockhart; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Small changes to facilitate Win32 port \n> \n> \n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > It's more likely that your changes will go through if you \n> just submit a\n> > patch!\n> \n> I think the question was more directed at \"do we like these names?\",\n> which should certainly be asked before going to the trouble \n> of making a\n> patch.\n> \n> >> 2. Add _P to the following lex/yacc tokens to avoid collisions\n> >> CONST, CHAR, DELETE, FLOAT, GROUP, IN, OUT\n> \n> I'm tempted to suggest that we should stick _P on *all* the \n> lexer token\n> symbols, rather than having an inconsistent set of names where some of\n> them have _P and some do not. Or perhaps _T (for token) \n> would be a more\n> sensible convention; I'm not sure why _P was used in the first place.\n\nBefore you do that, realize that you are violating the implementation's\nnamespace, and your code is therefore not legal ANSI/ISO C.\n\nFrom ISO/IEC 9899:1999 (E) ©ISO/IEC:\n\n7.1.3 Reserved identifiers\n1 Each header declares or defines all identifiers listed in its\nassociated subclause, and\noptionally declares or defines identifiers listed in its associated\nfuture library directions\nsubclause and identifiers which are always reserved either for any use\nor for use as file\nscope identifiers.\n- All identifiers that begin with an underscore and either an uppercase\nletter or another\nunderscore are always reserved for any use.\n- All identifiers that begin with an underscore are always reserved for\nuse as identifiers\nwith file scope in both the ordinary and tag name spaces.\n- Each macro name in any of the following subclauses (including the\nfuture library\ndirections) is reserved for use as specified if 
any of its associated\nheaders is included;\nunless explicitly stated otherwise (see 7.1.4).\n- All identifiers with external linkage in any of the following\nsubclauses (including the\nfuture library directions) are always reserved for use as identifiers\nwith external\nlinkage.154)\n- Each identifier with file scope listed in any of the following\nsubclauses (including the\nfuture library directions) is reserved for use as a macro name and as an\nidentifier with\nfile scope in the same name space if any of its associated headers is\nincluded.\n \n> >> 3. Rename two local macros\n> >> a. MEM_FREE => MEM_FREE_IT in backend/utils/hash/dynahash.c\n> >> b. IGNORE => IGNORE_TOK in include/utils/datetime.h &\n> >> backend/utils/adt/datetime.c\n> \n> It's fairly amazing that IGNORE is the only one of the \n> datetime.h field\n> names that's bitten anyone (so far). Macros named TZ, YEAR, \n> MONTH, DAY,\n> HOUR, MINUTE, SECOND, UNITS all look like trouble waiting to happen\n> (and UNKNOWN_FIELD looks like someone already had to beat a \n> retreat from\n> calling it UNKNOWN ;-)). I'm inclined to suggest that these names\n> should be uniformly changed to DTF_FOO (DTF for \"datetime field\").\n> The macro names appearing before the field name list look like trouble\n> as well --- anyone have an interest in changing them? 
Thomas, this is\n> pretty much your turf; what do you think?\n> \n> \t\t\tregards, tom lane\n> \n", "msg_date": "Thu, 30 May 2002 15:34:39 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Small changes to facilitate Win32 port " }, { "msg_contents": "\"Dann Corbit\" <DCorbit@connx.com> writes:\n>> I'm tempted to suggest that we should stick _P on *all* the \n>> lexer token\n>> symbols, rather than having an inconsistent set of names where some of\n>> them have _P and some do not.  Or perhaps _T (for token) \n>> would be a more\n>> sensible convention; I'm not sure why _P was used in the first place.\n\n> Before you do that, realize that you are violating the implementation's\n> namespace, and your code is therefore not legal ANSI/ISO C.\n\nWhat?\n\nThe _P or _T is a suffix, not a prefix...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 May 2002 18:40:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Small changes to facilitate Win32 port " } ]
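Tom's closing point — that the C99 §7.1.3 reservation quoted earlier covers identifiers *beginning* with an underscore, so a trailing _P stays in the application's namespace — can be shown with a minimal sketch. The token values below are hypothetical and are not the ones bison actually assigns in gram.y:

```c
/* C99 7.1.3 reserves identifiers that BEGIN with an underscore followed
 * by an uppercase letter (e.g. _P_CONST) for the implementation, so a
 * prefix-underscore spelling would be off limits.  A trailing _P, as
 * proposed for the parser tokens, is an ordinary application-namespace
 * identifier and is legal. */
enum parser_token
{
    CONST_P = 258,   /* hypothetical starting value, not gram.y's */
    CHAR_P,
    DELETE_P,
    FLOAT_P,
    GROUP_P,
    IN_P,
    OUT_P
};
```

This is why suffixing the colliding tokens (CONST, CHAR, DELETE, FLOAT, GROUP, IN, OUT) sidesteps both the Win32 header collisions and the standard's reserved namespace.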
[ { "msg_contents": "I'm trying to determine if database growth (with LO) that I'm seeing during\na pg_restore is fixed by the patch identified at\nhttp://archives.postgresql.org/pgsql-hackers/2002-04/msg00496.php , but when\nI attempt to restore from a 7.2.1 created dump into my newly created\n7.3devel database, I get this:\n\npg_restore: [archiver (db)] could not create large object cross-reference\ntable:\n\nI didn't find any mention of this on the hackers mail archive, so I thought\nI'd pass it on.\n\nThe dump file was created with:\npg_dump --blobs --format=c --quotes --oids --compress=5 quickview >\nquickview.dump\n\nand restored with:\npg_restore -d quickview < quickview.dump\n(although I don't think either of those are the problem, because we've used\nthose command lines successfully with 7.2 and 7.2.1 w/o problems).\n\nIf nobody else is having this problem I'll see if I can create a small test\ncase. (my dump file is 10 gigs)\n\n-ron\n", "msg_date": "Fri, 31 May 2002 14:44:54 -0700", "msg_from": "Ron Snyder <snyder@roguewave.com>", "msg_from_op": true, "msg_subject": "Can't import large objects in most recent cvs (20020531 -- approx\n\t1pm PDT)" }, { "msg_contents": "Ron Snyder <snyder@roguewave.com> writes:\n> I attempt to restore from a 7.2.1 created dump into my newly created\n> 7.3devel database, I get this:\n\n> pg_restore: [archiver (db)] could not create large object cross-reference\n> table:\n\n> I didn't find any mention of this on the hackers mail archive, so I thought\n> I'd pass it on.\n\nNews to me; and I just tested that code a couple days ago after hacking\non it for schema support. Would you look in the postmaster log to see\nexactly what error message the backend is issuing? 
Might help to run\npg_restore with \"PGOPTIONS=--debug_print_query=1\" so you can verify the\nexact query that's failing, too.\n\n(I've thought several times that we should clean up pg_dump and\npg_restore so that they report the failed query and backend message in\n*all* cases; right now they're pretty haphazard about it.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 31 May 2002 18:24:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can't import large objects in most recent cvs (20020531 -- approx\n\t1pm PDT)" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: Friday, May 31, 2002 3:24 PM\n> To: Ron Snyder\n> Cc: pgsql-hackers\n> Subject: Re: [HACKERS] Can't import large objects in most \n> recent cvs (20020531 -- approx 1pm PDT) \n> \n> \n> Ron Snyder <snyder@roguewave.com> writes:\n> > I attempt to restore from a 7.2.1 created dump into my newly created\n> > 7.3devel database, I get this:\n> \n> > pg_restore: [archiver (db)] could not create large object \n> cross-reference\n> > table:\n> \n> > I didn't find any mention of this on the hackers mail \n> archive, so I thought\n> > I'd pass it on.\n> \n> News to me; and I just tested that code a couple days ago \n> after hacking\n> on it for schema support. Would you look in the postmaster log to see\n> exactly what error message the backend is issuing? Might help to run\n> pg_restore with \"PGOPTIONS=--debug_print_query=1\" so you can \n> verify the\n> exact query that's failing, too.\n\n From the client:\nCOPY \"unique_names\" WITH OIDS FROM stdin;\nLOG: query: select getdatabaseencoding()\npg_restore: LOG: query: Create Temporary Table pg_dump_blob_xref(oldOid\npg_catalog.oid, newOid pg_catalog.oid);\npg_restore: [archiver (db)] could not create large object cross-reference\ntable:\n\n From the server:\nMay 31 15:58:15 vault pgcvs[366]: [5-5] -- Name: unique_names Type: TABLE\nDATA Schema: - Owner: qvowner\nMay 31 15:58:15 vault pgcvs[366]: [5-6] -- Data Pos: 30713831 (Length 1214)\nMay 31 15:58:15 vault pgcvs[366]: [5-7] --\nMay 31 15:58:15 vault pgcvs[366]: [5-8] COPY \"unique_names\" WITH OIDS FROM\nstdin;\nMay 31 15:58:15 vault pgcvs[367]: [1] LOG: connection received:\nhost=[local]\nMay 31 15:58:15 vault pgcvs[367]: [2] LOG: connection authorized:\nuser=qvowner database=quickview\nMay 31 15:58:15 vault pgcvs[367]: [3] LOG: query: select\ngetdatabaseencoding()\nMay 31 15:58:15 vault pgcvs[367]: [4] LOG: query: Create Temporary Table\npg_dump_blob_xref(oldOid 
pg_catalog.oid, newOid pg_catalog.oid);\n\n(and then a later run with a higher debug level)\nMay 31 16:11:50 vault pgcvs[2135]: [77] LOG: connection received:\nhost=[local]\nMay 31 16:11:50 vault pgcvs[2135]: [78] LOG: connection authorized:\nuser=qvowner database=quickview\nMay 31 16:11:50 vault pgcvs[2135]: [79] DEBUG:\n/usr/local/pgsql-20020531/bin/postmaster child[2135]: starting with (\nMay 31 16:11:50 vault pgcvs[2135]: [80] DEBUG: ^Ipostgres\nMay 31 16:11:50 vault pgcvs[2135]: [81] DEBUG: ^I-v131072\nMay 31 16:11:50 vault pgcvs[2135]: [82] DEBUG: ^I-p\nMay 31 16:11:50 vault pgcvs[2135]: [83] DEBUG: ^Iquickview\nMay 31 16:11:50 vault pgcvs[2135]: [84] DEBUG: )\nMay 31 16:11:50 vault pgcvs[2135]: [85] DEBUG: InitPostgres\nMay 31 16:11:50 vault pgcvs[2135]: [86] DEBUG: StartTransactionCommand\nMay 31 16:11:50 vault pgcvs[2135]: [87] LOG: query: select\ngetdatabaseencoding()\nMay 31 16:11:50 vault pgcvs[2135]: [88] DEBUG: ProcessQuery\nMay 31 16:11:50 vault pgcvs[2135]: [89] DEBUG: CommitTransactionCommand\nMay 31 16:11:50 vault pgcvs[2135]: [90] DEBUG: StartTransactionCommand\nMay 31 16:11:50 vault pgcvs[2135]: [91] LOG: query: Create Temporary Table\npg_dump_blob_xref(oldOid pg_catalog.oid, newOid pg_catalog.oid);\nMay 31 16:11:50 vault pgcvs[2135]: [92] DEBUG: ProcessUtility\nMay 31 16:11:50 vault pgcvs[2135]: [93] ERROR: quickview: not authorized to\ncreate temp tables\n\nDigging a bit, I've discovered this:\n1) usesysid 1 owns the database in the old server, but all the tables are\nowned by 'qvowner' (and others).\n2) qvowner does not have dba privs\n\nMy theory is that I'm getting this last message (not authorized to create\ntemp tables) because the permissions have been tightened down.\n\nI believe that I can safely change the ownership of the database in the old\nserver to qvowner, right? And run the pg_dump and pg_restore again? 
Or\nshould pg_restore connect as the superuser and just change ownership\nafterwards?\n\n-ron\n\n\n \n> (I've thought several times that we should clean up pg_dump and\n> pg_restore so that they report the failed query and backend message in\n> *all* cases; right now they're pretty haphazard about it.)\n> \n> \t\t\tregards, tom lane\n> \n", "msg_date": "Fri, 31 May 2002 16:33:12 -0700", "msg_from": "Ron Snyder <snyder@roguewave.com>", "msg_from_op": true, "msg_subject": "Re: Can't import large objects in most recent cvs (2002" }, { "msg_contents": "Ron Snyder <snyder@roguewave.com> writes:\n> May 31 16:11:50 vault pgcvs[2135]: [91] LOG: query: Create Temporary Table\n> pg_dump_blob_xref(oldOid pg_catalog.oid, newOid pg_catalog.oid);\n> May 31 16:11:50 vault pgcvs[2135]: [93] ERROR: quickview: not authorized to\n> create temp tables\n\n> My theory is that I'm getting this last message (not authorized to create\n> temp tables) because the permissions have been tightened down.\n\nYeah. Right at the moment, new databases default to only-db-owner-has-\nany-rights, which means that others cannot create schemas or temp tables\nin that database (unless they're superusers). I'm of the opinion that\nthis is a bad default, but was waiting to see if anyone complained\nbefore starting a discussion about it.\n\nProbably we should have temp table creation allowed to all by default.\nI'm not convinced that that's a good idea for schema-creation privilege\nthough. Related issues: what should initdb set as the permissions for\ntemplate1? Would it make sense for newly created databases to copy\ntheir permission settings from the template database? (Probably not,\nsince the owner is likely to be different.) What about copying those\nper-database config settings Peter just invented?\n\nComments anyone? 
\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jun 2002 21:55:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Default privileges for new databases (was Re: Can't import large\n\tobjects in most recent cvs)" }, { "msg_contents": "\nTom,\n\n> Probably we should have temp table creation allowed to all by default.\n> I'm not convinced that that's a good idea for schema-creation privilege\n> though. Related issues: what should initdb set as the permissions for\n> template1? Would it make sense for newly created databases to copy\n> their permission settings from the template database? (Probably not,\n> since the owner is likely to be different.) What about copying those\n> per-database config settings Peter just invented?\n\nYes. I think there should be a not optional INITDB switch: either --secure \nor --permissive. People usually know at the time of installation whether \nthey're building a web server (secure) or a home workstation (permissive). \n\nDepending on the setting, this should set either a grant all or revoke all for \nnon-db owners as default, including such things as temp table creation.\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \tjosh@agliodbs.com\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n", "msg_date": "Mon, 10 Jun 2002 15:36:42 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't import large\n\tobjects in most recent cvs)" }, { "msg_contents": "Josh Berkus wrote:\n> \n> Tom,\n> \n> > Probably we should have temp table creation allowed to all by default.\n> > I'm not convinced that that's a good idea for schema-creation privilege\n> > though. 
Related issues: what should initdb set as the permissions for\n> > template1? Would it make sense for newly created databases to copy\n> > their permission settings from the template database? (Probably not,\n> > since the owner is likely to be different.) What about copying those\n> > per-database config settings Peter just invented?\n> \n> Yes. I think there should be a not optional INITDB switch: either --secure \n> or --permissive. People usually know at the time of installation whether \n> they're building a web server (secure) or a home workstation (permissive). \n> \n> Depending on the setting, this should set either a grant all or revoke all for \n> non-db owners as default, including such things as temp table creation.\n\nI like this idea. I think we should prompt for tcp socket permission\nsetting for only the owner (Peter E's idea that I think he wants for\n7.3), default public schema permissions, temp shema permissions, stuff\nlike that. We can have initdb flags to prevent the prompting, but doing\nthis quering at initdb time seems like an ideal solution. We have\nneeded such control for a while.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 14 Jun 2002 01:04:53 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't import" }, { "msg_contents": "Tom Lane writes:\n\n> Probably we should have temp table creation allowed to all by default.\n\nAgreed.\n\n> I'm not convinced that that's a good idea for schema-creation privilege\n> though.\n\nAgreed. 
(not good)\n\n> Related issues: what should initdb set as the permissions for template1?\n\nSame as above.\n\n> What about copying those per-database config settings Peter just\n> invented?\n\nYou're not supposed to put those into template1 anyway.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 17 Jun 2002 23:19:48 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't" }, { "msg_contents": "Josh Berkus writes:\n\n> Yes. I think there should be a not optional INITDB switch: either --secure\n> or --permissive. People usually know at the time of installation whether\n> they're building a web server (secure) or a home workstation (permissive).\n\nIf you're on a home workstation you make yourself a superuser and be done\nwith it.\n\nAdding too many options to initdb is not a path I would prefer since\ninitdb happens mostly hidden from the user these days.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 17 Jun 2002 23:20:24 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't" }, { "msg_contents": "Peter Eisentraut wrote:\n> Josh Berkus writes:\n> \n> > Yes. I think there should be a not optional INITDB switch: either --secure\n> > or --permissive. People usually know at the time of installation whether\n> > they're building a web server (secure) or a home workstation (permissive).\n> \n> If you're on a home workstation you make yourself a superuser and be done\n> with it.\n> \n> Adding too many options to initdb is not a path I would prefer since\n> initdb happens mostly hidden from the user these days.\n\nWell, we have the config files for most things. I would just like to\nhave an easy way to configure things that aren't GUC parameters. That's\nwhere the initdb idea came from. 
Other ideas?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 17 Jun 2002 17:43:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't" }, { "msg_contents": "Folks,\n\n> Adding too many options to initdb is not a path I would prefer since\n> initdb happens mostly hidden from the user these days.\n\nWhat about adding a parameter to CREATE DATABASE, then? Like CREATE DATABASE \ndb1 WITH (SECURE)?\n\n-- \n-Josh Berkus\n\n\n", "msg_date": "Mon, 17 Jun 2002 17:10:02 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't import large\n\tobjects in most recent cvs)" }, { "msg_contents": "\nHave we addressed this? I don't think so.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Ron Snyder <snyder@roguewave.com> writes:\n> > May 31 16:11:50 vault pgcvs[2135]: [91] LOG: query: Create Temporary Table\n> > pg_dump_blob_xref(oldOid pg_catalog.oid, newOid pg_catalog.oid);\n> > May 31 16:11:50 vault pgcvs[2135]: [93] ERROR: quickview: not authorized to\n> > create temp tables\n> \n> > My theory is that I'm getting this last message (not authorized to create\n> > temp tables) because the permissions have been tightened down.\n> \n> Yeah. Right at the moment, new databases default to only-db-owner-has-\n> any-rights, which means that others cannot create schemas or temp tables\n> in that database (unless they're superusers). 
I'm of the opinion that\n> this is a bad default, but was waiting to see if anyone complained\n> before starting a discussion about it.\n> \n> Probably we should have temp table creation allowed to all by default.\n> I'm not convinced that that's a good idea for schema-creation privilege\n> though. Related issues: what should initdb set as the permissions for\n> template1? Would it make sense for newly created databases to copy\n> their permission settings from the template database? (Probably not,\n> since the owner is likely to be different.) What about copying those\n> per-database config settings Peter just invented?\n> \n> Comments anyone? \n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 26 Aug 2002 18:03:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't import" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Have we addressed this? I don't think so.\n\nNo, it's not done yet. My inclination is\n\n* Template1 has temp table creation and schema creation disabled\n(disallowed to world) by default.\n\n* CREATE DATABASE sets up new databases with temp table creation allowed\nto world and schema creation allowed to DB owner only (regardless of\nwhat the template database had). 
The owner can adjust this default\nafterwards if he doesn't like it.\n\nIt would be nice to lock down the public schema in template1 too, but I\nsee no good way to do that, because CREATE DATABASE can't readily fiddle\nwith protections *inside* the database --- the only games we can play\nare with the protections stored in the pg_database row itself. So\npublic's permissions are going to be inherited from the template\ndatabase, and that means template1's public has to be writable.\n\nObjections anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Aug 2002 20:14:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't import large\n\tobjects in most recent cvs)" }, { "msg_contents": "\nSorry, I am confused. Why can we modify temp's permissions on CREATE\nDATABASE but not public's permissions?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Have we addressed this? I don't think so.\n> \n> No, it's not done yet. My inclination is\n> \n> * Template1 has temp table creation and schema creation disabled\n> (disallowed to world) by default.\n> \n> * CREATE DATABASE sets up new databases with temp table creation allowed\n> to world and schema creation allowed to DB owner only (regardless of\n> what the template database had). The owner can adjust this default\n> afterwards if he doesn't like it.\n> \n> It would be nice to lock down the public schema in template1 too, but I\n> see no good way to do that, because CREATE DATABASE can't readily fiddle\n> with protections *inside* the database --- the only games we can play\n> are with the protections stored in the pg_database row itself. 
So\n> public's permissions are going to be inherited from the template\n> database, and that means template1's public has to be writable.\n> \n> Objections anyone?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 26 Aug 2002 22:27:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't import" }, { "msg_contents": "Mostly because a user may explicitly create a database with wanted\npermissions, only to have this 'special code' remove them.\n\nI personally intend to immediately revoke permissions on public in\ntemplate1, to allow the database owner to grant them as needed.\n\nOn Mon, 2002-08-26 at 22:27, Bruce Momjian wrote:\n> \n> Sorry, I am confused. Why can we modify temp's permissions on CREATE\n> DATABASE but not public's permissions?\n> \n> ---------------------------------------------------------------------------\n> \n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Have we addressed this? I don't think so.\n> > \n> > No, it's not done yet. My inclination is\n> > \n> > * Template1 has temp table creation and schema creation disabled\n> > (disallowed to world) by default.\n> > \n> > * CREATE DATABASE sets up new databases with temp table creation allowed\n> > to world and schema creation allowed to DB owner only (regardless of\n> > what the template database had). 
The owner can adjust this default\n> > afterwards if he doesn't like it.\n> > \n> > It would be nice to lock down the public schema in template1 too, but I\n> > see no good way to do that, because CREATE DATABASE can't readily fiddle\n> > with protections *inside* the database --- the only games we can play\n> > are with the protections stored in the pg_database row itself. So\n> > public's permissions are going to be inherited from the template\n> > database, and that means template1's public has to be writable.\n> > \n> > Objections anyone?\n> > \n> > \t\t\tregards, tom lane\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n\n", "msg_date": "26 Aug 2002 23:32:20 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't" }, { "msg_contents": "\nOh, so we don't modify public writeability of template1 because the\nadmin may want to disable write in template1 so all future databases\nwill have it disabled. I see.\n\nSo template1 is writable (yuck) only so databases created from template1\nare writeable to world by default. 
Is that accurate?\n\n---------------------------------------------------------------------------\n\nRod Taylor wrote:\n> Mostly because a user may explicitly create a database with wanted\n> permissions, only to have this 'special code' remove them.\n> \n> I personally intend to immediately revoke permissions on public in\n> template1, to allow the database owner to grant them as needed.\n> \n> On Mon, 2002-08-26 at 22:27, Bruce Momjian wrote:\n> > \n> > Sorry, I am confused. Why can we modify temp's permissions on CREATE\n> > DATABASE but not public's permissions?\n> > \n> > ---------------------------------------------------------------------------\n> > \n> > Tom Lane wrote:\n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > Have we addressed this? I don't think so.\n> > > \n> > > No, it's not done yet. My inclination is\n> > > \n> > > * Template1 has temp table creation and schema creation disabled\n> > > (disallowed to world) by default.\n> > > \n> > > * CREATE DATABASE sets up new databases with temp table creation allowed\n> > > to world and schema creation allowed to DB owner only (regardless of\n> > > what the template database had). The owner can adjust this default\n> > > afterwards if he doesn't like it.\n> > > \n> > > It would be nice to lock down the public schema in template1 too, but I\n> > > see no good way to do that, because CREATE DATABASE can't readily fiddle\n> > > with protections *inside* the database --- the only games we can play\n> > > are with the protections stored in the pg_database row itself. 
So\n> > > public's permissions are going to be inherited from the template\n> > > database, and that means template1's public has to be writable.\n> > > \n> > > Objections anyone?\n> > > \n> > > \t\t\tregards, tom lane\n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 2: you can get off all lists at once with the unregister command\n> > > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> > > \n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 359-1001\n> > + If your life is a hard drive, | 13 Roberts Road\n> > + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> > \n> > http://archives.postgresql.org\n> > \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 26 Aug 2002 23:45:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't" }, { "msg_contents": "On Mon, 2002-08-26 at 23:45, Bruce Momjian wrote:\n> \n> Oh, so we don't modify public writeability of template1 because the\n> admin may want to disable write in template1 so all future databases\n> will have it disabled. I see.\n> \n> So template1 is writable (yuck) only so databases created from template1\n> are writeable to world by default. 
Is that accurate?\n\nI believe thats the crux of the issue -- but those of us who don't want\nnewly created DBs to be world writable have no issues with that :)\n\n\nCould create a template2 as the default 'copy from' template. Make it\nconnectible strictly by superusers. Template1 becomes a holding area\nfor those without a db to connect to and can be locked down.\n\n\nAnother is to enable users to connect to the server without requiring a\ndatabase. This basically removes the secondary requirement of template1\nto be the holding area for those otherwise without a home.\n\n\n> ---------------------------------------------------------------------------\n> \n> Rod Taylor wrote:\n> > Mostly because a user may explicitly create a database with wanted\n> > permissions, only to have this 'special code' remove them.\n> > \n> > I personally intend to immediately revoke permissions on public in\n> > template1, to allow the database owner to grant them as needed.\n> > \n> > On Mon, 2002-08-26 at 22:27, Bruce Momjian wrote:\n> > > \n> > > Sorry, I am confused. Why can we modify temp's permissions on CREATE\n> > > DATABASE but not public's permissions?\n> > > \n> > > ---------------------------------------------------------------------------\n> > > \n> > > Tom Lane wrote:\n> > > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > > Have we addressed this? I don't think so.\n> > > > \n> > > > No, it's not done yet. My inclination is\n> > > > \n> > > > * Template1 has temp table creation and schema creation disabled\n> > > > (disallowed to world) by default.\n> > > > \n> > > > * CREATE DATABASE sets up new databases with temp table creation allowed\n> > > > to world and schema creation allowed to DB owner only (regardless of\n> > > > what the template database had). 
The owner can adjust this default\n> > > > afterwards if he doesn't like it.\n> > > > \n> > > > It would be nice to lock down the public schema in template1 too, but I\n> > > > see no good way to do that, because CREATE DATABASE can't readily fiddle\n> > > > with protections *inside* the database --- the only games we can play\n> > > > are with the protections stored in the pg_database row itself. So\n> > > > public's permissions are going to be inherited from the template\n> > > > database, and that means template1's public has to be writable.\n> > > > \n> > > > Objections anyone?\n> > > > \n> > > > \t\t\tregards, tom lane\n> > > > \n> > > > ---------------------------(end of broadcast)---------------------------\n> > > > TIP 2: you can get off all lists at once with the unregister command\n> > > > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> > > > \n> > > \n> > > -- \n> > > Bruce Momjian | http://candle.pha.pa.us\n> > > pgman@candle.pha.pa.us | (610) 359-1001\n> > > + If your life is a hard drive, | 13 Roberts Road\n> > > + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 6: Have you searched our list archives?\n> > > \n> > > http://archives.postgresql.org\n> > > \n> > \n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n> \n\n\n", "msg_date": "26 Aug 2002 23:56:05 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't" }, { "msg_contents": "\nIt just bothers me that of all the databases that should be locked down,\nit should be template1, and it isn't by default.\n\n---------------------------------------------------------------------------\n\nRod Taylor wrote:\n> On Mon, 2002-08-26 at 23:45, Bruce Momjian wrote:\n> > \n> > Oh, so we don't modify public writeability of template1 because the\n> > admin may want to disable write in template1 so all future databases\n> > will have it disabled. I see.\n> > \n> > So template1 is writable (yuck) only so databases created from template1\n> > are writeable to world by default. Is that accurate?\n> \n> I believe thats the crux of the issue -- but those of us who don't want\n> newly created DBs to be world writable have no issues with that :)\n> \n> \n> Could create a template2 as the default 'copy from' template. Make it\n> connectible strictly by superusers. Template1 becomes a holding area\n> for those without a db to connect to and can be locked down.\n> \n> \n> Another is to enable users to connect to the server without requiring a\n> database. This basically removes the secondary requirement of template1\n> to be the holding area for those otherwise without a home.\n> \n> \n> > ---------------------------------------------------------------------------\n> > \n> > Rod Taylor wrote:\n> > > Mostly because a user may explicitly create a database with wanted\n> > > permissions, only to have this 'special code' remove them.\n> > > \n> > > I personally intend to immediately revoke permissions on public in\n> > > template1, to allow the database owner to grant them as needed.\n> > > \n> > > On Mon, 2002-08-26 at 22:27, Bruce Momjian wrote:\n> > > > \n> > > > Sorry, I am confused. 
Why can we modify temp's permissions on CREATE\n> > > > DATABASE but not public's permissions?\n> > > > \n> > > > ---------------------------------------------------------------------------\n> > > > \n> > > > Tom Lane wrote:\n> > > > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > > > Have we addressed this? I don't think so.\n> > > > > \n> > > > > No, it's not done yet. My inclination is\n> > > > > \n> > > > > * Template1 has temp table creation and schema creation disabled\n> > > > > (disallowed to world) by default.\n> > > > > \n> > > > > * CREATE DATABASE sets up new databases with temp table creation allowed\n> > > > > to world and schema creation allowed to DB owner only (regardless of\n> > > > > what the template database had). The owner can adjust this default\n> > > > > afterwards if he doesn't like it.\n> > > > > \n> > > > > It would be nice to lock down the public schema in template1 too, but I\n> > > > > see no good way to do that, because CREATE DATABASE can't readily fiddle\n> > > > > with protections *inside* the database --- the only games we can play\n> > > > > are with the protections stored in the pg_database row itself. So\n> > > > > public's permissions are going to be inherited from the template\n> > > > > database, and that means template1's public has to be writable.\n> > > > > \n> > > > > Objections anyone?\n> > > > > \n> > > > > \t\t\tregards, tom lane\n> > > > > \n> > > > > ---------------------------(end of broadcast)---------------------------\n> > > > > TIP 2: you can get off all lists at once with the unregister command\n> > > > > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> > > > > \n> > > > \n> > > > -- \n> > > > Bruce Momjian | http://candle.pha.pa.us\n> > > > pgman@candle.pha.pa.us | (610) 359-1001\n> > > > + If your life is a hard drive, | 13 Roberts Road\n> > > > + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n> > > > \n> > > > ---------------------------(end of broadcast)---------------------------\n> > > > TIP 6: Have you searched our list archives?\n> > > > \n> > > > http://archives.postgresql.org\n> > > > \n> > > \n> > > \n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 5: Have you checked our extensive FAQ?\n> > > \n> > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > \n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 359-1001\n> > + If your life is a hard drive, | 13 Roberts Road\n> > + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> > \n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 26 Aug 2002 23:58:40 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> So template1 is writable (yuck) only so databases created from template1\n> are writeable to world by default. Is that accurate?\n\nYup.\n\nI had a probably-harebrained idea about this: the writeability of public\nis only a serious issue when it is the default creation-target schema.\nIt's likely that you'd say \"create table foo\" without reflecting about\nthe fact that you're connected to template1; much less likely that you'd\nsay \"create table public.foo\". So, what if the default per-database GUC\nsettings for template1 include setting the search_path to empty? 
That\nwould preclude accidental table creation in template1's public schema.\nAs long as CREATE DATABASE doesn't copy the per-database GUC settings of\nthe template database, copied databases wouldn't be similarly crippled.\n\nNow I'm not entirely convinced that CREATE DATABASE shouldn't copy the\nper-database GUC settings of the template. But at the moment it\ndoesn't, and if we're willing to institutionalize that behavior then\nit'd provide a way out.\n\nOr is that too weird?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Aug 2002 00:08:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't " }, { "msg_contents": "\nI had a good chuckle with this. It is the type of \"shoot for the moon\"\nidea I would have. Maybe I am rubbing off on you. :-)\n\nThe only problem I see with this solution is it makes admins think their\ntemplate1 is safe, when it really isn't. That seems more dangerous than\nleaving it world-writable. I don't think accidental writes into\ntemplate1 are common enough to add a possible admin confusion factor.\n\nWhat we really need is some mode on template1 that says, \"I am not\nworld-writable, but the admin hasn't made me world-non-writable, so I\nwill create new databases that are world-writable\". Does that make\nsense?\n\nI have an idea. Could we have the template1 per-database GUC settings\ncontrol the writeability of databases created from template1, sort of a\n'creation GUC setting', so we could run it on the new database once it\nis created? That way, we could make template1 public\nnon-world-writable, and put something in the template1 per-database GUC\nsetting to make databases created from template1 world-writable. If\nsomeone removes that GUC setting, the databases get created non-world\nwritable.\n\nOh, there I go again, shooting at the moon. ;-)\n\nAnother idea. 
Is there a GUC setting we could put in template1 that\nwould disable writing to public for world and _couldn't_ be revoked by\nthe user, except for super users?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > So template1 is writable (yuck) only so databases created from template1\n> > are writeable to world by default. Is that accurate?\n> \n> Yup.\n> \n> I had a probably-harebrained idea about this: the writeability of public\n> is only a serious issue when it is the default creation-target schema.\n> It's likely that you'd say \"create table foo\" without reflecting about\n> the fact that you're connected to template1; much less likely that you'd\n> say \"create table public.foo\". So, what if the default per-database GUC\n> settings for template1 include setting the search_path to empty? That\n> would preclude accidental table creation in template1's public schema.\n> As long as CREATE DATABASE doesn't copy the per-database GUC settings of\n> the template database, copied databases wouldn't be similarly crippled.\n> \n> Now I'm not entirely convinced that CREATE DATABASE shouldn't copy the\n> per-database GUC settings of the template. But at the moment it\n> doesn't, and if we're willing to institutionalize that behavior then\n> it'd provide a way out.\n> \n> Or is that too weird?\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 27 Aug 2002 00:22:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't" }, { "msg_contents": "On Tue, 27 Aug 2002, Bruce Momjian wrote:\n\n> \n> I had a good chuckle with this. 
It is the type of \"shoot for the moon\"\n> idea I would have. Maybe I am rubbing off on you. :-)\n> \n> The only problem I see with this solution is it makes admins think their\n> template1 is safe, when it really isn't. That seems more dangerous than\n> leaving it world-writable. I don't think accidental writes into\n> template1 are common enough to add a possible admin confusion factor.\n> \n> What we really need is some mode on template1 that says, \"I am not\n> world-writable, but the admin hasn't made me world-non-writable, so I\n> will create new databases that are world-writable\". Does that make\n> sense?\n> \n> I have an idea. Could we have the template1 per-database GUC settings\n> control the writeability of databases created from template1, sort of a\n> 'creation GUC setting', so we could run it on the new database once it\n> is created? That way, we could make template1 public\n> non-world-writable, and put something in the template1 per-database GUC\n> setting to make databases created from template1 world-writable. If\n> someone removes that GUC setting, the databases get created non-world\n> writable.\n> \n> Oh, there I go again, shooting at the moon. ;-)\n> \n> Another idea. Is there a GUC setting we could put in template1 that\n> would disable writing to public for world and _couldn't_ be revoked by\n> the user, except for super users?\n\nI think your idea is good. Is there a chance we can have a set of very \ngross permissions based on the user and the database they are connected \nto and lives on top of the other security? I.e. UserA can READ from \ndatabaseB, and READ/WRITE from/to databaseA\n\nThen, the regular permissions live under that? Maybe we could have a \nsome system that ANDed or ORed the perms easily so it wasn't slow or \nrequired a lot of extra programming, and if we really wanted it to not get \nin the way, only have it apply to the template databases?\n\nWell, if there's any good ideas in there, let me know. 
:-) \n\n", "msg_date": "Tue, 27 Aug 2002 09:40:31 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't" }, { "msg_contents": "\nOK, we are rolling out schemas in 7.3. We better figure out if we have\nthe best solution for this.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Have we addressed this? I don't think so.\n> \n> No, it's not done yet. My inclination is\n> \n> * Template1 has temp table creation and schema creation disabled\n> (disallowed to world) by default.\n> \n> * CREATE DATABASE sets up new databases with temp table creation allowed\n> to world and schema creation allowed to DB owner only (regardless of\n> what the template database had). The owner can adjust this default\n> afterwards if he doesn't like it.\n> \n> It would be nice to lock down the public schema in template1 too, but I\n> see no good way to do that, because CREATE DATABASE can't readily fiddle\n> with protections *inside* the database --- the only games we can play\n> are with the protections stored in the pg_database row itself. So\n> public's permissions are going to be inherited from the template\n> database, and that means template1's public has to be writable.\n> \n> Objections anyone?\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 1 Sep 2002 22:52:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't import" }, { "msg_contents": "\nCan someone tell me where we are on this; exactly what writability do\nwe have in 7.3?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Have we addressed this? I don't think so.\n> \n> No, it's not done yet. My inclination is\n> \n> * Template1 has temp table creation and schema creation disabled\n> (disallowed to world) by default.\n> \n> * CREATE DATABASE sets up new databases with temp table creation allowed\n> to world and schema creation allowed to DB owner only (regardless of\n> what the template database had). The owner can adjust this default\n> afterwards if he doesn't like it.\n> \n> It would be nice to lock down the public schema in template1 too, but I\n> see no good way to do that, because CREATE DATABASE can't readily fiddle\n> with protections *inside* the database --- the only games we can play\n> are with the protections stored in the pg_database row itself. So\n> public's permissions are going to be inherited from the template\n> database, and that means template1's public has to be writable.\n> \n> Objections anyone?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 28 Sep 2002 22:07:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't import" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Can someone tell me where we are on this; exactly what writability do\n> we have in 7.3?\n\nThe current code implements what I suggested in that note, viz:\ndefault permissions for new databases are\n\towner = all rights (ie, create schema and create temp)\n\tpublic = create temp right only\nbut template1 and template0 are set to\n\towner (postgres user) = all rights\n\tpublic = no rights\nby initdb.\n\nAlso, the \"public\" schema within template1 is empty but writable by\npublic. This is annoying, but at least it's easy to fix if you\nmess up --- you can DROP SCHEMA public CASCADE and then recreate\nthe schema. (Or not, if you don't want to.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Sep 2002 22:37:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't import large\n\tobjects in most recent cvs)" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Can someone tell me where we are on this; exactly what writability do\n> > we have in 7.3?\n> \n> The current code implements what I suggested in that note, viz:\n> default permissions for new databases are\n> \towner = all rights (ie, create schema and create temp)\n> \tpublic = create temp right only\n> but template1 and template0 are set to\n> \towner (postgres user) = all rights\n> \tpublic = no rights\n> by initdb.\n> \n> Also, the \"public\" schema within template1 is empty but writable by\n> public. This is annoying, but at least it's easy to fix if you\n> mess up --- you can DROP SCHEMA public CASCADE and then recreate\n> the schema. 
(Or not, if you don't want to.)\n\nOK, yes, this is what I thought, that public in all databases is\nworld-writable, but you can control that by dropping and recreating the\npublic schema, or altering the schema, right?\n\nHow did you get temp schemas non-world writable in template1 but not in\nthe databases, or am I confused?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 28 Sep 2002 22:47:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't import" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> How did you get temp schemas non-world writable in template1 but not in\n> the databases, or am I confused?\n\nThat right is associated with the database, so we just have to control\nwhat CREATE DATABASE puts in the new pg_database row.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Sep 2002 22:56:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default privileges for new databases (was Re: Can't import large\n\tobjects in most recent cvs)" } ]
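[Editor's note: the lockdown discussed in the thread above, revoking world write access to template1's public schema or rebuilding the schema outright, can be sketched in plain SQL. This is a sketch for a 7.3-era server, run as a superuser connected to template1; whether to keep USAGE for PUBLIC is a local policy choice.]

```sql
-- Connected to template1 as a superuser.

-- Option 1: keep the schema but revoke world write access, so that
-- databases cloned from template1 inherit a locked-down public schema:
REVOKE ALL ON SCHEMA public FROM PUBLIC;
GRANT USAGE ON SCHEMA public TO PUBLIC;  -- optional: still allow lookups

-- Option 2 (per Tom's note above): drop and recreate the schema outright:
DROP SCHEMA public CASCADE;
CREATE SCHEMA public;
```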
[ { "msg_contents": "Argh. I just realized that I gave this the wrong subject-- it should've been\n\"Can't pg_restore large objects\"\n \n> Digging a bit, I've discovered this:\n> 1) usesysid 1 owns the database in the old server, but all \n> the tables are\n> owned by 'qvowner' (and others).\n> 2) qvowner does not have dba privs\n> \n> My theory is that I'm getting this last message (not \n> authorized to create\n> temp tables) because the permissions have been tightened down.\n\nThis test case works just fine with 7.2.1, but fails with my 'checked out\ntoday' code.\nHere is my shell script test case:\n# this script assumes that the current user can connect without\n# being prompted for a password\n\ncreateuser -A -D lotest1\ncreateuser -A -D lotest2\n\ncreatedb lotest1\n\nTESTF=/tmp/pgtest$$\ncat >> $TESTF <<EOF\nThis is just a simple little file\nEOF\n\n#I don't think that this table is absolutely necessary for this test\npsql lotest1 lotest1 -c \"create table a (bah int);\"\n#now create the troublemaker table\npsql lotest1 lotest2 -c \"create table z (bah int);\"\npsql lotest1 lotest1 -c \"\\lo_import $TESTF\";\npg_dump --blobs --format=c --quotes --oids --compress=5 lotest1 >\n/tmp/lotest1.dump\npsql template1 -c \"drop database lotest1;\"\ncreatedb lotest1\npg_restore -d lotest1 < /tmp/lotest1.dump\n\n## cleanup\n\nrm $TESTF\nrm /tmp/lotest1.dump\n\ndropdb lotest1\n\ndropuser lotest1\ndropuser lotest2\n########## End of test case\n\n\n\n\n\nIf that is according to design, then migration could be very painful going\nto 7.3 because some databases could have tables owned by several different\nusers. \n\n> \n> I believe that I can safely change the ownership of the \n> database in the old\n> server to qvowner, right? And run the pg_dump and pg_restore again? 
Or\n> should pg_restore connect as the superuser and just change ownership\n> afterwards?\n> \n> -ron\n> \n> \n> \n> > (I've thought several times that we should clean up pg_dump and\n> > pg_restore so that they report the failed query and backend \n> message in\n> > *all* cases; right now they're pretty haphazard about it.)\n> > \n> > \t\t\tregards, tom lane\n> > \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n", "msg_date": "Fri, 31 May 2002 21:51:00 -0700", "msg_from": "Ron Snyder <snyder@roguewave.com>", "msg_from_op": true, "msg_subject": "Re: Can't import large objects in most recent cvs (2002" } ]
[ { "msg_contents": "Hello, I have been trying to get the make_ctags script working. On Redhat 7.3 \n(that is all I have access to at the moment.) the script generates the \nfollowing output:\n\n[matthew@zeutrh73 src]$ pwd\n/home/matthew/src/pgsql/src\n[matthew@zeutrh73 src]$ ./tools/make_ctags\nctags: Unknown option: -d\nctags: Unknown option: -d\nctags: Unknown option: -d\nsort: open failed: tags: No such file or directory\n[matthew@zeutrh73 src]$ cat /etc/redhat-release\nRed Hat Linux release 7.3 (Valhalla)\n[matthew@zeutrh73 src]$ ctags --version\nExuberant Ctags 5.2.2, Copyright (C) 1996-2001 Darren Hiebert\n Compiled: Feb 26 2002, 04:51:30\n Addresses: <dhiebert@users.sourceforge.net>, http://ctags.sourceforge.net\n Optional compiled features: +wildcards, +regex\n\nThe ./tags file is not created, so all the symlinks created throughout \nthe source tree are broken.\n\nThe make_ctags script runs without error if I change line 5\n\nfrom: \n\t-type f -name '*.[chyl]' -print|xargs ctags -d -t -a -f tags\nto:\t\n\t-type f -name '*.[chyl]' -print|xargs ctags -a -f tags\n\nThe man page for ctags does not list -d or -t as valid options.\n\nAm I doing something wrong? Something with the version of ctags provided by \nRedhat?\n\nAlso when I attempt to use the tags file that was created by the modified \nmake_ctags script, I get the following errors from vi:\n\nE432: Tags file not sorted: tags\nE426: tag not found: BUFFER_LOCK_UNLOCK\n\nMatthew\n", "msg_date": "Sun, 2 Jun 2002 04:12:12 -0400", "msg_from": "\"Mattew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": true, "msg_subject": "make_ctags problem" }, { "msg_contents": "FreeBSD man page for ctags:\n\n -d Create tags for #defines that do not take arguments; #defines\n that take arguments are tagged automatically.\n\n -t Create tags for typedefs, structs, unions, and enums.\n\n\nChris\n\n----- Original Message -----\nFrom: \"Mattew T. 
O'Connor\" <matthew@zeut.net>\nTo: <pgsql-hackers@postgresql.org>\nSent: Sunday, June 02, 2002 1:12 AM\nSubject: [HACKERS] make_ctags problem\n\n\n> Hello, I have been trying to get the make_ctags script working. On Redhat\n7.3\n> (that is all I have access to at the moment.) the script generates the\n> following output:\n>\n> [matthew@zeutrh73 src]$ pwd\n> /home/matthew/src/pgsql/src\n> [matthew@zeutrh73 src]$ ./tools/make_ctags\n> ctags: Unknown option: -d\n> ctags: Unknown option: -d\n> ctags: Unknown option: -d\n> sort: open failed: tags: No such file or directory\n> [matthew@zeutrh73 src]$ cat /etc/redhat-release\n> Red Hat Linux release 7.3 (Valhalla)\n> [matthew@zeutrh73 src]$ ctags --version\n> Exuberant Ctags 5.2.2, Copyright (C) 1996-2001 Darren Hiebert\n> Compiled: Feb 26 2002, 04:51:30\n> Addresses: <dhiebert@users.sourceforge.net>,\nhttp://ctags.sourceforge.net\n> Optional compiled features: +wildcards, +regex\n>\n> The ./tags file created is not created, so all the symlinks created\nthrought\n> the source tree are broken.\n>\n> The make_ctags script runs without error if I change line 5\n>\n> from:\n> -type f -name '*.[chyl]' -print|xargs ctags -d -t -a -f tags\n> to:\n> -type f -name '*.[chyl]' -print|xargs ctags -a -f tags\n>\n> The man page for ctags does not list -d or -t as valid options.\n>\n> Am I doing somthing wrong? 
Something with the version of ctags provided\nby\n> Redhat?\n>\n> Also when I attempt to use the tags file what was created by the modified\n> make_ctags script, I get the following errors from vi:\n>\n> E432: Tags file not sorted: tags\n> E426: tag not found: BUFFER_LOCK_UNLOCK\n>\n> Matthew\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n", "msg_date": "Sun, 2 Jun 2002 11:03:41 -0700", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: make_ctags problem" } ]
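[Editor's note on the thread above: the clash is that BSD ctags wants -d (tag #defines) and -t (tag typedefs/structs/unions/enums), while Exuberant Ctags rejects both flags and tags those constructs by default. A minimal sketch of a portable workaround, selecting the flag set from the version banner — the function name and the probe are illustrative, not the actual make_ctags fix that was eventually committed:]

```shell
# Pick ctags flags based on which ctags variant is installed.
# BSD ctags needs -d and -t; Exuberant Ctags rejects both and
# already tags #defines and typedefs without them.
pick_ctags_flags () {
    case "$1" in
        *Exuberant*) echo "-a -f tags" ;;
        *)           echo "-d -t -a -f tags" ;;
    esac
}

FLAGS=$(pick_ctags_flags "$(ctags --version 2>/dev/null)")
# make_ctags line 5 would then become:
#   find . -type f -name '*.[chyl]' -print | xargs ctags $FLAGS
```

[The E432 "Tags file not sorted" error from vi is a separate symptom: vi compares tags in plain byte order, so the sort step may need to run under LC_ALL=C — an assumption about the local collation, not something stated in the thread.]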
[ { "msg_contents": "\tHi all!\n\n\tFirst of all, thanks for your fine job and the answers to a problem\n(which I thought it was a PG bug) I posted some days ago. I've solved it!\n\n\tNow I've got another strange issue to post...\n\n\tBriefing: the program runs daily and this (please see below) error\ndoes not always happen, actually this was the first time this week.\n\n\tI hope you can help me?... If I'm doing something wrong, please let\nme know...\n\n\tThanks in advance! :-)\n\n\n[JDBC driver error report]\norg.sourceforge.jxutil.sql.I18nSQLException: sqlError[tuplesort: unexpected end of data]\n\tat org.sourceforge.jxdbcon.postgresql.PGExceptions.sqlError(Unknown Source)\n\tat org.sourceforge.jxdbcon.postgresql.PGErrors.throwError(Unknown Source)\n\tat org.sourceforge.jxdbcon.postgresql.PGResult.checkException(Unknown Source)\n\tat org.sourceforge.jxdbcon.postgresql.PGExecResult.checkException(Unknown Source)\n\tat org.sourceforge.jxdbcon.postgresql.PGConnection.executeSQL(Unknown Source)\n\tat org.sourceforge.jxdbcon.postgresql.PGStatement.execute(Unknown Source)\n\tat org.sourceforge.jxdbcon.AbstractStatement.executeQuery(Unknown Source)\n(...)\n\n[PostgreSQL log excerpt]\n(...)\n(comment: the query in question)\nJun 2 05:35:48 srv31 postgres[2986]: [57277] DEBUG: query: SELECT fill_warehouse()\nJun 2 05:35:48 srv31 postgres[2986]: [57278-1] DEBUG: query: insert into warehouse (uri, expression, n, relevance, spid_measure, size, title, sample) select d.uri,\nJun 2 05:35:48 srv31 postgres[2986]: [57278-2] dn.expression, n.n, dn.relevance, d.spid_measure, d.size, d.title, dn.sample from document as d inner join (document_n_gram as\nJun 2 05:35:48 srv31 postgres[2986]: [57278-3] dn inner join n_gram as n on (dn.expression = n.expression)) on (d.uri = dn.uri) order by dn.expression asc, n.n asc\nJun 2 06:37:08 srv31 postgres[11485]: [3317] DEBUG: recycled transaction log file 00000018000000DA\nJun 2 06:37:08 srv31 postgres[11485]: [3318] DEBUG: recycled 
transaction log file 00000018000000DB\n(commment: this is a local monitor server)\nJun 2 06:39:39 srv31 postgres[11488]: [3317] FATAL 1: No pg_hba.conf entry for host 193.136.120.30, user root, database template1\nJun 2 06:44:38 srv31 postgres[11498]: [3317] FATAL 1: No pg_hba.conf entry for host 193.136.120.30, user root, database template1\nJun 2 06:49:39 srv31 postgres[11505]: [3317] FATAL 1: No pg_hba.conf entry for host 193.136.120.30, user root, database template1\nJun 2 06:54:39 srv31 postgres[11515]: [3317] FATAL 1: No pg_hba.conf entry for host 193.136.120.30, user root, database template1\nJun 2 06:59:38 srv31 postgres[11522]: [3317] FATAL 1: No pg_hba.conf entry for host 193.136.120.30, user root, database template1\nJun 2 07:04:39 srv31 postgres[11541]: [3317] FATAL 1: No pg_hba.conf entry for host 193.136.120.30, user root, database template1\nJun 2 07:09:39 srv31 postgres[11548]: [3317] FATAL 1: No pg_hba.conf entry for host 193.136.120.30, user root, database template1\nJun 2 07:14:38 srv31 postgres[11558]: [3317] FATAL 1: No pg_hba.conf entry for host 193.136.120.30, user root, database template1\nJun 2 07:19:38 srv31 postgres[11565]: [3317] FATAL 1: No pg_hba.conf entry for host 193.136.120.30, user root, database template1\nJun 2 07:24:39 srv31 postgres[11575]: [3317] FATAL 1: No pg_hba.conf entry for host 193.136.120.30, user root, database template1\n(comment: the error)\nJun 2 06:29:37 srv31 postgres[2986]: [57279] ERROR: tuplesort: unexpected end of data\nJun 2 06:29:37 srv31 postgres[2986]: [57280] NOTICE: Error occurred while executing PL/pgSQL function fill_warehouse\nJun 2 06:29:37 srv31 postgres[2986]: [57281] NOTICE: line 2 at SQL statement\nJun 2 07:29:39 srv31 postgres[11582]: [3317] FATAL 1: No pg_hba.conf entry for host 193.136.120.30, user root, database template1\nJun 2 06:29:40 srv31 postgres[2986]: [57282] DEBUG: query: COMMIT\nJun 2 06:29:40 srv31 postgres[2986]: [57283] DEBUG: ProcessUtility: COMMIT\nJun 2 06:29:41 srv31 
postgres[2985]: [48998] DEBUG: pq_recvbuf: unexpected EOF on client connection\nJun 2 06:29:41 srv31 postgres[2982]: [3335] DEBUG: pq_recvbuf: unexpected EOF on client connection\nJun 2 06:29:41 srv31 postgres[2986]: [57284] DEBUG: pq_recvbuf: unexpected EOF on client connection\n(...)\n\n[System info]\nRed Hat Linux release 7.2 (Enigma)\nKernel 2.4.9-13\npostgresql-libs-7.2.1-2PGDG\npostgresql-server-7.2.1-2PGDG\npostgresql-7.2.1-2PGDG\n\n-- \n o__\t\tBem haja,\n _.>/ _\t\t\tNunoACHenriques\n (_) \\(_)\t\t\t~~~~~~~~~~~~~~~\n\t\t\t\thttp://students.fct.unl.pt/users/nuno/\n\n", "msg_date": "Sun, 2 Jun 2002 17:20:52 +0100 (WEST)", "msg_from": "NunoACHenriques <nach@fct.unl.pt>", "msg_from_op": true, "msg_subject": "tuplesort: unexpected end of data" }, { "msg_contents": "NunoACHenriques <nach@fct.unl.pt> writes:\n> Jun 2 06:29:37 srv31 postgres[2986]: [57279] ERROR: tuplesort: unexpected end of data\n\nHmm. This is an internal consistency check in the sort code. Perhaps\nyou've found a bug, but there's not enough info here to do much. Can\nyou provide the EXPLAIN plan for the query that's triggering the error?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jun 2002 15:58:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: tuplesort: unexpected end of data " }, { "msg_contents": "\tHi!\n\n\tI should say that this error is a non-deterministic one. 
It happened \nonce/twenty...\n\n-------------------------------explain info---------------------------------\nspid=> explain insert into warehouse_tmp (uri, expression, n, relevance, spid_measure, size, title, sample)\nspid-> select d.uri, dn.expression, n.n, dn.relevance, d.spid_measure, d.size, d.title, dn.sample\nspid-> from document as d\nspid-> inner join (document_n_gram as dn\nspid(> inner join n_gram as n\nspid(> on (dn.expression = n.expression))\nspid-> on (d.uri = dn.uri)\nspid-> order by dn.expression asc, n.n asc ;\nNOTICE: QUERY PLAN:\n\nSubquery Scan *SELECT* (cost=3895109.07..3895109.07 rows=1009271 width=886)\n -> Sort (cost=3895109.07..3895109.07 rows=1009271 width=886)\n -> Hash Join (cost=1155071.81..2115045.12 rows=1009271 width=886)\n -> Merge Join (cost=1154294.92..1170599.85 rows=1009271 width=588)\n -> Sort (cost=1001390.67..1001390.67 rows=1009271 width=439)\n -> Seq Scan on document_n_gram dn (cost=0.00..49251.71 rows=1009271 width=439)\n -> Sort (cost=152904.25..152904.25 rows=466345 width=149)\n -> Seq Scan on n_gram n (cost=0.00..12795.45 rows=466345 width=149)\n -> Hash (cost=767.71..767.71 rows=3671 width=298)\n -> Seq Scan on document d (cost=0.00..767.71 rows=3671 width=298)\n\nEXPLAIN\nspid=> \n----------------------------------------------------------------------------\n\n\tIf you need more info, just ask. I'll be glad to contribute to a so \ninteresting project! :-)\n\n-- \n o__\t\tBem haja,\n _.>/ _\t\t\tNunoACHenriques\n (_) \\(_)\t\t\t~~~~~~~~~~~~~~~\n\t\t\t\thttp://students.fct.unl.pt/users/nuno/\n\nOn Sun, 9 Jun 2002, Tom Lane wrote:\n\n>NunoACHenriques <nach@fct.unl.pt> writes:\n>> Jun 2 06:29:37 srv31 postgres[2986]: [57279] ERROR: tuplesort: unexpected end of data\n>\n>Hmm. This is an internal consistency check in the sort code. Perhaps\n>you've found a bug, but there's not enough info here to do much. 
Can\n>you provide the EXPLAIN plan for the query that's triggering the error?\n>\n>\t\t\tregards, tom lane\n>\n\n", "msg_date": "Sun, 9 Jun 2002 21:19:57 +0100 (WEST)", "msg_from": "NunoACHenriques <nach@fct.unl.pt>", "msg_from_op": true, "msg_subject": "Re: tuplesort: unexpected end of data " }, { "msg_contents": "NunoACHenriques <nach@fct.unl.pt> writes:\n> \tI should say that this error is a non-deterministic one. It happened \n> once/twenty...\n\nIs the data in the tables changing constantly? If you can repeat the\nsame query on the same data and get varying results, then we're\ndealing with something odder than I suspected.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jun 2002 16:40:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: tuplesort: unexpected end of data " }, { "msg_contents": "\tHi!\n\nOn Sun, 9 Jun 2002, Tom Lane wrote:\n\n>Is the data in the tables changing constantly? If you can repeat the\n>same query on the same data and get varying results, then we're\n>dealing with something odder than I suspected.\n>\n>\t\t\tregards, tom lane\n>\n\n\tNot constantly, once a day.\n\n\tI cannot repeat the same query on the same data, but the app does the \nsame query over similar data (most static Web docs with little variation of \nabout 50 docs in 7000) but diferent to PG because of the drop/create \ntables before the new fill...\n\n-- \n o__\t\tBem haja,\n _.>/ _\t\t\tNunoACHenriques\n (_) \\(_)\t\t\t~~~~~~~~~~~~~~~\n\t\t\t\thttp://students.fct.unl.pt/users/nuno/\n\n\n", "msg_date": "Sun, 9 Jun 2002 21:51:43 +0100 (WEST)", "msg_from": "NunoACHenriques <nach@fct.unl.pt>", "msg_from_op": true, "msg_subject": "Re: tuplesort: unexpected end of data " }, { "msg_contents": "NunoACHenriques <nach@fct.unl.pt> writes:\n> On Sun, 9 Jun 2002, Tom Lane wrote:\n>> Is the data in the tables changing constantly?\n\n> \tNot constantly, once a day.\n\nCan't you set up a situation where the failure is reproducible, 
then?\nOn a day where you get the failure, dump the database and see if\nyou can load the data into a fresh database and reproduce the problem.\n\nI spent some time looking at the tuplesort code and could not see any\nreason for this failure. All that code has been fairly stable since\nit was written for 7.0, and AFAIR no one else has reported this error\nmessage. So either you've got a quite-unusual corner case that's\ntickling a previously unseen bug, or you've got flaky hardware that for\nsome reason is manifesting in this way.\n\nI don't necessarily believe the flaky-hardware theory, but I can't\nmake much progress on the bug theory without a test case to look at.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jun 2002 21:10:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: tuplesort: unexpected end of data " }, { "msg_contents": "On Sun, 9 Jun 2002, Tom Lane wrote:\n\n>Can't you set up a situation where the failure is reproducible, then?\n>On a day where you get the failure, dump the database and see if\n>you can load the data into a fresh database and reproduce the problem.\n>\n\tOk, I will do that...\n\n>I don't necessarily believe the flaky-hardware theory, but I can't\n>make much progress on the bug theory without a test case to look at.\n>\n\tNeither I believe it because the machine is well tested (including a\n24h memtest). But there is something I can't get of my mind: once a day my\napp \"forces\" PG to \"play\" with some 3GB of disk data in a ext2 fs. It is\nknown that sometimes ext2 corrupts data...\n\n\tThanks for the effort! 
:-)\n\n-- \n o__\t\tBem haja,\n _.>/ _\t\t\tNunoACHenriques\n (_) \\(_)\t\t\t~~~~~~~~~~~~~~~\n\t\t\t\thttp://students.fct.unl.pt/users/nuno/\n\n\n\n", "msg_date": "Mon, 10 Jun 2002 14:39:01 +0100 (WEST)", "msg_from": "NunoACHenriques <nach@fct.unl.pt>", "msg_from_op": true, "msg_subject": "Re: tuplesort: unexpected end of data " }, { "msg_contents": "NunoACHenriques <nach@fct.unl.pt> writes:\n> \tNeither I believe it because the machine is well tested (including a\n> 24h memtest). But there is something I can't get of my mind: once a day my\n> app \"forces\" PG to \"play\" with some 3GB of disk data in a ext2 fs. It is\n> known that sometimes ext2 corrupts data...\n\nUnless you've got things set up so that the temporary files created for\nthe sorting step are in the ext2 partition, this doesn't seem like it\ncould be the source of the problem. A more plausible theory is that a\nsegment of main RAM is bad, but you happen not to use that part of RAM\nexcept under heavy load (ie, while running this daily batch job).\n\nOr it could just be a bug. If you can get a reproducible test case I'll\nbe happy to dig into it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jun 2002 10:03:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: tuplesort: unexpected end of data " }, { "msg_contents": "\tHi!\n\n\tA different error today:\n[MemoryContextAlloc: invalid request size 4294967295]\n\n\tThis is a more often (twice a week) error and I don't understand \nwhy?...\n\n\tI'm verifying the machine: fsck (with bad blocks chk), ... 
but no\nhardware problems untill now.\n\n--------------------------------info------------------------------------\n$ pg_config --version\nPostgreSQL 7.2.1\n\n$ cat /etc/redhat-release \nRed Hat Linux release 7.2 (Enigma)\n\n$/var/log/pgsql (excerpt)\n(...)\nJun 11 03:06:04 srv31 postgres[13914]: [3403] ERROR: cannot open segment 1 of relation n_gram (target block 528325): No such file or directory\n(...)\nJun 11 04:26:12 srv31 postgres[14972]: [3317] DEBUG: recycled transaction log file 00000020000000F6\nJun 11 04:26:12 srv31 postgres[14972]: [3318] DEBUG: recycled transaction log file 00000020000000F7\nJun 11 04:26:12 srv31 postgres[14972]: [3319] DEBUG: recycled transaction log file 00000020000000F8\nJun 11 04:28:56 srv31 postgres[14983]: [3317] DEBUG: recycled transaction log file 00000020000000F9\nJun 11 04:28:56 srv31 postgres[14983]: [3318] DEBUG: recycled transaction log file 00000020000000FA\nJun 11 04:28:56 srv31 postgres[14983]: [3319] DEBUG: recycled transaction log file 00000020000000FB\nJun 11 03:29:07 srv31 postgres[13913]: [3383] ERROR: MemoryContextAlloc: invalid request size 4294967295\nJun 11 03:29:07 srv31 postgres[13913]: [3384] NOTICE: Error occurred while executing PL/pgSQL function set_n_gram\nJun 11 03:29:07 srv31 postgres[13913]: [3385] NOTICE: line 9 at select into variables\n(...)\n------------------------------------------------------------------------\n\n--\n o__\t\tBem haja,\n _.>/ _\t\t\tNunoACHenriques\n (_) \\(_)\t\t\t~~~~~~~~~~~~~~~\n\t\t\t\thttp://students.fct.unl.pt/users/nuno/\n\n", "msg_date": "Tue, 11 Jun 2002 11:53:26 +0100 (WEST)", "msg_from": "NunoACHenriques <nach@fct.unl.pt>", "msg_from_op": true, "msg_subject": "Re: tuplesort: unexpected end of data " }, { "msg_contents": "NunoACHenriques <nach@fct.unl.pt> writes:\n> \tA different error today:\n> [MemoryContextAlloc: invalid request size 4294967295]\n\nThis could be a variant of the same problem: instead of getting a zero\ntuple length from the sort temp file, we're 
reading a -1 tuple length.\nStill no way to tell if it's a hardware glitch or a software bug.\n(If the latter, presumably the code is getting out of step about its\nread position in the temp file --- but how?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jun 2002 09:39:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: tuplesort: unexpected end of data " } ]
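[Editor's note on the log excerpt in the thread above: the recurring "FATAL 1: No pg_hba.conf entry for host 193.136.120.30, user root, database template1" lines are the local monitor box being rejected at authentication, unrelated to the tuplesort failure. If that monitor is meant to connect, a pg_hba.conf record of roughly this shape would silence them — this uses the 7.2-era address/mask syntax, and the auth method shown is an assumption, not something from the thread:]

```
# TYPE  DATABASE    IP_ADDRESS       MASK              AUTH_METHOD
host    template1   193.136.120.30   255.255.255.255   md5
```

[Otherwise the probes are harmless noise in the log and can be ignored.]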